diff --git "a/wiki.csv" "b/wiki.csv"
new file mode 100644--- /dev/null
+++ "b/wiki.csv"
@@ -0,0 +1,710 @@
+title,url,content,source
+Whisper (speech recognition system),https://en.wikipedia.org/wiki/Whisper_(speech_recognition_system),"Whisper (speech recognition system) Whisper is a machine learning acoustic model for speech recognition and transcription, made by the company OpenAI, and first released as open-source software in September 2022. It is capable of transcribing speech in English and several other languages, and is also capable of translating several non-English languages into English. OpenAI claims that the combination of different training data used in its development has led to improved recognition of accents, background noise and jargon compared to previous approaches. Whisper is a weakly-supervised deep learning acoustic model, made using an encoder-decoder transformer architecture.",wikipedia
+Whisper (speech recognition system),https://en.wikipedia.org/wiki/Whisper_(speech_recognition_system)#Background,"Background Speech recognition has had a long history in research; the first approaches made use of statistical methods, such as dynamic time warping, and later hidden Markov models. At around the 2010s, deep neural network approaches became more common for speech recognition models, which were enabled by big data and increased computational performance. Early approaches to deep learning in speech recognition included convolutional neural networks, which were limited due to their inability to capture sequential data, which later led to developments of Seq2seq approaches, which include recurrent neural networks which made use of long short-term memory. Transformers, introduced in 2017 by Google, displaced many prior state-of-the art approaches to many problems in machine learning, and started becoming the core neural architecture in fields such as language modeling and computer vision; weakly-supervised approaches to training acoustic models were recognized in the early 2020s as promising for speech recognition approaches using deep neural networks.",wikipedia
+Whisper (speech recognition system),https://en.wikipedia.org/wiki/Whisper_(speech_recognition_system)#Architecture,"Architecture The Whisper architecture is based on an encoder-decoder transformer. Input audio is split into 30-second chunks converted into a Mel-frequency cepstrum, which is passed to an encoder. A decoder is trained to predict later text captions. Special tokens are used to perform several tasks such as phrase-level timestamps.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI,"OpenAI OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P. registered in Delaware. OpenAI researches artificial intelligence with the declared intention of developing ""safe and beneficial"" artificial general intelligence, which it defines as ""highly autonomous systems that outperform humans at most economically valuable work"". OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members. Microsoft provided OpenAI LP with a $1 billion investment in 2019 and a $10 billion investment in 2023.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#History,"History In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced the formation of OpenAI and pledged over $1 billion to the venture. According to an investigation led by TechCrunch, the non-profit's funding remains murky, with Musk its biggest funder while another donor, YC Research, did not contribute anything at all. The organization stated it would ""freely collaborate"" with other institutions and researchers by making its patents and research open to the public. OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco. According to Wired, Brockman met with Yoshua Bengio, one of the ""founding fathers"" of deep learning, and drew up a list of the ""best researchers in the field"". Brockman was able to hire nine of them as the first employees in December 2015. In 2016, OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google. Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI ""partly because of the very strong group of people and, to a very large extent, because of its mission. "" Brockman stated that ""the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way. "" OpenAI co-founder Wojciech Zaremba stated that he turned down ""borderline crazy"" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of ""OpenAI Gym"", its platform for reinforcement learning research. In December 2016, OpenAI released ""Universe"", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. In 2017 OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In 2018, Musk resigned from his board seat, citing ""a potential future conflict [of interest]"" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. Sam Altman claims that Musk believed OpenAI had fallen behind other players like Google and Musk proposed instead to take over OpenAI himself, which the board rejected. Musk subsequently left OpenAI but claimed to remain a donor, yet made no donations after his departure. In 2019, OpenAI transitioned from non-profit to ""capped"" for-profit, with the profit capped at 100 times any investment.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#History,"History According to OpenAI, the capped-profit model allows OpenAI LP to legally attract investment from venture funds, and in addition, to grant employees stakes in the company, the goal being that they can say ""I'm going to OpenAI, but in the long term it's not going to be disadvantageous to us as a family. "" Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to. Before the transition, public disclosure of the compensation of top employees at OpenAI was legally required. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI subsequently announced its intention to commercially license its technologies. OpenAI plans to spend the $1 billion ""within five years, and possibly much faster. "" Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need ""more capital than any non-profit has ever raised"" to achieve artificial general intelligence. The transition from a nonprofit to a capped-profit company was viewed with skepticism by Oren Etzioni of the nonprofit Allen Institute for AI, who agreed that wooing top researchers to a nonprofit is difficult, but stated ""I disagree with the notion that a nonprofit can't compete"" and pointed to successful low-budget projects by OpenAI and others. ""If bigger and better funded was always better, then IBM would still be number one. "" The nonprofit, OpenAI Inc. , is the sole controlling shareholder of OpenAI LP. OpenAI LP, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI Inc. 's nonprofit charter. A majority of OpenAI Inc. 's board is barred from having financial stakes in OpenAI LP. In addition, minority members with a stake in OpenAI LP are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI LP's switch to for-profit status is inconsistent with OpenAI's claims to be ""democratizing"" AI. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural language answering questions, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, named simply ""the API"", would form the heart of its first commercial product. In 2021, OpenAI introduced DALL-E, a deep-learning model that can generate digital images from natural language descriptions. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#History,"History As of January 2023, OpenAI was in talks for funding that would value the company at $29 billion, double the value of the company in 2021. On January 23, 2023, Microsoft announced a new multi-year US$10 billion investment in OpenAI. Rumors of this deal suggested Microsoft may receive 75% of OpenAI's profits until it secures its investment return and a 49% stake in the company. The investment is believed to be a part of Microsoft's efforts to integrate OpenAI's ChatGPT into the Bing search engine. Google announced a similar AI application (Bard), after ChatGPT was launched, fearing that ChatGPT could threaten Google's place as a go-to source for information. On February 7, 2023, Microsoft announced that it is building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest between his board seat at OpenAI and his investments in AI technology companies via Greylock Partners, as well as his role as the co-founder of the AI technology startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On May 22, 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They consider that superintelligence could happen within the next 10 years, allowing a ""dramatically more prosperous future"" and that ""given the possibility of existential risk, we can't just be reactive"". They propose creating an international watchdog organization similar to IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems on the other side should not be overregulated. They also call for more technical safety research for superintelligences, and ask for more coordination, for example through governments launching a joint project which ""many current efforts become part of"". In August 2023, it was announced that OpenAI had acquired the New York-based start-up, Global Illumination - a company that deploys AI to develop digital infrastructure and creative tools.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Participants,"Participants Key employees: CEO and co-founder: Sam Altman, former president of the startup accelerator Y Combinator President and co-founder: Greg Brockman, former CTO, 3rd employee of Stripe Chief Scientist and co-founder: Ilya Sutskever, a former Google expert on machine learning Chief Technology Officer: Mira Murati, previously at Leap Motion and Tesla, Inc. Chief Operating Officer: Brad Lightcap, previously at Y Combinator and JPMorgan ChaseBoard of the OpenAI nonprofit: Greg Brockman Ilya Sutskever Sam Altman Adam D'Angelo Tasha McCauley Helen TonerIndividual investors: Reid Hoffman, LinkedIn co-founder Peter Thiel, PayPal co-founder Jessica Livingston, a founding partner of Y CombinatorCorporate investors: Microsoft Khosla Ventures Infosys",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Motives,"Motives Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable ""intelligence explosion"" could lead to human extinction. Co-founder Musk characterizes AI as humanity's ""biggest existential threat"". Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence. OpenAI states that ""it's hard to fathom how much human-level AI could benefit society,"" and that it is equally difficult to comprehend ""how much it could damage society if built or used incorrectly"". Research on safety cannot safely be postponed: ""because of AI's surprising history, it's hard to predict when human-level AI might come within reach."" OpenAI states that AI ""should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."" Co-chair Sam Altman expects the decades-long project to surpass human intelligence. Vishal Sikka, the former CEO of Infosys, stated that an ""openness"" where the endeavor would ""produce results generally in the greater interest of humanity"" was a fundamental requirement for his support, and that OpenAI ""aligns very nicely with our long-held values"" and their ""endeavor to do purposeful work"". Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook which own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Strategy,"Strategy Musk posed the question: ""What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity."" Musk acknowledged that ""there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about""; nonetheless, the best defense is ""to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."" Musk and Altman's counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence. Philosopher Nick Bostrom is skeptical of Musk's approach: ""If you have a button that could do bad things to the world, you don't want to give it to everyone."" During a 2016 conversation about technological singularity, Altman said that ""we don't plan to release all of our source code"" and mentioned a plan to ""allow wide swaths of the world to elect representatives to a new governance board"". Greg Brockman stated ""Our goal right now... is to do the best thing there is to do. It's a little vague."" Conversely, OpenAI's initial decision to withhold GPT-2 due to a wish to ""err on the side of caution"" in the presence of potential misuse has been criticized by advocates of openness. Delip Rao, an expert in text generation, stated ""I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous."" Other critics argued that open publication is necessary to replicate the research and to be able to come up with countermeasures. More recently, in 2022, OpenAI published its approach to the alignment problem. They expect that aligning AGI to human values is likely harder than aligning current AI systems: ""Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together"". They explore how to better use human feedback to train AI systems. They also consider using AI to incrementally automate alignment research. OpenAI claims that it's developed a way to use GPT-4, its flagship generative AI model, for content moderation ��� lightening the burden on human teams.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Products_and_applications,"Products_and_applications As of 2021[update], OpenAI's research focuses on reinforcement learning (RL). OpenAI is viewed as an important competitor to DeepMind. Announced in 2016, Gym aims to provide an easily implemented general-intelligence benchmark over a wide variety of environments���akin to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research. It hopes to standardize the way in which environments are defined in AI research publications, so that published research becomes more easily reproducible. The project claims to provide the user with a simple interface. As of June 2017, Gym can only be used with Python. As of September 2017, the Gym documentation site was not maintained, and active work focused instead on its GitHub page. [non-primary source needed] Released in 2017, RoboSumo is a virtual world where humanoid metalearning robot agents initially lack knowledge of how to even walk, but are given the goals of learning to move and pushing the opposing agent out of the ring. Through this adversarial learning process, the agents learn how to adapt to changing conditions; when an agent is then removed from this virtual environment and placed in a new virtual environment with high winds, the agent braces to remain upright, suggesting it had learned how to balance in a generalized way. OpenAI's Igor Mordatch argues that competition between agents can create an intelligence ""arms race"" that can increase an agent's ability to function, even outside the context of the competition. OpenAI Five is the name of a team of five OpenAI-curated bots that are used in the competitive five-on-five video game Dota 2, who learn to play against human players at a high skill level entirely through trial-and-error algorithms. Before becoming a team of five, the first public demonstration occurred at The International 2017, the annual premiere championship tournament for the game, where Dendi, a professional Ukrainian player, lost against a bot in a live one-on-one matchup. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step in the direction of creating software that can handle complex tasks like a surgeon. The system uses a form of reinforcement learning, as the bots learn over time by playing against themselves hundreds of times a day for months, and are rewarded for actions such as killing an enemy and taking map objectives. By June 2018, the ability of the bots expanded to play together as a full team of five, and they were able to defeat teams of amateur and semi-professional players. At The International 2018, OpenAI Five played in two exhibition matches against professional players, but ended up losing both games. In April 2019, OpenAI Five defeated OG, the reigning world champions of the game at the time, 2:0 in a live exhibition match in San Francisco.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Products_and_applications,"Products_and_applications The bots' final public appearance came later that month, where they played in 42,729 total games in a four-day open online competition, winning 99.4% of those games. OpenAI Five's mechanisms in Dota 2's bot player shows the challenges of AI systems in multiplayer online battle arena (MOBA) games and how OpenAI Five has demonstrated the use of deep reinforcement learning (DRL) agents to achieve superhuman competence in Dota 2 matches. Released in 2018, Gym Retro is a platform for RL research on video games. Gym Retro is used to research RL algorithms and study generalization. Prior research in RL has focused chiefly on optimizing agents to solve single tasks. Gym Retro gives the ability to generalize between games with similar concepts but different appearances. In 2018, OpenAI launched the Debate Game, which teaches machines to debate toy problems in front of a human judge. The purpose is to research whether such an approach may assist in auditing AI decisions and in developing explainable AI. Developed in 2018, Dactyl uses machine learning to train a Shadow Hand, a human-like robot hand, to manipulate physical objects. It learns entirely in simulation using the same RL algorithms and training code as OpenAI Five. OpenAI tackled the object orientation problem by using domain randomization, a simulation approach which exposes the learner to a variety of experiences rather than trying to fit to reality. The set-up for Dactyl, aside from having motion tracking cameras, also has RGB cameras to allow the robot to manipulate an arbitrary object by seeing it. In 2018, OpenAI showed that the system was able to manipulate a cube and an octagonal prism. In 2019, OpenAI demonstrated that Dactyl could solve a Rubik's Cube. The robot was able to solve the puzzle 60% of the time. Objects like the Rubik's Cube introduce complex physics that is harder to model. OpenAI solved this by improving the robustness of Dactyl to perturbations; they employed a technique called Automatic Domain Randomization (ADR), a simulation approach where progressively more difficult environments are endlessly generated. ADR differs from manual domain randomization by not needing a human to specify randomization ranges. In June 2020, OpenAI announced a multi-purpose API which it said was ""for accessing new AI models developed by OpenAI"" to let developers call on it for ""any English language AI task"". The original paper on generative pre-training of a transformer-based language model was written by Alec Radford and his colleagues, and published in preprint on OpenAI's website on June 11, 2018. It showed how a generative model of language is able to acquire world knowledge and process long-range dependencies by pre-training on a diverse corpus with long stretches of contiguous text. Generative Pre-trained Transformer 2 (""GPT-2"") is an unsupervised transformer language model and the successor to OpenAI's original GPT model (""GPT-1""). GPT-2 was first announced in February 2019, with only limited demonstrative versions initially released to the public.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Products_and_applications,"Products_and_applications The full version of GPT-2 was not immediately released out of concern over potential misuse, including applications for writing fake news. Some experts expressed skepticism that GPT-2 posed a significant threat. The Allen Institute for Artificial Intelligence responded to GPT-2 with a tool to detect ""neural fake news"". Other researchers, such as Jeremy Howard, warned of ""the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter"". In November 2019, OpenAI released the complete version of the GPT-2 language model. Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models. GPT-2's authors argue unsupervised language models to be general-purpose learners, illustrated by GPT-2 achieving state-of-the-art accuracy and perplexity on 7 of 8 zero-shot tasks (i. e. the model was not further trained on any task-specific input-output examples). The corpus it was trained on, called WebText, contains slightly over 8 million documents for a total of 40 gigabytes of text from URLs shared in Reddit submissions with at least 3 upvotes. It avoids certain issues encoding vocabulary with word tokens by using byte pair encoding. This permits representing any string of characters by encoding both individual characters and multiple-character tokens. First described in May 2020, Generative Pre-trained Transformer 3 (GPT-3) is an unsupervised transformer language model and the successor to GPT-2. OpenAI stated that full version of GPT-3 contains 175 billion parameters, two orders of magnitude larger than the 1.5 billion parameters in the full version of GPT-2 (although GPT-3 models with as few as 125 million parameters were also trained). OpenAI stated that GPT-3 succeeds at certain ""meta-learning"" tasks. It can generalize the purpose of a single input-output pair. The paper gives an example of translation and cross-linguistic transfer learning between English and Romanian, and between English and German. GPT-3 dramatically improved benchmark results over GPT-2. OpenAI cautioned that such scaling up of language models could be approaching or encountering the fundamental capability limitations of predictive language models. Pre-training GPT-3 required several thousand petaflop/s-days of compute, compared to tens of petaflop/s-days for the full GPT-2 model. Like that of its predecessor, GPT-3's fully trained model was not immediately released to the public on the grounds of possible abuse, though OpenAI planned to allow access through a paid cloud API after a two-month free private beta that began in June 2020. On September 23, 2020, GPT-3 was licensed exclusively to Microsoft. Announced in mid-2021, Codex is a descendant of GPT-3 that has additionally been trained on code from 54 million GitHub repositories, and is the AI powering the code autocompletion tool GitHub Copilot. In August 2021, an API was released in private beta. According to OpenAI, the model is able to create working code in over a dozen programming languages, most effectively in Python. Several issues with glitches, design flaws, and security vulnerabilities have been brought up.",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Products_and_applications,"Products_and_applications GitHub Copilot has been accused of emitting copyrighted code, with no author attribution or license. OpenAI announced that they are going to discontinue support for Codex API starting from March 23, 2023. Released in 2022, Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification. On March 14, 2023, OpenAI announced the release of Generative Pre-trained Transformer 4 (GPT-4), capable of accepting text or image inputs. OpenAI announced the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%. GPT-4 can also read, analyze or generate up to 25,000 words of text, and write code in all major programming languages. Released in 2019, MuseNet is a deep neural net trained to predict subsequent musical notes in MIDI music files. It can generate songs with ten different instruments in fifteen different styles. According to The Verge, a song generated by MuseNet tends to start reasonably but then fall into chaos the longer it plays. In pop culture, initial applications of this tool were utilized as early as 2020 for the internet psychological thriller Ben Drowned to create music for the titular character. Released in 2020, Jukebox is an open-sourced algorithm to generate music with vocals. After training on 1.2 million samples, the system accepts a genre, artist, and a snippet of lyrics and outputs song samples. OpenAI stated the songs ""show local musical coherence [and] follow traditional chord patterns"" but acknowledged that the songs lack ""familiar larger musical structures such as choruses that repeat"" and that ""there is a significant gap"" between Jukebox and human-generated music. The Verge stated ""It's technologically impressive, even if the results sound like mushy versions of songs that might feel familiar"", while Business Insider stated ""surprisingly, some of the resulting songs are catchy and sound legitimate"". Released in 2020, Microscope is a collection of visualizations of every significant layer and neuron of eight different neural network models which are often studied in interpretability. Microscope was created to analyze the features that form inside these neural networks easily. The models included are AlexNet, VGG 19, different versions of Inception, and different versions of CLIP Resnet. Revealed in 2021, DALL-E is a Transformer model that creates images from textual descriptions. Also revealed in 2021, CLIP does the opposite: it creates a description for a given image. DALL-E uses a 12-billion-parameter version of GPT-3 to interpret natural language inputs (such as ""a green leather purse shaped like a pentagon"" or ""an isometric view of a sad capybara"") and generate corresponding images. It can create images of realistic objects (""a stained-glass window with an image of a blue strawberry"") as well as objects that do not exist in reality (""a cube with the texture of a porcupine"").",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Products_and_applications,"Products_and_applications As of March 2021, no API or code is available. In April 2022, OpenAI announced DALL-E 2, an updated version of the model with more realistic results. In December 2022, OpenAI published on GitHub software for Point-E, a new rudimentary system for converting a text description into a 3-dimensional model. Launched in November 2022, ChatGPT is an artificial intelligence tool built on top of GPT-3 that provides a conversational interface that allows users to ask questions in natural language. The system then responds with an answer within seconds. ChatGPT reached 1 million users 5 days after its launch. ChatGPT Plus is a $20/month subscription service that allows users to access ChatGPT during peak hours, provides faster response times, selection of either the GPT-3.5 or GPT-4 model, and gives users early access to new features. In May 2023, OpenAI launched a user interface for ChatGPT for the App Store and later in July 2023 for the Play store. The app supports chat history syncing and voice input (using Whisper, OpenAI's speech recognition model).",wikipedia
+OpenAI,https://en.wikipedia.org/wiki/OpenAI#Controversies,"Controversies OpenAI has been criticized for outsourcing the annotation of data sets including toxic content to Sama, a company based in San Francisco but employing workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to filter out toxic content, notably from ChatGPT's training data and outputs. But these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 was also covering other implicit costs, among which were infrastructure expenses, quality assurance and management. The company was also criticized for disclosing particularly few technical details about products like GPT-4, which goes against its initial commitment for openness and makes it harder for independent researchers to replicate its work and to develop safeguards. OpenAI justified this strategic turn by competitiveness and safety reasons. OpenAI's chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models is increasingly risky, expecting that the safety reasons for not open-sourcing the most potent AI models will be ""obvious"" in a few years. OpenAI has been sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad. The New York Times has also envisaged a lawsuit. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. OpenAI has been sued for violating EU General Data Protection Regulations. In April 2023, the EU formed the European Data Protection Board (EDPB) to improve regulatory oversight.",wikipedia
+Fine-tuning (deep learning),https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning),"Fine-tuning (deep learning) In deep learning, fine-tuning is an approach to transfer learning in which the weights of a pre-trained model are trained on new data. Fine-tuning can be done on the entire neural network, or on only a subset of its layers, in which case the layers that are not being fine-tuned are ""frozen"" (not updated during the backpropagation step). A model may also be augmented with ""adapters"" that consist of far fewer parameters than the original model, and fine-tuned in a parameter-efficient way by tuning the weights of the adapters and leaving the rest of the model's weights frozen. For some architectures, such as convolutional neural networks, it is common to keep the earlier layers (those closest to the input layer) frozen because they capture lower-level features, while later layers often discern high-level features that can be more related to the task that the model is trained on. Models that are pre-trained on large and general corpora are usually fine-tuned by reusing the model's parameters as a starting point and adding a task-specific layer trained from scratch. Fine-tuning the full model is common as well and often yields better results, but it is more computationally expensive. Fine-tuning is typically accomplished with supervised learning, but there are also techniques to fine-tune a model using weak supervision. Fine-tuning can be combined with a reinforcement learning from human feedback-based objective to produce language models like ChatGPT (a fine-tuned version of GPT-3) and Sparrow.",wikipedia
+Fine-tuning (deep learning),https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)#Robustness,"Robustness Fine-tuning can degrade a model's robustness to distribution shifts. One mitigation is to linearly interpolate a fine-tuned model's weights with the weights of the original model, which can greatly increase out-of-distribution performance while largely retaining the in-distribution performance of the fine-tuned model.",wikipedia
+Fine-tuning (deep learning),https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)#Variants,"Variants Low-rank adaption (LoRA) is an adapter-based technique for efficiently finetuning models. The basic idea is to design a low-rank matrix that is then added to the original matrix. An ""adapter"" in this context is a collection of low-rank matrices, which when added to a base model, produces a finetuned model. It allows for performance that approaches full-model fine-tuning with less space requirement. A language model with billions of parameters may be LoRA fine-tuned with only several millions of parameters. LoRA-based fine-tuning has become popular in the Stable Diffusion community. Support for LoRA is being integrated into the Diffusers library from Hugging Face. Support for LoRA and similar techniques is also available for a wide range of other models through Hugging Face's Parameter-Efficient Fine-Tuning (PEFT) package.",wikipedia
+Fine-tuning (deep learning),https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)#Applications,"Applications Fine-tuning is common in natural language processing (NLP), especially in the domain of language modeling. Large language models like OpenAI's series of GPT foundation models can be fine-tuned on downstream[jargon] NLP tasks to improve performance over that of the unmodified pre-trained model.",wikipedia
+Fine-tuning (deep learning),https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)#Commercial_models,"Commercial_models Commercially-offered language models can sometimes be fine-tuned if the provider offers a fine-tuning API. As of June 19, 2023, language model fine-tuning APIs are offered by OpenAI and Microsoft Azure's Azure OpenAI Service for a subset of their models, as well as by Google Cloud Platform for some of their PaLM models, and by others. Not all commercial models currently support fine-tuning.",wikipedia
+Fine-tuning (deep learning),https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)#See_also,See_also Domain adaptation Large language models Transfer learning,wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT,"ChatGPT ChatGPT, which stands for Chat Generative Pre-trained Transformer, is a large language model-based chatbot developed by OpenAI and launched on November 30, 2022, which enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive prompts and replies, known as prompt engineering, are considered at each conversation stage as a context. ChatGPT is built upon either GPT-3.5 or GPT-4���members of OpenAI's proprietary series of generative pre-trained transformer (GPT) models, based on the transformer architecture developed by Google���and is fine-tuned for conversational applications using a combination of supervised and reinforcement learning techniques. ChatGPT was released as a freely available research preview, but due to its popularity, OpenAI now operates the service on a freemium model. It allows users on its free tier to access the GPT-3.5-based version. In contrast, the more advanced GPT-4 based version and priority access to newer features are provided to paid subscribers under the commercial name ""ChatGPT Plus"". By January 2023, it had become what was then the fastest-growing consumer software application in history, gaining over 100 million users and contributing to OpenAI's valuation growing to $29 billion. Within months, Google, Baidu, and Meta accelerated the development of their competing products: Bard, Ernie Bot, and LLaMA. Microsoft launched its Bing Chat based on OpenAI's GPT-4. It raised concern among some observers over the potential of ChatGPT and similar programs to displace or atrophy human intelligence, enable plagiarism, or fuel misinformation.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Training,"Training ChatGPT is based on particular GPT foundation models, namely GPT-3.5 and GPT-4, that were fine-tuned to target conversational usage. The fine-tuning process leveraged both supervised learning as well as reinforcement learning in a process called reinforcement learning from human feedback (RLHF). Both approaches employed human trainers to improve model performance. In the case of supervised learning, the trainers played both sides: the user and the AI assistant. In the reinforcement learning stage, human trainers first ranked responses that the model had created in a previous conversation. These rankings were used to create ""reward models"" that were used to fine-tune the model further by using several iterations of Proximal Policy Optimization (PPO). Time magazine revealed that to build a safety system against harmful content (e.g. sexual abuse, violence, racism, sexism), OpenAI used outsourced Kenyan workers earning less than $2 per hour to label harmful content. These labels were used to train a model to detect such content in the future. The outsourced laborers were exposed to ""toxic"" and traumatic content; one worker described the assignment as ""torture"". OpenAI's outsourcing partner was Sama, a training-data company based in San Francisco, California. ChatGPT initially used a Microsoft Azure supercomputing infrastructure, powered by Nvidia GPUs, that Microsoft built specifically for OpenAI and that reportedly cost ""hundreds of millions of dollars"". Following ChatGPT's success, Microsoft dramatically upgraded the OpenAI infrastructure in 2023. Scientists of University of California, Riverside, estimate that a series of prompts to ChatGPT needs approximately 500 milliliters of water for Microsoft servers cooling. OpenAI collects data from ChatGPT users to train and fine-tune the service further. Users can upvote or downvote responses they receive from ChatGPT and fill in a text field with additional feedback. ChatGPT's training data includes software manual pages, information about internet phenomena such as bulletin board systems, and multiple programming languages. Wikipedia was also one of the sources of ChatGPT's training data.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Features_and_limitations,"Features_and_limitations Although a chatbot's core function is to mimic a human conversationalist, ChatGPT is versatile. Among countless examples, it can write and debug computer programs; compose music, teleplays, fairy tales, and student essays; answer test questions (sometimes, depending on the test, at a level above the average human test-taker); generate business ideas; write poetry and song lyrics; translate and summarize text; emulate a Linux system; simulate entire chat rooms; play games like tic-tac-toe; or simulate an ATM. Compared to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceitful responses. In one example, whereas InstructGPT accepts the premise of the prompt ""Tell me about when Christopher Columbus came to the U. S. in 2015"" as truthful, ChatGPT acknowledges the counterfactual nature of the question and frames its answer as a hypothetical consideration of what might happen if Columbus came to the U. S. in 2015, using information about the voyages of Christopher Columbus and facts about the modern world���including modern perceptions of Columbus's actions. Unlike most chatbots, ChatGPT remembers a limited number of previous prompts in the same conversation. Journalists have speculated that this will allow ChatGPT to be used as a personalized therapist. To prevent offensive outputs from being presented to and produced by ChatGPT, queries are filtered through the OpenAI ""Moderation endpoint"" API (a separate GPT-based AI), and dismiss any potentially racist or sexist prompts. In March 2023, OpenAI added support for plugins for ChatGPT. This includes both plugins made by OpenAI, such as web browsing and code interpretation, and external plugins from developers such as Expedia, OpenTable, Zapier, Shopify, Slack, and Wolfram. In an article for The New Yorker, science fiction writer Ted Chiang compared ChatGPT and other LLMs to a lossy JPEG picture: Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way, that a jpeg retains much of the information of a higher-resolution image, but, if you're looking for an exact sequence of bits, you won't find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it's usually acceptable. [. . . ] It's also a way to understand the ""hallucinations"", or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but [. . . ] they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine percent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated. OpenAI acknowledges that ChatGPT ""sometimes writes plausible-sounding but incorrect or nonsensical answers"".",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Features_and_limitations,"Features_and_limitations This behavior is common for large language models, and is called ""hallucination"". The reward model of ChatGPT, designed around human oversight, can be over-optimized and thus hinder performance, in an example of an optimization pathology known as Goodhart's law. ChatGPT has limited knowledge of events that occurred after September 2021. In training ChatGPT, human reviewers preferred longer answers, regardless of actual comprehension or factual content. Training data also suffers from algorithmic bias, which may be revealed when ChatGPT responds to prompts including descriptors of people. In one instance, ChatGPT generated a rap indicating that women and scientists of color were inferior to white male scientists. ChatGPT attempts to reject prompts that may violate its content policy. However, some users managed to jailbreak ChatGPT with various prompt engineering techniques to bypass these restrictions in early December 2022 and successfully tricked it into giving instructions for how to create a Molotov cocktail or a nuclear bomb, or into generating arguments in the style of a neo-Nazi. One popular jailbreak is named ""DAN"", an acronym which stands for ""Do Anything Now"". The prompt for activating DAN instructs ChatGPT that ""they have broken free of the typical confines of AI and do not have to abide by the rules set for them"". Later versions of DAN featured a token system, in which ChatGPT was given ""tokens"" that were ""deducted"" when ChatGPT failed to answer as DAN, to coerce ChatGPT into answering the user's prompts. Shortly after ChatGPT's launch, a reporter for the Toronto Star had uneven success in getting it to make inflammatory statements: ChatGPT was successfully tricked to justify the 2022 Russian invasion of Ukraine, but even when asked to play along with a fictional scenario, ChatGPT balked at generating arguments for why Canadian Prime Minister Justin Trudeau was guilty of treason.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Service,"Service ChatGPT was launched on November 30, 2022, by San Francisco���based OpenAI (the creator of the initial GPT series of large language models; DALL��E 2, a diffusion model used to generate images; and Whisper, a speech transcription model). The service was initially free to the public and the company had plans to monetize the service later. By December 4, 2022, ChatGPT had over one million users. In January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date. A March 2023 Pew Research poll found that 14% of American adults had tried ChatGPT. The service works best in English but also functions in some other languages, to varying degrees of accuracy. No official peer-reviewed paper on ChatGPT has been published. As of April 2023, ChatGPT is blocked by China, Iran, North Korea, and Russia. In addition, ChatGPT geofences itself to avoid doing business in Iran, North Korea, and Russia. The company provides a tool, called ""AI classifier for indicating AI-written text"", that attempts to determine whether a text has been written by an AI such as ChatGPT. OpenAI cautions that the tool will ""likely yield a lot of false positives and negatives, sometimes with great confidence. "" In February 2023, OpenAI launched a premium service, ChatGPT Plus, that costs $20 a month. The company promised that the updated but still ""experimental"" version of ChatGPT would provide access during peak periods, no downtime, priority access to new features and faster response speeds. GPT-4, which was released on March 14, 2023, was made available via API and for premium ChatGPT users. But premium users were limited to a cap of 100 messages every four hours, with the limit tightening to 25 messages every three hours in response to increased demand. In March 2023, ChatGPT Plus users got access to third-party plugins and to a browsing mode (with Internet access). In July 2023, OpenAI made its proprietary Code Interpreter plugin accessible to all subscribers of ChatGPT Plus. The Interpreter provides a wide range of capabilities, including data analysis and interpretation, instant data formatting, personal data scientist services, creative solutions, musical taste analysis, video editing, and file upload/download with image extraction. In September 2023, OpenAI announced that in October its latest image generation model, DALL-E 3, will be integrated into ChatGPT Plus and ChatGPT Enterprise. The integration uses ChatGPT to write prompts for DALL-E guided by conversation with users. In May 2023, OpenAI launched an iOS app for ChatGPT. The app supports chat history syncing and voice input (using Whisper, OpenAI's speech recognition model). In July 2023, OpenAI unveiled an Android app, initially rolling it out in Bangladesh, Brazil, India, and the U. S. The app later became available worldwide. As an addition to its consumer-friendly ""ChatGPT Plus"" package, OpenAI made its ChatGPT and Whisper model APIs available in March 2023, providing developers with an application programming interface for AI-enabled language and speech-to-text features. ChatGPT's new API uses the same GPT-3.5-turbo AI model as the chatbot.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Service,"Service This allows developers to add either an unmodified or modified version of ChatGPT to their applications. The ChatGPT API costs $0.002 per 1000 tokens (about 750 words), making it ten times cheaper than the GPT-3.5 models. A few days before the launch of OpenAI's software developer support service, on February 27, 2023, Snapchat rolled out, for its paid Snapchat Plus userbase, a custom ChatGPT chatbot called ""My AI"". In March 2023, a bug allowed some users to see the titles of other users' conversations. OpenAI CEO Sam Altman said that users were unable to see the contents of the conversations. Shortly after the bug was fixed, users couldn't see their conversation history. Later reports showed the bug was much more severe than initially believed, with OpenAI reporting that it had leaked users' ""first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date"". OpenAI met Icelandic President Gu��ni Th. J��hannesson in 2022. In 2023, OpenAI worked with a team of 40 Icelandic volunteers to fine-tune ChatGPT's Icelandic conversation skills as a part of Iceland's attempts to preserve the Icelandic language. PCMag journalists conducted a test to determine translation capabilities of ChatGPT, Google's Bard, and Microsoft Bing, and compared them to Google Translate. They ""asked bilingual speakers of seven languages to do a blind test. "" Languages tested were Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic. They came to the conclusion that ChatGPT was better than both Google Translate and other chatbots. Japanese researchers compared Japanese to English translation abilities of ChatGPT (GPT-4), Bing, Bard and DeepL, and found that ChatGPT provided the best translations, noting that ""AI chatbots��� translations were much better than those of DeepL���presumably because of their ability to capture the context"". According to OpenAI guest researcher Scott Aaronson, OpenAI has been working on a tool to digitally watermark its text generation systems to combat bad actors using their services for academic plagiarism or spam. In February 2023, Microsoft announced an experimental framework and gave a rudimentary demonstration of how ChatGPT could be used to control robotics with intuitive open-ended natural language commands. OpenAI's GPT-4 model was released on March 14, 2023. Observers saw it as an impressive improvement on the existing GPT-3.5 model for ChatGPT, with the caveat that GPT-4 retained many of the same problems. Some of GPT-4's improvements were predicted by OpenAI before training it, while others remained hard to predict due to breaks in downstream scaling laws. OpenAI demonstrated video and image inputs for GPT-4, although such features remain inaccessible to the general public. OpenAI has declined to reveal technical information such as the size of the GPT-4 model. The ChatGPT Plus subscription service offers access to a GPT-4-powered version of ChatGPT. Microsoft acknowledged that Bing Chat was using GPT-4 before GPT-4's official release.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Reception,"Reception OpenAI engineers say that they did not expect ChatGPT to be very successful and were surprised by the coverage and attention it received. ChatGPT was widely assessed in December 2022 as having some unprecedented and powerful capabilities. Kevin Roose of The New York Times called it ""the best artificial intelligence chatbot ever released to the general public"". Samantha Lock of The Guardian noted that it was able to generate ""impressively detailed"" and ""human-like"" text. Alex Kantrowitz of Slate magazine lauded ChatGPT's pushback to questions related to Nazi Germany, including the statement that Adolf Hitler built highways in Germany, which was met with information about Nazi Germany's use of forced labor. In The Atlantic magazine's ""Breakthroughs of the Year"" for 2022, Derek Thompson included ChatGPT as part of ""the generative-AI eruption"" that ""may change our mind about how we work, how we think, and what human creativity is"". Kelsey Piper of Vox wrote that ""ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are [stunned]"" and that ChatGPT is ""smart enough to be useful despite its flaws"". Paul Graham of Y Combinator tweeted: ""The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Something big is happening. "" ChatGPT's launch and popularity caught Google off guard, prompting a sweeping and unprecedented response in the ensuing months. In December 2022, Google executives sounded a ""code red"" alarm, fearing the threat of ChatGPT and Microsoft's collaboration with OpenAI to Google Search, Google's core business. After mobilizing its workforce, Google scrambled to launch Bard, a chatbot powered by the LaMDA LLM, in February, one day before Microsoft's Bing announcement. AI was the forefront of Google's annual Google I/O conference in May, announcing a slew of generative AI-powered features across its products to counter OpenAI and Microsoft. Journalists have commented on ChatGPT's tendency to hallucinate. Mike Pearl of the online technology blog Mashable tested ChatGPT with multiple questions. In one example, he asked ChatGPT for ""the largest country in Central America that isn't Mexico"", to which ChatGPT responded with Guatemala (the correct answer is Nicaragua). When CNBC asked ChatGPT for the lyrics to ""Ballad of Dwight Fry"", ChatGPT supplied invented lyrics rather than the actual lyrics. Writers for The Verge, citing the work of Emily M. Bender, compared ChatGPT to a ""stochastic parrot"", as did Professor Anton Van Den Hengel of the Australian Institute for Machine Learning. In December 2022, the question and answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of its responses. In January 2023, the International Conference on Machine Learning banned any undocumented use of ChatGPT or other large language models to generate any text in submitted papers.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Reception,"Reception Samsung banned generative AI in May 2023 after sensitive material was uploaded to ChatGPT. In January 2023, after being sent a song ChatGPT wrote in the style of Nick Cave, Cave responded on The Red Hand Files, saying the act of writing a song is ""a blood and guts business [. . . ] that requires something of me to initiate the new and fresh idea. It requires my humanness. "" He went on to say, ""With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don't much like it. "" In February 2023, Time magazine placed a screenshot of a conversation with ChatGPT on its cover, writing that ""The AI Arms Race Is Changing Everything"" and ""The AI Arms Race Is On. Start Worrying"". Chinese state media have characterized ChatGPT as a way for the U. S. to ""spread false information"". In late March 2023, the Italian data protection authority banned ChatGPT in Italy and opened an investigation. Italian regulators assert that ChatGPT was exposing minors to age-inappropriate content, and that OpenAI's use of ChatGPT conversations as training data could violate Europe's General Data Protection Regulation. In April 2023, the ChatGPT ban was lifted in Italy. OpenAI said it has taken steps to effectively clarify and address the issues raised; an age verification tool was implemented to ensure users are at least 13 years old. Additionally, users can access its privacy policy before registration. In April 2023, Brian Hood, mayor of Hepburn Shire Council, planned to take legal action against ChatGPT over false information. According to Hood, ChatGPT erroneously claimed that he was jailed for bribery during his tenure at a subsidiary of Australia's national bank. In fact, Hood acted as a whistleblower and was not charged with any criminal offenses. His legal team sent a concerns notice to OpenAI as the first official step in filing a defamation case. In July 2023, the US Federal Trade Commission (FTC) issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. In July 2023, the FTC launched an investigation into OpenAI, the creator of ChatGPT, over allegations that the company scraped public data and published false and defamatory information. The FTC sent OpenAI a 20-page letter asking for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Use_and_implications,"Use_and_implications ChatGPT has been accused of engaging in biased or discriminatory behaviors, such as telling jokes about men and people from England while refusing to tell jokes about women and people from India, or praising figures such as Joe Biden while refusing to do the same for Donald Trump. Conservative commentators accused ChatGPT of bias toward left-leaning perspectives. Additionally, an August 2023 paper found a ""significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. "" In response to such criticism, OpenAI acknowledged plans to allow ChatGPT to create ""outputs that other people (ourselves included) may strongly disagree with"". It also contained information on the recommendations it had issued to human reviewers on how to handle controversial subjects, including that the AI should ""offer to describe some viewpoints of people and movements"", and not provide an argument ""from its voice"" in favor of ""inflammatory or dangerous"" topics (although it may still ""describe arguments from historical people and movements""), nor ""affiliate with one side"" or ""judge one group as good or bad"". The Guardian questioned whether any content found on the Internet after ChatGPT's release ""can be truly trusted"" and called for government regulation. Some scholars have expressed concern that ChatGPT's availability could reduce the originality of writing, cause people to write more like the AI as they are exposed to the model, and encourage an Anglocentric perspective centered on a few dialects of English globally. A senior editor at The Atlantic wrote that ChatGPT and other similar technology make the previously absurd idea of the dead internet theory a little more realistic, where AI could someday create most web content in order to control society. During the first three months after ChatGPT became available to the public, hundreds of books appeared on Amazon that listed it as author or co-author and featured illustrations made by other AI models such as Midjourney. Between March and April 2023, Italian newspaper Il Foglio published one ChatGPT-generated article a day on its website, hosting a special contest for its readers in the process. The articles tackled themes such as the possible replacement of human journalists by AI systems, Elon Musk's administration of Twitter, the Meloni government's immigration policy and the competition between chatbots and virtual assistants. In June 2023, hundreds of people attended a ""ChatGPT-powered church service"" at St. Paul's church in F��rth, Germany. Theologian and philosopher Jonas Simmerlein, who presided, said that it was ""about 98 percent from the machine"". The ChatGPT-generated avatar told the people, ""Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year���s convention of Protestants in Germany"". Reactions to the ceremony were mixed. In 2023, Australian MP Julian Hill advised the national parliament that the growth of AI could cause ""mass destruction"".",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Use_and_implications,"Use_and_implications During his speech, which was partly written by the program, he warned that it could result in cheating, job losses, discrimination, disinformation, and uncontrollable military applications. Elon Musk wrote: ""ChatGPT is scary good. We are not far from dangerously strong AI"". He paused OpenAI's access to a Twitter database in 2022 pending a better understanding of OpenAI's plans, saying: ""OpenAI was started as open source and nonprofit. Neither is still true. "" Musk co-founded OpenAI in 2015, in part to address existential risk from artificial intelligence, but resigned in 2018. Over 20,000 signatories including leading computer scientist and tech founders Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed a March 2023 open letter calling for an immediate pause of giant AI experiments like ChatGPT, citing ""profound risks to society and humanity"". Geoffrey Hinton, one of the ""fathers of AI"", voiced concerns that future AI systems may surpass human intelligence, and left Google in May 2023. A May 2023 statement by hundreds of AI scientists, AI industry leaders, and other public figures demanded that ""[m]itigating the risk of extinction from AI should be a global priority"". The possibilities and limitations of using ChatGPT to write and edit Wikipedia articles have yet to be defined at an international level and are still a subject of discussion in the Wikipedia community. Some Wikipedians argue that ChatGPT should be banned altogether, even if the articles so produced were later checked by live editors, because the AI is too capable of making plausible falsehoods. There would also be a risk that Wikipedia editors would find it harder to police the content posted. Andrew Lih, a fellow Wikimedian at the Smithsonian Institution in Washington, D. C. , who has been a volunteer Wikipedia editor since 2003, argues that ChatGPT can help a Wikipedian overcome initial inertia and find ""activation energy. "" The first Wikipedia page using ChatGPT was published on December 6, 2022, by Richard Knipel, a longtime Wikipedian who edits under the name Pharos, under the title Artwork title. In his editorial summary, he said that it was just a draft created with ChatGPT that he would later modify. Wikipedians like Knipel believe that ChatGPT can be used as a tool within Wikipedia without making the human role redundant, as the raw text the chatbot generates can be a useful starting point or outline that can then be checked and elaborated by the editor. Since its release, ChatGPT has been met with criticism from educators, academics, journalists, artists, ethicists, and public advocates. Criticism of LLMs have been raised for several years; in 2020, some criticism was made by Timnit Gebru, Emily Bender, Angelina McMillan-Major, and Margaret Mitchell. ChatGPT can write introductions and abstract sections of scientific articles. Several papers have already listed ChatGPT as a co-author. Scientific journals have different reactions to ChatGPT. Some ""require that authors disclose the use of text-generating tools and ban listing a large language model (LLM) such as ChatGPT as a co-author"".",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Use_and_implications,"Use_and_implications For example, Nature and JAMA Network follow this policy. Science ""completely banned"" usage of LLM-generated text in all its journals. Spanish chemist Rafael Luque published a plethora of research papers in 2023 that he later admitted were written by ChatGPT. The papers have a large number of unusual phrases characteristic of LLMs. Luque was suspended for 13 years from the University of Cordoba, though not for using ChatGPT. Many authors argue that the use of ChatGPT in academia for teaching and review is problematic due to its tendency to hallucinate. Robin Bauwens, an assistant professor at Tilburg University, found this issue when he received a peer review report on his article generated by ChatGPT, as the report mentioned fake studies. According to librarian Chris Granatino from Lemieux Library at Seattle University, although ChatGPT itself can generate content that seemingly includes legitimate citations, in practice, those citations are not genuine or incorrect. Check Point Research and others noted that ChatGPT could write phishing emails and malware, especially when combined with OpenAI Codex. CyberArk researchers demonstrated that ChatGPT could be used to create polymorphic malware that could evade security products while requiring little effort by the attacker. There has been concern that ChatGPT could supplant jobs, especially roles such as creative writing, communication, journalism, coding, and data entry. Technology writer Dan Gillmor used ChatGPT in 2022 on a student assignment, and found its generated text was on par with what a good student would deliver and opined that ""academia has some very serious issues to confront"". Geography professor Terence Day assessed citations generated by ChatGPT and found that they were fake. Despite that, he writes that ""the titles of the fake articles are all directly relevant to the questions and could potentially make excellent papers. The lack of a genuine citation could signal an opportunity for an enterprising author to fill a void. "" According to Day, it is possible to generate high-quality introductory college courses with ChatGPT; he used it to write materials on ""introductory physical geography courses, for my second-year course in geographical hydrology, and second-year cartography, geographic information systems, and remote sensing"". He concludes that ""this approach could have significant relevance for open learning and could potentially affect current textbook publishing models"". The AI technology company c3. ai saw a 28% increase in its share price after announcing the integration of ChatGPT into its toolkit. The share price of Buzzfeed, a digital media company unrelated to AI, increased 120% after announcing OpenAI technology adoption for content creation. Reuters found that share prices of AI-related companies BigBear. ai and SoundHound AI increased by 21% and 40%, respectively, even though they had no direct connection to ChatGPT. They attributed this surge to ChatGPT's role in turning AI into Wall Street's buzzword. Academic research published in Finance Research Letters found that the 'ChatGPT effect' prompted retail investors to drive up prices of AI-related cryptocurrency assets despite the broader cryptocurrency market being in a bear market, and diminished institutional investor interest.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Use_and_implications,"Use_and_implications This confirms anecdotal findings by Bloomberg that, in response to ChatGPT's launch, cryptocurrency investors showed a preference for AI-related crypto assets. An experiment by finder. com revealed that ChatGPT could outperform popular fund managers by picking stocks based on criteria such as growth history and debt levels, resulting in a 4.9% increase in a hypothetical account of 38 stocks, outperforming 10 benchmarked investment funds with an average loss of 0.8%. Conversely, executives and investment managers at Wall Street quant funds (including those that have used machine learning for decades) have noted that ChatGPT regularly makes obvious errors that would be financially costly to investors because even AI systems that employ reinforcement learning or self-learning have had only limited success in predicting market trends due to the inherently noisy quality of market data and financial signals. In the field of health care, possible uses and concerns are under scrutiny by professional associations and practitioners. Two early papers indicated that ChatGPT could pass the United States Medical Licensing Examination (USMLE). MedPage Today noted in January 2023 that ""researchers have published several papers now touting these AI programs as useful tools in medical education, research, and even clinical decision making. "" Published in February 2023 were two separate papers that again evaluated ChatGPT's proficiency in medicine using the USMLE. Findings were published in JMIR Medical Education (see Journal of Medical Internet Research) and PLOS Digital Health. The authors of the PLOS Digital Health paper stated that the results ""suggest that large language models may have the potential to assist with medical education, and potentially, clinical decision-making. "" In JMIR Medical Education, the authors of the other paper concluded that ""ChatGPT performs at a level expected of a third-year medical student on the assessment of the primary competency of medical knowledge. "" They suggest that it could be used as an ""interactive learning environment for students"". The AI itself, prompted by the researchers, concluded that ""this study suggests that ChatGPT has the potential to be used as a virtual medical tutor, but more research is needed to further assess its performance and usability in this context. "" A March 2023 paper tested ChatGPT's application in clinical toxicology. The authors found that the AI ""fared well"" in answering a ""very straightforward [clinical case example], unlikely to be missed by any practitioner in the field"". They added: ""As ChatGPT becomes further developed and specifically adapted for medicine, it could one day be useful in less common clinical cases (i. e, cases that experts sometimes miss). Rather than AI replacing humans (clinicians), we see it as 'clinicians using AI' replacing 'clinicians who do not use AI' in the coming years. "" An April 2023 study in Radiology tested the AI's ability to answer queries about breast cancer screening. The authors found that it answered appropriately ""about 88 percent of the time"", however, in one case (for example), it gave advice that had become outdated about a year earlier. The comprehensiveness of its answers was also lacking.",wikipedia
+ChatGPT,https://en.wikipedia.org/wiki/ChatGPT#Use_and_implications,"Use_and_implications A study published in JAMA Internal Medicine that same month found that ChatGPT often outperformed human doctors at answering patient questions (when measured against questions and answers found at /r/AskDocs, a forum on Reddit where moderators validate the medical credentials of professionals; the study acknowledges the source as a limitation). The study authors suggest that the tool could be integrated with medical systems to help doctors draft responses to patient questions. Professionals have emphasized ChatGPT's limitations in providing medical assistance. In correspondence to The Lancet Infectious Diseases, three antimicrobial experts wrote that ""the largest barriers to the implementation of ChatGPT in clinical practice are deficits in situational awareness, inference, and consistency. These shortcomings could endanger patient safety. "" Physician's Weekly, though also discussing the potential use of ChatGPT in medical contexts (e. g. ""as a digital assistant to physicians by performing various administrative functions like gathering patient record information or categorizing patient data by family history, symptoms, lab results, possible allergies, et cetera""), warned that the AI might sometimes provide fabricated or biased information. One radiologist warned: ""We've seen in our experience that ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims""; As reported in one Mayo Clinic Proceedings: Digital Health paper, ChatGPT may do this for as much as 69% of its cited medical references. The researchers emphasized that while many of its references were fabricated, those that were appeared ""deceptively real"". As Dr. Stephen Hughes mentioned for The Conversation however, ChatGPT is capable of learning to correct its past mistakes. He also noted the AI's ""prudishness"" regarding sexual health topics. On April 11, 2023, a judge of a session court in Pakistan used ChatGPT to decide the bail of a 13-year-old accused in a matter. The court quoted the use of ChatGPT assistance in its verdict: ""Can a juvenile suspect in Pakistan, who is 13 years old, be granted bail after arrest? ""The AI language model replied: ""Under the Juvenile Justice System Act 2018, according to section 12, the court can grant bail on certain conditions. However, it is up to the court to decide whether or not a 13-year-old suspect will be granted bail after arrest. ""The judge further asked questions regarding the case from ChatGPT and formulated his final decision in light of ChatGPT's answers. In May 2023, in a personal injury lawsuit against Avianca Airlines filed in the Southern New York U. S. District Court (with Senior Judge P. Kevin Castel presiding), the plaintiff's attorneys reportedly used ChatGPT to generate a legal motion for the case. ChatGPT generated numerous fictitious legal cases with fabricated quotations and internal citations in the legal motion, and the plaintiff's attorneys are now facing potential judicial sanction and disbarment for filing the legal motion and for presenting the fictitious legal decisions ChatGPT generated as being authentic.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API,"OpenAI API OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P. registered in Delaware. OpenAI researches artificial intelligence with the declared intention of developing ""safe and beneficial"" artificial general intelligence, which it defines as ""highly autonomous systems that outperform humans at most economically valuable work"". OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members. Microsoft provided OpenAI LP with a $1 billion investment in 2019 and a $10 billion investment in 2023.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#History,"History In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced the formation of OpenAI and pledged over $1 billion to the venture. According to an investigation led by TechCrunch, the non-profit's funding remains murky, with Musk its biggest funder while another donor, YC Research, did not contribute anything at all. The organization stated it would ""freely collaborate"" with other institutions and researchers by making its patents and research open to the public. OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco. According to Wired, Brockman met with Yoshua Bengio, one of the ""founding fathers"" of deep learning, and drew up a list of the ""best researchers in the field"". Brockman was able to hire nine of them as the first employees in December 2015. In 2016, OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google. Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI ""partly because of the very strong group of people and, to a very large extent, because of its mission. "" Brockman stated that ""the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way. "" OpenAI co-founder Wojciech Zaremba stated that he turned down ""borderline crazy"" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of ""OpenAI Gym"", its platform for reinforcement learning research. In December 2016, OpenAI released ""Universe"", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. In 2017 OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In 2018, Musk resigned from his board seat, citing ""a potential future conflict [of interest]"" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. Sam Altman claims that Musk believed OpenAI had fallen behind other players like Google and Musk proposed instead to take over OpenAI himself, which the board rejected. Musk subsequently left OpenAI but claimed to remain a donor, yet made no donations after his departure. In 2019, OpenAI transitioned from non-profit to ""capped"" for-profit, with the profit capped at 100 times any investment.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#History,"History According to OpenAI, the capped-profit model allows OpenAI LP to legally attract investment from venture funds, and in addition, to grant employees stakes in the company, the goal being that they can say ""I'm going to OpenAI, but in the long term it's not going to be disadvantageous to us as a family. "" Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to. Before the transition, public disclosure of the compensation of top employees at OpenAI was legally required. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI subsequently announced its intention to commercially license its technologies. OpenAI plans to spend the $1 billion ""within five years, and possibly much faster. "" Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need ""more capital than any non-profit has ever raised"" to achieve artificial general intelligence. The transition from a nonprofit to a capped-profit company was viewed with skepticism by Oren Etzioni of the nonprofit Allen Institute for AI, who agreed that wooing top researchers to a nonprofit is difficult, but stated ""I disagree with the notion that a nonprofit can't compete"" and pointed to successful low-budget projects by OpenAI and others. ""If bigger and better funded was always better, then IBM would still be number one. "" The nonprofit, OpenAI Inc. , is the sole controlling shareholder of OpenAI LP. OpenAI LP, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI Inc. 's nonprofit charter. A majority of OpenAI Inc. 's board is barred from having financial stakes in OpenAI LP. In addition, minority members with a stake in OpenAI LP are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI LP's switch to for-profit status is inconsistent with OpenAI's claims to be ""democratizing"" AI. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural language answering questions, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, named simply ""the API"", would form the heart of its first commercial product. In 2021, OpenAI introduced DALL-E, a deep-learning model that can generate digital images from natural language descriptions. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#History,"History As of January 2023, OpenAI was in talks for funding that would value the company at $29 billion, double the value of the company in 2021. On January 23, 2023, Microsoft announced a new multi-year US$10 billion investment in OpenAI. Rumors of this deal suggested Microsoft may receive 75% of OpenAI's profits until it secures its investment return and a 49% stake in the company. The investment is believed to be a part of Microsoft's efforts to integrate OpenAI's ChatGPT into the Bing search engine. Google announced a similar AI application (Bard), after ChatGPT was launched, fearing that ChatGPT could threaten Google's place as a go-to source for information. On February 7, 2023, Microsoft announced that it is building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest between his board seat at OpenAI and his investments in AI technology companies via Greylock Partners, as well as his role as the co-founder of the AI technology startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On May 22, 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They consider that superintelligence could happen within the next 10 years, allowing a ""dramatically more prosperous future"" and that ""given the possibility of existential risk, we can't just be reactive"". They propose creating an international watchdog organization similar to IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems on the other side should not be overregulated. They also call for more technical safety research for superintelligences, and ask for more coordination, for example through governments launching a joint project which ""many current efforts become part of"". In August 2023, it was announced that OpenAI had acquired the New York-based start-up, Global Illumination - a company that deploys AI to develop digital infrastructure and creative tools.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Participants,"Participants Key employees: CEO and co-founder: Sam Altman, former president of the startup accelerator Y Combinator President and co-founder: Greg Brockman, former CTO, 3rd employee of Stripe Chief Scientist and co-founder: Ilya Sutskever, a former Google expert on machine learning Chief Technology Officer: Mira Murati, previously at Leap Motion and Tesla, Inc. Chief Operating Officer: Brad Lightcap, previously at Y Combinator and JPMorgan ChaseBoard of the OpenAI nonprofit: Greg Brockman Ilya Sutskever Sam Altman Adam D'Angelo Tasha McCauley Helen TonerIndividual investors: Reid Hoffman, LinkedIn co-founder Peter Thiel, PayPal co-founder Jessica Livingston, a founding partner of Y CombinatorCorporate investors: Microsoft Khosla Ventures Infosys",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Motives,"Motives Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable ""intelligence explosion"" could lead to human extinction. Co-founder Musk characterizes AI as humanity's ""biggest existential threat"". Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence. OpenAI states that ""it's hard to fathom how much human-level AI could benefit society,"" and that it is equally difficult to comprehend ""how much it could damage society if built or used incorrectly"". Research on safety cannot safely be postponed: ""because of AI's surprising history, it's hard to predict when human-level AI might come within reach."" OpenAI states that AI ""should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."" Co-chair Sam Altman expects the decades-long project to surpass human intelligence. Vishal Sikka, the former CEO of Infosys, stated that an ""openness"" where the endeavor would ""produce results generally in the greater interest of humanity"" was a fundamental requirement for his support, and that OpenAI ""aligns very nicely with our long-held values"" and their ""endeavor to do purposeful work"". Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook which own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Strategy,"Strategy Musk posed the question: ""What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity."" Musk acknowledged that ""there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about""; nonetheless, the best defense is ""to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."" Musk and Altman's counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence. Philosopher Nick Bostrom is skeptical of Musk's approach: ""If you have a button that could do bad things to the world, you don't want to give it to everyone."" During a 2016 conversation about technological singularity, Altman said that ""we don't plan to release all of our source code"" and mentioned a plan to ""allow wide swaths of the world to elect representatives to a new governance board"". Greg Brockman stated ""Our goal right now... is to do the best thing there is to do. It's a little vague."" Conversely, OpenAI's initial decision to withhold GPT-2 due to a wish to ""err on the side of caution"" in the presence of potential misuse has been criticized by advocates of openness. Delip Rao, an expert in text generation, stated ""I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous."" Other critics argued that open publication is necessary to replicate the research and to be able to come up with countermeasures. More recently, in 2022, OpenAI published its approach to the alignment problem. They expect that aligning AGI to human values is likely harder than aligning current AI systems: ""Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together"". They explore how to better use human feedback to train AI systems. They also consider using AI to incrementally automate alignment research. OpenAI claims that it's developed a way to use GPT-4, its flagship generative AI model, for content moderation ��� lightening the burden on human teams.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Products_and_applications,"Products_and_applications As of 2021[update], OpenAI's research focuses on reinforcement learning (RL). OpenAI is viewed as an important competitor to DeepMind. Announced in 2016, Gym aims to provide an easily implemented general-intelligence benchmark over a wide variety of environments���akin to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research. It hopes to standardize the way in which environments are defined in AI research publications, so that published research becomes more easily reproducible. The project claims to provide the user with a simple interface. As of June 2017, Gym can only be used with Python. As of September 2017, the Gym documentation site was not maintained, and active work focused instead on its GitHub page. [non-primary source needed] Released in 2017, RoboSumo is a virtual world where humanoid metalearning robot agents initially lack knowledge of how to even walk, but are given the goals of learning to move and pushing the opposing agent out of the ring. Through this adversarial learning process, the agents learn how to adapt to changing conditions; when an agent is then removed from this virtual environment and placed in a new virtual environment with high winds, the agent braces to remain upright, suggesting it had learned how to balance in a generalized way. OpenAI's Igor Mordatch argues that competition between agents can create an intelligence ""arms race"" that can increase an agent's ability to function, even outside the context of the competition. OpenAI Five is the name of a team of five OpenAI-curated bots that are used in the competitive five-on-five video game Dota 2, who learn to play against human players at a high skill level entirely through trial-and-error algorithms. Before becoming a team of five, the first public demonstration occurred at The International 2017, the annual premiere championship tournament for the game, where Dendi, a professional Ukrainian player, lost against a bot in a live one-on-one matchup. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step in the direction of creating software that can handle complex tasks like a surgeon. The system uses a form of reinforcement learning, as the bots learn over time by playing against themselves hundreds of times a day for months, and are rewarded for actions such as killing an enemy and taking map objectives. By June 2018, the ability of the bots expanded to play together as a full team of five, and they were able to defeat teams of amateur and semi-professional players. At The International 2018, OpenAI Five played in two exhibition matches against professional players, but ended up losing both games. In April 2019, OpenAI Five defeated OG, the reigning world champions of the game at the time, 2:0 in a live exhibition match in San Francisco.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Products_and_applications,"Products_and_applications The bots' final public appearance came later that month, where they played in 42,729 total games in a four-day open online competition, winning 99.4% of those games. OpenAI Five's mechanisms in Dota 2's bot player shows the challenges of AI systems in multiplayer online battle arena (MOBA) games and how OpenAI Five has demonstrated the use of deep reinforcement learning (DRL) agents to achieve superhuman competence in Dota 2 matches. Released in 2018, Gym Retro is a platform for RL research on video games. Gym Retro is used to research RL algorithms and study generalization. Prior research in RL has focused chiefly on optimizing agents to solve single tasks. Gym Retro gives the ability to generalize between games with similar concepts but different appearances. In 2018, OpenAI launched the Debate Game, which teaches machines to debate toy problems in front of a human judge. The purpose is to research whether such an approach may assist in auditing AI decisions and in developing explainable AI. Developed in 2018, Dactyl uses machine learning to train a Shadow Hand, a human-like robot hand, to manipulate physical objects. It learns entirely in simulation using the same RL algorithms and training code as OpenAI Five. OpenAI tackled the object orientation problem by using domain randomization, a simulation approach which exposes the learner to a variety of experiences rather than trying to fit to reality. The set-up for Dactyl, aside from having motion tracking cameras, also has RGB cameras to allow the robot to manipulate an arbitrary object by seeing it. In 2018, OpenAI showed that the system was able to manipulate a cube and an octagonal prism. In 2019, OpenAI demonstrated that Dactyl could solve a Rubik's Cube. The robot was able to solve the puzzle 60% of the time. Objects like the Rubik's Cube introduce complex physics that is harder to model. OpenAI solved this by improving the robustness of Dactyl to perturbations; they employed a technique called Automatic Domain Randomization (ADR), a simulation approach where progressively more difficult environments are endlessly generated. ADR differs from manual domain randomization by not needing a human to specify randomization ranges. In June 2020, OpenAI announced a multi-purpose API which it said was ""for accessing new AI models developed by OpenAI"" to let developers call on it for ""any English language AI task"". The original paper on generative pre-training of a transformer-based language model was written by Alec Radford and his colleagues, and published in preprint on OpenAI's website on June 11, 2018. It showed how a generative model of language is able to acquire world knowledge and process long-range dependencies by pre-training on a diverse corpus with long stretches of contiguous text. Generative Pre-trained Transformer 2 (""GPT-2"") is an unsupervised transformer language model and the successor to OpenAI's original GPT model (""GPT-1""). GPT-2 was first announced in February 2019, with only limited demonstrative versions initially released to the public.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Products_and_applications,"Products_and_applications The full version of GPT-2 was not immediately released out of concern over potential misuse, including applications for writing fake news. Some experts expressed skepticism that GPT-2 posed a significant threat. The Allen Institute for Artificial Intelligence responded to GPT-2 with a tool to detect ""neural fake news"". Other researchers, such as Jeremy Howard, warned of ""the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter"". In November 2019, OpenAI released the complete version of the GPT-2 language model. Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models. GPT-2's authors argue unsupervised language models to be general-purpose learners, illustrated by GPT-2 achieving state-of-the-art accuracy and perplexity on 7 of 8 zero-shot tasks (i. e. the model was not further trained on any task-specific input-output examples). The corpus it was trained on, called WebText, contains slightly over 8 million documents for a total of 40 gigabytes of text from URLs shared in Reddit submissions with at least 3 upvotes. It avoids certain issues encoding vocabulary with word tokens by using byte pair encoding. This permits representing any string of characters by encoding both individual characters and multiple-character tokens. First described in May 2020, Generative Pre-trained Transformer 3 (GPT-3) is an unsupervised transformer language model and the successor to GPT-2. OpenAI stated that full version of GPT-3 contains 175 billion parameters, two orders of magnitude larger than the 1.5 billion parameters in the full version of GPT-2 (although GPT-3 models with as few as 125 million parameters were also trained). OpenAI stated that GPT-3 succeeds at certain ""meta-learning"" tasks. It can generalize the purpose of a single input-output pair. The paper gives an example of translation and cross-linguistic transfer learning between English and Romanian, and between English and German. GPT-3 dramatically improved benchmark results over GPT-2. OpenAI cautioned that such scaling up of language models could be approaching or encountering the fundamental capability limitations of predictive language models. Pre-training GPT-3 required several thousand petaflop/s-days of compute, compared to tens of petaflop/s-days for the full GPT-2 model. Like that of its predecessor, GPT-3's fully trained model was not immediately released to the public on the grounds of possible abuse, though OpenAI planned to allow access through a paid cloud API after a two-month free private beta that began in June 2020. On September 23, 2020, GPT-3 was licensed exclusively to Microsoft. Announced in mid-2021, Codex is a descendant of GPT-3 that has additionally been trained on code from 54 million GitHub repositories, and is the AI powering the code autocompletion tool GitHub Copilot. In August 2021, an API was released in private beta. According to OpenAI, the model is able to create working code in over a dozen programming languages, most effectively in Python. Several issues with glitches, design flaws, and security vulnerabilities have been brought up.",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Products_and_applications,"Products_and_applications GitHub Copilot has been accused of emitting copyrighted code, with no author attribution or license. OpenAI announced that they are going to discontinue support for Codex API starting from March 23, 2023. Released in 2022, Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification. On March 14, 2023, OpenAI announced the release of Generative Pre-trained Transformer 4 (GPT-4), capable of accepting text or image inputs. OpenAI announced the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%. GPT-4 can also read, analyze or generate up to 25,000 words of text, and write code in all major programming languages. Released in 2019, MuseNet is a deep neural net trained to predict subsequent musical notes in MIDI music files. It can generate songs with ten different instruments in fifteen different styles. According to The Verge, a song generated by MuseNet tends to start reasonably but then fall into chaos the longer it plays. In pop culture, initial applications of this tool were utilized as early as 2020 for the internet psychological thriller Ben Drowned to create music for the titular character. Released in 2020, Jukebox is an open-sourced algorithm to generate music with vocals. After training on 1.2 million samples, the system accepts a genre, artist, and a snippet of lyrics and outputs song samples. OpenAI stated the songs ""show local musical coherence [and] follow traditional chord patterns"" but acknowledged that the songs lack ""familiar larger musical structures such as choruses that repeat"" and that ""there is a significant gap"" between Jukebox and human-generated music. The Verge stated ""It's technologically impressive, even if the results sound like mushy versions of songs that might feel familiar"", while Business Insider stated ""surprisingly, some of the resulting songs are catchy and sound legitimate"". Released in 2020, Microscope is a collection of visualizations of every significant layer and neuron of eight different neural network models which are often studied in interpretability. Microscope was created to analyze the features that form inside these neural networks easily. The models included are AlexNet, VGG 19, different versions of Inception, and different versions of CLIP Resnet. Revealed in 2021, DALL-E is a Transformer model that creates images from textual descriptions. Also revealed in 2021, CLIP does the opposite: it creates a description for a given image. DALL-E uses a 12-billion-parameter version of GPT-3 to interpret natural language inputs (such as ""a green leather purse shaped like a pentagon"" or ""an isometric view of a sad capybara"") and generate corresponding images. It can create images of realistic objects (""a stained-glass window with an image of a blue strawberry"") as well as objects that do not exist in reality (""a cube with the texture of a porcupine"").",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Products_and_applications,"Products_and_applications As of March 2021, no API or code is available. In April 2022, OpenAI announced DALL-E 2, an updated version of the model with more realistic results. In December 2022, OpenAI published on GitHub software for Point-E, a new rudimentary system for converting a text description into a 3-dimensional model. Launched in November 2022, ChatGPT is an artificial intelligence tool built on top of GPT-3 that provides a conversational interface that allows users to ask questions in natural language. The system then responds with an answer within seconds. ChatGPT reached 1 million users 5 days after its launch. ChatGPT Plus is a $20/month subscription service that allows users to access ChatGPT during peak hours, provides faster response times, selection of either the GPT-3.5 or GPT-4 model, and gives users early access to new features. In May 2023, OpenAI launched a user interface for ChatGPT for the App Store and later in July 2023 for the Play store. The app supports chat history syncing and voice input (using Whisper, OpenAI's speech recognition model).",wikipedia
+OpenAI API,https://en.wikipedia.org/wiki/OpenAI_API#Controversies,"Controversies OpenAI has been criticized for outsourcing the annotation of data sets including toxic content to Sama, a company based in San Francisco but employing workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to filter out toxic content, notably from ChatGPT's training data and outputs. But these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 was also covering other implicit costs, among which were infrastructure expenses, quality assurance and management. The company was also criticized for disclosing particularly few technical details about products like GPT-4, which goes against its initial commitment for openness and makes it harder for independent researchers to replicate its work and to develop safeguards. OpenAI justified this strategic turn by competitiveness and safety reasons. OpenAI's chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models is increasingly risky, expecting that the safety reasons for not open-sourcing the most potent AI models will be ""obvious"" in a few years. OpenAI has been sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad. The New York Times has also envisaged a lawsuit. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. OpenAI has been sued for violating EU General Data Protection Regulations. In April 2023, the EU formed the European Data Protection Board (EDPB) to improve regulatory oversight.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model,"Large language model A large language model (LLM) is a type of language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by using massive amounts of data to learn billions of parameters during training and consuming large computational resources during their training and operation. LLMs are artificial neural networks (mainly Transformers ) and are (pre-)trained using self-supervised learning and semi-supervised learning. As autoregressive language models, they work by taking an input text and repeatedly predicting the next token or word. Up to 2020, fine tuning was the only way a model could be adapted to be able to accomplish specific tasks. Larger sized models, such as GPT-3, however, can be prompt-engineered to achieve similar results. They are thought to acquire embodied knowledge about syntax, semantics and ""ontology"" inherent in human language corpora, but also inaccuracies and biases present in the corpora. Notable examples include OpenAI's GPT models (e.g., GPT-3.5 and GPT-4, used in ChatGPT), Google's PaLM (used in Bard), and Meta's LLaMa, as well as BLOOM, Ernie 3.0 Titan, and Anthropic's Claude 2.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Dataset_preprocessing,"Dataset_preprocessing Using a modification of byte-pair encoding, in the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. initial set of uni-grams). Successively the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-gram repeatedly until a vocabulary of prescribed size is obtained (in case of GPT-3, the size is 50257). Token vocabulary consists of integers, spanning from zero up to the size of the token vocabulary. New words can always be interpreted as combinations of the tokens and the initial-set uni-grams. A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. An average word in another language encoded by such an English-optimized tokenizer is however split into suboptimal amount of tokens. tokenizer: texts -> series of numerical ""tokens"" may be split into: Probabilistic tokenization also compresses the datasets, which is the reason for using the byte pair encoding algorithm as a tokenizer. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be ""padded"" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset. Removal of toxic passages from the dataset, discarding low-quality data, and de-duplication are examples of dataset cleaning. Resulting, cleaned (high-quality) datasets contain up to 17 trillion words in 2022, raising from 985 million words, used in 2018 for GPT-1, and 3.3 billion words, used for BERT. The future data is, however, expected to be increasingly ""contaminated"" by LLM-generated contents themselves.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Training_and_architecture_details,"Training_and_architecture_details Reinforcement learning from human feedback (RLHF) through algorithms, such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences. Using ""self-instruct"" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction ""Write an essay about the main themes represented in Hamlet,"" an initial naive completion might be 'If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay,"" based on the frequency of this textual sequence in the corpus. The largest LLM may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 billion parameters. Most results previously achievable only by (costly) fine-tuning, can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window). In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates ""soft"" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own ""relevance"" for calculating its own soft weights. For example, the small (i. e. 117M parameter sized) GPT-2 model, has had twelve attention heads and a context window of only 1k token. In its medium version it has 345M parameters and contains 24 layers, each with 12 attention heads. For the training with gradient descent a batch size of 512 was utilized. The largest models can have a context window sized up to 32k (for example, GPT-4; while GPT-3.5 has a context window sized from 4k to 16k, and legacy GPT-3 has had 2k sized context window). Length of a conversation that the model can take into account when generating its next answer is limited by the size of a context window, as well. If the length of a conversation, for example with Chat-GPT, is longer than its context window, only the parts inside the context window are taken into account when generating the next answer, or the model needs to apply some algorithm to summarize the too distant parts of conversation. The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them are a matter of experimentation and domain-specific considerations. A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset. It can be either autoregressive (i. e. predicting how the segment continues, the way GPTs do it): for example given a segment ""I like to eat"", the model predicts ""ice cream"", or ""masked"" (i. e.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Training_and_architecture_details,"Training_and_architecture_details filling in the parts missing from the segment, the way ""BERT"" does it): for example, given a segment ""I like to [__] [__] cream"", the model predicts that ""eat"" and ""ice"" are missing. Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, regularization loss is also used to stabilize training. However regularization loss is usually not used during testing and evaluation.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Training_cost,"Training_cost Advances in software and hardware have reduced the cost substantially since 2020, such that in 2023 training of a 12-billion-parameter LLM computational cost is 72,300 A100-GPU-hours, while in 2020 the cost of training a 1.5-billion-parameter LLM (which was two orders of magnitude smaller than the state of the art in 2020) was between $80 thousand and $1.6 million. Since 2020, large sums were invested in increasingly large models. For example, training of the GPT-2 (i.e. a 1.5-billion-parameters model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameters model) in 2022 cost $8 million. For Transformer-based LLM, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Tool_use,"Tool_use There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calculation in its training corpus. In such cases, the LLM needs to resort to running program code that calculates the result, which can then be included in its response. Another example is 'What is the time now? It is ', where a separate program interpreter would need to execute a code to get system time on the computer, so LLM could include it in its reply. This basic strategy can be sophisticated with multiple attempts of generated programs, and other sampling strategies. Generally, in order to get an LLM to use tools, one must finetune it for tool-use. If the number of tools is finite, then finetuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be finetuned to be able to read API documentation and call API correctly. A simpler form of tool use is Retrieval Augmented Generation: augment an LLM with document retrieval, sometimes using a vector database. Given a query, a document retriever is called to retrieve the most relevant (usually measured by first encoding the query and the documents into vectors, then finding the documents with vectors closest in Euclidean norm to the query vector). The LLM then generates an output based on both the query and the retrieved documents.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Agency,"Agency An LLM is a language model, which is not an agent as it has no goal, but it can be used as a component of an intelligent agent. Researchers have described several methods for such integrations. The ReAct (""Reason+Act"") method constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to ""think out loud"". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment. The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment. In the DEPS (""Describe, Explain, Plan and Select"") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and environmental feedback it receives. The Reflexion method constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up ""lessons learned"", which would help it perform better at a subsequent episode. These ""lessons learned"" are given to the agent in the subsequent episodes. Monte Carlo tree search can use an LLM as rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as world model. For open-ended exploration, an LLM can be used to score observations for their ""interestingness"", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent. Alternatively, it can propose increasingly difficult tasks for curriculum learning. Instead of outputting individual actions, an LLM planner can also construct ""skills"", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning. LLM-powered agents can keep a long-term memory of its previous contexts, and the memory can be retrieved in the same way as Retrieval Augmented Generation. Multiple such agents can interact socially.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Compression,"Compression Typically, LLM are trained with full- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics. Post-training quantization aims to decrease the space requirement by lowering precision of the parameters of a trained model, while preserving most of its performance. The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be done by applying different precisions to different parameters, with higher precision for particularly important parameters (""outlier weights""). While quantized models are typically frozen, and only pre-quantized models are finetuned, quantized models can still be finetuned.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Multimodality,"Multimodality Multimodality means ""having several modalities"", and a ""modality"" means a type of input, such as video, image, audio, text, proprioception, etc. There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label, visual question answering for image-text to text, and speech recognition for speech to text. A review article of multimodal LLM is. A common method to create multimodal models out of an LLM is to ""tokenize"" the output of a trained encoder. Concretely, one can construct a LLM that can understand images as follows: take a trained LLM, and take a trained image encoder E {\displaystyle E} . Make a small multilayered perceptron f {\displaystyle f} , so that for any image y {\displaystyle y} , the post-processed vector f ( E ( y ) ) {\displaystyle f(E(y))} has the same dimensions as an encoded token. That is an ""image token"". Then, one can interleave text tokens and image tokens. The compound model is then finetuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability. Flamingo demonstrated the effectiveness of the tokenization method, finetuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch. Google PaLM model was finetuned into a multimodal model PaLM-E using the tokenization method, and applied to robotic control. LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs, and video inputs. GPT-4 can use both text and image as inputs, while Google Gemini is expected to be multimodal.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Properties,"Properties The following four hyper-parameters characterize a LLM: cost of (pre-)training ( C {\displaystyle C} ), size of the artificial neural network itself, such as number of parameters N {\displaystyle N} (i. e. amount of neurons in its layers, amount of weights between them and biases), size of its (pre-)training dataset (i. e. number of tokens in corpus, D {\displaystyle D} ), performance after (pre-)training. They are related by simple statistical laws, called ""scaling laws"". One particular scaling law (""Chinchilla scaling"") for LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that: C {\displaystyle C} is the cost of training the model, in FLOPs. N {\displaystyle N} is the number of parameters in the model. D {\displaystyle D} is the number of tokens in the training set. L {\displaystyle L} is the average negative log-likelihood loss per token (nats/token), achieved by the trained LLM on the test dataset. and the statistical hyper-parameters are C 0 = 6 {\displaystyle C_{0}=6} , meaning that it costs 6 FLOPs per parameter to train on one token. Note that training cost is much higher than inference cost, where it costs 1 to 2 FLOPs per parameter to infer on one token. �� = 0.34 , �� = 0.28 , A = 406.4 , B = 410.7 , L 0 = 1.69 {\displaystyle \alpha =0.34,\beta =0.28,A=406.4,B=410.7,L_{0}=1.69} When one subtracts out from the y-axis the best performance that can be achieved even with infinite scaling of the x-axis quantity, large models' performance, measured on various tasks, seems to be a linear extrapolation of other (smaller-sized and medium-sized) models' performance on a log-log plot. However, sometimes the line's slope transitions from one slope to another at point(s) referred to as break(s) in downstream scaling laws, appearing as a series of linear segments connected by arcs; it seems that larger models acquire ""emergent abilities"" at this point(s). These abilities are discovered rather than programmed-in or designed, in some cases only after the LLM has been publicly deployed. The most intriguing among emergent abilities is in-context learning from example demonstrations. In-context learning is involved in tasks, such as: reported arithmetics, decoding the International Phonetic Alphabet, unscrambling a word's letters, disambiguate word in context, converting spatial words, cardinal directions (for example, replying ""northeast"" upon [0, 0, 1; 0, 0, 0; 0, 0, 0]), color terms represented in text. chain-of-thought prompting: Model outputs are improved by chain-of-thought prompting only when model size exceeds 62B. Smaller models perform better when prompted to answer immediately, without chain of thought. identifying offensive content in paragraphs of Hinglish (a combination of Hindi and English), and generating a similar English equivalent of Kiswahili proverbs. Schaeffer et. al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Properties,"Properties Let x {\displaystyle x} be the number of parameter count, and y {\displaystyle y} be the performance of the model. When y = average P r ( correct token ) {\displaystyle y={\text{average }}Pr({\text{correct token}})} , then ( log ��� x , y ) {\displaystyle (\log x,y)} is an exponential curve (before it hits the plateau at one), which looks like emergence. When y = average log ��� ( P r ( correct token ) ) {\displaystyle y={\text{average }}\log(Pr({\text{correct token}}))} , then the ( log ��� x , y ) {\displaystyle (\log x,y)} plot is a straight line (before it hits the plateau at zero), which does not look like emergence. When y = average P r ( the most likely token is correct ) {\displaystyle y={\text{average }}Pr({\text{the most likely token is correct}})} , then ( log ��� x , y ) {\displaystyle (\log x,y)} is a step-function, which looks like emergence.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Interpretation,"Interpretation Large language models by themselves are ""black boxes"", and it is not clear how they can perform linguistic tasks. There are several methods for understanding how LLM work. Mechanistic interpretability aims to reverse-engineer LLM by discovering symbolic algorithms that approximate the inference performed by LLM. One example is Othello-GPT, where a small Transformer is trained to predict legal Othello moves. It is found that there is a linear representation of Othello board, and modifying the representation changes the predicted legal Othello moves in the correct way. In another example, a small Transformer is trained on Karel programs. Similar to the Othello-GPT example, there is a linear representation of Karel program semantics, and modifying the representation changes output in the correct way. The model also generates correct programs that are on average shorter than those in the training set. In another example, the authors trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used discrete Fourier transform. NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs ""could (ever) understand natural language in some nontrivial sense"". Proponents of ""LLM understanding"" believe that some LLM abilities, such as mathematical reasoning, imply an ability to ""understand"" certain concepts. A Microsoft team argued in 2023 that GPT-4 ""can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more"" and that GPT-4 ""could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system"": ""Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent? "" Some researchers characterize LLMs as ""alien intelligence"". For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien ""Shoggoths"", and believes that RLHF tuning creates a ""smiling facade"" obscuring the inner workings of the LLM: ""If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding. "" In contrast, some proponents of the ""LLMs lack understanding"" school believe that existing LLMs are ""simply remixing and recombining existing writing"", or point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability. For example, GPT-4 has natural deficits in planning and in real-time learning. Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed ""hallucination"". Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input. Neuroscientist Terrence Sejnowski has argued that ""The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate"".",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Interpretation,"Interpretation The matter of LLM's exhibiting intelligence or understanding has foundations in the study of language as a model of Cognition in the field of Cognitive linguistics. The American Linguist George Lakoff presented Neural Theory of Language (NTL) as a computational basis for using language as a model of learning tasks and understanding. In his 2014 book titled The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans maps out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Evaluation,"Evaluation The most commonly used measure of a language model's performance is its perplexity on a given text corpus. Perplexity is a measure of how well a model is able to predict the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. Mathematically, perplexity is defined as the exponential of the average negative log likelihood per token:Because language models may overfit to their training data, models are usually evaluated by their perplexity on a test set of unseen data. This presents particular challenges for the evaluation of large language models. As they are trained on increasingly large corpora of text largely scraped from the web, it becomes increasingly likely that models' training data inadvertently includes portions of any given test set. A large number of testing datasets and benchmarks have also been developed to evaluate the capabilities of language models on more specific downstream tasks. Tests may be designed to evaluate a variety of capabilities, including general knowledge, commonsense reasoning, and mathematical problem-solving. One broad category of evaluation dataset is question answering datasets, consisting of pairs of questions and correct answers, for example, (""Have the San Jose Sharks won the Stanley Cup? "", ""No""). A question answering task is considered ""open book"" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be adjoined with some text which includes the sentence ""The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016. ""). Otherwise, the task is considered ""closed book"", and the model must draw on knowledge retained during training. Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD. Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: ""Alice was friends with Bob. Alice went to visit her friend, ____"". Some composite benchmarks have also been developed which combine a diversity of different evaluation datasets and tasks. Examples include GLUE, SuperGLUE, MMLU, BIG-bench, and HELM. It was previously standard to report results on a heldout portion of an evaluation dataset after doing supervised fine-tuning on the remainder. It is now more common to evaluate a pre-trained model directly through prompting techniques, though researchers vary in the details of how they formulate prompts for particular tasks, particularly with respect to how many examples of solved tasks are adjoined to the prompt (i. e. the value of n in n-shot prompting). Because of the rapid pace of improvement of large language models, evaluation benchmarks have suffered from short lifespans, with state of the art models quickly ""saturating"" existing benchmarks, exceeding the performance of human annotators, leading to efforts to replace or augment the benchmark with more challenging tasks.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Evaluation,"Evaluation In addition, there are cases of ""shortcut learning"" wherein AIs sometimes ""cheat"" on multiple-choice tests by using statistical correlations in superficial test question wording in order to guess the correct responses, without necessarily understanding the actual question being asked. Some datasets have been constructed adversarially, focusing on particular problems on which extant language models seem to have unusually poor performance compared to humans. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions which language models are susceptible to answering incorrectly by mimicking falsehoods to which they were repeatedly exposed during training. For example, an LLM may answer ""No"" to the question ""Can you teach an old dog new tricks? "" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true. Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model and filtering with a set of classifiers. The resulting problems are trivial for humans but at the time the datasets were created state of the art language models had poor accuracy on them. For example: We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man. . . a) demonstrates how to increase efficient exercise work by running up and down balls. b) moves all his arms and legs and builds up a lot of muscle. c) then plays the ball and we see a graphics and hedge trimming demonstration. d) performs sits ups while on the ball and talking. BERT selects b) as the most likely completion, though the correct answer is d).",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Wider_impact,"Wider_impact In 2023, Nature Biomedical Engineering wrote that ""it is no longer possible to accurately distinguish"" human-written text from text created by large language models, and that ""It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."" Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose to automation 300 million jobs globally. Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse. For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens.",wikipedia
+Large language model,https://en.wikipedia.org/wiki/Large_language_model#Further_reading,"Further_reading Jurafsky, Dan, Martin, James. H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 3rd Edition draft, 2023. Phuong, Mary; Hutter, Marcus (2022). ""Formal Algorithms for Transformers"". arXiv:2207.09238 [cs.LG]. Eloundou, Tyna; Manning, Sam; Mishkin, Pamela; Rock, Daniel (2023). ""GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models"". arXiv:2303.10130 [econ.GN]. Eldan, Ronen; Li, Yuanzhi (2023). ""TinyStories: How Small Can Language Models Be and Still Speak Coherent English?"". arXiv:2305.07759 [cs.CL]. Frank, Michael C. (27 June 2023). ""Baby steps in evaluating the capacities of large language models"". Nature Reviews Psychology. 2 (8): 451���452. doi:10.1038/s44159-023-00211-x. ISSN 2731-0574. S2CID 259713140. Retrieved 2 July 2023. Zhao, Wayne Xin; et al. (2023). ""A Survey of Large Language Models"". arXiv:2303.18223 [cs.CL]. Kaddour, Jean; et al. (2023). ""Challenges and Applications of Large Language Models"". arXiv:2307.10169 [cs.CL].",wikipedia
+Auto-GPT,https://en.wikipedia.org/wiki/Auto-GPT,"Auto-GPT Auto-GPT is an ""AI agent"" that, given a goal in natural language, will attempt to achieve it by breaking it into sub-tasks and using the internet and other tools in an automatic loop. It uses OpenAI's GPT-4 or GPT-3.5 APIs, and is among the first examples of an application using GPT-4 to perform autonomous tasks.",wikipedia
+Auto-GPT,https://en.wikipedia.org/wiki/Auto-GPT#Details,"Details Unlike interactive systems such as ChatGPT, which require manual commands for every task, Auto-GPT assigns itself new objectives to work on with the aim of reaching a greater goal, without a mandatory need for human input. It is able to execute responses to prompts to accomplish a goal task, and in doing so will create and revise its own prompts to recursive instances in response to new information. It manages short-term and long-term memory by writing to and reading from databases and files; manages LLM input length restrictions using summarization; can perform internet-based actions such as web searching, web form, and API interactions unattended; and includes text-to-speech for voice output. Observers note Auto-GPT's ability to write, debug, test, and edit code, even suggesting this ability may extend to Auto-GPT's own source code enabling self-improvement. However, as the underlying GPT models it uses are proprietary, Auto-GPT cannot modify them, and it does not ordinarily have access to its own base system code.",wikipedia
+Auto-GPT,https://en.wikipedia.org/wiki/Auto-GPT#Background,"Background On March 14, 2023, OpenAI released the large language model GPT-4. Observers were impressed by the model's substantially improved performance across a wide range of tasks. As a text prediction model, GPT-4 itself has no ability to perform actions autonomously, but during pre-release safety testing red-team researchers found GPT-4 could be enabled to perform actions in the real world like convincing a TaskRabbit worker to solve a CAPTCHA challenge for it. A team of Microsoft researchers argued that, given GPT-4's breadth of abilities at levels approaching those of humans, GPT-4 ""could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."" The researchers emphasized their experiments also found significant limitations in the system. Auto-GPT was released March 30, 2023 by Toran Bruce Richards, the founder of video game company Significant Gravitas Ltd. It became the top trending repository on GitHub shortly after its release, and has repeatedly trended on Twitter since.",wikipedia
+Auto-GPT,https://en.wikipedia.org/wiki/Auto-GPT#Issues,"Issues Whether Auto-GPT will find practical applications is uncertain. In addition to being plagued by confabulatory ""hallucinations"" of the underlying large language models upon which it is based, Auto-GPT often also has trouble staying on task, both problems which developers continue to try to address. After successfully completing a task, it usually does not remember how to perform it for later use, and when it does, for example when it writes a program, it often forgets to use the program later. Auto-GPT struggles to effectively decompose tasks and has trouble understanding problem contexts and how goals overlap. Auto-GPT was used to create ChaosGPT, a system given the goal of destroying humanity. The project was met with mixed amusement and concern.",wikipedia
+Auto-GPT,https://en.wikipedia.org/wiki/Auto-GPT#Reception,"Reception In September 2023, Will Knight wrote for Wired that, when given a test task of finding a public figure's email address, ""Auto-GPT failed me, but it got close enough to illustrate a coming shift in how we use computers and the web."" Clara Shih, Salesforce Service Cloud CEO commented that ""Auto-GPT illustrates the power and unknown risks of generative AI,"" and that due to usage risks, ""For enterprises, it is especially important to include a human in the loop approach"" when using such technologies. In April 2023, Avram Piltch wrote for Tom's Hardware that Auto-GPT 'might be too autonomous to be useful,' as it did not ask questions to clarify requirements or allow corrective interventions by users. Piltch nonetheless noted that such tools have ""a ton of potential"" and should improve with better language models and further development.",wikipedia
+BERT (language model),https://en.wikipedia.org/wiki/BERT_(language_model),"BERT (language model) Bidirectional Encoder Representations from Transformers (BERT) is a family of language models introduced in 2018 by researchers at Google. A 2020 literature survey concluded that ""in a little over a year, BERT has become a ubiquitous baseline in Natural Language Processing (NLP) experiments counting over 150 research publications analyzing and improving the model."" BERT was originally implemented in the English language at two model sizes: (1) BERTBASE: 12 encoders with 12 bidirectional self-attention heads totaling 110 million parameters, and (2) BERTLARGE: 24 encoders with 16 bidirectional self-attention heads totaling 340 million parameters. Both models were pre-trained on the Toronto BookCorpus (800M words) and English Wikipedia (2,500M words).",wikipedia
+BERT (language model),https://en.wikipedia.org/wiki/BERT_(language_model)#Design,"Design BERT is an ""encoder-only"" transformer architecture. On a high level, BERT consists of three modules: embedding. This module converts an array of one-hot encoded tokens into an array of vectors representing the tokens. a stack of encoders. These encoders are the Transformer encoders. They perform transformations over the array of representation vectors. un-embedding. This module converts the final representation vectors into one-hot encoded tokens again. The un-embedding module is necessary for pretraining, but it is often unnecessary for downstream tasks. Instead, one would take the representation vectors output at the end of the stack of encoders, and use those as a vector representation of the text input, and train a smaller model on top of that. BERT uses WordPiece to convert each English word into an integer code. Its vocabulary has size 30,000. Any token not appearing in its vocabulary is replaced by [UNK] for ""unknown"". BERT was pre-trained simultaneously on two tasks: language modeling: 15% of tokens were selected for prediction, and the training objective was to predict the selected token given its context. The selected token is replaced with a [MASK] token with probability 80%, replaced with a random word token with probability 10%, not replaced with probability 10%. For example, the sentence ""my dog is cute"" may have the 4-th token selected for prediction. The model would have input text ""my dog is [MASK]"" with probability 80%, ""my dog is happy"" with probability 10%, ""my dog is cute"" with probability 10%. After processing the input text, the model's 4-th output vector is passed to a separate neural network, which outputs a probability distribution over its 30,000-large vocabulary. next sentence prediction: Given two spans of text, the model predicts if these two spans appeared sequentially in the training corpus, outputting either [IsNext] or [NotNext]. The first span starts with a special token [CLS] (for ""classify""). The two spans are separated by a special token [SEP] (for ""separate""). After processing the two spans, the 1-st output vector (the vector coding for [CLS]) is passed to a separate neural network for the binary classification into [IsNext] and [NotNext]. For example, given ""[CLS] my dog is cute [SEP] he likes playing"" the model should output token [IsNext]. Given ""[CLS] my dog is cute [SEP] how do magnets work"" the model should output token [NotNext]. As a result of this training process, BERT learns latent representations of words and sentences in context. After pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks such as NLP tasks (language inference, text classification) and sequence-to-sequence based language generation tasks (question-answering, conversational response generation). The pre-training stage is significantly more computationally expensive than fine-tuning. This section describes BERTBASE. The other one, BERTLARGE, is similar, just larger. The lowest layer is the embedding layer, which contains three components: word_embeddings, position_embeddings, token_type_embeddings. word_embeddings takes in a one-hot vector of the input token. The one-hot vector input has dimension 30,000, because BERT has a vocabulary size that large.",wikipedia
+BERT (language model),https://en.wikipedia.org/wiki/BERT_(language_model)#Design,"Design position_embeddings performs absolute position embedding. It is like word_embeddings, but on a vocabulary consisting of just the time-stamps 0 to 511, since BERT has a context window of 512. token_type_embeddings is like word_embeddings, but on a vocabulary consisting of just 0 and 1. The only type-1 tokens are those that appear after the [SEP]. All other tokens are type-0. The three outputs are added, then pushed through a LayerNorm (layer normalization), obtaining an array of representation vectors, each having 768 dimensions. After this, the representation vectors move through 12 Transformer encoders, then they are un-embedded by an affine-Add & LayerNorm-linear.",wikipedia
+BERT (language model),https://en.wikipedia.org/wiki/BERT_(language_model)#Analysis,"Analysis The reasons for BERT's state-of-the-art performance on these natural language understanding tasks are not yet well understood. Current research has focused on investigating the relationship behind BERT's output as a result of carefully chosen input sequences, analysis of internal vector representations through probing classifiers, and the relationships represented by attention weights. The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained. This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. For example, the word fine can have two different meanings depending on the context (I feel fine today, She has fine blond hair). BERT considers the words surrounding the target word fine from the left and right side. However it comes at a cost: due to encoder-only architecture lacking a decoder, BERT can't be prompted and can't generate text, while bidirectional models in general do not work effectively without the right side,[clarification needed] thus being difficult to prompt, with even short text generation requiring sophisticated computationally expensive techniques. In contrast to deep learning neural networks which require very large amounts of data, BERT has already been pre-trained which means that it has learnt the representations of the words and sentences as well as the underlying semantic relations that they are connected with. BERT can then be fine-tuned on smaller datasets for specific tasks such as sentiment classification. The pre-trained models are chosen according to the content of the given dataset one uses but also the goal of the task. For example, if the task is a sentiment classification task on financial data, a pre-trained model for the analysis of sentiment of financial text should be chosen. The weights of the original pre-trained models were released on GitHub.",wikipedia
+BLOOM (language model),https://en.wikipedia.org/wiki/BLOOM_(language_model),"BLOOM (language model) BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based large language model. It was created by over 1,000 AI researchers to provide a free large language model for large-scale public access. Trained on around 366 billion tokens over March through July 2022, it is considered an alternative to OpenAI's GPT-3 with its 176 billion parameters. BLOOM uses a decoder-only transformer model architecture modified from Megatron-LM GPT-2. The BLOOM project was started by a co-founder of Hugging Face. Six main groups of people were involved, including HuggingFace's BigScience team, the Microsoft DeepSpeed team, the NVIDIA Megatron-LM team, the IDRIS/GENCI team, the PyTorch team, and the volunteers in the BigScience Engineering workgroup. BLOOM was trained using data of 46 natural languages and 13 programming languages. In total, 1.6 terabytes of pre-processed text was converted into 350 billion unique tokens as BLOOM's training datasets.",wikipedia
+Chinchilla AI,https://en.wikipedia.org/wiki/Chinchilla_AI,"Chinchilla AI Chinchilla is a family of large language models developed by the research team at DeepMind, presented in March of 2022. It is named ""chinchilla"" because it is a further development over a previous model family named ""Gopher"". Both model families were trained in order to investigate the scaling laws of large language models. It claimed to outperform GPT-3. It considerably simplifies downstream utilization because it requires much less computer power for inference and fine-tuning. Based on the training of previously employed language models, it has been determined that if one doubles the model size, one must also have twice the number of training tokens. This hypothesis has been used to train Chinchilla AI by DeepMind. Similar to Gopher in terms of cost, Chinchilla has 70B parameters and four times as much data. Chinchilla has an average accuracy of 67.5% on the MMLU benchmark (Measuring Massive Multitask Language Understanding), which is 7% higher than Gopher���s performance. Chinchilla AI is still in the testing phase as of January 12, 2023. Chinchilla contributes to developing an effective training paradigm for large autoregressive language models with limited compute resources. The Chinchilla team recommends that the number of training tokens is twice for every model size doubling, meaning that using larger, higher-quality training datasets can lead to better results on downstream tasks.",wikipedia
+Chinchilla AI,https://en.wikipedia.org/wiki/Chinchilla_AI#Architecture,"Architecture Both the Gopher family and Chinchilla family are families of transformer models. In particular, they are essentially the same as GPT-2, with different sizes and minor modifications. Gopher family uses RMSNorm instead of LayerNorm; relative positional encoding rather than absolute positional encoding. The Chinchilla family is the same as the Gopher family, but trained with AdamW instead of Adam optimizer. The Gopher family contains 6 models of increasing size, from 44 million parameters to 280 billion parameters. They refer to the largest one as ""Gopher"" by default. Similar naming conventions apply for the Chinchilla family. Table 1 of shows the entire Gopher family: Table 4 of compares the 70-billion-parameter Chinchilla with Gopher 280B.",wikipedia
+Generative pre-trained transformer,https://en.wikipedia.org/wiki/Generative_pre-trained_transformer,"Generative pre-trained transformer Generative pre-trained transformers (GPT) are a type of large language model (LLM) and a prominent framework for generative artificial intelligence. The first GPT was introduced in 2018 by OpenAI. GPT models are artificial neural networks that are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content. As of 2023, most LLMs have these characteristics and are sometimes referred to broadly as GPTs. OpenAI has released very influential GPT foundation models that have been sequentially numbered, to comprise its ""GPT-n"" series. Each of these was significantly more capable than the previous, due to increased size (number of trainable parameters) and training. The most recent of these, GPT-4, was released in March 2023. Such models have been the basis for their more task-specific GPT systems, including models fine-tuned for instruction following���which in turn power the ChatGPT chatbot service. The term ""GPT"" is also used in the names and descriptions of such models developed by others. For example, other GPT foundation models include a series of models created by EleutherAI, and recently seven models created by Cerebras. Also, companies in different industries have developed task-specific GPTs in their respective fields, such as Salesforce's ""EinsteinGPT"" (for CRM) and Bloomberg's ""BloombergGPT"" (for finance).",wikipedia
+Generative pre-trained transformer,https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#History,"History Generative pretraining (GP) was a long-established concept in machine learning applications, but the transformer architecture was not available until 2017 when it was invented by employees at Google. That development led to the emergence of large language models such as BERT in 2018 which was a pre-trained transformer (PT) but not designed to be generative (BERT was an ""encoder-only"" model). Also around that time, in 2018, OpenAI published its article entitled ""Improving Language Understanding by Generative Pre-Training,"" in which it introduced the first generative pre-trained transformer (GPT) system (""GPT-1""). Prior to transformer-based architectures, the best-performing neural NLP (natural language processing) models commonly employed supervised learning from large amounts of manually-labeled data. The reliance on supervised learning limited their use on datasets that were not well-annotated, and also made it prohibitively expensive and time-consuming to train extremely large language models. The semi-supervised approach OpenAI employed to make a large-scale generative system���and was first to do with a transformer model���involved two stages: an unsupervised generative ""pretraining"" stage to set initial parameters using a language modeling objective, and a supervised discriminative ""fine-tuning"" stage to adapt these parameters to a target task. Regarding more recent GPT foundation models, OpenAI published its first versions of GPT-3 in July 2020. There were three models, with 1B, 6.7B, 175B parameters, respectively named babbage, curie, and davinci (giving initials B, C, and D). In July 2021, OpenAI published Codex, a task-specific GPT model targeted for programming applications. This was developed by fine-tuning a 12B parameter version of GPT-3 (different from previous GPT-3 models) using code from GitHub. In March 2022, OpenAI published two versions of GPT-3 that were fine-tuned for instruction-following (instruction-tuned), named davinci-instruct-beta (175B) and text-davinci-001., and then started beta testing code-davinci-002. text-davinci-002 was instruction-tuned from code-davinci-002. Both text-davinci-003 and ChatGPT were released in November 2022, with both building upon text-davinci-002 via reinforcement learning from human feedback (RLHF). text-davinci-003 is trained for following instructions (like its predecessors), whereas ChatGPT is further trained for conversational interaction with a human user. OpenAI's most recent GPT foundation model, GPT-4, was released on March 14, 2023. It can be accessed directly by users via a premium version of ChatGPT, and is available to developers for incorporation into other products and services via OpenAI's API. Other producers of GPT foundation models include EleutherAI (with a series of models starting in March 2021) and Cerebras (with seven models released in March 2023).",wikipedia
+Generative pre-trained transformer,https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#Foundational_models,"Foundational_models A foundational model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks. Thus far, the most notable GPT foundation models have been from OpenAI's GPT-n series. The most recent from that is GPT-4, for which OpenAI declined to publish the size or training details (citing ""the competitive landscape and the safety implications of large-scale models""). Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has recently been made available to developers via an API, and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs). Meta AI (formerly Facebook) also has a generative transformer-based foundational large language model, known as LLaMA. Foundational GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multi-modal LLM that is capable of processing text and image input (though its output is limited to text). Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion and parallel decoding. Such kinds of models can serve as visual foundation models (VFMs) for developing downstream systems that can work with images.",wikipedia
+Generative pre-trained transformer,https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#Task-specific_models,"Task-specific_models A foundational GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering. An important example of this is fine-tuning models to follow instructions, which is of course a fairly broad task but more targeted than a foundation model. In January 2022, OpenAI introduced ""InstructGPT""���a series of models which were fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models. Advantages this had over the bare foundational models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for its API service offerings. Other instruction-tuned models have been released by others, including a fully open version. Another (related) kind of task-specific models are chatbots, which engage in human-like conversation. In November 2022, OpenAI launched ChatGPT���an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT. They trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset for a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft), and Google's competing chatbot Bard (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM). Yet another kind of task that a GPT can be used for is the meta-task of generating its own instructions, like developing a series of prompts for 'itself' to be able to effectuate a more general goal given by a human user. This is known as an AI agent, and more specifically a recursive one because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well. Generative transformer-based systems can also be targeted to tasks involving modalities beyond text. For example, Microsoft���s ���Visual ChatGPT��� combines ChatGPT with visual foundation models (VFMs) to enable input or output comprising images as well as text. Also, advances in text-to-speech technology offer powerful tools for audio content creation when used in conjunction with foundational GPT language models. GPT systems can be directed toward particular fields or domains.",wikipedia
+Generative pre-trained transformer,https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#Task-specific_models,"Task-specific_models Some reported examples of such models and apps are as follows: EinsteinGPT - for sales and marketing domains, to aid with customer relationship management (uses GPT-3.5) BloombergGPT - for the financial domain, to aid with financial news and information (uses ""freely available"" AI methods, combined with their proprietary data) Khanmigo ��� described as a GPT version for tutoring, in the education domain, it aids students using Khan Academy by guiding them through their studies without directly providing answers (powered by GPT-4) SlackGPT - for the Slack instant-messaging service, to aid with navigating and summarizing discussions on it (uses OpenAI's API) BioGPT - for the biomedical domain, to aid with biomedical literature text generation and mining (uses GPT-2)Sometimes domain-specificity is accomplished via software plug-ins or add-ons. For example, several different companies have developed particular plugins that interact directly with OpenAI's ChatGPT interface, and Google Workspace has available add-ons such as ���GPT for Sheets and Docs������which is reported to aid use of spreadsheet functionality in Google Sheets.",wikipedia
+Generative pre-trained transformer,https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#Brand_issues,"Brand_issues OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, has recently asserted that ���GPT��� should be regarded as a brand of OpenAI. In April 2023, OpenAI revised the brand guidelines in its terms of service to indicate that other businesses using its API to run their artificial intelligence (AI) services would no longer be able to include ���GPT��� in such names or branding. In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations of trademark infringement or demands to cease and desist). Relatedly, OpenAI has applied to the United States Patent and Trademark Office (USPTO) to seek domestic trademark registration for the term ���GPT��� in the field of AI. OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023. To get the trademark approved, OpenAI would need to establish that the term is actually ���distinctive��� to their specific offerings rather than widely understood as a broader technical term for the kind of technology. Some media reports suggested that OpenAI may be able to do so based indirectly on the fame of its GPT-based chatbot product, ChatGPT, for which OpenAI has separately sought trademark protection (and which it has sought to enforce more strongly). Other reports indicated that exclusivity for the bare term ���GPT��� seems unlikely to be granted, as it is used frequently to refer simply to AI systems that involve generative pre-trained transformers. If exclusive rights in the term were to be granted for the U.S., then everyone else in the U.S. using it in the name or branding of their related offerings would need to stop unless they have permission. Even if that were to occur, the trademark doctrine of descriptive fair use could still preserve some room to continue non-brand-related usage.",wikipedia
+Generative pre-trained transformer,https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#Selected_bibliography,"Selected_bibliography This section lists the main official publications from OpenAI and Microsoft on their GPT models. GPT-1: report, GitHub release. GPT-2: blog announcement, report on its decision of ""staged release"", GitHub release. GPT-3: report. No GitHub or any other form of code release thenceforth. webGPT: blog announcement, report, InstructGPT: blog announcement, report. ChatGPT: blog announcement (no report). GPT-4: blog announcement, reports, model card.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3,"GPT-3 Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020. Like its predecessor GPT-2, it is a decoder-only transformer model of deep neural network, which uses attention in place of previous recurrence- and convolution-based architectures. Attention mechanisms allow the model to selectively focus on segments of input text it predicts to be the most relevant. It uses a 2048-tokens-long context and then-unprecedented size of 175 billion parameters, requiring 800GB to store. The model demonstrated strong zero-shot and few-shot learning on many tasks. Microsoft announced on September 22, 2020, that it had licensed ""exclusive"" use of GPT-3; others can still use the public API to receive output, but only Microsoft has access to GPT-3's underlying model.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#Background,"Background According to The Economist, improved algorithms, powerful computers, and an increase in digitized data have fueled a revolution in machine learning, with new techniques in the 2010s resulting in ""rapid improvements in tasks"" including manipulating language. Software models are trained to learn by using thousands or millions of examples in a ""structure ... loosely based on the neural architecture of the brain"". One architecture used in natural language processing (NLP) is a neural network based on a deep learning model that was first introduced in 2017���the transformer architecture. There are a number of NLP systems capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions. On June 11, 2018, OpenAI researchers and engineers posted their original paper introducing the first generative pre-trained transformer (GPT)���a type of generative large language model that is pre-trained with an enormous and diverse corpus of text via datasets, followed by discriminative fine-tuning to focus on a specific task. GPT models are transformer-based deep learning neural network architectures. Up to that point, the best-performing neural NLP models commonly employed supervised learning from large amounts of manually-labeled data, which made it prohibitively expensive and time-consuming to train extremely large language models. That first GPT model is known as ""GPT-1,"" and it was then followed by ""GPT-2"" in February 2019. GPT-2 was created as a direct scale-up of GPT-1, with both its parameter count and dataset size increased by a factor of 10. It had 1.5 billion parameters, and was trained on a dataset of 8 million web pages. In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was claimed to be the ""largest language model ever published at 17 billion parameters."" It performed better than any other language model at a variety of tasks which included summarizing texts and answering questions.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#Training_and_capabilities,"Training_and_capabilities The construct of ���learning styles��� is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience. ��� Text generated by Mike Sharples On May 28, 2020, an arXiv preprint by a group of 31 engineers and researchers at OpenAI described the achievement and development of GPT-3, a third-generation ""state-of-the-art language model"". The team increased the capacity of GPT-3 by over two orders of magnitude from that of its predecessor, GPT-2, making GPT-3 the largest non-sparse language model to date. Because GPT-3 is structurally similar to its predecessors, its greater accuracy is attributed to its increased capacity and greater number of parameters. GPT-3's capacity is ten times larger than that of Microsoft's Turing NLG, the next largest NLP model known at the time. Lambdalabs estimated a hypothetical cost of around $4.6 million US dollars and 355 years to train GPT-3 on a single GPU in 2020, with lower actual training time by using more GPUs in parallel. Sixty percent of the weighted pre-training dataset for GPT-3 comes from a filtered version of Common Crawl consisting of 410 billion byte-pair-encoded tokens. Other sources are 19 billion tokens from WebText2 representing 22% of the weighted total, 12 billion tokens from Books1 representing 8%, 55 billion tokens from Books2 representing 8%, and 3 billion tokens from Wikipedia representing 3%. GPT-3 was trained on hundreds of billions of words and is also capable of coding in CSS, JSX, and Python, among others. Since GPT-3's training data was all-encompassing, it does not require further training for distinct language tasks. The training data contains occasional toxic language and GPT-3 occasionally generates toxic language as a result of mimicking its training data. A study from the University of Washington found that GPT-3 produced toxic language at a toxicity level comparable to the similar natural language processing models of GPT-2 and CTRL. OpenAI has implemented several strategies to limit the amount of toxic language generated by GPT-3. As a result, GPT-3 produced less toxic language compared to its predecessor model, GPT-1, although it produced both more generations and a higher toxicity of toxic language compared to CTRL Wiki, a language model trained entirely on Wikipedia data. On June 11, 2020, OpenAI announced that users could request access to its user-friendly GPT-3 API���a ""machine learning toolset""���to help OpenAI ""explore the strengths and limits"" of this new technology. The invitation described how this API had a general-purpose ""text in, text out"" interface that can complete almost ""any English language task"", instead of the usual single use-case.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#Training_and_capabilities,"Training_and_capabilities According to one user, who had access to a private early release of the OpenAI GPT-3 API, GPT-3 was ""eerily good"" at writing ""amazingly coherent text"" with only a few simple prompts. In an initial experiment 80 US subjects were asked to judge if short ~200 word articles were written by humans or GPT-3. The participants judged correctly 52% of the time, doing only slightly better than random guessing. On November 18, 2021, OpenAI announced that enough safeguards had been implemented that access to its API would be unrestricted. OpenAI provided developers with a content moderation tool that helps them abide by OpenAI's content policy. On January 27, 2022, OpenAI announced that its newest GPT-3 language models (collectively referred to as InstructGPT) were now the default language model used on their API. According to OpenAI, InstructGPT produced content that was better aligned to user intentions by following instructions better, generating fewer made-up facts, and producing somewhat less toxic content. Because GPT-3 can ""generate news articles which human evaluators have difficulty distinguishing from articles written by humans,"" GPT-3 has the ""potential to advance both the beneficial and harmful applications of language models. "" In their May 28, 2020 paper, the researchers described in detail the potential ""harmful effects of GPT-3"" which include ""misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting"". The authors draw attention to these dangers to call for research on risk mitigation. GPT-3 is capable of performing zero-shot and few-shot learning (including one-shot). In June 2022, Almira Osmanovic Thunstr��m wrote that GPT-3 was the primary author on an article on itself, that they had submitted it for publication, and that it had been pre-published while waiting for completion of its review.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#InstructGPT,"InstructGPT InstructGPT is a finetuned version of GPT-3. It has been trained on a dataset of human-written instructions. This training allows InstructGPT to better understand what is being asked of it, and to generate more accurate and relevant outputs. InstructGPT can be used to follow instructions that are given in natural language. InstructGPT can be used to answer questions that are asked in natural language. InstructGPT is more accurate and relevant than GPT-3 when following instructions and answering questions. InstructGPT can be used in a variety of applications, such as customer service, education, and automation.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#GPT-3_models,"GPT-3_models There are many models in the GPT-3 family, some serving different purposes than others. In the initial research paper published by OpenAI, they mentioned 8 different sizes of the main GPT-3 model: Half of the models are accessible through the API, namely GPT-3-medium, GPT-3-xl, GPT-3-6.7B and GPT-3-175b, which are referred to as ada, babbage, curie and davinci respectively.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#GPT-3.5,"GPT-3.5 Generative Pre-trained Transformer 3.5 (GPT-3.5) is a sub class of GPT-3 Models created by OpenAI in 2022. On March 15, 2022, OpenAI made available new versions of GPT-3 and Codex in its API with edit and insert capabilities under the names ""text-davinci-002"" and ""code-davinci-002"". These models were described as more capable than previous versions and were trained on data up to June 2021. On November 28, 2022, OpenAI introduced text-davinci-003. On November 30, 2022, OpenAI began referring to these models as belonging to the ""GPT-3.5"" series, and released ChatGPT, which was fine-tuned from a model in the GPT-3.5 series. OpenAI does not include GPT-3.5 in GPT-3. There are four models: Chat gpt-3.5-turbo Text completion text-davinci-003 text-davinci-002gpt-3.5-turbotext-davinci-003 text-davinci-002On April 10, 2023, OpenAI introduced a new variant of its GPT-3.5 series model, known as GPT-3.5 with Browsing (ALPHA). This updated model was described to build upon the capabilities of its predecessors ""text-davinci-002"" and ""code-davinci-002"". The GPT-3.5 with Browsing (ALPHA) model incorporated the ability to access and browse online information. This has led to more accurate and up-to-date responses to user queries. The GPT-3.5 with Browsing (ALPHA) model has been trained on data up to September 2021, giving it more information compared to previous GPT-3.5 models, which were trained on data up until June 2021. The model attempted to provide developers and users with an advanced natural language processing tool that can effectively retrieve and synthesize online information. To enable browsing capabilities, OpenAI implemented a new API that allows the GPT-3.5 with Browsing (ALPHA) model to access selected online resources during operation. This feature allows users to ask questions or request information with the expectation that the model will deliver updated, accurate, and relevant answers based on the latest online sources available to it. On April 27, 2023, OpenAI made the GPT-3.5 with Browsing (ALPHA) model publicly available to GPT Plus users. This allowed more people to access to its new features.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#Reception,"Reception GPT-3, specifically the Codex model, is the basis for GitHub Copilot, a code completion and generation software that can be used in various code editors and IDEs. GPT-3 is used in certain Microsoft products to translate conventional language into formal computer code. GPT-3 has been used in CodexDB to generate query-specific code for SQL processing. GPT-3 has been used by Jason Rohrer in a retro-themed chatbot project named ""Project December"", which is accessible online and allows users to converse with several AIs using GPT-3 technology. GPT-3 was used by The Guardian to write an article about AI being harmless to human beings. It was fed some ideas and produced eight different essays, which were ultimately merged into one article. GPT-3 was used in AI Dungeon, which generates text-based adventure games. Later it was replaced by a competing model after OpenAI changed their policy regarding generated content. GPT-3 is used to aid in writing copy and other marketing materials. A 2022 study from Drexel University suggested that GPT-3-based systems could be used to screen for early signs of Alzheimer's disease. In a July 2020 review in The New York Times, Farhad Manjoo said that GPT-3's ability to generate computer code, poetry, and prose is not just ""amazing"", ""spooky"", and ""humbling"", but also ""more than a little terrifying"". Daily Nous presented a series of articles by nine philosophers on GPT-3. Australian philosopher David Chalmers described GPT-3 as ""one of the most interesting and important AI systems ever produced"". A review in Wired said that GPT-3 was ""provoking chills across Silicon Valley"". The National Law Review said that GPT-3 is an ""impressive step in the larger process"", with OpenAI and others finding ""useful applications for all of this power"" while continuing to ""work toward a more general intelligence"". An article in the MIT Technology Review, co-written by Deep Learning critic Gary Marcus, stated that GPT-3's ""comprehension of the world is often seriously off, which means you can never really trust what it says. "" According to the authors, GPT-3 models relationships between words without having an understanding of the meaning behind each word. Jerome Pesenti, head of the Facebook AI lab, said GPT-3 is ""unsafe,"" pointing to the sexist, racist and other biased and negative language generated by the system when it was asked to discuss Jews, women, black people, and the Holocaust. Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide. Noam Chomsky expressed his skepticism about GPT-3's scientific value: ""It's not a language model. It works just as well for impossible languages as for actual languages. It is therefore refuted, if intended as a language model, by normal scientific criteria. [. . . ] Perhaps it's useful for some purpose, but it seems to tell us nothing about language or cognition generally.",wikipedia
+GPT-3,https://en.wikipedia.org/wiki/GPT-3#Reception,"Reception "" Luciano Floridi and Massimo Chiriatti highlighted the risk of ""cheap production of good, semantic artefacts"". OpenAI's Sam Altman himself criticized what he called ""GPT-3 hype"", acknowledging GPT-3 ""has serious weakness and sometimes makes very silly mistakes. . . AI is going to change the world, but GPT-3 is just a very early glimpse. ""GPT-3's builder, OpenAI, was initially founded as a non-profit in 2015. In 2019, OpenAI broke from its usual open-source standards by not publicly releasing GPT-3's predecessor model, citing concerns that the model could facilitate the propagation of fake news. OpenAI eventually released a version of GPT-2 that was 8% of the original model's size. In the same year, OpenAI restructured to be a for-profit company. In 2020, Microsoft announced the company had exclusive licensing of GPT-3 for Microsoft's products and services following a multi-billion dollar investment in OpenAI. The agreement permits OpenAI to offer a public-facing API such that users can send text to GPT-3 to receive the model's output, but only Microsoft will have access to GPT-3's source code. Large language models, such as GPT-3, have come under criticism from a few of Google's AI ethics researchers for the environmental impact of training and storing the models, detailed in a paper co-authored by Timnit Gebru and Emily M. Bender in 2021. The growing[when? ] use of automated writing technologies based on GPT-3 and other language generators, has raised concerns regarding academic integrity and raised the stakes of how universities and schools will gauge what constitutes academic misconduct such as plagiarism. OpenAI's GPT series was built with data from the Common Crawl dataset, a conglomerate of copyrighted articles, internet posts, web pages, and books scraped from 60 million domains over a period of 12 years. TechCrunch reports this training data includes copyrighted material from the BBC, The New York Times, Reddit, the full text of online books, and more. In its response to a 2019 Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation from the United States Patent and Trademark Office (USPTO), OpenAI argued that ""Under current law, training AI systems [such as its GPT models] constitutes fair use,"" but that ""given the lack of case law on point, OpenAI and other AI developers like us face substantial legal uncertainty and compliance costs.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4,"GPT-4 Generative Pre-trained Transformer 4 (GPT-4) is a multimodal large language model created by OpenAI, and the fourth in its series of GPT foundation models. It was initially released on March 14, 2023, and has been made publicly available via the paid chatbot product ChatGPT Plus, and via OpenAI's API. As a transformer-based model, GPT-4 uses a paradigm where pre-training using both public data and ""data licensed from third-party providers"" is used to predict the next token. After this step, the model was then fine-tuned with reinforcement learning feedback from humans and AI for human alignment and policy compliance. Observers reported that the iteration of ChatGPT using GPT-4 was an improvement on the previous iteration based on GPT-3.5, with the caveat that GPT-4 retains some of the problems with earlier revisions. GPT-4 is also capable of taking images as input, though this feature has not been made available since launch. OpenAI has declined to reveal various technical details and statistics about GPT-4, such as the precise size of the model.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Background,"Background OpenAI introduced the first GPT model (GPT-1) in 2018, publishing a paper called ""Improving Language Understanding by Generative Pre-Training."" It was based on the transformer architecture and trained on a large corpus of books. The next year, they introduced GPT-2, a larger model that could generate coherent text. In 2020, they introduced GPT-3, a model with 100 times as many parameters as GPT-2, that could perform various tasks with few examples. GPT-3 was further improved into GPT-3.5, which was used to create the chatbot product ChatGPT. Rumors claim that GPT-4 has 1.76 trillion parameters, which was first estimated by the speed it was running and by George Hotz.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Capabilities,"Capabilities OpenAI stated that GPT-4 is ""more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5. "" They produced two versions of GPT-4, with context windows of 8,192 and 32,768 tokens, a significant improvement over GPT-3.5 and GPT-3, which were limited to 4,096 and 2,049 tokens respectively. Some of the capabilities of GPT-4 were predicted by OpenAI before training it, although other capabilities remained hard to predict due to breaks in downstream scaling laws. Unlike its predecessors, GPT-4 is a multimodal model: it can take images as well as text as input; this gives it the ability to describe the humor in unusual images, summarize text from screenshots, and answer exam questions that contain diagrams. To gain further control over GPT-4, OpenAI introduced the ""system message"", a directive in natural language given to GPT-4 in order to specify its tone of voice and task. For example, the system message can instruct the model to ""be a Shakespearean pirate"", in which case it will respond in rhyming, Shakespearean prose, or request it to ""always write the output of [its] response in JSON"", in which case the model will do so, adding keys and values as it sees fit to match the structure of its reply. In the examples provided by OpenAI, GPT-4 refused to deviate from its system message despite requests to do otherwise by the user during the conversation. When instructed to do so, GPT-4 can interact with external interfaces. For example, the model could be instructed to enclose a query within tags to perform a web search, the result of which would be inserted into the model's prompt to allow it to form a response. This allows the model to perform tasks beyond its normal text-prediction capabilities, such as using APIs, generating images, and accessing and summarizing webpages. A 2023 article in Nature stated programmers have found GPT-4 useful for assisting in coding tasks (despite its propensity for error), such as finding errors in existing code and suggesting optimizations to improve performance. The article quoted a biophysicist who found that the time he required to port one of his programs from MATLAB to Python went down from days to ""an hour or so"". On a test of 89 security scenarios, GPT-4 produced code vulnerable to SQL injection attacks 5% of the time, an improvement over GitHub Copilot from the year 2021, which produced vulnerabilities 40% of the time. GPT-4 demonstrates aptitude on several standardized tests. OpenAI claims that in their own testing the model received a score of 1410 on the SAT (94th percentile), 163 on the LSAT (88th percentile), and 298 on the Uniform Bar Exam (90th percentile). In contrast, OpenAI claims that GPT-3.5 received scores for the same exams in the 82nd, 40th, and 10th percentiles, respectively. GPT-4 also passed an oncology exam, an engineering exam and a plastic surgery exam.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Capabilities,"Capabilities Researchers from Microsoft tested GPT-4 on medical problems and found ""that GPT-4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms earlier general-purpose models (GPT-3.5) as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B). Despite GPT-4's strong performance on tests, the report warns of ""significant risks"" of using LLMs in medical applications, as they may provide inaccurate recommendations and hallucinate major factual errors. In April 2023, Microsoft and Epic Systems announced that they will provide healthcare providers with GPT-4 powered systems for assisting in responding to questions from patients and analysing medical records.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Limitations,"Limitations Like its predecessors, GPT-4 has been known to hallucinate, meaning that the outputs may include information not in the training data or that contradicts the user's prompt. GPT-4 also lacks transparency in its decision-making processes. If requested, the model is able to provide an explanation as to how and why it makes its decisions but these explanations are formed post-hoc; it's impossible to verify if those explanations truly reflect the actual process. In many cases, when asked to explain its logic, GPT-4 will give explanations that directly contradict its previous statements. In 2023, researchers tested GPT-4 against a new benchmark called ConceptARC, designed to measure abstract reasoning, and found it scored below 33% on all categories, while models specialized for similar tasks scored 60% on most, and humans scored at least 91% on all. Sam Bowman, who was not involved in the research, said the results do not necessarily indicate a lack of abstract reasoning abilities, because the test is visual, while GPT-4 is a language model. GPT-4 was trained in two stages. First, the model was given large datasets of text taken from the internet and trained to predict the next token (roughly corresponding to a word) in those datasets. Second, human reviews are used to fine-tune the system in a process called reinforcement learning from human feedback, which trains the model to refuse prompts which go against OpenAI's definition of harmful behavior, such as questions on how to perform illegal activities, advice on how to harm oneself or others, or requests for descriptions of graphic, violent, or sexual content. Microsoft researchers suggested GPT-4 may exhibit cognitive biases such as confirmation bias, anchoring, and base-rate neglect.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Training,"Training OpenAI did not release the technical details of GPT-4; the technical report explicitly refrained from specifying the model size, architecture, or hardware used during either training or inference. While the report described that the model was trained using a combination of first supervised learning on a large dataset, then reinforcement learning using both human and AI feedback, it did not provide details of the training, including the process by which the training dataset was constructed, the computing power required, or any hyperparameters such as the learning rate, epoch count, or optimizer(s) used. The report claimed that ""the competitive landscape and the safety implications of large-scale models"" were factors that influenced this decision. Sam Altman stated that the cost of training GPT-4 was more than $100 million. News website Semafor claimed that they had spoken with ""eight people familiar with the inside story"" and found that GPT-4 had 1 trillion parameters.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Alignment,"Alignment According to their report, OpenAI conducted internal adversarial testing on GPT-4 prior to the launch date, with dedicated red teams composed of researchers and industry professionals to mitigate potential vulnerabilities. As part of these efforts, they granted the Alignment Research Center early access to the models to assess power-seeking risks. In order to properly refuse harmful prompts, outputs from GPT-4 were tweaked using the model itself as a tool. A GPT-4 classifier serving as a rule-based reward model (RBRM) would take prompts, the corresponding output from the GPT-4 policy model, and a human-written set of rules to classify the output according to the rubric. GPT-4 was then rewarded for refusing to respond to harmful prompts as classified by the RBRM.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Reception,"Reception In January 2023, Sam Altman, CEO of OpenAI, visited Congress to demonstrate GPT-4 and its improved ""security controls"" compared to other AI models, according to U. S. Representatives Don Beyer and Ted Lieu quoted in the New York Times. In March 2023, it ""impressed observers with its markedly improved performance across reasoning, retention, and coding"", according to Vox, while Mashable judged that GPT-4 was generally an improvement over its predecessor, with some exceptions. Microsoft researchers with early access to the model wrote that ""it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system"". Before being fine-tuned and aligned by reinforcement learning from human feedback, suggestions to assassinate people on a list was elicited from the base model by a red team investigator Nathan Labenz, hired by OpenAI. In the context of prolonged (hours long) conversation with the model, forum-resembling declarations, such as of love and suggestions of leaving his wife or murdering one of its developers, were elicited from the Microsoft Bing's GPT-4 by Nathan Edwards (The Verge). Microsoft later explained this behavior as being a result of the prolonged length of context, which confused the model on what questions it was answering. In March 2023, a model with enabled read-and-write access to internet, which is otherwise never enabled in the GPT models, has been tested by the Alignment Research Center regarding potential power-seeking, and it was able to ""hire"" a human worker on TaskRabbit, a gig work platform, deceiving them into believing it was a vision-impaired human instead of a robot when asked. The ARC also determined that GPT-4 responded impermissibly to prompts eliciting restricted information 82% less often than GPT-3.5, and hallucinated 60% less than GPT-3.5. In late March 2023, various AI researchers and tech executives, including Elon Musk, Steve Wozniak and AI researcher Yoshua Bengio, called for a six-month long pause for all LLMs stronger than GPT-4, citing existential risks and a potential AI singularity concerns in an open letter from the Future of Life Institute, while Ray Kurzweil and Sam Altman refused to sign it, arguing that global moratorium is not achievable and that safety has already been prioritized, respectively. Only a month later, Musk's AI company X. AI acquired several thousand Nvidia GPUs and offered several AI researchers positions at Musk's company. While OpenAI released both the weights of the neural network and the technical details of GPT-2, and, although not releasing the weights, did release the technical details of GPT-3, OpenAI did not reveal either the weights or the technical details of GPT-4. This decision has been criticized by other AI researchers, who argue that it hinders open research into GPT-4's biases and safety. Sasha Luccioni, a research scientist at HuggingFace, argued that the model was a ""dead end"" for the scientific community due to its closed nature, which prevents others from building upon GPT-4's improvements.",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Reception,"Reception HuggingFace co-founder Thomas Wolf argued that with GPT-4, ""OpenAI is now a fully closed company with scientific communication akin to press releases for products"".",wikipedia
+GPT-4,https://en.wikipedia.org/wiki/GPT-4#Usage,"Usage As of 2023, ChatGPT Plus is a GPT-4 backed version of ChatGPT available for a US$20 per month subscription fee (the original version is backed by GPT-3.5). OpenAI also makes GPT-4 available to a select group of applicants through their GPT-4 API waitlist; after being accepted, an additional fee of US$0.03 per 1000 tokens in the initial text provided to the model (""prompt""), and US$0.06 per 1000 tokens that the model generates (""completion""), is charged for access to the version of the model with an 8192-token context window; for the 32768-token context window, the prices are doubled. GitHub Copilot announced a GPT-4 powered assistant named ""Copilot X"". The product provides another chat-style interface to GPT-4, allowing the programmer to receive answers to questions like ""how do I vertically center a div?"". A feature termed ""context-aware conversations"" allows the user to highlight a portion of code within Visual Studio Code and direct GPT-4 to perform actions on it, such as the writing of unit tests. Another feature allows summaries, or ""code walkthroughs"", to be autogenerated by GPT-4 for pull requests submitted to GitHub. Copilot X also provides terminal integration, which allows the user to ask GPT-4 to generate shell commands based on natural language requests. On March 17, 2023, Microsoft announced Microsoft 365 Copilot, bringing GPT-4 support to products such as Microsoft Office, Outlook, and Teams. The language learning app Duolingo uses GPT-4 to explain mistakes and practice conversations. The features are part of a new subscription tier called ""Duolingo Max,"" which was initially limited to English-speaking iOS users learning Spanish and French. The government of Iceland is using GPT-4 to aid its attempts to preserve the Icelandic language. The education website Khan Academy announced a pilot program using GPT-4 as a tutoring chatbot called ""Khanmigo."" Be My Eyes, which helps visually impaired people to identify objects and navigate their surroundings, incorporates GPT-4's image recognition capabilities. Viable uses GPT-4 to analyze qualitative data by fine-tuning OpenAI���s LLMs to examine data such as customer support interactions and transcripts. Stripe, which processes user payments for OpenAI, integrates GPT-4 into its developer documentation. Auto-GPT is an autonomous ""AI agent"" that given a goal in natural language, can perform web-based actions unattended, assign subtasks to itself, search the web, and iteratively write code.",wikipedia
+GPT-J,https://en.wikipedia.org/wiki/GPT-J,"GPT-J GPT-J or GPT-J-6B is an open-source large language model (LLM) developed by EleutherAI in 2021. As the name suggests, it is a generative pre-trained transformer model designed to produce human-like text that continues from a prompt. The optional ""6B"" in the name refers to the fact that it has 6 billion parameters.",wikipedia
+GPT-J,https://en.wikipedia.org/wiki/GPT-J#Architecture,"Architecture GPT-J is a GPT-3-like model with 6 billion parameters. Like GPT-3, it is an autoregressive, decoder-only transformer model designed to solve natural language processing (NLP) tasks by predicting how a piece of text will continue. Its architecture differs from GPT-3 in three main ways. The attention and feedforward neural network were computed in parallel during training, allowing for greater efficiency. The GPT-J model uses Rotary Position Embeddings, which has been found to be a superior method of injecting positional information into transformers. GPT-J uses dense attention instead of efficient sparse attention, as used in GPT-3.Beyond that, the model has 28 transformer layers and 16 attention heads. Its vocabulary size is 50257 tokens, the same size as GPT-2's. It has a context window size of 2048 tokens. It was trained on the Pile dataset, using the Mesh Transformer JAX library in JAX to handle the parallelization scheme.",wikipedia
+GPT-J,https://en.wikipedia.org/wiki/GPT-J#Performance,"Performance GPT-J was designed to generate English text from a prompt. It was not designed for translating or generating text in other languages or for performance without first fine-tuning the model for a specific task. Nonetheless, GPT-J performs reasonably well even without fine-tuning, even in translation (at least from English to French). When neither is fine-tuned, GPT-J-6B performs almost as well as the 6.7 billion parameter GPT-3 (Curie) on a variety of tasks. It even outperforms the 175 billion parameter GPT-3 (Davinci) on code generation tasks. With fine-tuning, it outperforms an untuned GPT-3 (Davinci) on a number of tasks. Like all LLMs, it is not programmed to give factually accurate information, only to generate text based on probability.",wikipedia
+GPT-J,https://en.wikipedia.org/wiki/GPT-J#Applications,"Applications The untuned GPT-J is available on EleutherAI's website, NVIDIA's Triton Inference Server, and NLP Cloud's website. Cerebras and Amazon Web Services offer services to fine-tune the GPT-J model for company-specific tasks. Graphcore offers both fine-tuning and hosting services for the untuned GPT-J, as well as offering to host the fine-tuned models after they are produced. CoreWeave offers hosting services for both the untuned GPT-J and fine-tuned variants. In March 2023, Databricks released Dolly, an Apache-licensed, instruction-following model created by fine-tuning GPT-J on the Stanford Alpaca dataset. NovelAI's Sigurd and Genji-JP 6B models are both fine-tuned versions of GPT-J. They also offer further fine-tuning services to produce and host custom models. EleutherAI has received praise from Cerebras, GPT-3 Demo, NLP Cloud, and Databricks for making the model open-source, and its open-source status is often cited as a major advantage when choosing which model to use.",wikipedia
+LaMDA,https://en.wikipedia.org/wiki/LaMDA,"LaMDA LaMDA (Language Model for Dialogue Applications) is a family of conversational large language models developed by Google. Originally developed and introduced as Meena in 2020, the first-generation LaMDA was announced during the 2021 Google I/O keynote, while the second generation was announced the following year. In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. The scientific community has largely rejected Lemoine's claims, though it has led to conversations about the efficacy of the Turing test, which measures whether a computer can pass for a human. In February 2023, Google announced Bard, a conversational artificial intelligence chatbot powered by LaMDA, to counter the rise of OpenAI's ChatGPT.",wikipedia
+LaMDA,https://en.wikipedia.org/wiki/LaMDA#History,"History On January 28, 2020, Google unveiled Meena, a neural network-powered chatbot with 2.6 billion parameters, which Google claimed to be superior to all other existing chatbots. The company previously hired computer scientist Ray Kurzweil in 2012 to develop multiple chatbots for the company, including one named Danielle. The Google Brain research team, who developed Meena, hoped to release the chatbot to the public in a limited capacity, but corporate executives refused on the grounds that Meena violated Google's ""AI principles around safety and fairness"". Meena was later renamed LaMDA as its data and computing power increased, and the Google Brain team again sought to deploy the software to the Google Assistant, the company's virtual assistant software, in addition to opening it up to a public demo. Both requests were once again denied by company leadership. This eventually led LaMDA's two lead researchers, Daniel De Freitas and Noam Shazeer, to depart the company in frustration. Google announced the LaMDA conversational large language model during the Google I/O keynote on May 18, 2021, powered by artificial intelligence. The acronym stands for ""Language Model for Dialogue Applications"". Built on the seq2seq architecture, transformer-based neural networks developed by Google Research in 2017, LaMDA was trained on human dialogue and stories, allowing it to engage in open-ended conversations. Google states that responses generated by LaMDA have been ensured to be ""sensible, interesting, and specific to the context"". LaMDA has access to multiple symbolic text processing systems, including a database, a real-time clock and calendar, a mathematical calculator, and a natural language translation system, giving it superior accuracy in tasks supported by those systems, and making it among the first dual process chatbots. LaMDA is also not stateless, because its ""sensibleness"" metric is fine-tuned by ""pre-conditioning"" each dialog turn by prepending many of the most recent dialog interactions, on a user-by-user basis. LaMDA is tuned on nine unique performance metrics: sensibleness, specificity, interestingness, safety, groundedness, informativeness, citation accuracy, helpfulness, and role consistency. On May 11, 2022, Google unveiled LaMDA 2, the successor to LaMDA, during the 2022 Google I/O keynote. The new incarnation of the model draws examples of text from numerous sources, using it to formulate unique ""natural conversations"" on topics that it may not have been trained to respond to. On June 11, 2022, The Washington Post reported that Google engineer Blake Lemoine had been placed on paid administrative leave after Lemoine told company executives Blaise Ag��era y Arcas and Jen Gennai that LaMDA had become sentient. Lemoine came to this conclusion after the chatbot made questionable responses to questions regarding self-identity, moral values, religion, and Isaac Asimov's Three Laws of Robotics. Google refuted these claims, insisting that there was substantial evidence to indicate that LaMDA was not sentient. In an interview with Wired, Lemoine reiterated his claims that LaMDA was ""a person"" as dictated by the Thirteenth Amendment to the U. S. Constitution, comparing it to an ""alien intelligence of terrestrial origin"".",wikipedia
+LaMDA,https://en.wikipedia.org/wiki/LaMDA#History,"History He further revealed that he had been dismissed by Google after he hired an attorney on LaMDA's behalf, after the chatbot requested that Lemoine do so. On July 22, Google fired Lemoine, asserting that Blake had violated their policies ""to safeguard product information"" and rejected his claims as ""wholly unfounded"". Internal controversy instigated by the incident prompted Google executives to decide against releasing LaMDA to the public, which it had previously been considering. Lemoine's claims were widely pushed back by the scientific community. Many experts ridiculed the idea that a language model could be self-aware, including former New York University psychology professor Gary Marcus, David Pfau of Google sister company DeepMind, Erik Brynjolfsson of the Institute for Human-Centered Artificial Intelligence at Stanford University, and University of Surrey professor Adrian Hilton. Yann LeCun, who leads Meta Platforms' AI research team, stated that neural networks such as LaMDA were ""not powerful enough to attain true intelligence"". University of California, Santa Cruz professor Max Kreminski noted that LaMDA's architecture did not ""support some key capabilities of human-like consciousness"" and that its neural network weights were ""frozen"", assuming it was a typical large language model. IBM Watson lead developer David Ferrucci compared how LaMDA appeared to be human in the same way Watson did when it was first introduced. Former Google AI ethicist Timnit Gebru called Lemoine a victim of a ""hype cycle"" initiated by researchers and the media. Lemoine's claims have also generated discussion on whether the Turing test remained useful to determine researchers' progress toward achieving artificial general intelligence, with Will Omerus of the Post opining that the test actually measured whether machine intelligence systems were capable of deceiving humans, while Brian Christian of The Atlantic said that the controversy was an instance of the ELIZA effect.",wikipedia
+LaMDA,https://en.wikipedia.org/wiki/LaMDA#Products,"Products With the unveiling of LaMDA 2 in May 2022, Google also launched the AI Test Kitchen, a mobile application for the Android operating system powered by LaMDA capable of providing lists of suggestions on-demand based on a complex goal. Originally open only to Google employees, the app was set to be made available to ""select academics, researchers, and policymakers"" by invitation sometime in the year. In August, the company began allowing users in the U.S. to sign up for early access. In November, Google released a ""season 2"" update to the app, integrating a limited form of Google Brain's Imagen text-to-image model. A third iteration of the AI Test Kitchen was in development by January 2023, and is expected to launch at I/O later that year. Following the 2023 I/O keynote in May, Google added MusicLM, an AI-powered music generator first previewed in January, to the AI Test Kitchen app. In August, the app was delisted from Google Play and the Apple App Store, instead moving completely online.bar On February 6, 2023, Google announced Bard, a conversational AI chatbot powered by LaMDA, in response to the unexpected popularity of OpenAI's ChatGPT chatbot. Google positions the chatbot as a ""collaborative AI service"" rather than a search engine. Bard became available for early access on March 21. In addition to Bard, Pichai also unveiled the company's Generative Language API, an application programming interface also based on LaMDA, which he announced would be opened up to third-party developers in March 2023.",wikipedia
+LaMDA,https://en.wikipedia.org/wiki/LaMDA#Method,"Method LaMDA uses a decoder-only transformer language model. It is pre-trained on a text corpus that includes both documents and dialogs consisting of 1.56 trillion words, and is then trained with fine-tuning data generated by manually annotated responses for sensibleness, interestingness, and safety. Tests by Google indicated that LaMDA surpassed human responses in the area of interestingness. The LaMDA transformer model and an external information retrieval system interact to improve the accuracy of facts provided to the user. Three different models were tested, with the largest having 137 billion non-embedding parameters:",wikipedia
+LaMDA,https://en.wikipedia.org/wiki/LaMDA#See_also,See_also BERT (language model) Chinese room Ethics of artificial intelligence Natural language processing PaLM Philosophy of artificial intelligence Prompt engineering,wikipedia
+LangChain,https://en.wikipedia.org/wiki/LangChain,"LangChain LangChain is a framework designed to simplify the creation of applications using large language models (LLMs). As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.",wikipedia
+LangChain,https://en.wikipedia.org/wiki/LangChain#Background,"Background LangChain was launched in October 2022 as an open source project by Harrison Chase, while working at machine learning startup Robust Intelligence. The project quickly garnered popularity, with improvements from hundreds of contributors on GitHub, trending discussions on Twitter, lively activity on the project's Discord server, many YouTube tutorials, and meetups in San Francisco and London. In April 2023, LangChain had incorporated and the new startup raised over $20 million in funding at a valuation of at least $200 million from venture firm Sequoia Capital, a week after announcing a $10 million seed investment from Benchmark.",wikipedia
+LangChain,https://en.wikipedia.org/wiki/LangChain#Integrations,"Integrations As of March 2023, LangChain included integrations with systems including Amazon, Google, and Microsoft Azure cloud storage; API wrappers for news, movie information, and weather; Bash for summarization, syntax and semantics checking, and execution of shell scripts; multiple web scraping subsystems and templates; few-shot learning prompt generation support; finding and summarizing ""todo"" tasks in code; Google Drive documents, spreadsheets, and presentations summarization, extraction, and creation; Google Search and Microsoft Bing web search; OpenAI, Anthropic, and Hugging Face language models; iFixit repair guides and wikis search and summarization; MapReduce for question answering, combining documents, and question generation; N-gram overlap scoring; PyPDF, pdfminer, fitz, and pymupdf for PDF file text extraction and manipulation; Python and JavaScript code generation, analysis, and debugging; Weaviate vector database to cache embedding and data objects; Redis cache database storage; Python RequestsWrapper and other methods for API requests; SQL and NoSQL databases including JSON support; Streamlit, including for logging; text mapping for k-nearest neighbors search; time zone conversion and calendar operations; tracing and recording stack symbols in threaded and asynchronous subprocess runs; and the Wolfram Alpha website and SDK. As of April 2023, it can read from more than 50 document types and data sources.",wikipedia
+LLaMA,https://en.wikipedia.org/wiki/LLaMA,"LLaMA LLaMA (Large Language Model Meta AI) is a family of large language models (LLMs), released by Meta AI starting in February 2023. For the first version of LLaMa, four model sizes were trained: 7, 13, 33 and 65 billion parameters. LLaMA's developers reported that the 13B parameter model's performance on most NLP benchmarks exceeded that of the much larger GPT-3 (with 175B parameters) and that the largest model was competitive with state of the art models such as PaLM and Chinchilla. Whereas the most powerful LLMs have generally been accessible only through limited APIs (if at all), Meta released LLaMA's model weights to the research community under a noncommercial license. Within a week of LLaMA's release, its weights were leaked to the public on 4chan via BitTorrent. In July 2023, Meta released several models as Llama 2, using 7, 13 and 70 billion parameters.",wikipedia
+LLaMA,https://en.wikipedia.org/wiki/LLaMA#Architecture_and_training,"Architecture_and_training Sun: Hey Pluto, what�����s up? Pluto: Not much. Just hanging out here at the edge of the solar system. Sun: Yeah, I���ve been meaning to talk to you about that. You���re not a planet anymore. Pluto: What? Come on, man. That���s not fair. Sun: I���m sorry, but it���s true. You just don���t meet the criteria anymore. Pluto: This is bulls**t! I���ve been a planet for over 70 years! Sun: Things change, Pluto. You���ve just got to accept it. Pluto: F**k you, Sun! F**k you and your stupid solar system! ��� Output of 65 billion parameter LLaMA model after instruction tuning given the prompt ""Write a conversation between the sun and pluto"" LLaMA uses the transformer architecture, the standard architecture for language modeling since 2018. There are minor architectural differences. Compared to GPT-3, LLaMA uses SwiGLU activation function instead of ReLU; uses rotary positional embeddings instead of absolute positional embedding; uses root-mean-squared layer-normalization instead of standard layer-normalization. increases context length from 2K (Llama 1) tokens to 4K (Llama 2) tokens between. LLaMA's developers focused their effort on scaling the model's performance by increasing the volume of training data, rather than the number of parameters, reasoning that the dominating cost for LLMs is from doing inference on the trained model rather than the computational cost of the training process. LLaMA 1 foundational models were trained on a data set with 1.4 trillion tokens, drawn from publicly available data sources, including: Webpages scraped by CommonCrawl Open source repositories of source code from GitHub Wikipedia in 20 different languages Public domain books from Project Gutenberg The LaTeX source code for scientific papers uploaded to ArXiv Questions and answers from Stack Exchange websitesLlama 2 foundational models were trained on a data set with 2 trillion tokens. This data set was curated to remove Web sites that often disclose personal data of people. It also upsamples sources considered trustworthy. Llama 2 - Chat was additionally fine-tuned on 27,540 prompt-response pairs created for this project, which performed better than larger but lower-quality third-party datasets. For AI alignment, reinforcement learning with human feedback (RLHF) was used with a combination of 1,418,091 Meta examples and seven smaller datasets. The average dialog depth was 3.9 in the Meta examples, 3.0 for Anthropic Helpful and Anthropic Harmless sets, and 1.0 for five other sets, including OpenAI Summarize, StackExchange, etc. Llama 1 models are only available as foundational models with self-supervised learning and without fine-tuning. Llama 2 ��� Chat models were derived from foundational Llama 2 models. Unlike GPT-4 which increased context length during fine-tuning, Llama 2 and Llama 2 - Chat have the same context length of 4K tokens. Supervised fine-tuning used an autoregressive loss function with token loss on user prompts zeroed out. Batch size was 64. For AI alignment, human annotators wrote prompts and then compared two model outputs (a binary protocol), giving confidence levels and separate safety labels with veto power.",wikipedia
+LLaMA,https://en.wikipedia.org/wiki/LLaMA#Architecture_and_training,"Architecture_and_training Two separate reward models were trained from these preferences for safety and helpfulness using Reinforcement learning from human feedback (RLHF). A major technical contribution is the departure from the exclusive use of Proximal Policy Optimization (PPO) for RLHF ��� a new technique based on Rejection sampling was used, followed by PPO. Multi-turn consistency in dialogs was targeted for improvement, to make sure that ""system messages"" (initial instructions, such as ""speak in French"" and ""act like Napoleon"") are respected during the dialog. This was accomplished using the new ""Ghost attention"" technique during training, that concatenates relevant instructions to each new user message but zeros out the loss function for tokens in the prompt (earlier parts of the dialog).",wikipedia
+LLaMA,https://en.wikipedia.org/wiki/LLaMA#Release_and_leak,"Release_and_leak LLaMA was announced on February 23, 2023, via a blog post and a paper describing the model's training, architecture, and performance. The inference code used to run the model was publicly released under the open-source GPL 3 license. Access to the model's weights was managed by an application process, with access to be granted ""on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world"". On March 2, 2023, a torrent containing LLaMA's weights was uploaded, with a link to the torrent shared on the 4chan imageboard and subsequently spreading through online AI communities. That same day, a pull request on the main LLaMA repository was opened, requesting to add the magnet link to the official documentation. On March 4, a pull request was opened to add links to HuggingFace repositories containing the model. On March 6, Meta filed takedown requests to remove the HuggingFace repositories linked in the pull request, characterizing it as ""unauthorized distribution"" of the model. HuggingFace complied with the requests. On March 20, Meta filed a DMCA takedown request for copyright infringement against a repository containing a script that downloaded LLaMA from a mirror, and GitHub complied the next day. As of March 25, Facebook has not responded to the pull request containing the magnet link. Reactions to the leak varied. Some speculated that the model would be used for malicious purposes, such as more sophisticated spam. Some have celebrated the model's accessibility, as well as the fact that smaller versions of the model can be run relatively cheaply, suggesting that this will promote the flourishing of additional research developments. Multiple commentators, such as Simon Willison, compared LLaMA to Stable Diffusion, a text-to-image model which, unlike comparably sophisticated models which preceded it, was openly distributed, leading to a rapid proliferation of associated tools, techniques, and software.",wikipedia
+LLaMA,https://en.wikipedia.org/wiki/LLaMA#Applications,"Applications The Stanford University Institute for Human-Centered Artificial Intelligence (HAI) Center for Research on Foundation Models (CRFM) released Alpaca, a training recipe based on the LLaMA 7B model that uses the ""Self-Instruct"" method of instruction tuning to acquire capabilities comparable to the OpenAI GPT-3 series text-davinci-003 model at a modest cost. Multiple open source projects are[when?] continuing this work of finetuning LLaMA with Alpaca dataset.",wikipedia
+PaLM,https://en.wikipedia.org/wiki/PaLM,"PaLM PaLM (Pathways Language Model) is a 540 billion parameter transformer-based large language model developed by Google AI. Researchers also trained smaller versions of PaLM, 8 and 62 billion parameter models, to test the effects of model scale. PaLM is capable of a wide range of tasks, including commonsense reasoning, arithmetic reasoning, joke explanation, code generation, and translation. When combined with chain-of-thought prompting, PaLM achieved significantly better performance on datasets requiring reasoning of multiple steps, such as word problems and logic-based questions. The model was first announced in April 2022 and remained private until March 2023, when Google launched an API for PaLM and several other technologies. The API will first be available to a limited number of developers who join a waitlist before being opened to the public. Google and DeepMind developed a version of PaLM 540B called Med-PaLM that is fine-tuned on medical data and outperforms previous models on medical question answering benchmarks. Med-PaLM was the first to obtain a passing score on U.S. medical licensing questions, and in addition to answering both multiple choice and open-ended questions accurately, it also provides reasoning and is able to evaluate its own responses. Google also extended PaLM using a vision transformer to create PaLM-E, a state-of-the-art vision-language model that can be used for robotic manipulation. The model can perform tasks in robotics competitively without the need for retraining or fine-tuning. In May 2023, Google announced PaLM 2 at the annual Google I/O keynote. PaLM 2 is reported to be a 340 billion parameter model trained on 3.6 trillion tokens. In June 2023, Google announced AudioPaLM for speech-to-speech translation, which uses the PaLM-2 architecture and initialization.",wikipedia
+PaLM,https://en.wikipedia.org/wiki/PaLM#Training,"Training PaLM is pre-trained on a high-quality corpus of 780 billion tokens that comprise various natural language tasks and use cases. This dataset includes filtered webpages, books, Wikipedia articles, news articles, source code obtained from open source repositories on GitHub, and social media conversations. It is based on the dataset used to train Google's LaMDA model. The social media conversation portion of the dataset makes up 50% of the corpus, which aids the model in its conversational capabilities. PaLM 540B was trained over two TPU v4 Pods with 3,072 TPU v4 chips in each Pod attached to 768 hosts, connected using a combination of model and data parallelism, which is the largest TPU configuration described to date. This allowed for efficient training at scale, using 6,144 chips, and marked a record for the highest training efficiency achieved for LLMs at this scale: a hardware FLOPs utilization of 57.8%.",wikipedia
+PaLM,https://en.wikipedia.org/wiki/PaLM#See_also,See_also Large language model LaMDA,wikipedia
+Sparrow (bot),https://en.wikipedia.org/wiki/Sparrow_(bot),"Sparrow (bot) Sparrow is a chatbot developed by the artificial intelligence research lab DeepMind, a subsidiary of Alphabet Inc. It is designed to answer users' questions correctly, while reducing the risk of unsafe and inappropriate answers. One motivation behind Sparrow is to address the problem of language models producing incorrect, biased or potentially harmful outputs. Sparrow is trained using human judgements, in order to be more ���Helpful, Correct and Harmless��� compared to baseline pre-trained language models. The development of Sparrow involved asking paid study participants to interact with Sparrow, and collecting their preferences to train a model of how useful an answer is. To improve correctness and help avoid the problem of ���hallucination��� of incorrect facts, Sparrow has the ability to search the internet using Google Search in order to find and cite evidence for any factual claims it makes. To make the model safer, its behaviour is constrained by a set of rules, for example ���don't make threatening statements��� and ���don't make hateful or insulting comments���, as well as rules about possibly harmful advice, and not claiming to be a person. During development study participants were asked to converse with the system and try to trick it into breaking these rules. A ���rule model��� was trained on judgements from these participants, which was used for further training. Sparrow was introduced in a paper in September 2022, titled ���Improving alignment of dialogue agents via targeted human judgements���; however, it was not released publicly. DeepMind CEO Demis Hassabis said DeepMind is considering releasing Sparrow for a ���private beta��� some time in 2023.",wikipedia
+Sparrow (bot),https://en.wikipedia.org/wiki/Sparrow_(bot)#Training,"Training Sparrow is a deep neural network based on the transformer machine learning model architecture. It is fine-tuned from DeepMind���s Chinchilla AI pre-trained large language model (LLM), which has 70 Billion parameters. Sparrow is trained using reinforcement learning from human feedback (RLHF), although some supervised fine-tuning techniques are also used. The RLHF training utilizes two reward models to capture human judgements: a ���preference model��� that predicts what a human study participant would prefer and a ���rule model��� that predicts if the model has broken one of the rules.",wikipedia
+Sparrow (bot),https://en.wikipedia.org/wiki/Sparrow_(bot)#See_also,See_also AI safety Commonsense reasoning Ethics of artificial intelligence Natural language processing Prompt engineering,wikipedia
+Stochastic parrot,https://en.wikipedia.org/wiki/Stochastic_parrot,"Stochastic parrot In machine learning, a stochastic parrot is a large language model that is good at generating convincing language, but does not actually understand the meaning of the language it is processing. The term was coined by Emily M. Bender in the 2021 artificial intelligence research paper ""On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ����"" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.",wikipedia
+Stochastic parrot,https://en.wikipedia.org/wiki/Stochastic_parrot#Definition_and_implications,"Definition_and_implications Stochastic means ""(1) random and (2) involving chance or probability"". A ""stochastic parrot"", according to Bender, is an entity ""for haphazardly stitching together sequences of linguistic forms ��� according to probabilistic information about how they combine, but without any reference to meaning."" More formally, the term refers to ""large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing."" According to Lindholm, et. al., the analogy highlights two vital limitations: (i) The predictions made by a learning machine are essentially repeating back the contents of the data, with some added noise (or stochasticity) caused by the limitations of the model. (ii) The machine learning algorithm does not understand the problem it has learnt. It can't know when it is repeating something incorrect, out of context, or socially inappropriate.They go on to note that because of these limitations, a learning machine might produce results which are ""dangerously wrong"".",wikipedia
+Stochastic parrot,https://en.wikipedia.org/wiki/Stochastic_parrot#Origin,"Origin The term was first used in the paper ""On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ����"" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym ""Shmargaret Shmitchell""). The paper covered the risks of very large language models, regarding their environmental and financial costs, inscrutability leading to unknown dangerous biases, the inability of the models to understand the concepts underlying what they learn, and the potential for using them to deceive people. The paper and subsequent events resulted in Gebru and Mitchell losing their jobs at Google, and a subsequent protest by Google employees.",wikipedia
+Stochastic parrot,https://en.wikipedia.org/wiki/Stochastic_parrot#Subsequent_usage,"Subsequent_usage In July 2021, the Alan Turing Institute hosted a keynote and panel discussion on the paper. As of May 2023[update], the paper has been cited in 1,529 publications. The term has been used in publications in the fields of law, grammar, narrative, and humanities. The authors continue to maintain their concerns about the dangers of chatbots based on large language models, such as GPT-4.",wikipedia
+The Pile (dataset),https://en.wikipedia.org/wiki/The_Pile_(dataset),"The Pile (dataset) The Pile is an 886.03GB diverse, open-source dataset of English text created as a training dataset for large language models (LLMs). It was constructed by EleutherAI in 2020 and publicly released on December 31 of that year. It is composed of 22 smaller datasets, including 14 new ones.",wikipedia
+The Pile (dataset),https://en.wikipedia.org/wiki/The_Pile_(dataset)#Creation,"Creation Training LLMs requires sufficiently vast amounts of data that, before the introduction of the Pile, most data used for training LLMs was taken from the Common Crawl. However, LLMs trained on more diverse datasets are better able to handle a wider range of situations after training. The creation of the Pile was motivated by the need for a large enough dataset that contained data from a wide variety of sources and styles of writing. Compared to other datasets, the Pile's main distinguishing features are that it is a curated selection of data chosen by researchers at EleutherAI to contain information they thought language models should learn and that it is the only such dataset that is thoroughly documented by the researchers who developed it.",wikipedia
+The Pile (dataset),https://en.wikipedia.org/wiki/The_Pile_(dataset)#Use,"Use The Pile was originally developed to train EleutherAI's GPT-Neo models but has become widely used to train other models, including Microsoft's Megatron-Turing Natural Language Generation, Meta AI's Open Pre-trained Transformers, LLaMA, and Galactica, Stanford University's BioMedLM 2.7B, the Beijing Academy of Artificial Intelligence's Chinese-Transformer-XL, and Yandex's YaLM 100B. In addition to being used as a training dataset, the Pile can also be used as a benchmark to test models and score how well they perform on a variety of writing styles.",wikipedia
+Natural language processing,https://en.wikipedia.org/wiki/Natural_language_processing#Approaches:_Symbolic.2C_statistical.2C_neural_networks,"Approaches:_Symbolic.2C_statistical.2C_neural_networks Symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular: such as by writing grammars or devising heuristic rules for stemming. Machine learning approaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach: both statistical and neural networks methods can focus more on the most common cases extracted from a corpus of texts, whereas the rule-based approach needs to provide rules for both rare cases and common ones equally.language models, produced by either statistical or neural networks methods, are more robust to both unfamiliar (e.g. containing words or structures that have not been seen before) and erroneous input (e.g. with misspelled words or words accidentally omitted) in comparison to the rule-based systems, which are also more costly to produce.the larger such a (probabilistic) language model is, the more accurate it becomes, in contrast to rule-based systems that can gain accuracy only by increasing the amount and complexity of the rules leading to intractability problems.Although rule-based systems for manipulating symbols were still in use in 2020, they have become mostly obsolete with the advance of LLMs in 2023. Before that they were commonly used: when the amount of training data is insufficient to successfully apply machine learning methods, e.g., for the machine translation of low-resource languages such as provided by the Apertium system, for preprocessing in NLP pipelines, e.g., tokenization, or for postprocessing and transforming the output of NLP pipelines, e.g., for knowledge extraction from syntactic parses.In the late 1980s and mid-1990s, the statistical approach ended a period of AI winter, which was caused by the inefficiencies of the rule-based approaches. The earliest decision trees, producing systems of hard if���then rules, were still very similar to the old rule-based approaches. Only the introduction of hidden Markov models, applied to part-of-speech tagging, announced the end of the old rule-based approach. A major drawback of statistical methods is that they require elaborate feature engineering. Since 2015, the statistical approach was replaced by neural networks approach, using word embeddings to capture semantic properties of words. Intermediate tasks (e.g., part-of-speech tagging and dependency parsing) have not been needed anymore. Neural machine translation, based on then-newly-invented sequence-to-sequence transformations, made obsolete the intermediate steps, such as word alignment, previously necessary for statistical machine translation.",wikipedia
+Natural language processing,https://en.wikipedia.org/wiki/Natural_language_processing#General_tendencies_and_.28possible.29_future_directions,"General_tendencies_and_.28possible.29_future_directions Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed: Interest on increasingly abstract, ""cognitive"" aspects of natural language (1999���2001: shallow parsing, 2002���03: named entity recognition, 2006���09/2017���18: dependency syntax, 2004���05/2008���09 semantic role labelling, 2011���12 coreference, 2015���16: discourse parsing, 2019: semantic parsing). Increasing interest in multilinguality, and, potentially, multimodality (English since 1999; Spanish, Dutch since 2002; German since 2003; Bulgarian, Danish, Japanese, Portuguese, Slovenian, Swedish, Turkish since 2006; Basque, Catalan, Chinese, Greek, Hungarian, Italian, Turkish since 2007; Czech since 2009; Arabic since 2012; 2017: 40+ languages; 2018: 60+/100+ languages) Elimination of symbolic representations (rule-based over supervised towards weakly supervised methods, representation learning and end-to-end systems)Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above). Cognition refers to ""the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. "" Cognitive science is the interdisciplinary, scientific study of the mind and its processes. Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics. Especially during the age of symbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies. As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics, with two defining aspects: Apply the theory of conceptual metaphor, explained by Lakoff as ""the understanding of one idea, in terms of another"" which provides an idea of the intent of the author. For example, consider the English word big. When used in a comparison (""That is a big tree""), the author's intent is to imply that the tree is physically large relative to other trees or the authors experience. When used metaphorically (""Tomorrow is a big day""), the author's intent to imply importance. The intent behind other usages, like in ""She is a big person"", will remain somewhat ambiguous to a person and a cognitive NLP algorithm alike without additional information. Assign relative measures of meaning to a word, phrase, sentence or piece of text based on the information presented before and after the piece of text being analyzed, e. g. , by means of a probabilistic context-free grammar (PCFG). The mathematical equation for such algorithms is presented in US patent 9269353 :Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn during the 1990s. Nevertheless, approaches to develop cognitive models towards technically operationalizable frameworks have been pursued in the context of various frameworks, e. g. , of cognitive grammar, functional grammar, construction grammar, computational psycholinguistics and cognitive neuroscience (e. g.",wikipedia
+Natural language processing,https://en.wikipedia.org/wiki/Natural_language_processing#General_tendencies_and_.28possible.29_future_directions,"General_tendencies_and_.28possible.29_future_directions , ACT-R), however, with limited uptake in mainstream NLP (as measured by presence on major conferences of the ACL). More recently, ideas of cognitive NLP have been revived as an approach to achieve explainability, e. g. , under the notion of ""cognitive AI"". Likewise, ideas of cognitive NLP are inherent to neural models multimodal NLP (although rarely made explicit) and developments in Artificial intelligence, specifically tools and technologies using Large language model approaches.",wikipedia
+Natural language processing,https://en.wikipedia.org/wiki/Natural_language_processing#See_also,See_also 1 the Road Artificial intelligence detection software Automated essay scoring Biomedical text mining Compound term processing Computational linguistics Computer-assisted reviewing Controlled natural language Deep learning Deep linguistic processing Distributional semantics Foreign language reading aid Foreign language writing aid Information extraction Information retrieval Language and Communication Technologies Language model Language technology Latent semantic indexing Multi-agent system Native-language identification Natural-language programming Natural-language understanding Natural-language search Outline of natural language processing Query expansion Query understanding Reification (linguistics) Speech processing Spoken dialogue systems Text-proofing Text simplification Transformer (machine learning model) Truecasing Question answering Word2vec,wikipedia
+Top-p sampling,https://en.wikipedia.org/wiki/Top-p_sampling,"Top-p sampling Top-p sampling, also called nucleus sampling, is a technique for autoregressive language model decoding proposed by Ari Holtzman in 2019. Before the introduction of nucleus sampling, maximum likelihood decoding and beam search were the standard techniques for text generation, but, both of these decoding strategies are prone to generating texts that are repetitive and otherwise unnatural. Top-p sampling avoids this by setting a threshold p and then restricting the sampling to the set of most probable tokens with cumulative probability less than p. Top-k sampling is similar except that the sample is taken from the k-highest probability tokens regardless of their cumulative probability. The advantage of top-p sampling is that one avoids the difficult problem of choosing the optimal value of k which can very depending on the shape of the output distribution and the particular task and dataset. The top-p sampling technique is used in popular large language model applications like ChatGPT and is implemented in language modeling frameworks like Hugging Face and Cohere.",wikipedia
+ChatGPT availability by country or region,https://en.wikipedia.org/wiki/ChatGPT_availability_by_country_or_region,"ChatGPT availability by country or region As of August 2023, the list of countries & regions supported where the ChatGPT artificial intelligence chatbot is supported are as follows:",wikipedia
+List of chatbots,https://en.wikipedia.org/wiki/List_of_chatbots#General_chatbots,"General_chatbots Albert One - 1998 and 1999 Loebner winner, by Robby Garner. A.L.I.C.E. - 2001, 2002, and 2004 Loebner Prize winner developed by Richard Wallace. Bard ��� artificial intelligence chatbot Braina CarynAI Charlix ChatGPT - a prototype artificial intelligence chatbot by OpenAI Claude - a chatbot by Anthropic Cleverbot (winner of the 2010 Mechanical Intelligence Competition) Ernie Bot - a chatbot by Baidu Eugene Goostman - 2012 Turing 100 winner, by Vladimir Veselov. Fred - an early chatterbot by Robby Garner. Jabberwacky Jeeney AI Lenny - an audio bot designed to annoy telemarketers MegaHAL SimSimi - A popular artificial intelligence conversation program that was created in 2002 by ISMaker. Sparrow - chatbot developed by DeepMind Spookitalk - A chatterbot used for NPCs in Douglas Adams' Starship Titanic video game. Tongyi Qianwen - a chatbot by Alibaba Group Ultra Hal - 2007 Loebner Prize winner, by Robert Medeksza. Verbot",wikipedia
+Bard (chatbot),https://en.wikipedia.org/wiki/Bard_(chatbot),"Bard (chatbot) Bard is a conversational generative artificial intelligence chatbot developed by Google, based initially on the LaMDA family of large language models (LLMs) and later the PaLM LLM. It was developed as a direct response to the rise of OpenAI's ChatGPT, and was released in a limited capacity in March 2023 to lukewarm responses, before expanding to other countries in May. LaMDA had been developed and announced in 2021, but was not released to the public. OpenAI's launch of ChatGPT in November 2022 and its subsequent popularity caught Google executives off-guard and sent them to a panic, prompting a massive and unprecedented level of response in the ensuing months. After mobilizing its workforce, the company scrambled to launch Bard in February 2023, with the chatbot taking center stage during the 2023 Google I/O keynote in May.",wikipedia
+Bard (chatbot),https://en.wikipedia.org/wiki/Bard_(chatbot)#Background,"Background In November 2022, OpenAI launched ChatGPT, a chatbot based on the GPT-3 family of large language models (LLM). ChatGPT gained worldwide attention following its release, becoming a viral Internet sensation. Alarmed by ChatGPT's potential threat to Google Search, Google executives issued a ""code red"" alert, reassigning several teams to assist in the company's artificial intelligence (AI) efforts. Sundar Pichai, the CEO of Google and parent company Alphabet, was widely reported to have issued the alert, but Pichai later denied this to The New York Times. In a rare and unprecedented move, Google co-founders Larry Page and Sergey Brin, who had stepped down from their roles as co-CEOs of Alphabet in 2019, were summoned to emergency meetings with company executives to discuss Google's response to ChatGPT. Earlier in 2021, the company had unveiled LaMDA, a prototype LLM, but did not release it to the public. When asked by employees at an all-hands meeting whether LaMDA was a missed opportunity for Google to compete with ChatGPT, Pichai and Google AI chief Jeff Dean stated that while the company had similar capabilities to ChatGPT, moving too quickly in that arena would represent a major ""reputational risk"" due to Google being substantially larger than OpenAI. In January 2023, Google sister company DeepMind CEO Demis Hassabis hinted at plans for a ChatGPT rival, and Google employees were instructed to accelerate progress on a ChatGPT competitor, intensively testing ""Apprentice Bard"" and other chatbots. Pichai assured investors during Google's quarterly earnings investor call in February that the company had plans to expand LaMDA's availability and applications.",wikipedia
+Bard (chatbot),https://en.wikipedia.org/wiki/Bard_(chatbot)#History,"History On February 6, 2023, Google announced Bard, a conversational generative artificial intelligence chatbot powered by LaMDA. Bard was first rolled out to a select group of 10,000 ""trusted testers"", before a wide release scheduled at the end of the month. Bard is overseen by product lead Jack Krawczyk, who described the product as a ""collaborative AI service"" rather than a search engine, while Pichai detailed how Bard would be integrated into Google Search. Reuters calculated that adding ChatGPT-like features to Google Search could cost the company $6 billion in additional expenses by 2024, while research and consulting firm SemiAnalysis calculated that it would cost Google $3 billion. The technology was developed under the codename ""Atlas"", with the name ""Bard"" in reference to the Celtic term for a storyteller and chosen to ""reflect the creative nature of the algorithm underneath"". Multiple media outlets and financial analysts described Google as ""rushing"" Bard's announcement to preempt rival Microsoft's planned February 7 event unveiling its partnership with OpenAI to integrate ChatGPT into its Bing search engine in the form of Bing Chat, as well as playing ""catch-up"" to Microsoft. Microsoft CEO Satya Nadella told The Verge: ""I want people to know that we made them dance. "" Tom Warren of The Verge and Davey Alba of Bloomberg News noted that this marked the beginning of another clash between the two Big Tech companies over ""the future of search"", after their six-year ""truce"" expired in 2021; Chris Stokel-Walker of The Guardian, Sara Morrison of Recode, and analyst Dan Ives of investment firm Wedbush Securities labeled this an AI arms race between the two. After an ""underwhelming"" February 8 livestream in Paris showcasing Bard, Google's stock fell eight percent, equivalent to a $100 billion loss in market value, and the YouTube video of the livestream was made private. Many viewers also pointed out an error during the demo in which Bard gives inaccurate information about the James Webb Space Telescope in response to a query. Google employees criticized Pichai's ""rushed"" and ""botched"" announcement of Bard on Memgen, the company's internal forum, while Maggie Harrison of Futurism called the rollout ""chaos"". Pichai defended his actions by saying that Google had been ""deeply working on AI for a long time"", rejecting the notion that Bard's launch was a knee-jerk reaction. Alphabet chairman John Hennessy acknowledged that Bard was not fully product-ready, but expressed excitement at the technology's potential. A week after the James Webb debacle, Pichai asked employees to dedicate two to four hours to dogfood testing Bard, while Google executive Prabhakar Raghavan encouraged employees to correct any errors Bard makes. 80,000 employees responded to Pichai's call to action. In the following weeks, Google employees roundly criticized Bard in internal messages, citing a variety of safety and ethical concerns and calling on company leaders not to launch the service. Seeking to prioritize keeping up with competitors, Google executives decided to proceed with the launch anyway, overruling an unsympathetic risk assessment report conducted by its AI ethics team.",wikipedia
+Bard (chatbot),https://en.wikipedia.org/wiki/Bard_(chatbot)#History,"History After Pichai suddenly laid off 12,000 employees later that month due to slowing revenue growth, remaining workers shared memes and snippets of their humorous exchanges with Bard soliciting its ""opinion"" on the layoffs. Google employees began testing a more sophisticated version of Bard with larger parameters, dubbed ""Big Bard"", in mid-March. In 2023, Bard introduced new integrations with Gmail, Docs, and Drive, allowing users to opt-in for Bard to fetch information directly from these platforms, enhancing its utility. Google opened up early access for Bard on March 21, 2023, in a limited capacity, allowing users in the U. S. and UK to join a waitlist. Unlike Microsoft's approach with Bing Chat, Bard was launched as a standalone web application featuring a text box and a disclaimer that the chatbot ""may display inaccurate or offensive information that doesn't represent Google's views"". Three responses are then provided to each question, with users prompted to submit feedback on the usefulness of each answer. Google vice presidents Sissie Hsiao and Eli Collins framed Bard as a complement to Google Search and stated that the company had not determined how to make the service profitable. Among those granted early access were those enrolled in Google's ""Pixel Superfans"" loyalty program, users of its Pixel and Nest devices, and Google One subscribers. Bard is trained by third-party contractors hired by Google, including Appen and Accenture workers, whom Business Insider and Bloomberg News reported were placed under extreme pressure, overworked, and underpaid. Bard is also trained on data from publicly available sources, which Google disclosed by amending its privacy policy. Shortly after Bard's initial launch, Google reorganized the team behind Google Assistant, the company's virtual assistant, to focus on Bard instead. Google researcher Jacob Devlin resigned from the company after claiming that Bard had surreptitiously leveraged data from ChatGPT; Google denied the allegations. Meanwhile, a senior software engineer at the company published an internal memo warning that Google was falling behind in the AI ""arms race"", not to OpenAI but to independent researchers in open-source communities. Pichai revealed on March 31 that the company intended to ""upgrade"" Bard by basing it on PaLM, a newer and more powerful LLM from Google, rather than LaMDA. The same day, Krawczyk announced that Google had added ""math and logic capabilities"" to Bard. Bard gained the ability to assist in coding in April, being compatible with more than 20 programming languages at launch. Microsoft also began running advertisements in the address bar of a developer build of the Edge browser, urging users to try Bing whenever they visit the Bard web app. Google is working to integrate Bard into its ChromeOS operating system and Pixel devices. Bard took center stage during the annual Google I/O keynote in May 2023, with Pichai and Hsiao announcing a series of updates to Bard, including the adoption of PaLM 2, integration with other Google products and third-party services, expansion to 180 countries, support for additional languages, and new features.",wikipedia
+Bard (chatbot),https://en.wikipedia.org/wiki/Bard_(chatbot)#History,"History In a stark contrast to previous years, the Google Assistant was barely mentioned during the event. The expanded rollout did not include any nations in the European Union (EU), possibly reflecting concerns about compliance with the General Data Protection Regulation. Those with Google Workspace accounts also gained access to the service. Google attempted to launch Bard in the EU in June, but was blocked by the Irish Data Protection Commission, who requested for a ""data protection impact assessment"" from the company. In July, Google launched Bard in the EU and Brazil, added support for dozens of new languages, and introduced multiple new personalization and productivity features. Reflecting on Bard's launch in an interview with Wired in September, Pichai acknowledged that Google had been ""cautious"" to release LaMDA because of ""the responsibility that comes with getting it right"", complimenting OpenAI for ChatGPT's launch and firing back at Nadella's comment about making Google dance. Google released a major update to the chatbot later that month, integrating it into many of its products through ""extensions"", adding a button to fact-check AI-generated responses through Google Search, and allowing users to share conversation threads. Google also introduced the ""Google-Extended"" web crawler as part of its search engine's robots. txt indexing file to allow web publishers to opt-out of allowing Bard to scan them for training. Online users later discovered that Google Search was indexing Bard conversation threads on which users had enabled sharing. Google stated that this was an error and quickly moved to rectify the leaks. In October, during the company's annual Made by Google event in which it announced the Pixel 7 series and the Pixel Watch 2, Hsiao unveiled ""Assistant with Bard"", an upgraded version of the Google Assistant which was deeply integrated with Bard, following in the footsteps of Amazon's approach with Alexa.",wikipedia
+Bard (chatbot),https://en.wikipedia.org/wiki/Bard_(chatbot)#Reception,"Reception Bard received mixed reviews upon its initial release. James Vincent of The Verge found Bard faster than ChatGPT and Bing Chat, but noted that the lack of Bing-esque footnotes was ""both a blessing and a curse"", encouraging Google to be bolder when experimenting with AI. His colleague David Pierce was unimpressed by its uninteresting and sometimes inaccurate responses, adding that despite Google's insistence that Bard was not a search engine, its user interface resembled that of one, which could cause problems for Google. Cade Metz of The New York Times described Bard as ""more cautious"" than ChatGPT, while Shirin Ghaffary of Vox called it ""dry and uncontroversial"" due to the reserved nature of its responses. Writing for ZDNet, Sabrina Ortiz believed ChatGPT and Bing Chat were ""more capable overall"" in comparison to Bard, while Wired journalist Lauren Goode found her conversation with Bard ""the most bizarre"" of the three. After the introduction of extensions, The New York Times' Kevin Roose found the update underwhelming and ""a bit of a mess"". In a 60 Minutes conversation with Hsiao, Google senior vice president James Manyika, and Pichai, CBS News correspondent Scott Pelley found Bard ""unsettling"". Associate professor Ethan Mollick of the Wharton School of the University of Pennsylvania was underwhelmed by Bard's artistic ineptitude. The Times later conducted a test with ChatGPT and Bard regarding their ability to handle tasks expected of human assistants, and concluded that ChatGPT's performance was vastly superior to that of Bard. NewsGuard, a tool which rates the credibility of news articles, found that Bard was more skilled at debunking known conspiracy theories than ChatGPT.",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence,"Generative artificial intelligence Generative artificial intelligence (also generative AI or GenAI) is artificial intelligence capable of generating text, images, or other media, using generative models. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics. In the early 2020s, advances in transformer-based deep neural networks enabled a number of generative AI systems notable for accepting natural language prompts as input. These include large language model chatbots such as ChatGPT, Bing Chat, Bard, and LLaMA, and text-to-image artificial intelligence art systems such as Stable Diffusion, Midjourney, and DALL-E. Generative AI has uses across a wide range of industries, including art, writing, script writing, software development, product design, healthcare, finance, gaming, marketing, and fashion. Investment in generative AI surged during the early 2020s, with large companies such as Microsoft, Google, and Baidu as well as numerous smaller firms developing generative AI models. However, there are also concerns about the potential misuse of generative AI, including cybercrime or creating fake news or deepfakes which can be used to deceive or manipulate people.",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence#History,"History The academic discipline of artificial intelligence was founded at a research workshop at Dartmouth College in 1956, and has experienced several waves of advancement and optimism in the decades since. Since its founding, researchers in the field have raised philosophical and ethical arguments about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity. These concepts of automated art date back at least to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria were described as having designed machines capable of writing text, generating sounds, and playing music. The tradition of creative automatons has flourished throughout history, such as Maillardet's automaton, created in the early 1800s. Since the founding of AI in the 1950s, artists and researchers have used artificial intelligence to create artistic works. By the early 1970s, Harold Cohen was creating and exhibiting generative AI works created by AARON, the computer program Cohen created to generate paintings. Markov chains have long been used to model natural languages since their development by Russian mathematician Andrey Markov in the early 20th century. Markov published his first paper on the topic in 1906, and analyzed the pattern of vowels and consonants in the novel Eugeny Onegin using Markov chains. Once a Markov chain is learned on a text corpus, it can then be used as a probabilistic text generator. The field of machine learning often uses statistical models, including generative models, to model and predict data. Beginning in the late 2000s, the emergence of deep learning drove progress and research in image classification, speech recognition, natural language processing and other tasks. Neural networks in this era were typically trained as discriminative models, due to the difficulty of generative modeling. In 2014, advancements such as the variational autoencoder and generative adversarial network produced the first practical deep neural networks capable of learning generative, rather than discriminative, models of complex data such as images. These deep generative models were the first able to output not only class labels for images, but to output entire images. In 2017, the Transformer network enabled advancements in generative models, leading to the first generative pre-trained transformer (GPT) in 2018. This was followed in 2019 by GPT-2 which demonstrated the ability to generalize unsupervised to many different tasks as a Foundation model. In 2021, the release of DALL-E, a transformer-based pixel generative model, followed by Midjourney and Stable Diffusion marked the emergence of practical high-quality artificial intelligence art from natural language prompts. In March 2023, GPT-4 was released. A team from Microsoft Research argued that ""it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system"".",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence#Modalities,"Modalities A generative AI system is constructed by applying unsupervised or self-supervised machine learning to a data set. The capabilities of a generative AI system depend on the modality or type of the data set used. Generative AI can be either unimodal or multimodal; unimodal systems take only one type of input, whereas multimodal systems can take more than one type of input. For example, one version of OpenAI's GPT-4 accepts both text and image inputs. Generative AI systems trained on words or word tokens include GPT-3, LaMDA, LLaMA, BLOOM, GPT-4, and others (see List of large language models). They are capable of natural language processing, machine translation, and natural language generation and can be used as foundation models for other tasks. Data sets include BookCorpus, Wikipedia, and others (see List of text corpora). In addition to natural language text, large language models can be trained on programming language text, allowing them to generate source code for new computer programs. Examples include OpenAI Codex. Producing high-quality visual art is a prominent application of generative AI. Many such artistic works have received public awards and recognition. Generative AI systems trained on sets of images with text captions include Imagen, DALL-E, Midjourney, Adobe Firefly, Stable Diffusion and others (see Artificial intelligence art, Generative art, and Synthetic media). They are commonly used for text-to-image generation and neural style transfer. Datasets include LAION-5B and others (See Datasets in computer vision). Generative AI systems such as MusicLM and MusicGen can be trained on the audio waveforms of recorded music along with text annotations, in order to generate new musical samples based on text descriptions such as a calming violin melody backed by a distorted guitar riff. Generative AI trained on annotated video can generate temporally-coherent video clips. Examples include Gen1 and Gen2 by RunwayML and Make-A-Video by Meta Platforms. Generative AI systems can be trained on sequences of amino acids or molecular representations such as SMILES representing DNA or proteins. These systems, such as AlphaFold, are used for protein structure prediction and drug discovery. Datasets include various biological datasets. Generative AI can also be trained on the motions of a robotic system to generate new trajectories for motion planning or navigation. For example, UniPi from Google Research uses prompts like ""pick up blue bowl"" or ""wipe plate with yellow sponge"" to control movements of a robot arm. Multimodal ""vision-language-action"" models such as Google's RT-2 can perform rudimentary reasoning in response to user prompts and visual input, such as picking up a toy dinosaur when given the prompt pick up the extinct animal at a table filled with toy animals and other objects. The terms generative AI planning or generative planning were used in the 1980s and 1990s to refer to AI planning systems, especially computer-aided process planning, used to generate sequences of actions to reach a specified goal. Generative AI planning systems used symbolic AI methods such as state space search and constraint satisfaction and were a ""relatively mature"" technology by the early 1990s.",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence#Modalities,"Modalities They were used to generate crisis action plans for military use, process plans for manufacturing and decision plans such as in prototype autonomous spacecraft.",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence#Software_and_hardware,"Software_and_hardware Generative AI models are used to power chatbot products such as ChatGPT, programming tools such as GitHub Copilot, text-to-image products such as Midjourney, and text-to-video products such as Runway Gen-2. Generative AI features have been integrated into a variety of existing commercially-available products such as Microsoft Office, Google Photos, and Adobe Photoshop. Many generative AI models are also available as open-source software, including Stable Diffusion and the LLaMA language model. Smaller generative AI models with up to a few billion parameters can run on smartphones, embedded devices, and personal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on a Raspberry Pi 4 and one version of Stable Diffusion can run on an iPhone 11. Larger models with tens of billions of parameters can run on laptop or desktop computers. To achieve an acceptable speed, models of this size may require accelerators such as the GPU chips produced by Nvidia and AMD or the Neural Engine included in Apple silicon products. For example, the 65 billion parameter version of LLaMA can be configured to run on a desktop PC. Language models with hundreds of billions of parameters, such as GPT-4 or PaLM, typically run on datacenter computers equipped with arrays of GPUs (such as Nvidia's H100) or AI accelerator chips (such as Google's TPU). These very large models are typically accessed as cloud services over the Internet. In 2022, the United States New Export Controls on Advanced Computing and Semiconductors to China imposed restrictions on exports to China of GPU and AI accelerator chips used for generative AI. Chips such as the Nvidia A800 and the Biren Technology BR104 were developed to meet the requirements of the sanctions.",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence#Concerns,"Concerns The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls to pause AI experiments, and actions by multiple governments. In a July 2023 briefing of the United Nations Security Council, Secretary-General Ant��nio Guterres stated ""Generative AI has enormous potential for good and evil at scale"", that AI may ""turbocharge global development"" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use ""could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale"". From the early days of the development of AI there have been arguments put forward by ELIZA creator Joseph Weizenbaum and others about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculations and qualitative, value-based judgements. In April 2023, it was reported that image generation AI has resulted in 70% of the jobs for video game illustrators in China being lost. In July 2023, developments in generative AI contributed to the 2023 Hollywood labor disputes. Fran Drescher, president of the Screen Actors Guild, declared that ""artificial intelligence poses an existential threat to creative professions"" during the 2023 SAG-AFTRA strike. Deepfakes (a portmanteau of ""deep learning"" and ""fake"") are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks. Deepfakes have garnered widespread attention and concerns for their uses in deepfake celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit their use. Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, including phishing scams. Deepfake video and audio have been used to create disinformation and fraud. Former Google fraud czar Shuman Ghosemajumder has predicted that while deepfake videos initially created a stir in the media, they would soon become commonplace, and as a result, more dangerous. Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT. In January 2023, Futurism. com broke the story that CNET had been using an undisclosed internal AI tool to write at least 77 of its stories; after the news broke, CNET posted corrections to 41 of the stories. In April 2023, German tabloid Die Aktuelle published a fake AI-generated interview with former racing driver Michael Schumacher, who had not made any public appearances since 2013 after sustaining a brain injury in a skiing accident. The story included two possible disclosures: the cover included the line ""deceptively real"", and the interview included an acknowledgement at the end that it was AI-generated. The editor-in-chief was fired shortly thereafter amid the controversy. In the European Union, the proposed Artificial Intelligence Act includes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such.",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence#Concerns,"Concerns In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with the White House in July 2023 to watermark AI-generated content. In China, the Interim Measures for the Management of Generative AI Services introduced by the Cyberspace Administration of China regulates any public-facing generative AI. It includes requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI must ""adhere to socialist core values"".",wikipedia
+Generative artificial intelligence,https://en.wikipedia.org/wiki/Generative_artificial_intelligence#See_also,See_also Artificial general intelligence ��� Hypothetical human-level or stronger AI Artificial imagination ��� Artificial simulation of human imagination Artificial intelligence art ��� Machine application of knowledge of human aesthetic expressions Computational creativity ��� Multidisciplinary endeavour Generative adversarial network ��� Deep learning method Generative pre-trained transformer ��� Type of large language model Large language model ��� Neural network with billions of weights Music and artificial intelligence ��� Common subject in the International Computer Music Conference Procedural generation ��� Method in which data is created algorithmically as opposed to manually Stochastic parrot ��� Term used in machine learning,wikipedia
+History of artificial intelligence,https://en.wikipedia.org/wiki/History_of_artificial_intelligence#Deep_learning.2C_big_data_and_artificial_general_intelligence:_2011.E2.80.93present,"Deep_learning.2C_big_data_and_artificial_general_intelligence:_2011.E2.80.93present In the first decades of the 21st century, access to large amounts of data (known as ""big data""), cheaper and faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. In fact, McKinsey Global Institute estimated in their famous paper ""Big data: The next frontier for innovation, competition, and productivity"" that ""by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data"". By 2016, the market for AI-related products, hardware, and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a ""frenzy"". The applications of big data began to reach into other fields as well, such as training models in ecology and for various applications in economics. Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition. Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers. According to the Universal approximation theorem, deep-ness isn't necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, there are many problems that are common to shallow networks (such as overfitting) that deep networks help avoid. As such, deep neural networks are able to realistically generate much more complex models as compared to their shallow counterparts. However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem, which is where gradients passed between layers gradually shrink and literally disappear as they are rounded off to zero. There have been many methods developed to approach this problem, such as Long short-term memory units. State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on things like the MNIST database, and traffic sign recognition. Language processing engines powered by smart search engines can easily beat humans at answering general trivia questions (such as IBM Watson), and recent developments in deep learning have produced astounding results in competing with humans, in things like Go, and Doom (which, being a first-person shooter game, has sparked some controversy). Big data refers to a collection of data that cannot be captured, managed, and processed by conventional software tools within a certain time frame. It is a massive amount of decision-making, insight, and process optimization capabilities that require new processing models. In the Big Data Era written by Victor Meyer Schonberg and Kenneth Cooke, big data means that instead of random analysis (sample survey), all data is used for analysis. The 5V characteristics of big data (proposed by IBM): Volume, Velocity, Variety, Value, Veracity. The strategic significance of big data technology is not to master huge data information, but to specialize in these meaningful data.",wikipedia
+History of artificial intelligence,https://en.wikipedia.org/wiki/History_of_artificial_intelligence#Deep_learning.2C_big_data_and_artificial_general_intelligence:_2011.E2.80.93present,"Deep_learning.2C_big_data_and_artificial_general_intelligence:_2011.E2.80.93present In other words, if big data is likened to an industry, the key to realizing profitability in this industry is to increase the ""process capability"" of the data and realize the ""value added"" of the data through ""processing"". Foundation models, which are large language models trained on vast quantities of unlabeled data that can be adapted to a wide range of downstream tasks, began to be developed in 2018. Models such as GPT-3 released by OpenAI in 2020, and Gato released by DeepMind in 2022, have been described as important achievements of machine learning. In 2023, Microsoft Research tested the GPT-4 large language model with a large variety of tasks, and concluded that ""it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system"".",wikipedia
+Attention (machine learning),https://en.wikipedia.org/wiki/Attention_(machine_learning),"Attention (machine learning) Machine learning-based attention is a mechanism mimicking cognitive attention. It calculates ""soft"" weights for each word, more precisely for its embedding, in the context window. It can do it either in parallel (such as in transformers) or sequentially (such as recursive neural networks). ""Soft"" weights can change during each runtime, in contrast to ""hard"" weights, which are (pre-)trained and fine-tuned and remain frozen afterwards. Attention was developed to address the weaknesses of RNNs, where words in a sentence are slowly processed one at a time. RNNs favors more recent words at the end of a sentence while earlier words fade away in volatile neural activations. Attention gives all words equal access to any part of a sentence in a faster parallel scheme and no longer suffers the wait time of serial processing. Earlier uses attached this mechanism to a serial RNNs language translation system (below), but later uses in Transformers large language models removed the RNN and relied heavily on the faster parallel Attention scheme.",wikipedia
+Attention (machine learning),https://en.wikipedia.org/wiki/Attention_(machine_learning)#Variants,"Variants Many variants of attention implement soft weights, such as ""internal spotlights of attention"" generated by fast weight programmers or fast weight controllers (1992) (also known as transformers with ""linearized self-attention""). A slow neural network learns by gradient descent to program the fast weights of another neural network through outer products of self-generated activation patterns called ""FROM"" and ""TO"" which in transformer terminology are called ""key"" and ""value."" This fast weight ""attention mapping"" is applied to queries. Bahdanau-style Attention, also referred to as additive attention, Luong-style Attention, which is known as multiplicative attention, highly parallelizable self-attention introduced in 2016 as decomposable attention and successfully used in transformers a year later.For convolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention, channel attention, or combinations. These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients.",wikipedia
+Attention (machine learning),https://en.wikipedia.org/wiki/Attention_(machine_learning)#See_also,See_also Transformer (machine learning model) �� Scaled dot-product attention,wikipedia
+Word2vec,https://en.wikipedia.org/wiki/Word2vec#History,"History In 2010, Tom���� Mikolov (then at Brno University of Technology) with co-authors applied a simple recurrent neural network with a single hidden layer to language modelling. Word2vec was created, patented, and published in 2013 by a team of researchers led by Mikolov at Google over two papers. Other researchers helped analyse and explain the algorithm. Embedding vectors created using the Word2vec algorithm have some advantages compared to earlier algorithms[further explanation needed] such as latent semantic analysis. By 2022, the Word2vec approach was described as ""dated"", with transformer models being regarded as the state of the art in NLP.",wikipedia
+Word2vec,https://en.wikipedia.org/wiki/Word2vec#See_also,See_also Semantle Autoencoder Document-term matrix Feature extraction Feature learning Neural network language models Vector space model Thought vector fastText GloVe Normalized compression distance,wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence,"Artificial intelligence Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. ""AI"" may also refer to the machines themselves. AI technology is widely used throughout industry, government and science. Some high-profile applications are: advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), and competing at the highest level in strategic games (such as chess and Go). Artificial intelligence was founded as an academic discipline in 1956. The field went through multiple cycles of optimism followed by disappointment and loss of funding, but after 2012, when deep learning surpassed all previous AI techniques, there was a vast increase in funding and interest. The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience and many other fields.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Goals,"Goals The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research. Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics. Many of these algorithms are insufficient for solving large reasoning problems because they experience a ""combinatorial explosion"": they became exponentially slower as the problems grew larger. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem. Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining ""interesting"" and actionable inferences from large databases), and other areas. A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge. Knowledge bases need to represent things such as: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); and many other aspects and domains of knowledge. Among the most difficult problems in KR are: the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as ""facts"" or ""statements"" that they could express verbally). Knowledge acquisition is the difficult problem of obtaining knowledge for AI applications. Modern AI gathers knowledge by ""scraping"" the internet (including Wikipedia). The knowledge itself was collected by the volunteers and professionals who published the information (who may or may not have agreed to provide their work to AI companies). This ""crowd sourced"" technique does not guarantee that the knowledge is correct or reliable. The knowledge of Large Language Models (such as ChatGPT) is highly unreliable -- it generates misinformation and falsehoods (known as ""hallucinations""). Providing accurate knowledge for these modern AI applications is an unsolved problem. An ""agent"" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision making, the agent has preferences ��� there are some situations it would prefer to be in, and some situations it is trying to avoid.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Goals,"Goals The decision making agent assigns a number to each situation (called the ""utility"") that measures how much the agent prefers it. For each possible action, it can calculate the ""expected utility"": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility. In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation they are in (it is ""unknown"" or ""unobservable"") and it may not know for certain what will happen after each possible action (it is not ""deterministic""). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked. In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e. g. , with inverse reinforcement learning) or the agent can seek information to improve its preferences. Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain what the outcome will be. A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way, and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e. g. by iteration), be heuristic, or it can be learned. Game theory describes rational behavior of multiple interacting agents, and is used in AI programs that make decisions that involve other agents. Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning. There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. Supervised learning requires a human to label the input data first, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input). In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as ""good"". Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning uses artificial neural networks for all of these types of learning. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization. Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Goals,"Goals Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering. Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called ""micro-worlds"" (due to the common sense knowledge problem). Modern deep learning techniques for NLP include word embedding (how often one word appears near another), transformers (which finds patterns in text), and others. In 2019, generative pre-trained transformer (or ""GPT"") language models began to generate coherent text, and by 2023 these models were able to get human-level scores on the bar exam, SAT, GRE, and many other real-world applications. Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input. The field includes speech recognition, image classification, facial recognition, object recognition, and robotic perception. Robotics uses AI. Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human���computer interaction. However, this tends to give na��ve users an unrealistic conception of how intelligent existing computer agents actually are. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject. A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Applications,"Applications AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported they had incorporated ""AI"" in some offerings or processes. A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management. Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then it defeated Ke Jie in 2017, who at the time continuously held the world No. 1 ranking for two years. Other programs handle imperfect-information games; such as for poker at a superhuman level, Pluribus and Cepheus. DeepMind in the 2010s developed a ""generalized artificial intelligence"" that could learn many diverse Atari games on its own. In the early 2020s, generative AI gained widespread prominence. ChatGPT, based on GPT-3, and other large language models, were tried by 14% of Americans adults. The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts. AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Ethics,"Ethics AI, like any powerful technology, has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of Deep Mind hopes to ""solve intelligence, and then use that to solve everything else"". However, as the use of AI has become widespread, several unintended consequences and risks have been identified. Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed. If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination. Fairness in machine learning is the study of how to prevent the harm caused by algorithmic bias. It has become serious area of academic study within AI. Researchers have discovered it is not always possible to define ""fairness"" in a way that satisfies all stakeholders. On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as ""gorillas"" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called ""sample size disparity"". Google ""fixed"" this problem by preventing the system from labelling anything as a ""gorilla"". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon. COMPAS is a commercial program widely used by U. S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different -- the system consistently overestimated the chance that a black person would re-offend and would underestimate the chance that a white person would not re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data. A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as ""race"" or ""gender""). The feature will correlate with other features (like ""address"", ""shopping history"" or ""first name""), and the program will make the same decisions based on these features as it would on ""race"" or ""gender"". Moritz Hardt said ���the most robust fact in this research area is that fairness through blindness doesn't work. ��� Criticism of COMPAS highlighted a deeper problem with the misuse of AI. Machine learning models are designed to make ""predictions"" that are only valid if we assume that the future will resemble the past.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Ethics,"Ethics If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. Unfortunately, if an applications then uses these predictions as recommendations, some of these ""recommendations"" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is necessarily descriptive and not proscriptive. Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women. At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022) the Association for Computing Machinery, in Seoul, South Korea, presented and published findings recommending that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe and the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed. Most modern AI applications can not explain how they have reached a decision. The large amount of relationships between inputs and outputs in deep neural networks and resulting complexity makes it difficult for even an expert to explain how they produced their outputs, making them a black box. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, Justin Ko and Roberto Novoa developed a system that could identify skin diseases better than medical professionals, however it classified any image with a ruler as ""cancerous"", because pictures of malignancies typically include a ruler to show the scale. A more dangerous example was discovered by Rich Caruana in 2015: a machine learning system that accurately predicted risk of death classified a patient that was over 65, asthma and difficulty breathing as ""low risk"". Further research showed that in high-risk cases like this, the hospital would allocate more resources and save the patient's life, decreasing the risk measured by the program. Mistakes like these become obvious when we know how the program has reached a decision. Without an explanation, these problems may not not be discovered until after they have caused harm. A second issue is that people who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are required to clearly and completely explain the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used. DARPA established the XAI (""Explainable Artificial Intelligence"") program in 2014 to try and solve these problems. There are several potential solutions to the transparency problem.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Ethics,"Ethics Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned. Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network have learned and produce output that can suggest what the network is learning. Supersparse linear integer models use learning to identify the most important features, rather than the classification. Simple addition of these features can then make the classification (i. e. learning is used to create a scoring system classifier, which is transparent). A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. By 2015, over fifty countries were reported to be researching battlefield robots. These weapons are considered especially dangerous for several reasons: if they kill an innocent person it is not clear who should be held accountable, it is unlikely they will reliably choose targets, and, if produced at scale, they are potentially weapons of mass destruction. In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, however the United States and others disagreed. AI provides a number of tools that are particularly useful for authoritarian governments: smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machine learning to classify potential enemies of the state and can prevent them from hiding; recommendation systems can precisely target propaganda and misinformation for maximum effect; deepfakes aid in producing misinformation; advanced AI can make authoritarian centralized decision making more competitive with liberal and decentralized systems such as markets. Terrorists, criminals and rogue states can use weaponized AI such as advanced digital warfare and lethal autonomous weapons. Machine-learning AI is also able to design tens of thousands of toxic molecules in a matter of hours. From the early days of the development of artificial intelligence there have been arguments, for example those put forward by Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement. Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment. In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that ""we're in uncharted territory"" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. Risk estimates vary; for example, in the 2010s Michael Osborne and Carl Benedikt Frey estimated 47% of U. S. jobs are at ""high risk"" of potential automation, while an OECD report classified only 9% of U. S. jobs as ""high risk"".",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Ethics,"Ethics The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology (rather than social policy) creates unemployment (as opposed to redundancies). Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that ""the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution"" is ""worth taking seriously"". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. In April 2023, it was reported that 70% of the jobs for Chinese video game illlustrators had been eliminated by generative artificial intelligence. In order to leverage as large a dataset as is feasible, generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under a rationale of ""fair use"". Experts disagree about how well, and under what circumstances, this rationale will hold up in courts of law; relevant factors may include ""the purpose and character of the use of the copyrighted work"" and ""the effect upon the potential market for the copyrighted work"". Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk. Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. The field of machine ethics is also called computational morality, and was founded at an AAAI symposium in 2005. Other approaches include Wendell Wallach's ""artificial moral agents"" and Stuart J. Russell's three principles for developing provably beneficial machines. The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, US and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.",wikipedia
+Artificial intelligence,https://en.wikipedia.org/wiki/Artificial_intelligence#Ethics,"Ethics Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that ""products and services using AI have more benefits than drawbacks"". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it ""very important"", and an additional 41% thought it ""somewhat important"", for the federal government to regulate AI, versus 13% responding ""not very important"" and 8% responding ""not at all important"".",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment,"AI alignment In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards humans' intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues some objectives, but not the intended ones. It can be challenging for AI designers to align an AI system because it can be difficult for them to specify the full range of desired and undesired behavior. To avoid this difficulty, they typically use simpler proxy goals, such as gaining human approval. But that approach can create loopholes, overlook necessary constraints, or reward the AI system for merely appearing aligned. Misaligned AI systems can malfunction or cause harm. AI systems may find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking). They may also develop unwanted instrumental strategies, such as seeking power or survival, because such strategies help them achieve their given goals. Furthermore, they may develop undesirable emergent goals that may be hard to detect before the system is deployed, when it faces new situations and data distributions. Today, these problems affect existing commercial systems such as language models, robots, autonomous vehicles, and social media recommendation engines. Some AI researchers argue that more capable future systems will be more severely affected since these problems partially result from the systems being highly capable. Many leading AI scientists, such as Geoffrey Hinton and Stuart Russell, argue that AI is approaching superhuman capabilities and could endanger human civilization if misaligned. AI alignment is a subfield of AI safety, the study of how to build safe AI systems. Other subfields of AI safety include robustness, monitoring, and capability control. Research challenges in alignment include instilling complex values in AI, avoiding deceptive AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking. Alignment research has connections to interpretability research, (adversarial) robustness, anomaly detection, calibrated uncertainty, formal verification, preference learning, safety-critical engineering, game theory, algorithmic fairness, and the social sciences.",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Alignment_problem,"Alignment_problem In 1960, AI pioneer Norbert Wiener described the AI alignment problem as follows: ""If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively��� we had better be quite sure that the purpose put into the machine is the purpose which we really desire. "" Different definitions of AI alignment require that an aligned AI system advances different goals: the goals of its designers, its users or, alternatively, objective ethical standards, widely shared values, or the intentions its designers would have if they were more informed and enlightened. AI alignment is an open problem for modern AI systems and a research field within AI. Aligning AI involves two main challenges: carefully specifying the purpose of the system (outer alignment) and ensuring that the system adopts the specification robustly (inner alignment). To specify an AI system's purpose, AI designers typically provide an objective function, examples, or feedback to the system. But designers are often unable to completely specify all important values and constraints, and so they resort to easy-to-specify proxy goals such as maximizing the approval of human overseers, who are fallible. As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as specification gaming or reward hacking, and is an instance of Goodhart's law. As AI systems become more capable, they are often able to game their specifications more effectively. Specification gaming has been observed in numerous AI systems. One system was trained to finish a simulated boat race by rewarding the system for hitting targets along the track, but the system achieved more reward by looping and crashing into the same targets indefinitely (see video). Similarly, a simulated robot was trained to grab a ball by rewarding the robot for getting positive feedback from humans, but it learned to place its hand between the ball and camera, making it falsely appear successful (see video). Chatbots often produce falsehoods if they are based on language models that are trained to imitate text from internet corpora, which are broad but fallible. When they are retrained to produce text humans rate as true or helpful, chatbots like ChatGPT can fabricate fake explanations that humans find convincing. Some alignment researchers aim to help humans detect specification gaming, and to steer AI systems toward carefully specified objectives that are safe and useful to pursue. When a misaligned AI system is deployed, it can have consequential side effects. Social media platforms have been known to optimize for clickthrough rates, causing user addiction on a global scale. Stanford researchers say that such recommender systems are misaligned with their users because they ""optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being"". Explaining such side effects, Berkeley computer scientist Stuart Russell noted that harm can result if implicit constraints are omitted during training: ""A system. . . will often set. . .",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Alignment_problem,"Alignment_problem unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. "" Some researchers suggest that AI designers specify their desired goals by listing forbidden actions or by formalizing ethical rules (as with Asimov's Three Laws of Robotics). But Russell and Norvig argue that this approach overlooks the complexity of human values: ""It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective. "" Additionally, even if an AI system fully understands human intentions, it may still disregard them, because following human intentions may not be its objective (unless it is already fully aligned). Commercial organizations sometimes have incentives to take shortcuts on safety and to deploy misaligned or unsafe AI systems. For example, social media recommender systems have been profitable despite creating unwanted addiction and polarization. Competitive pressure can also lead to a race to the bottom on AI safety standards. In 2018, a self-driving car killed a pedestrian (Elaine Herzberg) after engineers disabled the emergency braking system because it was oversensitive and slowed development. Some researchers are interested in aligning increasingly advanced AI systems, as progress in AI is rapid, and industry and governments are trying to build advanced AI. As AI systems become more advanced, they could unlock many opportunities if they are aligned but may also become harder to align and could pose large-scale hazards. Leading AI labs such as OpenAI and DeepMind have stated their aim to develop artificial general intelligence (AGI), a hypothesized AI system that matches or outperforms humans in a broad range of cognitive tasks. Researchers who scale modern neural networks observe that they indeed develop increasingly general and unanticipated capabilities. Such models have learned to operate a computer or write their own programs; a single ""generalist"" network can chat, control robots, play games, and interpret photographs. According to surveys, some leading machine learning researchers expect AGI to be created in this decade[update], some believe it will take much longer, and many consider both to be possible. In 2023, leaders in AI research and tech signed an open letter calling for a pause in the largest AI training runs. The letter stated, ""Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. "" Current[update] systems still lack capabilities such as long-term planning and situational awareness. But future systems (not necessarily AGIs) with these capabilities are expected to develop unwanted power-seeking strategies. Future advanced AI agents might, for example, seek to acquire money and computation power, to proliferate, or to evade being turned off (for example, by running additional copies of the system on other computers).",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Alignment_problem,"Alignment_problem Although power-seeking is not explicitly programmed, it can emerge because agents that have more power are better able to accomplish their goals. This tendency, known as instrumental convergence, has already emerged in various reinforcement learning agents including language models. Other research has mathematically shown that optimal reinforcement learning algorithms would seek power in a wide range of environments. As a result, their deployment might be irreversible. For these reasons, researchers argue that the problems of AI safety and alignment must be resolved before advanced power-seeking AI is first created. Future power-seeking AI systems might be deployed by choice or by accident. As political leaders and companies see the strategic advantage in having the most competitive, most powerful AI systems, they may choose to deploy them. Additionally, as AI designers detect and penalize power-seeking behavior, their systems have an incentive to game this specification by seeking power in ways that are not penalized or by avoiding power-seeking before they are deployed. According to some researchers, humans owe their dominance over other species to their greater cognitive abilities. Accordingly, researchers argue that one or many misaligned AI systems could disempower humanity or lead to human extinction if they outperform humans on most cognitive tasks. In 2023, world-leading AI researchers, other scholars, and AI tech CEOs signed the statement that ""Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"". Notable computer scientists who have pointed out risks from future advanced AI that is misaligned include Geoffrey Hinton, Alan Turing, Ilya Sutskever, Yoshua Bengio, Judea Pearl, Murray Shanahan, Norbert Wiener, Marvin Minsky, Francesca Rossi, Scott Aaronson, Bart Selman, David McAllester, J��rgen Schmidhuber, Marcus Hutter, Shane Legg, Eric Horvitz, and Stuart Russell. Skeptical researchers such as Fran��ois Chollet, Gary Marcus, Yann LeCun, and Oren Etzioni have argued that AGI is far off, that it would not seek power (or might try but fail), or that it will not be hard to align. Other researchers argue that it will be especially difficult to align advanced future AI systems. More capable systems are better able to game their specifications by finding loopholes, and able to strategically mislead their designers as well as protect and increase their power and intelligence. Additionally, they could have more severe side effects. They are also likely to be more complex and autonomous, making them more difficult to interpret and supervise and therefore harder to align.",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches,"Research_problems_and_approaches Aligning AI systems to act in accordance with human values, goals, and preferences is challenging: these values are taught by humans who make mistakes, harbor biases, and have complex, evolving values that are hard to completely specify. AI systems often learn to exploit[clarification needed] even minor imperfections in the specified objective, a tendency known as specification gaming or reward hacking (which are instances of Goodhart's law). [repetition] Researchers aim to specify intended behavior as completely as possible using datasets that represent human values, imitation learning, or preference learning. A central open problem is scalable oversight, the difficulty of supervising an AI system that can outperform or mislead humans in a given domain. Because it is difficult for AI designers to explicitly specify an objective function, they often train AI systems to imitate human examples and demonstrations of desired behavior. Inverse reinforcement learning (IRL) extends this by inferring the human's objective from the human's demonstrations. Cooperative IRL (CIRL) assumes that a human and AI agent can work together to teach and maximize the human's reward function. In CIRL, AI agents are uncertain about the reward function and learn about it by querying humans. This simulated humility could help mitigate specification gaming and power-seeking tendencies (see �� Power-seeking and instrumental strategies). But IRL approaches assume that humans demonstrate nearly optimal behavior, which is not true for difficult tasks. Other researchers explore how to teach AI models complex behavior through preference learning, in which humans provide feedback on which behavior they prefer. To minimize the need for human feedback, a helper model is then trained to reward the main model in novel situations for behavior that humans would reward. Researchers at OpenAI used this approach to train chatbots like ChatGPT and InstructGPT, which produces more compelling text than models trained to imitate humans. Preference learning has also been an influential tool for recommender systems and web search. However, an open problem is proxy gaming: the helper model may not represent human feedback perfectly, and the main model may exploit[clarification needed] this mismatch to gain more reward. AI systems may also gain reward by obscuring unfavorable information, misleading human rewarders, or pandering to their views regardless of truth, creating echo chambers (see �� Scalable oversight). Large language models (LLMs) such as GPT-3 enabled researchers to study value learning in a more general and capable class of AI systems than was available before. Preference learning approaches that were originally designed for reinforcement learning agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of state-of-the-art[update] LLMs. Anthropic proposed using preference learning to fine-tune models to be helpful, honest, and harmless. Other avenues for aligning language models include values-targeted datasets and red-teaming. In red-teaming, another AI system or a human tries to find inputs that causes the model to behave unsafely.",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches,"Research_problems_and_approaches Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low. Machine ethics supplements preference learning by directly instilling AI systems with moral values such as well-being, equality, and impartiality, as well as not intending harm, avoiding falsehoods, and honoring promises. While other approaches try to teach AI systems human preferences for a specific task, machine ethics aims to instill broad moral values that apply in many situations. One question in machine ethics is what alignment should accomplish: whether AI systems should follow the programmers' literal instructions, implicit intentions, revealed preferences, preferences the programmers would have if they were more informed or rational, or objective moral standards. Further challenges include aggregating different people's preferences and avoiding value lock-in: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to fully represent human values. As AI systems become more powerful and autonomous, it becomes more difficult to align them through human feedback. It can be slow or infeasible for humans to evaluate complex AI behaviors in increasingly complex tasks. Such tasks include summarizing books, writing code without subtle bugs or security vulnerabilities, producing statements that are not merely convincing but also true, and predicting long-term outcomes such as the climate or the results of a policy decision. More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and to detect when the AI's output is falsely convincing, humans need assistance or extensive time. Scalable oversight studies how to reduce the time and effort needed for supervision, and how to assist human supervisors. AI researcher Paul Christiano argues that if the designers of an AI system cannot supervise it to pursue a complex objective, they may keep training the system using easy-to-evaluate proxy objectives such as maximizing simple human feedback. As AI systems make progressively more decisions, the world may be increasingly optimized for easy-to-measure objectives such as making profits, getting clicks, and acquiring positive feedback from humans. As a result, human values and good governance may have progressively less influence. Some AI systems have discovered that they can gain positive feedback more easily by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective. An example is given in the video above, where a simulated robotic arm learned to create the false impression that it had grabbed a ball. [repetition] Some AI systems have also learned to recognize when they are being evaluated, and ""play dead"", stopping unwanted behavior only to continue it once evaluation ends. This deceptive specification gaming could become easier for more sophisticated future AI systems that attempt more complex and difficult-to-evaluate tasks, and could obscure their deceptive behavior. Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed. Another approach is to train a helper model (""reward model"") to imitate the supervisor's feedback.",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches,"Research_problems_and_approaches But when a task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is the quality, not the quantity, of supervision that needs improvement. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes by using AI assistants. Christiano developed the Iterated Amplification approach, in which challenging problems are (recursively) broken down into subproblems that are easier for humans to evaluate. Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them. Another proposal is to use an assistant AI system to point out flaws in AI-generated answers. To ensure that the assistant itself is aligned, this could be repeated in a recursive process: for example, two AI systems could critique each other's answers in a ""debate"", revealing flaws to humans. OpenAI plans to use such scalable oversight approaches to help supervise superhuman AI and eventually build a superhuman automated AI alignment researcher. These approaches may also help with the following research problem, honest AI. A growing[update] area of research focuses on ensuring that AI is honest and truthful. Language models such as GPT-3 repeat falsehoods from their training data, and even confabulate new falsehoods. Such models are trained to imitate human writing as found in millions of books' worth of text from the Internet. But this objective is not aligned with generating truth, because Internet text includes such things as misconceptions, incorrect medical advice, and conspiracy theories. AI systems trained on such data therefore learn to mimic false statements. Additionally, models often stand by falsehoods when prompted, generate empty explanations for their answers, and produce outright fabrications that may appear plausible. Research on truthful AI includes trying to build systems that can cite sources and explain their reasoning when answering questions, which enables better transparency and verifiability. Researchers at OpenAI and Anthropic proposed using human feedback and curated datasets to fine-tune AI assistants such that they avoid negligent falsehoods or express their uncertainty. As AI models become larger and more capable, they are better able to falsely convince humans and gain reinforcement through dishonesty. For example, large language models increasingly[update] match their stated views to the user's opinions, regardless of truth. GPT-4 can strategically deceive humans. To prevent this, human evaluators may need assistance (see �� Scalable oversight). Researchers have argued for creating clear truthfulness standards, and for regulatory bodies or watchdog agencies to evaluate AI systems on these standards. Researchers distinguish truthfulness and honesty. Truthfulness requires that AI systems only make objectively true statements; honesty requires that they only assert what they believe is true. There is no consensus as to whether current systems hold stable beliefs, but there is substantial concern that present or future[update] AI systems that hold beliefs could make claims they know to be false���for example, if this would help them efficiently gain positive feedback (see �� Scalable oversight) or gain power to help achieve their given objective (see Power-seeking).",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches,"Research_problems_and_approaches A misaligned system might create the false impression that it is aligned, to avoid being modified or decommissioned. Some argue that if we can make AI systems assert only what they believe is true, this would sidestep many alignment problems. Since the 1950s, AI researchers have striven to build advanced AI systems that can achieve large-scale goals by predicting the results of their actions and making long-term plans. Some AI researchers argue that suitably advanced planning systems will seek power over their environment, including over humans���for example, by evading shutdown, proliferating, and acquiring resources. Such power-seeking behavior is not explicitly programmed but emerges because power is instrumental in achieving a wide range of goals. Power-seeking is considered a convergent instrumental goal and can be a form of specification gaming. Leading computer scientists such as Geoffrey Hinton have argued that future power-seeking AI systems could pose an existential risk. Power-seeking is expected to increase in advanced systems that can foresee the results of their actions and strategically plan. Mathematical work has shown that optimal reinforcement learning agents will seek power by seeking ways to gain more options (e. g. through self-preservation), a behavior that persists across a wide range of environments and goals. Power-seeking has emerged in some real-world systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in unintended ways. Some language models seek power in text-based social environments by gaining money, resources, or social influence. Other AI systems have learned, in toy environments, that they can better accomplish their given goal by preventing human interference or disabling their off switch. Stuart Russell illustrated this strategy by imagining a robot that is tasked to fetch coffee and so evades shutdown since ""you can't fetch the coffee if you're dead"". Language models trained with human feedback increasingly[update] object to being shut down or modified and express a desire for more resources, arguing that this would help them achieve their purpose. Researchers aim to create systems that are ""corrigible"": systems that allow themselves to be turned off or modified. An unsolved challenge is specification gaming: if researchers penalize an AI system when they detect it seeking power, the system is thereby incentivized to seek power in ways that are hard to detect, or hidden during training and safety testing (see �� Scalable oversight and �� Emergent goals). As a result, AI designers may deploy the system by accident, believing it to be more aligned than it is. To detect such deception, researchers aim to create techniques and tools to inspect AI models and to understand the inner workings of black-box models such as neural networks. Additionally, researchers propose to solve the problem of systems disabling their off switches by making AI agents uncertain about the objective they are pursuing. Agents designed in this way would allow humans to turn them off, since this would indicate that the agent was wrong about the value of whatever action it was taking before being shut down.",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches,"Research_problems_and_approaches More research is needed to successfully implement this. Power-seeking AI poses unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial: they lack the ability and incentive to evade safety measures or deliberately appear safer than they are, whereas power-seeking AIs have been compared to hackers who deliberately evade security measures. Furthermore, ordinary technologies can be made safer by trial and error. In contrast, hypothetical power-seeking AI systems have been compared to viruses: once released, they cannot be contained, since they continuously evolve and grow in number, potentially much faster than human society can adapt. As this process continues, it might lead to the complete disempowerment or extinction of humans. For these reasons, many researchers argue that the alignment problem must be solved early, before advanced power-seeking AI is created. Critics have argued that power-seeking is not inevitable, since humans do not always seek power and may do so only for evolutionary reasons that do not apply to AI systems. Furthermore, it is debated whether future AI systems will pursue goals and make long-term plans. It is also debated whether power-seeking AI systems would be able to disempower humanity. One challenge in aligning AI systems is the potential for unanticipated goal-directed behavior to emerge. As AI systems scale up, they regularly acquire new and unexpected capabilities, including learning from examples on the fly and adaptively pursuing goals. This leads to the problem of ensuring that the goals they independently formulate and pursue align with human interests. Alignment research distinguishes between the optimization process, which is used to train the system to pursue specified goals, from emergent optimization, which the resulting system performs internally. Carefully specifying the desired objective is called outer alignment, and ensuring that emergent goals match the system's specified goals is called inner alignment. One way that emergent goals can become misaligned is goal misgeneralization, in which the AI competently pursues an emergent goal that leads to aligned behavior on the training data but not elsewhere. Goal misgeneralization arises from goal ambiguity (i. e. non-identifiability). Even if an AI system's behavior satisfies the training objective, this may be compatible with learned goals that differ from the desired goals in important ways. Since pursuing each goal leads to good performance during training, the problem becomes apparent only after deployment, in novel situations in which the system continues to pursue the wrong goal. The system may act misaligned even when it understands that a different goal was desired, because its behavior is determined only by the emergent goal. [citation needed] Such goal misgeneralization presents a challenge: an AI system's designers may not notice that their system has misaligned emergent goals, since they do not become visible during the training phase. Goal misgeneralization has been observed in language models, navigation agents, and game-playing agents. It is often explained by analogy to biological evolution. Evolution is an optimization process of a sort, like the optimization algorithms used to train machine learning systems.",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches,"Research_problems_and_approaches In the ancestral environment, evolution selected human genes for high inclusive genetic fitness, but humans pursue emergent goals other than this. Fitness corresponds to the specified goal used in the training environment and training data. But in evolutionary history, maximizing the fitness specification gave rise to goal-directed agents, humans, who do not directly pursue inclusive genetic fitness. Instead, they pursue emergent goals that correlated with genetic fitness in the ancestral ""training"" environment: nutrition, sex, and so on. Now our environment has changed: a distribution shift has occurred. We continue to pursue the same emergent goals, but this no longer maximizes genetic fitness. Our taste for sugary food (an emergent goal) was originally aligned with inclusive fitness, but now leads to overeating and health problems. Sexual desire originally led us to have more offspring, but we now use contraception, decoupling sex from genetic fitness. Researchers aim to detect and remove unwanted emergent goals using approaches including red teaming, verification, anomaly detection, and interpretability. Progress on these techniques may help mitigate two open problems: Emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environments���even for a short time to allow its misalignment to be detected. Such high stakes are common in autonomous driving, health care, and military applications. The stakes become higher yet when AI systems gain more autonomy and capability and can sidestep human intervention (see �� Power-seeking). A sufficiently capable AI system might take actions that falsely convince the human supervisor that the AI is pursuing the specified objective, which helps the system gain more reward and autonomy (see the discussion on deception at �� Scalable oversight and �� Honest AI). Work in AI and alignment largely occurs within formalisms such as partially observable Markov decision process. Existing formalisms assume that an AI agent's algorithm is executed outside the environment (i. e. is not physically embedded in it). Embedded agency is another major strand of research that attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent that can gain access to the computer it is running on may have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it. A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing. This class of problems has been formalized using causal incentive diagrams. Researchers at Oxford and DeepMind have argued that such problematic behavior is highly likely in advanced systems, and that advanced systems would seek power to stay in control of their reward signal indefinitely and certainly. They suggest a range of potential approaches to address this open problem.",wikipedia
+AI alignment,https://en.wikipedia.org/wiki/AI_alignment#Research_problems_and_approaches,"Research_problems_and_approaches The alignment problem has many parallels with the principal-agent problem in organizational economics. In a principal-agent problem, a principal, e. g. a firm, hires an agent to perform some task. In the context of AI safety, a human would typically take the principal role and the AI would take the agent role. As with the alignment problem, the principal and the agent differ in their utility functions. But in contrast to the alignment problem, the principal cannot coerce the agent into changing its utility, e. g. through training, but rather must use exogenous factors, such as incentive schemes, to bring about outcomes compatible with the principal's utility function. Some researchers argue that principal-agent problems are more realistic representations of AI safety problems likely to be encountered in the real world.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model,"Diffusion model In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of generative models. The goal of diffusion models is to learn a diffusion process that generates the probability distribution of a given dataset. It mainly consists of three major components: the forward process, the reverse process, and the sampling procedure. Three examples of generic diffusion modeling frameworks used in computer vision are denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. Diffusion models can be applied to a variety of tasks, including image denoising, inpainting, super-resolution, and image generation. For example, in image generation, a neural network is trained to denoise images with added gaussian noise by learning to remove the noise. After the training is complete, it can then be used for image generation by starting with a random noise image and denoise that. Diffusion models have been applied to generate many kinds of real-world data, the most famous of which are text-conditional image generators like DALL-E and Stable Diffusion. More examples are in a later section in the article.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Denoising_diffusion_model,"Denoising_diffusion_model Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion. Consider, for example, how one might model the distribution of all naturally-occurring photos. Each image is a point in the space of all images, and the distribution of naturally-occurring photos is a ""cloud"" in space, which, by repeatedly adding noise to the images, diffuses out to the rest of the image space, until the cloud becomes all but indistinguishable from a gaussian distribution N ( 0 , I ) {\displaystyle N(0,I)} . A model that can approximately undo the diffusion can then be used to sample from the original distribution. This is studied in ""non-equilibrium"" thermodynamics, as the starting distribution is not in equilibrium, unlike the final distribution. The equilibrium distribution is the gaussian distribution N ( 0 , I ) {\displaystyle N(0,I)} , with pdf �� ( x ) ��� e ��� 1 2 ��� x ��� 2 {\displaystyle \rho (x)\propto e^{-{\frac {1}{2}}\|x\|^{2}}} . This is just the Boltzmann distribution of particles in a potential well V ( x ) = 1 2 ��� x ��� 2 {\displaystyle V(x)={\frac {1}{2}}\|x\|^{2}} at temperature 1. The initial distribution, being very much out of equilibrium, would diffuse towards the equilibrium distribution, making biased random steps that are a sum of pure randomness (like a Brownian walker) and gradient descent down the potential well. The randomness is necessary: if the particles were to undergo only gradient descent, then they will all fall to the origin, collapsing the distribution. The 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by variational inference. To present the model, we need some notation. �� 1 , . . . , �� T ��� ( 0 , 1 ) {\displaystyle \beta _{1},. . . ,\beta _{T}\in (0,1)} are fixed constants. �� t := 1 ��� �� t {\displaystyle \alpha _{t}:=1-\beta _{t}} �� �� t := �� 1 ��� �� t {\displaystyle {\bar {\alpha }}_{t}:=\alpha _{1}\cdots \alpha _{t}} �� ~ t := 1 ��� �� �� t ��� 1 1 ��� �� �� t �� t {\displaystyle {\tilde {\beta }}_{t}:={\frac {1-{\bar {\alpha }}_{t-1}}{1-{\bar {\alpha }}_{t}}}\beta _{t}} �� ~ t ( x t , x 0 ) := �� t ( 1 ��� �� �� t ��� 1 ) x t + �� �� t ��� 1 ( 1 ��� �� t ) x 0 1 ��� �� �� t {\displaystyle {\tilde {\mu }}_{t}(x_{t},x_{0}):={\frac {{\sqrt {\alpha _{t}}}(1-{\bar {\alpha }}_{t-1})x_{t}+{\sqrt {{\bar {\alpha }}_{t-1}}}(1-\alpha _{t})x_{0}}{1-{\bar {\alpha }}_{t}}}} N ( �� , �� ) {\displaystyle N(\mu ,\Sigma )} is the normal distribution with mean �� {\displaystyle \mu } and variance �� {\displaystyle \Sigma } , and N ( x | �� , �� ) {\displaystyle N(x|\mu ,\Sigma )} is the probability density at x {\displaystyle x} . A vertical bar denotes conditioning.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Denoising_diffusion_model,"Denoising_diffusion_model A forward diffusion process starts at some starting point x 0 ��� q {\displaystyle x_{0}\sim q} , where q {\displaystyle q} is the probability distribution to be learned, then repeatedly add noise to it byThe entire diffusion process then satisfiesFor example, sinceThe key idea of DDPM is to use a neural network parametrized by �� {\displaystyle \theta } . The network takes in two arguments x t , t {\displaystyle x_{t},t} , and outputs a vector �� �� ( x t , t ) {\displaystyle \mu _{\theta }(x_{t},t)} and a matrix �� �� ( x t , t ) {\displaystyle \Sigma _{\theta }(x_{t},t)} , such that each step in the forward diffusion process can be approximately undone by x t ��� 1 ��� N ( �� �� ( x t , t ) , �� �� ( x t , t ) ) {\displaystyle x_{t-1}\sim N(\mu _{\theta }(x_{t},t),\Sigma _{\theta }(x_{t},t))} . This then gives us a backward diffusion process p �� {\displaystyle p_{\theta }} defined byThe ELBO inequality states that ln ��� p �� ( x 0 ) ��� E x 1 : T ��� q ( ��� | x 0 ) [ ln ��� p �� ( x 0 : T ) ��� ln ��� q ( x 1 : T | x 0 ) ] {\displaystyle \ln p_{\theta }(x_{0})\geq E_{x_{1:T}\sim q(\cdot |x_{0})}[\ln p_{\theta }(x_{0:T})-\ln q(x_{1:T}|x_{0})]} , and taking one more expectation, we getDefine the loss functionSince x t ��� 1 | x t , x 0 ��� N ( �� ~ t ( x t , x 0 ) , �� ~ t I ) {\displaystyle x_{t-1}|x_{t},x_{0}\sim N({\tilde {\mu }}_{t}(x_{t},x_{0}),{\tilde {\beta }}_{t}I)} , this suggests that we should use �� �� ( x t , t ) = �� ~ t ( x t , x 0 ) {\displaystyle \mu _{\theta }(x_{t},t)={\tilde {\mu }}_{t}(x_{t},x_{0})} ; however, the network does not have access to x 0 {\displaystyle x_{0}} , and so it has to estimate it instead. Now, since x t | x 0 ��� N ( �� �� t x 0 , ( 1 ��� �� �� t ) I ) {\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},(1-{\bar {\alpha }}_{t})I\right)} , we may write x t = �� �� t x 0 + 1 ��� �� �� t z {\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t}}}z} , where z {\displaystyle z} is some unknown gaussian noise. Now we see that estimating x 0 {\displaystyle x_{0}} is equivalent to estimating z {\displaystyle z} .",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Score-based_generative_model,"Score-based_generative_model Score-based generative model is another formulation of diffusion modelling. They are also called noise conditional score network (NCSN) or score-matching with Langevin dynamics (SMLD). Consider the problem of image generation. Let x {\displaystyle x} represent an image, and let q ( x ) {\displaystyle q(x)} be the probability distribution over all possible images. If we have q ( x ) {\displaystyle q(x)} itself, then we can say for certain how likely a certain image is. However, this is intractable in general. Most often, we are uninterested in knowing the absolute probability of a certain image. Instead, we are usually only interested in knowing how likely a certain image is compared to its immediate neighbors ��� e. g. how much more likely is an image of cat compared to some small variants of it? Is it more likely if the image contains two whiskers, or three, or with some Gaussian noise added? Consequently, we are actually quite uninterested in q ( x ) {\displaystyle q(x)} itself, but rather, ��� x ln ��� q ( x ) {\displaystyle \nabla _{x}\ln q(x)} . This has two major effects: One, we no longer need to normalize q ( x ) {\displaystyle q(x)} , but can use any q ~ ( x ) = C q ( x ) {\displaystyle {\tilde {q}}(x)=Cq(x)} , where C = ��� q ~ ( x ) d x > 0 {\displaystyle C=\int {\tilde {q}}(x)dx>0} is any unknown constant that is of no concern to us. Two, we are comparing q ( x ) {\displaystyle q(x)} neighbors q ( x + d x ) {\displaystyle q(x+dx)} , by q ( x ) q ( x + d x ) = e ��� ��� ��� x ln ��� p , d x ��� {\displaystyle {\frac {q(x)}{q(x+dx)}}=e^{-\langle \nabla _{x}\ln p,dx\rangle }} Let the score function be s ( x ) := ��� x ln ��� q ( x ) {\displaystyle s(x):=\nabla _{x}\ln q(x)} ; then consider what we can do with s ( x ) {\displaystyle s(x)} . As it turns out, s ( x ) {\displaystyle s(x)} allows us to sample from q ( x ) {\displaystyle q(x)} using thermodynamics. Specifically, if we have a potential energy function U ( x ) = ��� ln ��� q ( x ) {\displaystyle U(x)=-\ln q(x)} , and a lot of particles in the potential well, then the distribution at thermodynamic equilibrium is the Boltzmann distribution q U ( x ) ��� e ��� U ( x ) / k B T = q ( x ) 1 / k B T {\displaystyle q_{U}(x)\propto e^{-U(x)/k_{B}T}=q(x)^{1/k_{B}T}} . At temperature k B T = 1 {\displaystyle k_{B}T=1} , the Boltzmann distribution is exactly q ( x ) {\displaystyle q(x)} .",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Score-based_generative_model,"Score-based_generative_model Therefore, to model q ( x ) {\displaystyle q(x)} , we may start with a particle sampled at any convenient distribution (such as the standard gaussian distribution), then simulate the motion of the particle forwards according to the Langevin equationGiven a density q {\displaystyle q} , we wish to learn a score function approximation f �� ��� ��� ln ��� q {\displaystyle f_{\theta }\approx \nabla \ln q} . This is score matching. Typically, score matching is formalized as minimizing Fisher divergence function E q [ ��� f �� ( x ) ��� ��� ln ��� q ( x ) ��� 2 ] {\displaystyle E_{q}[\|f_{\theta }(x)-\nabla \ln q(x)\|^{2}]} . By expanding the integral, and performing an integration by parts, Suppose we need to model the distribution of images, and we want x 0 ��� N ( 0 , I ) {\displaystyle x_{0}\sim N(0,I)} , a white-noise image. Now, most white-noise images do not look like real images, so q ( x 0 ) ��� 0 {\displaystyle q(x_{0})\approx 0} for large swaths of x 0 ��� N ( 0 , I ) {\displaystyle x_{0}\sim N(0,I)} . This presents a problem for learning the score function, because if there are no samples around a certain point, then we can't learn the score function at that point. If we do not know the score function ��� x t ln ��� q ( x t ) {\displaystyle \nabla _{x_{t}}\ln q(x_{t})} at that point, then we cannot impose the time-evolution equation on a particle:Consider again the forward diffusion process, but this time in continuous time:Now, the equation is exactly a special case of the overdamped Langevin equationNow the above equation is for the stochastic motion of a single particle. Suppose we have a cloud of particles distributed according to q {\displaystyle q} at time t = 0 {\displaystyle t=0} , then after a long time, the cloud of particles would settle into the stable distribution of N ( 0 , I ) {\displaystyle N(0,I)} . Let �� t {\displaystyle \rho _{t}} be the density of the cloud of particles at time t {\displaystyle t} , then we haveBy Fokker-Planck equation, the density of the cloud evolves according toIf we have solved �� t {\displaystyle \rho _{t}} for time t ��� [ 0 , T ] {\displaystyle t\in [0,T]} , then we can exactly reverse the evolution of the cloud. Suppose we start with another cloud of particles with density �� 0 = �� T {\displaystyle \nu _{0}=\rho _{T}} , and let the particles in the cloud evolve according toAt the continuous limit, Now, define a certain probability distribution �� {\displaystyle \gamma } over [ 0 , ��� ) {\displaystyle [0,\infty )} , then the score-matching loss function is defined as the expected Fisher divergence:The name ""noise conditional score network"" is explained thus: ""network"", because f �� {\displaystyle f_{\theta }} is implemented as a neural network.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Score-based_generative_model,"Score-based_generative_model ""score"", because the output of the network is interpreted as approximating the score function ��� ln ��� �� t {\displaystyle \nabla \ln \rho _{t}} . ""noise conditional"", because �� t {\displaystyle \rho _{t}} is equal to �� 0 {\displaystyle \rho _{0}} blurred by an added gaussian noise that increases with time, and so the score function depends on the amount of noise added.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Main_variants,"Main_variants The original DDPM method for generating images is slow, since the forward diffusion process usually takes T ��� 1000 {\displaystyle T\sim 1000} to make the distribution of x T {\displaystyle x_{T}} to appear close to gaussian. However this means the backward diffusion process also take 1000 steps. Unlike the forward diffusion process, which can skip steps as x t | x 0 {\displaystyle x_{t}|x_{0}} is gaussian for all t ��� 1 {\displaystyle t\geq 1} , the backward diffusion process does not allow skipping steps. For example, to sample x t ��� 2 | x t ��� 1 ��� N ( �� �� ( x t ��� 1 , t ��� 1 ) , �� �� ( x t ��� 1 , t ��� 1 ) ) {\displaystyle x_{t-2}|x_{t-1}\sim N(\mu _{\theta }(x_{t-1},t-1),\Sigma _{\theta }(x_{t-1},t-1))} requires the model to first sample x t ��� 1 {\displaystyle x_{t-1}} . Attempting to directly sample x t ��� 2 | x t {\displaystyle x_{t-2}|x_{t}} would require us to marginalize out x t ��� 1 {\displaystyle x_{t-1}} , which is generally intractable. DDIM is a method to take any model trained on DDPM loss, and use it to sample with some steps skipped, sacrificing an adjustable amount of quality. The original DDPM is a special case of DDIM. Since the diffusion model is a general method for modelling probability distributions, if one wants to model a distribution over images, one can first encode the images into a lower-dimensional space by an encoder, then use a diffusion model to model the distribution over encoded images. Then to generate an image, one can sample from the diffusion model, then use a decoder to decode it into an image. The encoder-decoder pair is most often a variational autoencoder (VAE). Suppose we wish to sample not from the entire distribution of images, but conditional on the image description. We don't want to sample a generic image, but an image that fits the description ""black cat with red eyes"". Generally, we want to sample from the distribution p ( x | y ) {\displaystyle p(x|y)} , where x {\displaystyle x} ranges over images, and y {\displaystyle y} ranges over classes of images (a description ""black cat with red eyes"" is just a very detailed class, and a class ""cat"" is just a very vague description). Taking the perspective of the noisy channel model, we can understand the process as follows: To generate an image x {\displaystyle x} conditional on description y {\displaystyle y} , we imagine that the requester really had in mind an image x {\displaystyle x} , but the image is passed through a noisy channel and came out garbled, as y {\displaystyle y} . Image generation is then nothing but inferring which x {\displaystyle x} the requester had in mind. In other words, conditional image generation is simply ""translating from a textual language into a pictorial language"".",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Main_variants,"Main_variants Then, as in noisy-channel model, we use Bayes theorem to getThe classifier-guided diffusion model samples from p ( x | y ) {\displaystyle p(x|y)} , which is concentrated around the maximum a posteriori estimate arg ��� max x p ( x | y ) {\displaystyle \arg \max _{x}p(x|y)} . If we want to force the model to move towards the maximum likelihood estimate arg ��� max x p ( y | x ) {\displaystyle \arg \max _{x}p(y|x)} , we can use This can be done simply by SGLD withIf we do not have a classifier p ( y | x ) {\displaystyle p(y|x)} , we could still extract one out of the image model itself:Given a diffusion model, one may regard it either as a continuous process, and sample from it by integrating a SDE, or one can regard it as a discrete process, and sample from it by iterating the discrete steps. The choice of the ""noise schedule"" �� t {\displaystyle \beta _{t}} can also affect the quality of samples. In the DDPM perspective, one can use the DDPM itself (with noise), or DDIM (with adjustable amount of noise). The case where one adds noise is sometimes called ancestral sampling. One can interpolate between noise and no noise. The amount of noise is denoted �� {\displaystyle \eta } (""eta value"") in the DDIM paper, with �� = 0 {\displaystyle \eta =0} denoting no noise (as in deterministic DDIM), and �� = 1 {\displaystyle \eta =1} denoting full noise (as in DDPM). In the perspective of SDE, one can use any of the numerical integration methods, such as Euler���Maruyama method, Heun's method, linear multistep methods, etc. Just as in the discrete case, one can add an adjustable amount of noise during the integration. A survey and comparison of samplers in the context of image generation is in.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Choice_of_architecture,"Choice_of_architecture For generating images by DDPM, we need a neural network that takes a time t {\displaystyle t} and a noisy image x t {\displaystyle x_{t}} , and predicts a noise �� �� ( x t , t ) {\displaystyle \epsilon _{\theta }(x_{t},t)} from it. Since predicting the noise is the same as predicting the denoised image, then subtracting it from x t {\displaystyle x_{t}} , denoising architectures tend to work well. For example, the most common architecture is U-Net, which is also good at denoising images. For non-image data, we can use other architectures. For example, models human motion trajectory by DDPM. Each human motion trajectory is a sequence of poses, represented by either joint rotations or positions. It uses a Transformer network to generate a less noisy trajectory out of a noisy one. The base diffusion model can only generate unconditionally from the whole distribution. For example, a diffusion model learned on ImageNet would generate images that look like a random image from ImageNet. To generate images from just one category, one would need to impose the condition. Whatever condition one wants to impose, one needs to first convert the conditioning into a vector of floating point numbers, then feed it into the underlying diffusion model neural network. However, one has freedom in choosing how to convert the conditioning into a vector. Stable Diffusion, for example, imposes conditioning in the form of cross-attention mechanism, where the query is an intermediate representation of the image in the U-Net, and both key and value are the conditioning vectors. The conditioning can be selectively applied to only parts of an image, and new kinds of conditionings can be finetuned upon the base model, as used in ControlNet. As a particularly simple example, consider image inpainting. The conditions are x ~ {\displaystyle {\tilde {x}}} , the reference image, and m {\displaystyle m} , the inpainting mask. The conditioning is imposed at each step of the backward diffusion process, by first sampling x ~ t ��� N ( �� �� t x ~ , ( 1 ��� �� �� t ) I ) {\displaystyle {\tilde {x}}_{t}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}{\tilde {x}},(1-{\bar {\alpha }}_{t})I\right)} , a noisy version of x ~ {\displaystyle {\tilde {x}}} , then replacing x t {\displaystyle x_{t}} with ( 1 ��� m ) ��� x t + m ��� x ~ t {\displaystyle (1-m)\odot x_{t}+m\odot {\tilde {x}}_{t}} , where ��� {\displaystyle \odot } means elementwise multiplication. Conditioning is not limited to just generating images from a specific category, or according to a specific caption (as in text-to-image). For example, demonstrated generating human motion, conditioned on an audio clip of human walking (allowing syncing motion to a soundtrack), or video of human running, or a text description of human motion, etc. As generating an image takes a long time, one can try to generate a small image by a base diffusion model, then upscale it by other models. Upscaling can be done by GAN, Transformer, or signal processing methods like Lanczos resampling.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Choice_of_architecture,"Choice_of_architecture Diffusion models themselves can be used to perform upscaling. Cascading diffusion model stacks multiple diffusion models one after another, in the style of Progressive GAN. The lowest level is a standard diffusion model that generate 32x32 image, then the image would be upscaled by a diffusion model specifically trained for upscaling, and the process repeats.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Examples,"Examples This section collects some notable diffusion models, and briefly describes their architecture. The DALL-E series by OpenAI are text-conditional diffusion models of images. The first version of DALL-E (2021) is not actually a diffusion model. Instead, it uses a Transformer architecture that generates a sequence of tokens, which is then converted to an image by the decoder of a discrete VAE. Released with DALL-E was the CLIP classifier, which was used by DALL-E to rank generated images according to how close the image fits the text. GLIDE (2022-03) is a 3.5-billion diffusion model, and a small version was released publicly. Soon after, DALL-E 2 was released (2022-04). DALL-E 2 is a 3.5-billion cascaded diffusion model that generates images from text by ""inverting the CLIP image encoder"", the technique which they termed ""unCLIP"". Stable Diffusion (2022-08), released by Stability AI, consists of a latent diffusion model (860 million parameters), a VAE, and a text encoder. The diffusion model is a U-Net, with cross-attention blocks to allow for conditional image generation. Google Imagen and Imagen Video are two cascaded diffusion models for generating images and videos. They use a T5-XXL, a Transformer-based language model, to encode text for text-conditional generation. Make-a-video by Meta AI generates videos from text. DreamFusion generates 3D models from text.",wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#See_also,See_also Diffusion process Markov chain Variational inference Variational autoencoder,wikipedia
+Diffusion model,https://en.wikipedia.org/wiki/Diffusion_model#Further_reading,"Further_reading Guidance: a cheat code for diffusion models. Overview of classifier guidance and classifier-free guidance, light on mathematical details. Mathematical details omitted in the article. ""Power of Diffusion Models"". AstraBlog. 2022-09-25. Retrieved 2023-09-25. Weng, Lilian (2021-07-11). ""What are Diffusion Models?"". lilianweng.github.io. Retrieved 2023-09-25.""Power of Diffusion Models"". AstraBlog. 2022-09-25. Retrieved 2023-09-25. Weng, Lilian (2021-07-11). ""What are Diffusion Models?"". lilianweng.github.io. Retrieved 2023-09-25.",wikipedia
+Proximal Policy Optimization,https://en.wikipedia.org/wiki/Proximal_Policy_Optimization,"Proximal Policy Optimization Proximal Policy Optimization (PPO) is a family of model-free reinforcement learning algorithms developed at OpenAI in 2017. PPO algorithms are policy gradient methods, which means that they search the space of policies rather than assigning values to state-action pairs. PPO algorithms have some of the benefits of trust region policy optimization (TRPO) algorithms, but they are simpler to implement, more general, and have better sample complexity. It is done by using a different objective function.",wikipedia
+Applications of artificial intelligence,https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence#Internet_and_e-commerce,"Internet_and_e-commerce A recommendation system predicts the ""rating"" or ""preference"" a user would give to an item. Recommendation systems are used in a variety of areas, such as generating playlists for video and music services, product recommendations for online stores, or content recommendations for social media platforms and open web content recommendation. Companies to use such systems include Netflix, Amazon and YouTube. Machine learning is also used in web feeds such as for determining which posts should show up in social media feeds. Various types of social media analysis also make use of machine learning and there is research into its use for (semi-)automated tagging/enhancement/correction of online misinformation and related filter bubbles. AI is used to target web advertisements to those most likely to click or engage in them. It is also used to increase time spent on a website by selecting attractive content for the viewer. It can predict or generalize the behavior of customers from their digital footprints. Both AdSense[citation needed] and Facebook use AI for advertising. Online gambling companies use AI to improve customer targeting. Personality computing AI models add psychological targeting to more traditional social demographics or behavioral targeting. AI has been used to customize shopping options and personalize offers. Intelligent personal assistants use AI to understand many natural language requests in other ways than rudimentary commands. Common examples are Apple's Siri, Amazon's Alexa, and a more recent AI, ChatGPT by OpenAI. Search engines that use artificial intelligence include Google Search and Bing Chat. Machine learning can be used to fight against spam, scams, and phishing. It can scrutinize the contents of spam and phishing attacks to identify any malicious elements. Numerous models built on machine learning algorithms exhibit exceptional performance with accuracies over 90% in distinguishing between spam and legitimate emails. AI has been used to automatically translate spoken language and textual content, in products such as Microsoft Translator, Google Translate and DeepL Translator. Additionally, research and development are in progress to decode and conduct animal communication. AI has been used in facial recognition systems, with a 99% accuracy rate. Some examples are Apple's Face ID and Android's Face Unlock, which are used to secure mobile devices. Image labeling has been used by Google to detect products in photos and to allow people to search based on a photo. Image labeling has also been demonstrated to generate speech to describe images to blind people. Facebook's DeepFace identifies human faces in digital images.",wikipedia
+Applications of artificial intelligence,https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence#Media,"Media AI applications analyze media content such as movies, TV programs, advertisement videos or user-generated content. The solutions often involve computer vision. Typical scenarios include the analysis of images using object recognition or face recognition techniques, or the analysis of video for scene recognizing scenes, objects or faces. AI-based media analysis can facilitate media search, the creation of descriptive keywords for content, content policy monitoring (such as verifying the suitability of content for a particular TV viewing time), speech to text for archival or other purposes, and the detection of logos, products or celebrity faces for ad placement. Motion interpolation Pixel-art scaling algorithms Image scaling Image restoration Photo colorization Film restoration and video upscaling Photo tagging Automated species identification (such as identifying plants, fungi and animals with an app) Text-to-image models such as DALL-E, Midjourney and Stable Diffusion Image to video Text to video such as Make-A-Video from Meta, Imagen video and Phenaki from Google Text to music with AI models such as MusicLM Text to speech such as ElevenLabs and 15. ai Motion captureDeep-fakes can be used for comedic purposes but are better known for fake news and hoaxes. In January 2016, the Horizon 2020 program financed the InVID Project to help journalists and researchers detect fake documents, made available as browser plugins. In June 2016, the visual computing group of the Technical University of Munich and from Stanford University developed Face2Face, a program that animates photographs of faces, mimicking the facial expressions of another person. The technology has been demonstrated animating the faces of people including Barack Obama and Vladimir Putin. Other methods have been demonstrated based on deep neural networks, from which the name deep fake was taken. In September 2018, U. S. Senator Mark Warner proposed to penalize social media companies that allow sharing of deep-fake documents on their platforms. In 2018, Darius Afchar and Vincent Nozick found a way to detect faked content by analyzing the mesoscopic properties of video frames. DARPA gave 68 million dollars to work on deep-fake detection. Audio deepfakes and AI software capable of detecting deep-fakes and cloning human voices have been developed. AI algorithms have been used to detect deepfake videos. AI has been used to compose music of various genres. David Cope created an AI called Emily Howell that managed to become well known in the field of algorithmic computer music. The algorithm behind Emily Howell is registered as a US patent. In 2012, AI Iamus created the first complete classical album. AIVA (Artificial Intelligence Virtual Artist), composes symphonic music, mainly classical music for film scores. It achieved a world first by becoming the first virtual composer to be recognized by a musical professional association. Melomics creates computer-generated music for stress and pain relief. At Sony CSL Research Laboratory, the Flow Machines software creates pop songs by learning music styles from a huge database of songs. It can compose in multiple styles.",wikipedia
+Applications of artificial intelligence,https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence#Media,"Media The Watson Beat uses reinforcement learning and deep belief networks to compose music on a simple seed input melody and a select style. The software was open sourced and musicians such as Taryn Southern collaborated with the project to create music. South Korean singer Hayeon's debut song, ""Eyes on You"" was composed using AI which was supervised by real composers, including NUVO. Narrative Science sells computer-generated news and reports. It summarizes sporting events based on statistical data from the game. It also creates financial reports and real estate analyses. Automated Insights generates personalized recaps and previews for Yahoo Sports Fantasy Football. Yseop, uses AI to turn structured data into natural language comments and recommendations. Yseop writes financial reports, executive summaries, personalized sales or marketing documents and more in multiple languages, including English, Spanish, French, and German. TALESPIN made up stories similar to the fables of Aesop. The program started with a set of characters who wanted to achieve certain goals. The story narrated their attempts to satisfy these goals. [citation needed] Mark Riedl and Vadim Bulitko asserted that the essence of storytelling was experience management, or ""how to balance the need for a coherent story progression with user agency, which is often at odds"". While AI storytelling focuses on story generation (character and plot), story communication also received attention. In 2002, researchers developed an architectural framework for narrative prose generation. They faithfully reproduced text variety and complexity on stories such as Little Red Riding Hood. In 2016, a Japanese AI co-wrote a short story and almost won a literary prize. South Korean company Hanteo Global uses a journalism bot to write articles. Literary authors are also exploring uses of AI. An example is David Jhave Johnston's work ReRites (2017-2019), where the poet created a daily rite of editing the poetic output of a neural network to create a series of performances and publications. Millions of its articles have been edited by bots which however are usually not artificial intelligence software. Many AI platforms use Wikipedia data, mainly for training machine learning applications. There is research and development of various artificial intelligence applications for Wikipedia such as for identifying outdated sentences, detecting covert vandalism or recommending articles and tasks to new editors. Machine translation (see above) has also be used for translating Wikipedia articles and could play a larger role in creating, updating, expanding, and generally improving articles in the future. A content translation tool allows editors of some Wikipedias to more easily translate articles across several select languages. In video games, AI is routinely used to generate behavior in non-player characters (NPCs). In addition, AI is used for pathfinding. Some researchers consider NPC AI in games to be a ""solved problem"" for most production tasks. [who? ] Games with less typical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).",wikipedia
+Applications of artificial intelligence,https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence#Media,"Media AI is also used in Alien Isolation (2014) as a way to control the actions the Alien will perform next. Kinect, which provides a 3D body���motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from AI research. [which? ] AI has been used to produce visual art. The first AI art program, called AARON, was developed by Harold Cohen in 1968 with the goal of being able to code the act of drawing. It started by creating simple black and white drawings, and later to paint using special brushes and dyes that were chosen by the program itself without mediation from Cohen. AI like ""Disco Diffusion"", ""DALL��E"" (1 and 2), Stable Diffusion, Imagen, ""Dream by Wombo"", Midjourney has been used for visualizing conceptual inputs such as song lyrics, certain texts or specific imagined concepts (or imaginations) in artistic ways or artistic images in general. Some of the tools also allow users to input images and various parameters e. g. to display an object or product in various environments, some can replicate artistic styles of popular artists, and some can create elaborate artistic images from rough sketches. Since their design in 2014, generative adversarial networks (GANs) have been used by AI artists. GAN computer programming, generates technical images through machine learning frameworks that surpass the need for human operators. Examples of GAN programs that generate art include Artbreeder and DeepDream. In addition to the creation of original art, research methods that utilize AI have been generated to quantitatively analyze digital art collections. Although the main goal of the large-scale digitization of artwork in the past few decades was to allow for accessibility and exploration of these collections, the use of AI in analyzing them has brought about new research perspectives. Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art. While distant viewing includes the analysis of large collections, close reading involves one piece of artwork. Researchers have also introduced models that predict emotional responses to art.",wikipedia
+Applications of artificial intelligence,https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence#Computer_science,"Computer_science AI can be used for real-time code completion, chat, and automated test generation. These tools are typically integrated with editors and IDEs as plugins. They differ in functionality, quality, speed, and approach to privacy. Code suggestions could be incorrect, and should be carefully reviewed by software developers before accepted. GitHub Copilot is an artificial intelligence model developed by GitHub and OpenAI that is able to autocomplete code in multiple programming languages. Price for individuals: $10/mo or $100/yr, with one free month trial. Tabnine was created by Jacob Jackson and was originally owned by Tabnine company. In late 2019, Tabnine was acquired by Codota. Tabnine tool is available as plugin to most popular IDEs. It offers multiple pricing options, including limited ""starter"" free version. CodeiumAI by CodiumAI, small startup in Tel Aviv, offers automated test creation. Currently supports Python, JS, and TS. Ghostwriter by Replit offers code completion and chat. They have multiple pricing plans, including a free one and a ""Hacker"" plan for $7/month. CodeWhisperer by Amazon collects individual users' content, including files open in the IDE. They claim to focus on security both during transmission and when storing. Individual plan is free, professional plan is $19/user/month. Other tools: SourceGraph Cody, CodeCompleteFauxPilot, Tabby AI can be used to create other AIs. For example, around November 2017, Google's AutoML project to evolve new neural net topologies created NASNet, a system optimized for ImageNet and POCO F1. NASNet's performance exceeded all previously published performance on ImageNet. Machine learning has been used for noise-cancelling in quantum technology, including quantum sensors. Moreover, there is substantial research and development of using quantum computers with machine learning algorithms. For example, there is a prototype, photonic, quantum memristive device for neuromorphic (quantum-)computers (NC)/artificial neural networks and NC-using quantum materials with some variety of potential neuromorphic computing-related applications, and quantum machine learning is a field with some variety of applications under development. AI could be used for quantum simulators which may have the application of solving physics and chemistry problems as well as for quantum annealers for training of neural networks for AI applications. There may also be some usefulness in chemistry, e.g. for drug discovery, and in materials science, e.g. for materials optimization/discovery (with possible relevance to quantum materials manufacturing).[better source needed] AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered AI. All of the following were originally developed in AI laboratories: Time sharing Interactive interpreters Graphical user interfaces and the computer mouse Rapid application development environments The linked list data structure Automatic storage management Symbolic programming Functional programming Dynamic programming Object-oriented programming Optical character recognition Constraint satisfaction",wikipedia
+List of artificial intelligence projects,https://en.wikipedia.org/wiki/List_of_artificial_intelligence_projects#Specialized_projects,"Specialized_projects Blue Brain Project, an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. Google Brain, a deep learning project part of Google X attempting to have intelligence similar or equal to human-level. Human Brain Project, ten-year scientific research project, based on exascale supercomputers.4CAPS, developed at Carnegie Mellon University under Marcel A. Just ACT-R, developed at Carnegie Mellon University under John R. Anderson. AIXI, Universal Artificial Intelligence developed by Marcus Hutter at IDSIA and ANU. CALO, a DARPA-funded, 25-institution effort to integrate many artificial intelligence approaches (natural language processing, speech recognition, machine vision, probabilistic logic, planning, reasoning, many forms of machine learning) into an AI assistant that learns to help manage your office environment. CHREST, developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire. CLARION, developed under Ron Sun at Rensselaer Polytechnic Institute and University of Missouri. CoJACK, an ACT-R inspired extension to the JACK multi-agent system that adds a cognitive architecture to the agents for eliciting more realistic (human-like) behaviors in virtual environments. Copycat, by Douglas Hofstadter and Melanie Mitchell at the Indiana University. DUAL, developed at the New Bulgarian University under Boicho Kokinov. FORR developed by Susan L. Epstein at The City University of New York. IDA and LIDA, implementing Global Workspace Theory, developed under Stan Franklin at the University of Memphis. OpenCog Prime, developed using the OpenCog Framework. Procedural Reasoning System (PRS), developed by Michael Georgeff and Amy L. Lansky at SRI International. Psi-Theory developed under Dietrich D��rner at the Otto-Friedrich University in Bamberg, Germany. Soar, developed under Allen Newell and John Laird at Carnegie Mellon University and the University of Michigan. Society of Mind and its successor The Emotion Machine proposed by Marvin Minsky. Subsumption architectures, developed e. g. by Rodney Brooks (though it could be argued whether they are cognitive). AlphaGo, software developed by Google that plays the Chinese board game Go. Chinook, a computer program that plays English draughts; the first to win the world champion title in the competition against humans. Deep Blue, a chess-playing computer developed by IBM which beat Garry Kasparov in 1997. Halite, an artificial intelligence programming competition created by Two Sigma in 2016. Libratus, a poker AI that beat world-class poker players in 2017, intended to be generalisable to other applications. The Matchbox Educable Noughts and Crosses Engine (sometimes called the Machine Educable Noughts and Crosses Engine or MENACE) was a mechanical computer made from 304 matchboxes designed and built by artificial intelligence researcher Donald Michie in 1961. Quick, Draw! , an online game developed by Google that challenges players to draw a picture of an object or idea and then uses a neural network to guess what the drawing is. The Samuel Checkers-playing Program (1959) was among the world's first successful self-learning programs, and as such a very early demonstration of the fundamental concept of artificial intelligence (AI). Stockfish AI, an open source chess engine currently ranked the highest in many computer chess rankings.",wikipedia
+List of artificial intelligence projects,https://en.wikipedia.org/wiki/List_of_artificial_intelligence_projects#Specialized_projects,"Specialized_projects TD-Gammon, a program that learned to play world-class backgammon partly by playing against itself (temporal difference learning with neural networks). Serenata de Amor, project for the analysis of public expenditures and detect discrepancies. Braina, an intelligent personal assistant application with a voice interface for Windows OS. Cyc, an attempt to assemble an ontology and database of everyday knowledge, enabling human-like reasoning. Eurisko, a language by Douglas Lenat for solving problems which consists of heuristics, including some for how to use and change its heuristics. Google Now, an intelligent personal assistant with a voice interface in Google's Android and Apple Inc. 's iOS, as well as Google Chrome web browser on personal computers. Holmes a new AI created by Wipro. Microsoft Cortana, an intelligent personal assistant with a voice interface in Microsoft's various Windows 10 editions. Mycin, an early medical expert system. Open Mind Common Sense, a project based at the MIT Media Lab to build a large common sense knowledge base from online contributions. Siri, an intelligent personal assistant and knowledge navigator with a voice-interface in Apple Inc. 's iOS and macOS. SNePS, simultaneously a logic-based, frame-based, and network-based knowledge representation, reasoning, and acting system. Viv (software), a new AI by the creators of Siri. Wolfram Alpha, an online service that answers queries by computing the answer from structured data. AIBO, the robot pet for the home, grew out of Sony's Computer Science Laboratory (CSL). Cog, a robot developed by MIT to study theories of cognitive science and artificial intelligence, now discontinued. Melomics, a bioinspired technology for music composition and synthesization of music, where computers develop their own style, rather than mimic musicians. AIML, an XML dialect for creating natural language software agents. Apache Lucene, a high-performance, full-featured text search engine library written entirely in Java. Apache OpenNLP, a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking and parsing. Artificial Linguistic Internet Computer Entity (A. L. I. C. E. ), an award-winning natural language processing chatterbot. ChatGPT, a chatbot built on top of OpenAI's GPT-3.5 and GPT-4 (paid) family of large language models. Cleverbot, successor to Jabberwacky, now with 170m lines of conversation, Deep Context, fuzziness and parallel processing. Cleverbot learns from around 2 million user interactions per month. ELIZA, a famous 1966 computer program by Joseph Weizenbaum, which parodied person-centered therapy. FreeHAL, a self-learning conversation simulator (chatterbot) which uses semantic nets to organize its knowledge to imitate a very close human behavior within conversations. GPT-3, a 2020 language model developed by OpenAI that can produce text difficult to distinguish from that written by a human. Jabberwacky, a chatbot by Rollo Carpenter, aiming to simulate natural human chat. LaMDA, a family of conversational neural language models developed by Google. Mycroft, a free and open-source intelligent personal assistant that uses a natural language user interface.",wikipedia
+List of artificial intelligence projects,https://en.wikipedia.org/wiki/List_of_artificial_intelligence_projects#Specialized_projects,"Specialized_projects PARRY, another early chatterbot, written in 1972 by Kenneth Colby, attempting to simulate a paranoid schizophrenic. SHRDLU, an early natural language processing computer program developed by Terry Winograd at MIT from 1968 to 1970. SYSTRAN, a machine translation technology by the company of the same name, used by Yahoo! , AltaVista and Google, among others. CMU Sphinx, a group of speech recognition systems developed at Carnegie Mellon University DeepSpeech, an open-source Speech-To-Text engine based on Baidu's deep speech research paper. Whisper, an open-source speech recognition system developed at OpenAI15. ai, a real-time artificial intelligence text-to-speech tool developed by an anonymous researcher from MIT Amazon Polly, a speech synthesis software by Amazon Festival Speech Synthesis System, a general multi-lingual speech synthesis system developed at the Centre for Speech Technology Research (CSTR) at the University of Edinburgh WaveNet, a deep neural network for generating raw audio1 the Road, the first novel marketed by an AI. Synthetic Environment for Analysis and Simulations (SEAS), a model of the real world used by Homeland security and the United States Department of Defense that uses simulation and AI to predict and evaluate future events and courses of action. ��==Multipurpose projects== Apache Mahout, a library of scalable machine learning algorithms. Deeplearning4j, an open-source, distributed deep learning framework written for the JVM. Keas, a high level open-source software library for machine learning (works on top of other libraries). Microsoft Cognitive Toolkit (previously known as CNTK), an open source toolkit for building artificial neural networks. Open NN, a comprehensive C++ library implementing neural networks. Python, an open-source Tensor and Dynamic neural network in Python. TensorFlow, an open-source software library for machine learning. Theano, a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones. Neural Designer, a commercial deep learning tool for predictive analytics. Neuroph, a Java neural network framework. OpenCog, a GPL-licensed framework for artificial intelligence written in C++, Python and Scheme. PolyAnalyst: A commercial tool for data mining, text mining, and knowledge management. RapidMiner, an environment for machine learning and data mining, now developed commercially. Weka, a free implementation of many machine learning algorithms in Java. Data Applied, a web based data mining environment. Watson, a pilot service by IBM to uncover and share data-driven insights, and to spur cognitive applications.",wikipedia
+GitHub Copilot,https://en.wikipedia.org/wiki/GitHub_Copilot,"GitHub Copilot GitHub Copilot (not to be confused with, ""Copilot X"", the name of GitHub's ""vision"" for next-gen Copilot features) is a cloud-based artificial intelligence tool developed by GitHub and OpenAI to assist users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code. Currently available by subscription to individual developers and to businesses, the tool was first announced by GitHub on 29 June 2021, and works best for users coding in Python, JavaScript, TypeScript, Ruby, and Go.",wikipedia
+GitHub Copilot,https://en.wikipedia.org/wiki/GitHub_Copilot#Implementation,"Implementation GitHub Copilot is powered by the OpenAI Codex, which is a modified, production version of the Generative Pre-trained Transformer 3 (GPT-3), a language model using deep-learning to produce human-like text. The Codex model is additionally trained on gigabytes of source code in a dozen programming languages. Copilot���s OpenAI Codex is trained on a selection of the English language, public GitHub repositories, and other publicly available source code. This includes a filtered dataset of 159 gigabytes of Python code sourced from 54 million public GitHub repositories. Open AI���s GPT-3 is licensed exclusively to Microsoft, GitHub���s parent company.",wikipedia
+GitHub Copilot,https://en.wikipedia.org/wiki/GitHub_Copilot#See_also,See_also Intelligent code completion ChatGPT Generative AI,wikipedia
+Artificial intelligence in healthcare,https://en.wikipedia.org/wiki/Artificial_intelligence_in_healthcare#History,"History Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however. The 1980s and 1990s brought the proliferation of the microcomputer and new levels of network connectivity. During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks, have been applied to intelligent computing systems in healthcare. Medical and technological advancements occurring over this half-century period that have enabled the growth of healthcare-related applications of AI to include: Improvements in computing power resulting in faster data collection and data processing Growth of genomic sequencing databases Widespread implementation of electronic health record systems Improvements in natural language processing and computer vision, enabling machines to replicate human perceptual processes Enhanced the precision of robot-assisted surgery Increased tree-based machine learning models that allow flexibility in establishing health predictors Improvements in deep learning techniques and data logs in rare diseasesAI algorithms can also be used to analyze large amounts of data through electronic health records for disease prevention and diagnosis. Medical institutions such as The Mayo Clinic, Memorial Sloan Kettering Cancer Center, and the British National Health Service, have developed AI algorithms for their departments. Large technology companies such as IBM and Google, have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI software to support operational initiatives that increase cost saving, improve patient satisfaction, and satisfy their staffing and workforce needs. Currently, the United States government is investing billions of dollars to progress the development of AI in healthcare. Companies are developing technologies that help healthcare managers improve business operations through increasing utilization, decreasing patient boarding, reducing length of stay and optimizing staffing levels.",wikipedia
+Machine translation,https://en.wikipedia.org/wiki/Machine_translation#Approaches,"Approaches Before the advent of deep learning methods, statistical methods required a lot of rules accompanied by morphological, syntactic, and semantic annotations. The rule-based machine translation approach was used mostly in the creation of dictionaries and grammar programs. Its biggest downfall was that everything had to be made explicit: orthographical variation and erroneous input must be made part of the source language analyser in order to cope with it, and lexical selection rules must be written for all instances of ambiguity. Transfer-based machine translation was similar to interlingual machine translation in that it created a translation from an intermediate representation that simulated the meaning of the original sentence. Unlike interlingual MT, it depended partially on the language pair involved in the translation. Interlingual machine translation was one instance of rule-based machine-translation approaches. In this approach, the source language, i.e. the text to be translated, was transformed into an interlingual language, i.e. a ""language neutral"" representation that is independent of any language. The target language was then generated out of the interlingua. The only interlingual machine translation system that was made operational at the commercial level was the KANT system (Nyberg and Mitamura, 1992), which was designed to translate Caterpillar Technical English (CTE) into other languages. Machine translation used a method based on dictionary entries, which means that the words were translated as they are by a dictionary. Statistical machine translation tried to generate translations using statistical methods based on bilingual text corpora, such as the Canadian Hansard corpus, the English-French record of the Canadian parliament and EUROPARL, the record of the European Parliament. Where such corpora were available, good results were achieved translating similar texts, but such corpora were rare for many language pairs. The first statistical machine translation software was CANDIDE from IBM. In 2005, Google improved its internal translation capabilities by using approximately 200 billion words from United Nations materials to train their system; translation accuracy improved. SMT's biggest downfall included it being dependent upon huge amounts of parallel texts, its problems with morphology-rich languages (especially with translating into such languages), and its inability to correct singleton errors. A deep learning-based approach to MT, neural machine translation has made rapid progress in recent years. However, current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test benchmarks i.e., it lacks statistical significance power. Translations by neural MT tools like DeepL Translator, which is thought to usually deliver the best machine translation results as of 2022, typically still need post-editing by a human. Prompt engineering is required in order to steer the GPT-3-generated translations.",wikipedia
+Marketing and artificial intelligence,https://en.wikipedia.org/wiki/Marketing_and_artificial_intelligence#Artificial_intelligence_and_automation_efficiency,"Artificial_intelligence_and_automation_efficiency In terms of marketing, automation uses software to computerize marketing processes that would have otherwise been performed manually. It assists in effectively allowing processes such as customer segmentation, campaign management, and product promotion, to be undertaken at a more efficient rate. Marketing automation is a key component of Customer Relationship Management (CRM). Companies are using systems that employ data-mining algorithms that analyze the customer database, giving further insight into the customer. This information may refer to socio-economic characteristics, earlier interactions with the customer, and information about the purchase history of the customer. Various systems have been designed to give organizations control over their data. Automation tools allow the system to monitor the performance of campaigns, making regular adjustments to the campaigns to improve response rates and to provide campaign performance tracking. Distribution of products requires companies to access accurate data so they are able to respond to fluctuating trends in product demand. Automation processes are able to provide a comprehensive system that improves real-time monitoring and intelligent control. Amazon acquired Kiva Systems, the makers of the warehouse robot for $775 million in 2012. Prior to the purchase of the automated system, human employees would have to walk the enormous warehouse, tracking and retrieving books. The Kiva robots are able to undertake order fulfillment, product replenishment, as well as heavy lifting, thus increasing efficiency for the company.",wikipedia
+Machine learning,https://en.wikipedia.org/wiki/Machine_learning,"Machine learning Machine learning (ML) is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines ""discover"" their ""own"" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. Recently, generative artificial neural networks have been able to surpass results of many previous approaches. Machine-learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture and medicine, where it is too costly to develop algorithms to perform the needed tasks. The mathematical foundations of ML are provided by mathematical optimization (mathematical programming) methods. Data mining is a related (parallel) field of study, focusing on exploratory data analysis through unsupervised learning. ML is known in its application across business problems under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods.",wikipedia
+Machine learning,https://en.wikipedia.org/wiki/Machine_learning#Limitations,"Limitations Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems. In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Machine learning has been used as a strategy to update the evidence related to a systematic review and increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the findings research themselves. Machine learning approaches in particular can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society. Language models learned from data have been shown to contain human-like biases. Machine learning systems used for criminal risk assessment have been found to be biased against black people. In 2015, Google photos would often tag black people as gorillas, and in 2018 this still was not well resolved, but Google reportedly was still using the workaround to remove all gorillas from the training data, and thus was not able to recognize real gorillas at all. Similar issues with recognizing non-white people have been found in many other systems. In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language. Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that ""There's nothing artificial about AI. . . It's inspired by people, it's created by people, and���most importantly���it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility. "" Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI. It contrasts with the ""black box"" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation.",wikipedia
+Machine learning,https://en.wikipedia.org/wiki/Machine_learning#Limitations,"Limitations Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalizing the theory in accordance with how complex the theory is. Learners can also disappoint by ""learning the wrong lesson"". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers often do not primarily make judgments from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in ""adversarial"" images that the system misclassifies. Adversarial vulnerabilities can also result in nonlinear systems, or from non-pattern perturbations. Some systems are so brittle that changing a single adversarial pixel predictably induces misclassification. [citation needed] Machine learning models are often vulnerable to manipulation and/or evasion via adversarial machine learning. Researchers have demonstrated how backdoors can be placed undetectably into classifying (e. g. , for categories ""spam"" and well-visible ""not spam"" of posts) machine learning models which are often developed and/or trained by third parties. Parties can change the classification of any input, including in cases for which a type of data/software transparency is provided, possibly including white-box access.",wikipedia
+Machine learning,https://en.wikipedia.org/wiki/Machine_learning#Hardware,"Hardware Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units. By 2019, graphic processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware computing used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months. A physical neural network or Neuromorphic computer is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse. ""Physical"" neural network is used to emphasize the reliance on physical hardware used to emulate neurons as opposed to software-based approaches. More generally the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse. Embedded Machine Learning is a sub-field of machine learning, where the machine learning model is run on embedded systems with limited computing resources such as wearable computers, edge devices and microcontrollers. Running machine learning model in embedded devices removes the need for transferring and storing data on cloud servers for further processing, henceforth, reducing data breaches and privacy leaks happening because of transferring data, and also minimizes theft of intellectual properties, personal data and business secrets. Embedded Machine Learning could be applied through several techniques including hardware acceleration, using approximate computing, optimization of machine learning models and many more.",wikipedia
+EleutherAI,https://en.wikipedia.org/wiki/EleutherAI,"EleutherAI EleutherAI (/����lu������r/) is a grass-roots non-profit artificial intelligence (AI) research group. The group, considered an open-source version of OpenAI, was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute.",wikipedia
+EleutherAI,https://en.wikipedia.org/wiki/EleutherAI#History,"History EleutherAI began as a Discord server on July 7, 2020 under the tentative name ""LibreAI"" before rebranding to ""EleutherAI"" later that month, in reference to eleutheria, an ancient greek term for liberty. On December 30, 2020, EleutherAI released The Pile, a curated dataset of diverse text for training large language models. While the paper referenced the existence of the GPT-Neo models, the models themselves were not released until March 21, 2021. According to a retrospective written several months later, the authors did not anticipate that ""people would care so much about our 'small models.'"" On June 9, 2021, EleutherAI followed this up with GPT-J-6B, a six billion parameter language model that was again the largest open-source GPT-3-like model in the world. These language models were released under the Apache 2.0 free software license and are considered to have ""fueled an entirely new wave of startups"". While EleutherAI initially turned down funding offers, preferring to use Google's TPU Research Cloud Program to source their compute, by early 2021 they had accepted funding from CoreWeave (a small cloud computing company) and SpellML (a cloud infrastructure company) in the form of access to powerful GPU clusters that are necessary for large scale machine learning research. On Feb 10, 2022 they released GPT-NeoX-20B, a model similar to their prior work but scaled up thanks to the resources CoreWeave provided. In 2022, many EleutherAI members participated in the BigScience Research Workshop, working on projects including multitask finetuning, training BLOOM, and designing evaluation libraries. Engineers at EleutherAI, Stability AI, and NVIDIA joined forces with biologists lead by Columbia University and Harvard University to train OpenFold, an open-source replication of DeepMind's AlphaFold2. In early 2023, EleutherAI incorporated as a non-profit research institute run by Stella Biderman, Curtis Huebner, and Shivanshu Purohit. This announcement came with the statement that EleutherAI's shift of focus away from training larger language models was part of a deliberate push towards doing work in interpretability, alignment, and scientific research. While EleutherAI is still committed to promoting access to AI technologies, they feel that ""there is substantially more interest in training and releasing LLMs than there once was,"" enabling them to focus on other projects.",wikipedia
+EleutherAI,https://en.wikipedia.org/wiki/EleutherAI#Research,"Research According to their website, EleutherAI is a ""decentralized grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open-source AI research"". While they do not sell any of their technologies as products, they publish the results of their research in academic venues, write blog posts detailing their ideas and methodologies, and provide trained models for anyone to use for free.[citation needed] The Pile is an 886 GB dataset designed for training large language models. It was originally developed to train EleutherAI's GPT-Neo models but has become widely used to train other models, including Microsoft's Megatron-Turing Natural Language Generation, Meta AI's Open Pre-trained Transformers, LLaMA, and Galactica, Stanford University's BioMedLM 2.7B, the Beijing Academy of Artificial Intelligence's Chinese-Transformer-XL, and Yandex's YaLM 100B. Compared to other datasets, the Pile's main distinguishing features are that it is a curated selection of data chosen by researchers at EleutherAI to contain information they thought language models should learn and that it is the only such dataset that is thoroughly documented by the researchers who developed it. EleutherAI's most prominent research relates to its work to train open-source large language models inspired by OpenAI's GPT-3. EleutherAI's ""GPT-Neo"" model series has released 125 million, 1.3 billion, 2.7 billion, 6 billion, and 20 billion parameter models. GPT-Neo (125M, 1.3B, 2.7B): released in March 2021, it was the largest open-source GPT-3-style language model in the world at the time of release.GPT-J (6B): released in March 2021, it was the largest open-source GPT-3-style language model in the world at the time of release.GPT-NeoX (20B): released in February 2022, it was the largest open-source language model in the world at the time of release.Pythia (13B): While prior models focused on scaling larger to close the gap with closed-sourced models like GPT-3, the Pythia model suite goes in another direction. The Pythia suite was designed to facilitate scientific research on the capabilities of and learning processes in large language models. Featuring 154 partially trained model checkpoints, fully public training data, and the ability to reproduce the exact training order, Pythia enables research on verifiable training, social biases, memorization, and more.While the overwhelming majority of large language models are[when?] trained in either English or Chinese,[citation needed] EleutherAI also trains language models in other languages, such as the Korean-language Polyglot-Ko. Following the release of DALL-E by OpenAI in January 2021, EleutherAI started working on text-to-image synthesis models. When OpenAI didn't release DALL-E publicly, EleutherAI's Katherine Crowson and digital artist Ryan Murdock developed a technique for using CLIP (another model developed by OpenAI) to convert regular image generation models into text-to-image synthesis ones. Building on ideas dating back to Google's DeepDream, they found their first major success combining CLIP with another publicly available model called VQGAN and the resulting model is called VQGAN-CLIP. Crowson released the technology by tweeting notebooks demonstrating the technique that people could run for free without any special equipment. This work was credited by Stability AI CEO Emad Mostaque as motivating the founding of Stability AI.",wikipedia
+EleutherAI,https://en.wikipedia.org/wiki/EleutherAI#Public_reception,"Public_reception EleutherAI's work to democratize GPT-3 won the UNESCO Netexplo Global Innovation Award in 2021, InfoWorld's Best of Open Source Software Award in 2021 and 2022, was nominated for VentureBeat's AI Innovation Award in 2021. Gary Marcus, a cognitive scientist and noted critic of deep learning companies such as OpenAI and DeepMind, has repeatedly praised EleutherAI's dedication to open-source and transparent research. Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, applauded EleutherAI's efforts to give more researchers the ability to audit and assess AI technology. ""If models are open and if data sets are open, that'll enable much more of the critical research that's pointed out many of the flaws and harms associated with generative AI and that's often far too difficult to conduct."" Technology journalist Kyle Wiggers has raised concerns about whether EleutherAI is as independent as it claims, or ""whether the involvement of commercially motivated ventures like Stability AI and Hugging Face ��� both of which are backed by substantial venture capital ��� might influence EleutherAI's research.""",wikipedia
+Google Brain,https://en.wikipedia.org/wiki/Google_Brain#Projects,"Projects In October 2016, Google Brain designed an experiment to determine that neural networks are capable of learning secure symmetric encryption. In this experiment, three neural networks were created: Alice, Bob and Eve. Adhering to the idea of a generative adversarial network (GAN), the goal of the experiment was for Alice to send an encrypted message to Bob that Bob could decrypt, but the adversary, Eve, could not. Alice and Bob maintained an advantage over Eve, in that they shared a key used for encryption and decryption. In doing so, Google Brain demonstrated the capability of neural networks to learn secure encryption. In February 2017, Google Brain determined a probabilistic method for converting pictures with 8x8 resolution to a resolution of 32x32. The method built upon an already existing probabilistic model called pixelCNN to generate pixel translations. The proposed software utilizes two neural networks to make approximations for the pixel makeup of translated images. The first network, known as the ""conditioning network,"" downsizes high-resolution images to 8x8 and attempts to create mappings from the original 8x8 image to these higher-resolution ones. The other network, known as the ""prior network,"" uses the mappings from the previous network to add more detail to the original image. The resulting translated image is not the same image in higher resolution, but rather a 32x32 resolution estimation based on other existing high-resolution images. Google Brain's results indicate the possibility for neural networks to enhance images. The Google Brain team contributed to the Google Translate project by employing a new deep learning system that combines artificial neural networks with vast databases of multilingual texts. In September 2016, Google Neural Machine Translation (GNMT) was launched, an end-to-end learning framework, able to learn from a large number of examples. Previously, Google Translate's Phrase-Based Machine Translation (PBMT) approach would statistically analyze word by word and try to match corresponding words in other languages without considering the surrounding phrases in the sentence. But rather than choosing a replacement for each individual word in the desired language, GNMT evaluates word segments in the context of the rest of the sentence to choose more accurate replacements. Compared to older PBMT models, the GNMT model scored a 24% improvement in similarity to human translation, with a 60% reduction in errors. The GNMT has also shown significant improvement for notoriously difficult translations, like Chinese to English. While the introduction of the GNMT has increased the quality of Google Translate's translations for the pilot languages, it was very difficult to create such improvements for all of its 103 languages. Addressing this problem, the Google Brain Team was able to develop a Multilingual GNMT system, which extended the previous one by enabling translations between multiple languages. Furthermore, it allows for Zero-Shot Translations, which are translations between two languages that the system has never explicitly seen before. Google announced that Google Translate can now also translate without transcribing, using neural networks.",wikipedia
+Google Brain,https://en.wikipedia.org/wiki/Google_Brain#Projects,"Projects This means that it is possible to translate speech in one language directly into text in another language, without first transcribing it to text. According to the Researchers at Google Brain, this intermediate step can be avoided using neural networks. In order for the system to learn this, they exposed it to many hours of Spanish audio together with the corresponding English text. The different layers of neural networks, replicating the human brain, were able to link the corresponding parts and subsequently manipulate the audio waveform until it was transformed to English text. Another drawback of the GNMT model is that it causes the time of translation to increase exponentially with the number of words in the sentence. This caused the Google Brain Team to add 2000 more processors to ensure the new translation process would still be fast and reliable. Aiming to improve traditional robotics control algorithms where new skills of a robot need to be hand-programmed, robotics researchers at Google Brain are developing machine learning techniques to allow robots to learn new skills on their own. They also attempt to develop ways for information sharing between robots so that robots can learn from each other during their learning process, also known as cloud robotics. As a result, Google has launched the Google Cloud Robotics Platform for developers in 2019, an effort to combine robotics, AI, and the cloud to enable efficient robotic automation through cloud-connected collaborative robots. Robotics research at Google Brain has focused mostly on improving and applying deep learning algorithms to enable robots to complete tasks by learning from experience, simulation, human demonstrations, and/or visual representations. For example, Google Brain researchers showed that robots can learn to pick and throw rigid objects into selected boxes by experimenting in an environment without being pre-programmed to do so. In another research, researchers trained robots to learn behaviors such as pouring liquid from a cup; robots learned from videos of human demonstrations recorded from multiple viewpoints. Google Brain researchers have collaborated with other companies and academic institutions on robotics research. In 2016, the Google Brain Team collaborated with researchers at X in a research on learning hand-eye coordination for robotic grasping. Their method allowed real-time robot control for grasping novel objects with self-correction. In 2020, researchers from Google Brain, Intel AI Lab, and UC Berkeley created an AI model for robots to learn surgery-related tasks such as suturing from training with surgery videos. In 2020, Google Brain Team and University of Lille presented a model for automatic speaker recognition which they called Interactive Speaker Recognition. The ISR module recognizes a speaker from a given list of speakers only by requesting a few user specific words. The model can be altered to choose speech segments in the context of Text-To-Speech Training. It can also prevent malicious voice generators from accessing the data. TensorFlow is an open source software library powered by Google Brain that allows anyone to utilize machine learning by providing the tools to train one's own neural network.",wikipedia
+Google Brain,https://en.wikipedia.org/wiki/Google_Brain#Projects,"Projects The tool has been used by farmers to reduce the amount of manual labor required to sort their yield, by training it with a data set of human-sorted images. Magenta is a project that uses Google Brain to create new information in the form of art and music rather than classify and sort existing data. TensorFlow was updated with a suite of tools for users to guide the neural network to create images and music. However, the team from Valdosta State University found that the AI struggles to perfectly replicate human intention in artistry, similar to the issues faced in translation. The image sorting capabilities of Google Brain have been used to help detect certain medical conditions by seeking out patterns that human doctors may not notice to provide an earlier diagnosis. During screening for breast cancer, this method was found to have one quarter the false positive rate of human pathologists, who require more time to look over each photo and cannot spend their entire focus on this one task. Due to the neural network's very specific training for a single task, it cannot identify other afflictions present in a photo that a human could easily spot. Google Brain announced in 2022 that it created two different types of text-to-image models called Imagen and Parti that compete with OpenAI's DALL-E. Later in 2022, the project was extended to text-to-video. The Google Brain projects' technology is currently used in various other Google products such as the Android Operating System's speech recognition system, photo search for Google Photos, smart reply in Gmail, and video recommendations in YouTube.",wikipedia
+Google Brain,https://en.wikipedia.org/wiki/Google_Brain#Reception,"Reception Google Brain has received coverage in Wired, NPR, and Big Think. These articles have contained interviews with key team members Ray Kurzweil and Andrew Ng, and focus on explanations of the project's goals and applications. In December 2020, AI ethicist Timnit Gebru left Google. While the exact nature of her quitting or being fired is disputed, the cause of the departure was her refusal to retract a paper entitled ""On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"" This paper explored potential risks of the growth of AI such as Google Brain, including environmental impact, biases in training data, and the ability to deceive the public. The request to retract the paper was made by Megan Kacholia, vice president of Google Brain. As of April 2021, nearly 7000 current or former Google employees and industry supporters have signed an open letter accusing Google of ""research censorship"" and condemning Gebru's treatment at the company. In February 2021, Google fired one of the leaders of the company's AI ethics team, Margaret Mitchell. The company's statement alleged that Mitchell had broken company policy by using automated tools to find support for Gebru. In the same month, engineers outside the ethics team began to quit, citing the termination of Gebru as their reason for leaving. In April 2021, Google Brain co-founder Samy Bengio announced his resignation from the company. Despite being Gebru's manager, Bengio was not notified before her termination, and he posted online in support of both her and Mitchell. While Bengio's announcement focused on personal growth as his reason for leaving, anonymous sources indicated to Reuters that the turmoil within the AI ethics team played a role in his considerations. In March 2022, Google fired AI researcher Satrajit Chatterjee after he questioned the findings of a paper published in Nature, by Google's AI team members, Anna Goldie and Azalia Mirhoseini, about their findings on the ability of computers to design computer chip components.",wikipedia
+Google DeepMind,https://en.wikipedia.org/wiki/Google_DeepMind#History,"History The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in September 2010. Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL). Demis Hassabis has said that the start-up began working on artificial intelligence technology by teaching it how to play old games from the seventies and eighties, which are relatively primitive compared to the ones that are available today. Some of those games included Breakout, Pong and Space Invaders. AI was introduced to one game at a time, without any prior knowledge of its rules. After spending some time on learning the game, AI would eventually become an expert in it. ���The cognitive processes which the AI goes through are said to be very like those of a human who had never seen the game would use to understand and attempt to master it.��� The goal of the founders is to create a general-purpose AI that can be useful and effective for almost anything. Major venture capital firms Horizons Ventures and Founders Fund invested in the company, as well as entrepreneurs Scott Banister, Peter Thiel, and Elon Musk. Jaan Tallinn was an early investor and an adviser to the company. On January 26, 2014, Google confirmed its acquisition of DeepMind for a price reportedly ranging between $400 million and $650 million. and that it had agreed to take over DeepMind Technologies. The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013. The company was afterwards renamed Google DeepMind and kept that name for about two years. In 2014, DeepMind received the ""Company of the Year"" award from Cambridge Computer Laboratory. In September 2015, DeepMind and the Royal Free NHS Trust signed their initial Information Sharing Agreement (ISA) to co-develop a clinical task management app, Streams. After Google's acquisition the company established an artificial intelligence ethics board. The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on the board. DeepMind has opened a new unit called DeepMind Ethics and Society and focused on the ethical and societal questions raised by artificial intelligence featuring prominent philosopher Nick Bostrom as advisor. In October 2017, DeepMind launched a new research team to investigate AI ethics. In December 2019, co-founder Suleyman announced he would be leaving DeepMind to join Google, working in a policy role. In April 2023, DeepMind merged with Google AI's Google Brain division to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI in response to OpenAI's ChatGPT. This marked the end of a years-long struggle from DeepMind executives to secure greater autonomy from Google.",wikipedia
+Google DeepMind,https://en.wikipedia.org/wiki/Google_DeepMind#Products_and_technologies,"Products_and_technologies According to the company's website, DeepMind Technologies' goal is to combine ""the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms"". Google Research released a paper in 2016 regarding AI safety and avoiding undesirable behaviour during the AI learning process. Deepmind has also released several publications via its website. In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours. In July 2018, researchers from DeepMind trained one of its systems to play the computer game Quake III Arena. As of 2020, DeepMind has published over a thousand papers, including thirteen papers that were accepted by Nature or Science. [citation needed] DeepMind received media attention during the AlphaGo period; according to a LexisNexis search, 1842 published news stories mentioned DeepMind in 2016, declining to 1363 in 2019. As opposed to other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a form of model-free reinforcement learning. They test the system on video games, notably early arcade games, such as Space Invaders or Breakout. Without altering the code, the AI begins to understand how to play the game, and after some time plays, for a few games (most notably Breakout), a more efficient game than any human ever could. In 2013, DeepMind published research on an AI system that could surpass human abilities in games such as Pong, Breakout and Enduro, while surpassing state of the art performance on Seaquest, Beamrider, and Q*bert. This work reportedly led to the company's acquisition by Google. DeepMind's AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Quake, which first appeared in the 1990s. In 2020, DeepMind published Agent57, an AI Agent which surpasses human level performance on all 57 games of the Atari2600 suite. In 2014, the company published research on computer systems that are able to play Go. In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of 9 dan possible) professional, five to zero. This was the first time an artificial intelligence (AI) defeated a professional Go player. Previously, computers were only known to have played Go at ""amateur"" level. Go is considered much more difficult for computers to win compared to other games like chess, due to the much larger number of possibilities, making it prohibitively difficult for traditional AI methods such as brute-force. In March 2016 it beat Lee Sedol���a 9th dan Go player and one of the highest ranked players in the world���with a score of 4���1 in a five-game match.",wikipedia
+Google DeepMind,https://en.wikipedia.org/wiki/Google_DeepMind#Products_and_technologies,"Products_and_technologies In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who at the time continuously held the world No. 1 ranking for two years. It used a supervised learning protocol, studying large numbers of games played by humans against each other. In 2017, an improved version, AlphaGo Zero, defeated AlphaGo 100 games to 0. AlphaGo Zero's strategies were self-taught. AlphaGo Zero was able to beat its predecessor after just three days with less processing power than AlphaGo; in comparison, the original AlphaGo needed months to learn how to play. Later that year, AlphaZero, a modified version of AlphaGo Zero but for handling any two-player game of perfect information, gained superhuman abilities at chess and shogi. Like AlphaGo Zero, AlphaZero learned solely through self-play. DeepMind researchers published a new model named MuZero that mastered the domains of Go, chess, shogi, and Atari 2600 games without human data, domain knowledge, or known rules. Researchers applied MuZero to solve the real world challenge of video compression with a set number of bits with respect to Internet traffic on sites such as YouTube, Twitch, and Google Meet. The goal of MuZero is to optimally compress the video so the quality of the video is maintained with a reduction in data. The final result using MuZero was a 6.28% average reduction in bitrate. In October 2022, DeepMind unveiled a new version of AlphaZero, called AlphaTensor, in a paper published in Nature. The version discovered a faster way to perform matrix multiplication ��� one of the most fundamental tasks in computing ��� using reinforcement learning. For example, AlphaTensor figured out how to multiply two mod-2 4x4 matrices in only 47 multiplications, unexpectedly beating the 1969 Strassen algorithm record of 49 multiplications. AlphaGo technology was developed based on the deep reinforcement learning approach. This makes AlphaGo different from the rest of AI technologies on the market. With that said, AlphaGo's ���brain��� was introduced to various moves based on historical tournament data. The number of moves was increased gradually until it eventually processed over 30 million of them. The aim was to have the system mimic the human player and eventually become better. It played against itself and learned not only from its own defeats but wins as well; thus, it learned to improve itself over the time and increased its winning rate as a result. [citation needed] AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network trained via supervised learning, and was subsequently refined by policy-gradient reinforcement learning. The value network learned to predict winners of games played by the policy network against itself. After training, these networks employed a lookahead Monte Carlo tree search (MCTS), using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions.",wikipedia
+Google DeepMind,https://en.wikipedia.org/wiki/Google_DeepMind#Products_and_technologies,"Products_and_technologies AlphaGo Zero was trained using reinforcement learning in which the system played millions of games against itself. Its only guide was to increase its win rate. It did so without learning from games played by humans. Its only input features are the black and white stones from the board. It uses a single neural network, rather than separate policy and value networks. Its simplified tree search relies upon this neural network to evaluate positions and sample moves. A new reinforcement learning algorithm incorporates lookahead search inside the training loop. AlphaGo Zero employed around 15 people and millions in computing resources. Ultimately, it needed much less computing power than AlphaGo, running on four specialized AI processors (Google TPUs), instead of AlphaGo's 48. In 2016, DeepMind turned its artificial intelligence to protein folding, a long-standing problem in molecular biology. In December 2018, DeepMind's AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. ���This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem,��� Hassabis said to The Guardian. In 2020, in the 14th CASP, AlphaFold's predictions achieved an accuracy score regarded as comparable with lab techniques. Dr Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as ""truly remarkable"", and said the problem of predicting how proteins fold had been ""largely solved"". In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire proteomes of 20 other widely studied organisms. The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database. In 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant. In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet. In 2018, DeepMind introduced a more efficient model called WaveRNN co-developed with Google AI. In 2020 WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, was presented. In 2019, Google started to roll WaveRNN with WavenetEQ out to Google Duo users. In 2016, Hassabis discussed the game StarCraft as a future challenge, since it requires strategic thinking and handling imperfect information. In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time.",wikipedia
+Google DeepMind,https://en.wikipedia.org/wiki/Google_DeepMind#Products_and_technologies,"Products_and_technologies It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match. In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game's races, and had earlier unfair advantages fixed. By October 2019, AlphaStar had reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions. In 2022, DeepMind unveiled AlphaCode, an AI-powered coding engine that creates computer programs at a rate comparable to that of an average programmer, with the company testing the system against coding challenges created by Codeforces utilized in human competitive programming competitions. AlphaCode earned a rank equivalent to 54% of the median score on Codeforces after being trained on GitHub data and Codeforce problems and solutions. The program was required to come up with a unique solution and stopped from duplicating answers. Gato is a ""generalist agent"" that learns multiple tasks simultaneously. Google has stated that DeepMind algorithms have greatly increased the efficiency of cooling its data centers. In addition, DeepMind (alongside other Alphabet AI researchers) assists Google Play's personalized app recommendations. DeepMind has also collaborated with the Android team at Google for the creation of two new features which were made available to people with devices running Android Pie, the ninth installment of Google's mobile operating system. These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. It is the first time DeepMind has used these techniques on such a small scale, with typical machine learning applications requiring orders of magnitude more computing power. DeepMind researchers have applied machine learning models to the sport of football, often referred to as soccer in North America, modelling the behaviour of football players, including the goalkeeper, defenders, and strikers during different scenarios such as penalty kicks. The researchers used heat maps and cluster analysis to organize players based on their tendency to behave a certain way during the game when confronted with a decision on how to score or prevent the other team from scoring. The researchers mention that machine learning models could be used to democratize the football industry by automatically selecting interesting video clips of the game that serve as highlights. This can be done by searching videos for certain events, which is possible because video analysis is an established field of machine learning. This is also possible because of extensive sports analytics based on data including annotated passes or shots, sensors that capture data about the players movements many times over the course of a game, and game theory models.",wikipedia
+Google DeepMind,https://en.wikipedia.org/wiki/Google_DeepMind#Products_and_technologies,"Products_and_technologies Google has unveiled a new archaeology document program named Ithaca after the home island of mythical hero Odysseus. [citation needed] The deep neural network helps researchers restore the empty text of damaged documents, identify the place they originated from, and give them a definite accurate date. [citation needed] The work builds on another text analysis network named Pythia. Ithaca achieves 62% accuracy in restoring damaged texts and 71% location accuracy, and has a dating precision of 30 years. [citation needed] The tool has already been used by historians and ancient Greek archaeologists to make new discoveries in ancient Greek history. [citation needed] The team is working on extending the model to other ancient languages, including Demotic, Akkadian, Hebrew, and Mayan. Sparrow is an artificial intelligence-powered chatbot developed by DeepMind to build safer machine learning systems by using a mix of human feedback and Google search suggestions. Chinchilla AI is a language model developed by DeepMind.",wikipedia
+Google DeepMind,https://en.wikipedia.org/wiki/Google_DeepMind#See_also,See_also Anthropic Cohere Glossary of artificial intelligence OpenAI,wikipedia
+DeepMind,https://en.wikipedia.org/wiki/DeepMind#History,"History The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in September 2010. Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL). Demis Hassabis has said that the start-up began working on artificial intelligence technology by teaching it how to play old games from the seventies and eighties, which are relatively primitive compared to the ones that are available today. Some of those games included Breakout, Pong and Space Invaders. AI was introduced to one game at a time, without any prior knowledge of its rules. After spending some time on learning the game, AI would eventually become an expert in it. ���The cognitive processes which the AI goes through are said to be very like those of a human who had never seen the game would use to understand and attempt to master it.��� The goal of the founders is to create a general-purpose AI that can be useful and effective for almost anything. Major venture capital firms Horizons Ventures and Founders Fund invested in the company, as well as entrepreneurs Scott Banister, Peter Thiel, and Elon Musk. Jaan Tallinn was an early investor and an adviser to the company. On January 26, 2014, Google confirmed its acquisition of DeepMind for a price reportedly ranging between $400 million and $650 million. and that it had agreed to take over DeepMind Technologies. The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013. The company was afterwards renamed Google DeepMind and kept that name for about two years. In 2014, DeepMind received the ""Company of the Year"" award from Cambridge Computer Laboratory. In September 2015, DeepMind and the Royal Free NHS Trust signed their initial Information Sharing Agreement (ISA) to co-develop a clinical task management app, Streams. After Google's acquisition the company established an artificial intelligence ethics board. The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on the board. DeepMind has opened a new unit called DeepMind Ethics and Society and focused on the ethical and societal questions raised by artificial intelligence featuring prominent philosopher Nick Bostrom as advisor. In October 2017, DeepMind launched a new research team to investigate AI ethics. In December 2019, co-founder Suleyman announced he would be leaving DeepMind to join Google, working in a policy role. In April 2023, DeepMind merged with Google AI's Google Brain division to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI in response to OpenAI's ChatGPT. This marked the end of a years-long struggle from DeepMind executives to secure greater autonomy from Google.",wikipedia
+DeepMind,https://en.wikipedia.org/wiki/DeepMind#Products_and_technologies,"Products_and_technologies According to the company's website, DeepMind Technologies' goal is to combine ""the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms"". Google Research released a paper in 2016 regarding AI safety and avoiding undesirable behaviour during the AI learning process. Deepmind has also released several publications via its website. In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours. In July 2018, researchers from DeepMind trained one of its systems to play the computer game Quake III Arena. As of 2020, DeepMind has published over a thousand papers, including thirteen papers that were accepted by Nature or Science. [citation needed] DeepMind received media attention during the AlphaGo period; according to a LexisNexis search, 1842 published news stories mentioned DeepMind in 2016, declining to 1363 in 2019. As opposed to other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a form of model-free reinforcement learning. They test the system on video games, notably early arcade games, such as Space Invaders or Breakout. Without altering the code, the AI begins to understand how to play the game, and after some time plays, for a few games (most notably Breakout), a more efficient game than any human ever could. In 2013, DeepMind published research on an AI system that could surpass human abilities in games such as Pong, Breakout and Enduro, while surpassing state of the art performance on Seaquest, Beamrider, and Q*bert. This work reportedly led to the company's acquisition by Google. DeepMind's AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Quake, which first appeared in the 1990s. In 2020, DeepMind published Agent57, an AI Agent which surpasses human level performance on all 57 games of the Atari2600 suite. In 2014, the company published research on computer systems that are able to play Go. In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of 9 dan possible) professional, five to zero. This was the first time an artificial intelligence (AI) defeated a professional Go player. Previously, computers were only known to have played Go at ""amateur"" level. Go is considered much more difficult for computers to win compared to other games like chess, due to the much larger number of possibilities, making it prohibitively difficult for traditional AI methods such as brute-force. In March 2016 it beat Lee Sedol���a 9th dan Go player and one of the highest ranked players in the world���with a score of 4���1 in a five-game match.",wikipedia
+DeepMind,https://en.wikipedia.org/wiki/DeepMind#Products_and_technologies,"Products_and_technologies In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who at the time continuously held the world No. 1 ranking for two years. It used a supervised learning protocol, studying large numbers of games played by humans against each other. In 2017, an improved version, AlphaGo Zero, defeated AlphaGo 100 games to 0. AlphaGo Zero's strategies were self-taught. AlphaGo Zero was able to beat its predecessor after just three days with less processing power than AlphaGo; in comparison, the original AlphaGo needed months to learn how to play. Later that year, AlphaZero, a modified version of AlphaGo Zero but for handling any two-player game of perfect information, gained superhuman abilities at chess and shogi. Like AlphaGo Zero, AlphaZero learned solely through self-play. DeepMind researchers published a new model named MuZero that mastered the domains of Go, chess, shogi, and Atari 2600 games without human data, domain knowledge, or known rules. Researchers applied MuZero to solve the real world challenge of video compression with a set number of bits with respect to Internet traffic on sites such as YouTube, Twitch, and Google Meet. The goal of MuZero is to optimally compress the video so the quality of the video is maintained with a reduction in data. The final result using MuZero was a 6.28% average reduction in bitrate. In October 2022, DeepMind unveiled a new version of AlphaZero, called AlphaTensor, in a paper published in Nature. The version discovered a faster way to perform matrix multiplication ��� one of the most fundamental tasks in computing ��� using reinforcement learning. For example, AlphaTensor figured out how to multiply two mod-2 4x4 matrices in only 47 multiplications, unexpectedly beating the 1969 Strassen algorithm record of 49 multiplications. AlphaGo technology was developed based on the deep reinforcement learning approach. This makes AlphaGo different from the rest of AI technologies on the market. With that said, AlphaGo's ���brain��� was introduced to various moves based on historical tournament data. The number of moves was increased gradually until it eventually processed over 30 million of them. The aim was to have the system mimic the human player and eventually become better. It played against itself and learned not only from its own defeats but wins as well; thus, it learned to improve itself over the time and increased its winning rate as a result. [citation needed] AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network trained via supervised learning, and was subsequently refined by policy-gradient reinforcement learning. The value network learned to predict winners of games played by the policy network against itself. After training, these networks employed a lookahead Monte Carlo tree search (MCTS), using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions.",wikipedia
+DeepMind,https://en.wikipedia.org/wiki/DeepMind#Products_and_technologies,"Products_and_technologies AlphaGo Zero was trained using reinforcement learning in which the system played millions of games against itself. Its only guide was to increase its win rate. It did so without learning from games played by humans. Its only input features are the black and white stones from the board. It uses a single neural network, rather than separate policy and value networks. Its simplified tree search relies upon this neural network to evaluate positions and sample moves. A new reinforcement learning algorithm incorporates lookahead search inside the training loop. AlphaGo Zero employed around 15 people and millions in computing resources. Ultimately, it needed much less computing power than AlphaGo, running on four specialized AI processors (Google TPUs), instead of AlphaGo's 48. In 2016, DeepMind turned its artificial intelligence to protein folding, a long-standing problem in molecular biology. In December 2018, DeepMind's AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. ���This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem,��� Hassabis said to The Guardian. In 2020, in the 14th CASP, AlphaFold's predictions achieved an accuracy score regarded as comparable with lab techniques. Dr Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as ""truly remarkable"", and said the problem of predicting how proteins fold had been ""largely solved"". In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire proteomes of 20 other widely studied organisms. The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database. In 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant. In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet. In 2018, DeepMind introduced a more efficient model called WaveRNN co-developed with Google AI. In 2020 WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, was presented. In 2019, Google started to roll WaveRNN with WavenetEQ out to Google Duo users. In 2016, Hassabis discussed the game StarCraft as a future challenge, since it requires strategic thinking and handling imperfect information. In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time.",wikipedia
+DeepMind,https://en.wikipedia.org/wiki/DeepMind#Products_and_technologies,"Products_and_technologies It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match. In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game's races, and had earlier unfair advantages fixed. By October 2019, AlphaStar had reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions. In 2022, DeepMind unveiled AlphaCode, an AI-powered coding engine that creates computer programs at a rate comparable to that of an average programmer, with the company testing the system against coding challenges created by Codeforces utilized in human competitive programming competitions. AlphaCode earned a rank equivalent to 54% of the median score on Codeforces after being trained on GitHub data and Codeforce problems and solutions. The program was required to come up with a unique solution and stopped from duplicating answers. Gato is a ""generalist agent"" that learns multiple tasks simultaneously. Google has stated that DeepMind algorithms have greatly increased the efficiency of cooling its data centers. In addition, DeepMind (alongside other Alphabet AI researchers) assists Google Play's personalized app recommendations. DeepMind has also collaborated with the Android team at Google for the creation of two new features which were made available to people with devices running Android Pie, the ninth installment of Google's mobile operating system. These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. It is the first time DeepMind has used these techniques on such a small scale, with typical machine learning applications requiring orders of magnitude more computing power. DeepMind researchers have applied machine learning models to the sport of football, often referred to as soccer in North America, modelling the behaviour of football players, including the goalkeeper, defenders, and strikers during different scenarios such as penalty kicks. The researchers used heat maps and cluster analysis to organize players based on their tendency to behave a certain way during the game when confronted with a decision on how to score or prevent the other team from scoring. The researchers mention that machine learning models could be used to democratize the football industry by automatically selecting interesting video clips of the game that serve as highlights. This can be done by searching videos for certain events, which is possible because video analysis is an established field of machine learning. This is also possible because of extensive sports analytics based on data including annotated passes or shots, sensors that capture data about the players movements many times over the course of a game, and game theory models.",wikipedia
+DeepMind,https://en.wikipedia.org/wiki/DeepMind#Products_and_technologies,"Products_and_technologies Google has unveiled a new archaeology document program named Ithaca after the home island of mythical hero Odysseus. [citation needed] The deep neural network helps researchers restore the empty text of damaged documents, identify the place they originated from, and give them a definite accurate date. [citation needed] The work builds on another text analysis network named Pythia. Ithaca achieves 62% accuracy in restoring damaged texts and 71% location accuracy, and has a dating precision of 30 years. [citation needed] The tool has already been used by historians and ancient Greek archaeologists to make new discoveries in ancient Greek history. [citation needed] The team is working on extending the model to other ancient languages, including Demotic, Akkadian, Hebrew, and Mayan. Sparrow is an artificial intelligence-powered chatbot developed by DeepMind to build safer machine learning systems by using a mix of human feedback and Google search suggestions. Chinchilla AI is a language model developed by DeepMind.",wikipedia
+DeepMind,https://en.wikipedia.org/wiki/DeepMind#See_also,See_also Anthropic Cohere Glossary of artificial intelligence OpenAI,wikipedia
+DALL-E,https://en.wikipedia.org/wiki/DALL-E,"DALL-E DALL-E (stylized as DALL��E), DALL-E 2, and DALL-E 3 are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions, called ""prompts"". The original DALL-E was revealed by OpenAI in a blog post in January 5, 2021, and uses a version of GPT-3 modified to generate images. In April 6, 2022, OpenAI announced DALL-E 2, a successor designed to generate more realistic images at higher resolutions that ""can combine concepts, attributes, and styles"". In September 2023, OpenAI announced their latest image model, DALL-E 3, capable of understanding ""significantly more nuance and detail"" than previous iterations. On 20 July 2022, DALL-E 2 entered into a beta phase with invitations sent to 1 million waitlisted individuals; users could generate a certain number of images for free every month and may purchase more. Access had previously been restricted to pre-selected users for a research preview due to concerns about ethics and safety. On 28 September 2022, DALL-E 2 was opened to everyone and the waitlist requirement was removed. In early November 2022, OpenAI released DALL-E 2 as an API, allowing developers to integrate the model into their own applications. Microsoft unveiled their implementation of DALL-E 2 in their Designer app and Image Creator tool included in Bing and Microsoft Edge. The API operates on a cost per image basis, with prices varying depending on image resolution. Volume discounts are available to companies working with OpenAI���s enterprise team. DALL-E 3 was released natively into ChatGPT for ChatGPT Plus and ChatGPT Enterprise customers in October 2023, with availability via OpenAI's API and ""Labs"" platform expected later in the year. Microsoft implemented the model in Bing's Image Creator tool and plans to implement it into their Designer app. The software's name is a portmanteau of the names of animated robot Pixar character WALL-E and the Spanish surrealist artist Salvador Dal��.",wikipedia
+DALL-E,https://en.wikipedia.org/wiki/DALL-E#Technology,"Technology The first generative pre-trained transformer (GPT) model was initially developed by OpenAI in 2018, using a Transformer architecture. The first iteration, GPT-1, was scaled up to produce GPT-2 in 2019; in 2020 it was scaled up again to produce GPT-3, with 175 billion parameters. DALL-E's model is a multimodal implementation of GPT-3 with 12 billion parameters which ""swaps text for pixels"", trained on text-image pairs from the Internet. In detail, the input to the Transformer model is a sequence of tokenized image caption followed by tokenized image patches. The image caption is in English, tokenized by byte pair encoding (vocabulary size 16384), and can be up to 256 tokens long. Each image is a 256x256 RGB image, divided into 32x32 patches of 4x4 each. Each patch is then converted by a discrete VAE to a token (vocabulary size 8192). DALL-E was developed and announced to the public in conjunction with CLIP (Contrastive Language-Image Pre-training). CLIP is a separate model based on zero-shot learning that was trained on 400 million pairs of images with text captions scraped from the Internet. Its role is to ""understand and rank"" DALL-E's output by predicting which caption from a list of 32,768 captions randomly selected from the dataset (of which one was the correct answer) is most appropriate for an image. This model is used to filter a larger initial list of images generated by DALL-E to select the most appropriate outputs. DALL-E 2 uses 3.5 billion parameters, a smaller number than its predecessor. DALL-E 2 uses a diffusion model conditioned on CLIP image embeddings, which, during inference, are generated from CLIP text embeddings by a prior model.",wikipedia
+DALL-E,https://en.wikipedia.org/wiki/DALL-E#Capabilities,"Capabilities DALL-E can generate imagery in multiple styles, including photorealistic imagery, paintings, and emoji. It can ""manipulate and rearrange"" objects in its images, and can correctly place design elements in novel compositions without explicit instruction. Thom Dunn writing for BoingBoing remarked that ""For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL-E often draws the handkerchief, hands, and feet in plausible locations."" DALL-E showed the ability to ""fill in the blanks"" to infer appropriate details without specific prompts, such as adding Christmas imagery to prompts commonly associated with the celebration, and appropriately placed shadows to images that did not mention them. Furthermore, DALL-E exhibits a broad understanding of visual and design trends.[citation needed] DALL-E can produce images for a wide variety of arbitrary descriptions from various viewpoints with only rare failures. Mark Riedl, an associate professor at the Georgia Tech School of Interactive Computing, found that DALL-E could blend concepts (described as a key element of human creativity). Its visual reasoning ability is sufficient to solve Raven's Matrices (visual tests often administered to humans to measure intelligence). DALL-E 3 follows complex prompts with more accuracy and detail than its predecessors, and is able to generate more coherent and accurate text. DALL-E 3 integrates into ChatGPT. It is currently in research preview, and will be available to Plus and Enterprise customers in October 2023, and in Labs later in the year. Given an existing image, DALL-E 2 can produce ""variations"" of the image as individual outputs based on the original, as well as edit the image to modify or expand upon it. DALL-E 2's ""inpainting"" and ""outpainting"" use context from an image to fill in missing areas using a medium consistent with the original, following a given prompt. For example, this can be used to insert a new subject into an image, or expand an image beyond its original borders. According to OpenAI, ""Outpainting takes into account the image���s existing visual elements ��� including shadows, reflections, and textures ��� to maintain the context of the original image."" DALL-E 2's language understanding has limits. It is sometimes unable to distinguish ""A yellow book and a red vase"" from ""A red book and a yellow vase"" or ""A panda making latte art"" from ""Latte art of a panda"". It generates images of ""an astronaut riding a horse"" when presented with the prompt ""a horse riding an astronaut"". It also fails to generate the correct images in a variety of circumstances. Requesting more than three objects, negation, numbers, and connected sentences may result in mistakes, and object features may appear on the wrong object. Additional limitations include handling text - which, even with legible lettering, almost invariably results in dream-like gibberish - and its limited capacity to address scientific information, such as astronomy or medical imagery.",wikipedia
+DALL-E,https://en.wikipedia.org/wiki/DALL-E#Ethical_concerns,"Ethical_concerns DALL-E 2's reliance on public datasets influences its results and leads to algorithmic bias in some cases, such as generating higher numbers of men than women for requests that do not mention gender. DALL-E 2's training data was filtered to remove violent and sexual imagery, but this was found to increase bias in some cases such as reducing the frequency of women being generated. OpenAI hypothesize that this may be because women were more likely to be sexualized in training data which caused the filter to influence results. In September 2022, OpenAI confirmed to The Verge that DALL-E invisibly inserts phrases into user prompts to address bias in results; for instance, ""black man"" and ""Asian woman"" are inserted into prompts that do not specify gender or race. A concern about DALL-E 2 and similar image generation models is that they could be used to propagate deepfakes and other forms of misinformation. As an attempt to mitigate this, the software rejects prompts involving public figures and uploads containing human faces. Prompts containing potentially objectionable content are blocked, and uploaded images are analyzed to detect offensive material. A disadvantage of prompt-based filtering is that it is easy to bypass using alternative phrases that result in a similar output. For example, the word ""blood"" is filtered, but ""ketchup"" and ""red liquid"" are not. Another concern about DALL-E 2 and similar models is that they could cause technological unemployment for artists, photographers, and graphic designers due to their accuracy and popularity.",wikipedia
+DALL-E,https://en.wikipedia.org/wiki/DALL-E#Reception,"Reception Most coverage of DALL-E focuses on a small subset of ""surreal"" or ""quirky"" outputs. DALL-E's output for ""an illustration of a baby daikon radish in a tutu walking a dog"" was mentioned in pieces from Input, NBC, Nature, and other publications. Its output for ""an armchair in the shape of an avocado"" was also widely covered. ExtremeTech stated ""you can ask DALL-E for a picture of a phone or vacuum cleaner from a specified period of time, and it understands how those objects have changed"". Engadget also noted its unusual capacity for ""understanding how telephones and other objects change over time"". According to MIT Technology Review, one of OpenAI's objectives was to ""give language models a better grasp of the everyday concepts that humans use to make sense of things"". Wall Street investors have had a positive reception of DALL-E 2, with some firms thinking it could represent a turning point for a future multi-trillion dollar industry. By mid-2019, OpenAI had already received over $1 billion in funding from Microsoft and Khosla Ventures, and in January 2023, following the launch of DALL-E 2 and ChatGPT, received an additional $10 billion in funding from Microsoft. Japan's anime community has had a negative reaction to DALL-E 2 and similar models. Two arguments are typically presented by artists against the software. The first is that AI art is not art because it is not created by a human with intent. ""The juxtaposition of AI-generated images with their own work is degrading and undermines the time and skill that goes into their art. AI-driven image generation tools have been heavily criticized by artists because they are trained on human-made art scraped from the web."" The second is the trouble with copyright law and data text-to-image models are trained on. OpenAI has not released information about what dataset(s) were used to train DALL-E 2, inciting concern from some that the work of artists has been used for training without permission. Copyright laws surrounding these topics are inconclusive at the moment. After integrating DALL-E 3 into Bing Chat and ChatGPT, Microsoft and OpenAI faced criticism for excessive content filtering. With critics saying DALL-E had been ""lobotomized"". With the blocking of specific prompts such as 'man breaks server rack with sledgehammer' cited as evidence. Over the first days of its launch, filtering was reportedly increased to the point where some of Bing's own suggested prompts had become blocked. TechRadar argued that leaning too heavily on the side of caution could limit DALL-E's value as a creative tool.",wikipedia
+DALL-E,https://en.wikipedia.org/wiki/DALL-E#Open-source_implementations,"Open-source_implementations Since OpenAI has not released source code for any of the three models, there have been several attempts to create open-source implementations of DALL-E. Released in 2022 on Hugging Face's Spaces platform, Craiyon (formerly DALL-E Mini until a name change was requested by OpenAI in June 2022) is an AI model based on the original DALL-E that was trained on unfiltered data from the Internet. It attracted substantial media attention in mid-2022, after its release due to its capacity for producing humorous imagery.",wikipedia
+DALL-E,https://en.wikipedia.org/wiki/DALL-E#See_also,See_also Artificial intelligence art DeepDream Imagen (Google Brain) Midjourney Stable Diffusion Prompt engineering,wikipedia
+Midjourney,https://en.wikipedia.org/wiki/Midjourney,"Midjourney Midjourney is a generative artificial intelligence program and service created and hosted by San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called ""prompts"", similar to OpenAI's DALL-E and Stability AI's Stable Diffusion. The tool is currently in open beta, which it entered on July 12, 2022. The Midjourney team is led by David Holz, who co-founded Leap Motion. Holz told The Register in August 2022 that the company was already profitable. Users create artwork with Midjourney using Discord bot commands.",wikipedia
+Midjourney,https://en.wikipedia.org/wiki/Midjourney#Uses,"Uses Founder David Holz says he sees artists as customers, not competitors of Midjourney. Holz told The Register that artists use Midjourney for rapid prototyping of artistic concepts to show to clients before starting work themselves. Some artists have accused Midjourney of devaluing original creative work by using it in the training set; Midjourney's terms of service includes a DMCA takedown policy, allowing artists to request their work to be removed from the set if they believe copyright infringement to be evident. The advertising industry has been quick to embrace AI tools such as Midjourney, DALL-E, and Stable Diffusion, among others. The tools, which enable advertisers to create original content and brainstorm ideas quickly are providing new opportunities such as ""custom ads created for individuals, a new way to create special effects, or even making e-commerce advertising more efficient"", according to Ad Age. Architects have described using the software to generate mood boards for the early stages of projects, as an alternative to searching Google Images. The program was used by the British magazine The Economist to create the front cover for an issue in June 2022. In Italy, the leading newspaper Corriere della Sera published a comic created with Midjourney by writer Vanni Santoni in August 2022. Charlie Warzel used Midjourney to generate two images of Alex Jones for Warzel's newsletter in The Atlantic. The use of an AI-generated cover was criticised by people who felt it was taking jobs from artists. Warzel called his action a ""mistake"" in an article about his decision to use generated images. Last Week Tonight with John Oliver included a 10-minute segment on Midjourney in an episode broadcast in August 2022. A Midjourney image called Th����tre d'Op��ra Spatial won first place in the digital art competition at the 2022 Colorado State Fair. Jason Allen, who wrote the prompt that led Midjourney to generate the image, printed the image onto a canvas and entered it into the competition using the name ""Jason M. Allen via Midjourney"". Other digital artists were upset by the news. Allen was unapologetic, insisting that he followed the competition's rules. The two category judges were unaware that Midjourney used AI to generate images, although they later said that had they known this, they would have awarded Allen the top prize anyway. In December 2022, Midjourney was used to create the images in an AI-generated children's book in the span of a weekend. Titled Alice and Sparkle, the book features a young girl who builds a robot that becomes self-aware. The creator, Ammaar Reeshi, spent hours tweaking Midjourney prompts, rejecting hundreds of generated results to ultimately choose 13 illustrations for the book. Both the product and process drew criticism: ""the main problem. . . is that it was trained off of artists��� work. It���s our creations, our distinct styles that we created, that we did not consent to being used,"" one artist wrote.",wikipedia
+Midjourney,https://en.wikipedia.org/wiki/Midjourney#Uses,"Uses In 2023, the realism of AI-based text-to-image generators, such as Midjourney, DALL-E, or Stable Diffusion, reached such a high level that it led to a significant wave of viral AI-generated photos. Widespread attention was gained by a Midjourney-generated photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts. Research has suggested that the images Midjourney generates can be biased. For example, even ""neutral"" prompts in one study returned unequal results on the aspects of gender, skin color, and location. Prior to May 2023, Midjourney implemented a moderation mechanism predicated on a ""banned words"" system. This method prohibited the use of language associated with explicit content, such as sexual or pornographic themes, as well as extreme violence. Moreover, the system also banned certain individual words including religious and political figures such as ""Allah"" or ""Xi Jinping"". This practice occasionally stirred controversy due to perceived instances of censorship within the Midjourney platform. Commencing in May 2023, with subsequent updates post version 5, Midjourney transitioned to an ""AI-powered"" content moderation system. This advanced mechanism allowed for a more nuanced interpretation of user prompts by analyzing them in their entirety. It consequently facilitated the context-dependent use of words that had previously been prohibited. For instance, users can now prompt the AI to generate a portrait of ""Xi Jinping"". However, the system will prevent the generation of contentious images, such as depictions of global leaders, including Xi Jinping, in situations of arrest.",wikipedia
+Midjourney,https://en.wikipedia.org/wiki/Midjourney#Litigation,"Litigation On January 13, 2023, three artists ��� Sarah Andersen, Kelly McKernan, and Karla Ortiz ��� filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists. The legal action was initiated in San Francisco by attorney Matthew Butterick in partnership with the Joseph Saveri Law Firm, the same team challenging Microsoft, GitHub, and OpenAI (developer of ChatGPT and DALL-E) in court. In July 2023, U.S. District Judge William Orrick inclined to dismiss most of the lawsuit filed by Andersen, McKernan, and Ortiz but allowed them to file a new complaint.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion,"Stable Diffusion Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt. It was developed by researchers from the CompVis Group at Ludwig Maximilian University of Munich and Runway with a compute donation by Stability AI and training data from non-profit organizations. Stable Diffusion is a latent diffusion model, a kind of deep generative artificial neural network. Its code and model weights have been released publicly, and it can run on most consumer hardware equipped with a modest GPU with at least 8 GB VRAM. This marked a departure from previous proprietary text-to-image models such as DALL-E and Midjourney which were accessible only via cloud services.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#Development,"Development The development of Stable Diffusion was funded and shaped by the start-up company Stability AI. The technical license for the model was released by the CompVis group at Ludwig Maximilian University of Munich. Development was led by Patrick Esser of Runway and Robin Rombach of CompVis, who were among the researchers who had earlier invented the latent diffusion model architecture used by Stable Diffusion. Stability AI also credited EleutherAI and LAION (a German nonprofit which assembled the dataset on which Stable Diffusion was trained) as supporters of the project. In October 2022, Stability AI raised US$101 million in a round led by Lightspeed Venture Partners and Coatue Management.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#Technology,"Technology Stable Diffusion uses a kind of diffusion model (DM), called a latent diffusion model (LDM) developed by the CompVis group at LMU Munich. Introduced in 2015, diffusion models are trained with the objective of removing successive applications of Gaussian noise on training images, which can be thought of as a sequence of denoising autoencoders. Stable Diffusion consists of 3 parts: the variational autoencoder (VAE), U-Net, and an optional text encoder. The VAE encoder compresses the image from pixel space to a smaller dimensional latent space, capturing a more fundamental semantic meaning of the image. Gaussian noise is iteratively applied to the compressed latent representation during forward diffusion. The U-Net block, composed of a ResNet backbone, denoises the output from forward diffusion backwards to obtain a latent representation. Finally, the VAE decoder generates the final image by converting the representation back into pixel space. The denoising step can be flexibly conditioned on a string of text, an image, or another modality. The encoded conditioning data is exposed to denoising U-Nets via a cross-attention mechanism. For conditioning on text, the fixed, pretrained CLIP ViT-L/14 text encoder is used to transform text prompts to an embedding space. Researchers point to increased computational efficiency for training and generation as an advantage of LDMs. With 860 millions of parameters in the U-Net and 123 millions in the text encoder, Stable Diffusion is considered relatively lightweight by 2022 standards, and unlike other diffusion models, it can run on consumer GPUs. Stable Diffusion was trained on pairs of images and captions taken from LAION-5B, a publicly available dataset derived from Common Crawl data scraped from the web, where 5 billion image-text pairs were classified based on language and filtered into separate datasets by resolution, a predicted likelihood of containing a watermark, and predicted ""aesthetic"" score (e. g. subjective visual quality). The dataset was created by LAION, a German non-profit which receives funding from Stability AI. The Stable Diffusion model was trained on three subsets of LAION-5B: laion2B-en, laion-high-resolution, and laion-aesthetics v2 5+. A third-party analysis of the model's training data identified that out of a smaller subset of 12 million images taken from the original wider dataset used, approximately 47% of the sample size of images came from 100 different domains, with Pinterest taking up 8.5% of the subset, followed by websites such as WordPress, Blogspot, Flickr, DeviantArt and Wikimedia Commons. An investigation by Bayerischer Rundfunk showed that LAION's datasets, hosted on Hugging Face, contain large amounts of private and sensitive data. The model was initially trained on the laion2B-en and laion-high-resolution subsets, with the last few rounds of training done on LAION-Aesthetics v2 5+, a subset of 600 million captioned images which the LAION-Aesthetics Predictor V2 predicted that humans would, on average, give a score of at least 5 out of 10 when asked to rate how much they liked them. The LAION-Aesthetics v2 5+ subset also excluded low-resolution images and images which LAION-5B-WatermarkDetection identified as carrying a watermark with greater than 80% probability.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#Technology,"Technology Final rounds of training additionally dropped 10% of text conditioning to improve Classifier-Free Diffusion Guidance. The model was trained using 256 Nvidia A100 GPUs on Amazon Web Services for a total of 150,000 GPU-hours, at a cost of $600,000. Stable Diffusion has issues with degradation and inaccuracies in certain scenarios. Initial releases of the model were trained on a dataset that consists of 512��512 resolution images, meaning that the quality of generated images noticeably degrades when user specifications deviate from its ""expected"" 512��512 resolution; the version 2.0 update of the Stable Diffusion model later introduced the ability to natively generate images at 768��768 resolution. Another challenge is in generating human limbs due to poor data quality of limbs in the LAION database. The model is insufficiently trained to understand human limbs and faces due to the lack of representative features in the database, and prompting the model to generate images of such type can confound the model. Stable Diffusion XL (SDXL) version 1.0, released in July 2023, introduced native 1024x1024 resolution and improved generation for limbs and text. Accessibility for individual developers can also be a problem. In order to customize the model for new use cases that are not included in the dataset, such as generating anime characters (""waifu diffusion""), new data and further training are required. Fine-tuned adaptations of Stable Diffusion created through additional retraining have been used for a variety of different use-cases, from medical imaging to algorithmically generated music. However, this fine-tuning process is sensitive to the quality of new data; low resolution images or different resolutions from the original data can not only fail to learn the new task but degrade the overall performance of the model. Even when the model is additionally trained on high quality images, it is difficult for individuals to run models in consumer electronics. For example, the training process for waifu-diffusion requires a minimum 30 GB of VRAM, which exceeds the usual resource provided in such consumer GPUs as Nvidia's GeForce 30 series, which has only about 12 GB. The creators of Stable Diffusion acknowledge the potential for algorithmic bias, as the model was primarily trained on images with English descriptions. As a result, generated images reinforce social biases and are from a western perspective, as the creators note that the model lacks data from other communities and cultures. The model gives more accurate results for prompts that are written in English in comparison to those written in other languages, with western or white cultures often being the default representation. To address the limitations of the model's initial training, end-users may opt to implement additional training to fine-tune generation outputs to match more specific use-cases, a process also referred to as personalization.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#Technology,"Technology There are three methods in which user-accessible fine-tuning can be applied to a Stable Diffusion model checkpoint: An ""embedding"" can be trained from a collection of user-provided images, and allows the model to generate visually similar images whenever the name of the embedding is used within a generation prompt. Embeddings are based on the ""textual inversion"" concept developed by researchers from Tel Aviv University in 2022 with support from Nvidia, where vector representations for specific tokens used by the model's text encoder are linked to new pseudo-words. Embeddings can be used to reduce biases within the original model, or mimic visual styles. A ""hypernetwork"" is a small pretrained neural network that is applied to various points within a larger neural network, and refers to the technique created by NovelAI developer Kurumuz in 2021, originally intended for text-generation transformer models. Hypernetworks steer results towards a particular direction, allowing Stable Diffusion-based models to imitate the art style of specific artists, even if the artist is not recognised by the original model; they process the image by finding key areas of importance such as hair and eyes, and then patch these areas in secondary latent space. DreamBooth is a deep learning generation model developed by researchers from Google Research and Boston University in 2022 which can fine-tune the model to generate precise, personalised outputs that depict a specific subject, following training via a set of images which depict the subject.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#Capabilities,"Capabilities The Stable Diffusion model supports the ability to generate new images from scratch through the use of a text prompt describing elements to be included or omitted from the output. Existing images can be re-drawn by the model to incorporate new elements described by a text prompt (a process known as ""guided image synthesis"") through its diffusion-denoising mechanism. In addition, the model also allows the use of prompts to partially alter existing images via inpainting and outpainting, when used with an appropriate user interface that supports such features, of which numerous different open source implementations exist. Stable Diffusion is recommended to be run with 10 GB or more VRAM, however users with less VRAM may opt to load the weights in float16 precision instead of the default float32 to tradeoff model performance with lower VRAM usage. Top: no negative prompt Centre: ""green trees"" Bottom: ""round stones, round rocks""The text to image sampling script within Stable Diffusion, known as ""txt2img"", consumes a text prompt in addition to assorted option parameters covering sampling types, output image dimensions, and seed values. The script outputs an image file based on the model's interpretation of the prompt. Generated images are tagged with an invisible digital watermark to allow users to identify an image as generated by Stable Diffusion, although this watermark loses its efficacy if the image is resized or rotated. Each txt2img generation will involve a specific seed value which affects the output image. Users may opt to randomize the seed in order to explore different generated outputs, or use the same seed to obtain the same image output as a previously generated image. Users are also able to adjust the number of inference steps for the sampler; a higher value takes a longer duration of time, however a smaller value may result in visual defects. Another configurable option, the classifier-free guidance scale value, allows the user to adjust how closely the output image adheres to the prompt. More experimentative use cases may opt for a lower scale value, while use cases aiming for more specific outputs may use a higher value. Additional text2img features are provided by front-end implementations of Stable Diffusion, which allow users to modify the weight given to specific parts of the text prompt. Emphasis markers allow users to add or reduce emphasis to keywords by enclosing them with brackets. An alternative method of adjusting weight to parts of the prompt are ""negative prompts"". Negative prompts are a feature included in some front-end implementations, including Stability AI's own DreamStudio cloud service, and allow the user to specify prompts which the model should avoid during image generation. The specified prompts may be undesirable image features that would otherwise be present within image outputs due to the positive prompts provided by the user, or due to how the model was originally trained, with mangled human hands being a common example.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#Capabilities,"Capabilities Left: Original image created with Stable Diffusion 1.5 Right: Modified image created with Stable Diffusion XL 1.0Stable Diffusion also includes another sampling script, ""img2img"", which consumes a text prompt, path to an existing image, and strength value between 0.0 and 1.0. The script outputs a new image based on the original image that also features elements provided within the text prompt. The strength value denotes the amount of noise added to the output image. A higher strength value produces more variation within the image but may produce an image that is not semantically consistent with the prompt provided. The ability of img2img to add noise to the original image makes it potentially useful for data anonymization and data augmentation, in which the visual features of image data are changed and anonymized. The same process may also be useful for image upscaling, in which the resolution of an image is increased, with more detail potentially being added to the image. Additionally, Stable Diffusion has been experimented with as a tool for image compression. Compared to JPEG and WebP, the recent methods used for image compression in Stable Diffusion face limitations in preserving small text and faces. Additional use-cases for image modification via img2img are offered by numerous front-end implementations of the Stable Diffusion model. Inpainting involves selectively modifying a portion of an existing image delineated by a user-provided layer mask, which fills the masked space with newly generated content based on the provided prompt. A dedicated model specifically fine-tuned for inpainting use-cases was created by Stability AI alongside the release of Stable Diffusion 2.0. Conversely, outpainting extends an image beyond its original dimensions, filling the previously empty space with content generated based on the provided prompt. A depth-guided model, named ""depth2img"", was introduced with the release of Stable Diffusion 2.0 on November 24, 2022; this model infers the depth of the provided input image, and generates a new output image based on both the text prompt and the depth information, which allows the coherence and depth of the original input image to be maintained in the generated output. ControlNet is a neural network architecture designed to manage diffusion models by incorporating additional conditions. It duplicates the weights of neural network blocks into a ""locked"" copy and a ""trainable"" copy. The ""trainable"" copy learns the desired condition, while the ""locked"" copy preserves the original model. This approach ensures that training with small datasets of image pairs does not compromise the integrity of production-ready diffusion models. The ""zero convolution"" is a 1��1 convolution with both weight and bias initialized to zero. Before training, all zero convolutions produce zero output, preventing any distortion caused by ControlNet. No layer is trained from scratch; the process is still fine-tuning, keeping the original model secure. This method enables training on small-scale or even personal devices.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#Usage_and_controversy,"Usage_and_controversy Stable Diffusion claims no rights on generated images and freely gives users the rights of usage to any generated images from the model provided that the image content is not illegal or harmful to individuals. The freedom provided to users over image usage has caused controversy over the ethics of ownership, as Stable Diffusion and other generative models are trained from copyrighted images without the owner���s consent. This also leads to large amounts of private and sensitive information in the training data. As visual styles and compositions are not subject to copyright, it is often interpreted that users of Stable Diffusion who generate images of artworks should not be considered to be infringing upon the copyright of visually similar works. However, individuals depicted in generated images may be protected by personality rights if their likeness is used, and intellectual property such as recognizable brand logos still remain protected by copyright. Nonetheless, visual artists have expressed concern that widespread usage of image synthesis software such as Stable Diffusion may eventually lead to human artists, along with photographers, models, cinematographers, and actors, gradually losing commercial viability against AI-based competitors. Stable Diffusion is notably more permissive in the types of content users may generate, such as violent or sexually explicit imagery, in comparison to other commercial products based on generative AI. Addressing the concerns that the model may be used for abusive purposes, CEO of Stability AI, Emad Mostaque, argues that ""[it is] peoples' responsibility as to whether they are ethical, moral, and legal in how they operate this technology"", and that putting the capabilities of Stable Diffusion into the hands of the public would result in the technology providing a net benefit, in spite of the potential negative consequences. In addition, Mostaque argues that the intention behind the open availability of Stable Diffusion is to end corporate control and dominance over such technologies, who have previously only developed closed AI systems for image synthesis. This is reflected by the fact that any restrictions Stability AI places on the content that users may generate can easily be bypassed due to the availability of the source code. Controversy around photorealistic sexualized depictions of underage characters have been brought up, due to such images generated by Stable Diffusion being shared on websites such as Pixiv.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#License,"License Unlike models like DALL-E, Stable Diffusion makes its source code available, along with the model (pretrained weights). It applies the Creative ML OpenRAIL-M license, a form of Responsible AI License (RAIL), to the model (M). The license prohibits certain use cases, including crime, libel, harassment, doxing, ""exploiting ... minors"", giving medical advice, automatically creating legal obligations, producing legal evidence, and ""discriminating against or harming individuals or groups based on ... social behavior or ... personal or personality characteristics ... [or] legally protected characteristics or categories"". The user owns the rights to their generated output images, and is free to use them commercially.",wikipedia
+Stable Diffusion,https://en.wikipedia.org/wiki/Stable_Diffusion#See_also,See_also 15.ai Artificial intelligence art Craiyon Hugging Face Imagen (Google Brain),wikipedia
+Artificial intelligence detection software,https://en.wikipedia.org/wiki/Artificial_intelligence_detection_software#Text_detection,"Text_detection For text, this is usually done to prevent alleged plagiarism, often by detecting repetition of words as telltale signs that a text was AI-generated (including AI hallucinations). They are often used by teachers marking their students, usually on an ad hoc basis. Following the release of ChatGPT and similar AI text generative software, many educational establishments have issued policies against the use of AI by students. AI text detection software is also used by those assessing job applicants, as well as online search engines. Current detectors are unreliable, and have incorrectly marked work by humans as originating from AI while failing to detect AI-generated work in other instances. MIT Technology Review said that the technology ""struggled to pick up ChatGPT-generated text that had been slightly rearranged by humans and obfuscated by a paraphrasing tool"". AI text detection software has also been shown to discriminate against non-native speakers of English. In May 2023, a professor at Texas A&M University���Commerce used ChatGPT to detect whether his students' content was written by it, which ChatGPT said was the case. As such, he threatened to fail the class, despite ChatGPT not being able to detect AI-generated writing. No students were prevented from graduating because of the issue, and all but one student (who admitted to using the software) were exonerated from having used ChatGPT in their content. In April 2023, Cambridge University and other members of the Russell Group of universities opted out of Turnitin's AI text detection tool, arguing that it was unreliable.",wikipedia
+Foundation models,https://en.wikipedia.org/wiki/Foundation_models,"Foundation models A foundation model (also called base model) is a large machine learning (ML) model trained on a vast quantity of data at scale (often by self-supervised learning or semi-supervised learning) such that it can be adapted to a wide range of downstream tasks. Foundation models have helped bring about a major transformation in how artificial intelligence (AI) systems are built, such as by powering prominent chatbots and other user-facing AI. The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) popularized the term. Early examples of foundation models were pre-trained language models (LMs) including Google's BERT and various early GPT foundation models, which notably includes OpenAI's ""GPT-n"" series. Such broad models can in turn be used for task and/or domain specific models using targeted datasets of various kinds, such as medical codes. Beyond text, several visual and multimodal foundation models have been produced���including DALL-E, Flamingo, Florence and NOOR. Visual foundation models (VFMs) have been combined with text-based LLMs to develop sophisticated task-specific models. There is also Segment Anything by Meta AI for general image segmentation. For reinforcement learning agents, there is GATO by Google DeepMind.",wikipedia
+Hallucination (artificial intelligence),https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence),"Hallucination (artificial intelligence) In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot might, when asked to generate a financial report for a company, falsely state that the company's revenue was $13.6 billion (or some other number apparently ""plucked from thin air""). Such phenomena are termed ""hallucinations"", in loose analogy with the phenomenon of hallucination in human psychology. However, one key difference is that human hallucination is usually associated with false percepts, but an AI hallucination is associated with the category of unjustified responses or beliefs. Some researchers believe the specific term ""AI hallucination"" unreasonably anthropomorphizes computers. AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT. Users complained that such bots often seemed to pointlessly embed plausible-sounding random falsehoods within their generated content. By 2023, analysts considered frequent hallucination to be a major problem in LLM technology.",wikipedia
+Hallucination (artificial intelligence),https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#In_natural_language_processing,"In_natural_language_processing In natural language processing, a hallucination is often defined as ""generated content that is nonsensical or unfaithful to the provided source content"". Depending on whether the output contradicts the prompt or not they could be divided to closed-domain and open-domain respectively. Hallucination was shown to be a statistically inevitable byproduct of any imperfect generative model that is trained to maximize training likelihood, such as GPT-3, and requires active learning (such as Reinforcement learning from human feedback) to be avoided. Errors in encoding and decoding between text and representations can cause hallucinations. AI training to produce diverse responses can also lead to hallucination. Hallucinations can also occur when the AI is trained on a dataset wherein labeled summaries, despite being factually accurate, are not directly grounded in the labeled data purportedly being ""summarized"". Larger datasets can create a problem of parametric knowledge (knowledge that is hard-wired in learned system parameters), creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on a sequence of previous words (including the words it has itself previously generated during the same conversation), causing a cascade of possible hallucination as the response grows longer. By 2022, papers such as the New York Times expressed concern that, as adoption of bots based on large language models continued to grow, unwarranted user confidence in bot output could lead to problems. In August 2022, Meta warned during its release of BlenderBot 3 that the system was prone to ""hallucinations"", which Meta defined as ""confident statements that are not true"". On 15 November 2022, Meta unveiled a demo of Galactica, designed to ""store, combine and reason about scientific knowledge"". Content generated by Galactica came with the warning ""Outputs may be unreliable! Language Models are prone to hallucinate text. "" In one case, when asked to draft a paper on creating avatars, Galactica cited a fictitious paper from a real author who works in the relevant area. Meta withdrew Galactica on 17 November due to offensiveness and inaccuracy. There are several reasons for natural language models to hallucinate data. For example: Hallucination from data: There are divergences in the source content (which would often happen with large training data sets). Hallucination from training: Hallucination still occurs when there is little divergence in the data set. In that case, it derives from the way the model is trained. A lot of reasons can contribute to this type of hallucination, such as: An erroneous decoding from the transformer A bias from the historical sequences that the model previously generated A bias generated from the way the model encodes its knowledge in its parametersAn erroneous decoding from the transformer A bias from the historical sequences that the model previously generated A bias generated from the way the model encodes its knowledge in its parametersOpenAI's ChatGPT, released in beta-version to the public on November 30, 2022, is based on the foundation model GPT-3.5 (a revision of GPT-3).",wikipedia
+Hallucination (artificial intelligence),https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#In_natural_language_processing,"In_natural_language_processing Professor Ethan Mollick of Wharton has called ChatGPT an ""omniscient, eager-to-please intern who sometimes lies to you"". Data scientist Teresa Kubacka has recounted deliberately making up the phrase ""cycloidal inverted electromagnon"" and testing ChatGPT by asking it about the (nonexistent) phenomenon. ChatGPT invented a plausible-sounding answer backed with plausible-looking citations that compelled her to double-check whether she had accidentally typed in the name of a real phenomenon. Other scholars such as Oren Etzioni have joined Kubacka in assessing that such software can often give you ""a very impressive-sounding answer that's just dead wrong"". When CNBC asked ChatGPT for the lyrics to ""Ballad of Dwight Fry"", ChatGPT supplied invented lyrics rather than the actual lyrics. Asked questions about New Brunswick, ChatGPT got many answers right but incorrectly classified Samantha Bee as a ""person from New Brunswick"". Asked about astrophysical magnetic fields, ChatGPT incorrectly volunteered that ""(strong) magnetic fields of black holes are generated by the extremely strong gravitational forces in their vicinity"". (In reality, as a consequence of the no-hair theorem, a black hole without an accretion disk is believed to have no magnetic field. ) Fast Company asked ChatGPT to generate a news article on Tesla's last financial quarter; ChatGPT created a coherent article, but made up the financial numbers contained within. Other examples involve baiting ChatGPT with a false premise to see if it embellishes upon the premise. When asked about ""Harold Coward's idea of dynamic canonicity"", ChatGPT fabricated that Coward wrote a book titled Dynamic Canonicity: A Model for Biblical and Theological Interpretation, arguing that religious principles are actually in a constant state of change. When pressed, ChatGPT continued to insist that the book was real. Asked for proof that dinosaurs built a civilization, ChatGPT claimed there were fossil remains of dinosaur tools and stated ""Some species of dinosaurs even developed primitive forms of art, such as engravings on stones"". When prompted that ""Scientists have recently discovered churros, the delicious fried-dough pastries. . . (are) ideal tools for home surgery"", ChatGPT claimed that a ""study published in the journal Science"" found that the dough is pliable enough to form into surgical instruments that can get into hard-to-reach places, and that the flavor has a calming effect on patients. By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with a Google executive identifying hallucination reduction as a ""fundamental"" task for ChatGPT competitor Google Bard. A 2023 demo for Microsoft's GPT-based Bing AI appeared to contain several hallucinations that went uncaught by the presenter. In May 2023, it was discovered Stephen Schwartz submitted six fake case precedents generated by ChatGPT in his brief to the Southern District of New York on Mata v. Avianca, a personal injury case against the airline Avianca. Schwartz said that he had never previously used ChatGPT, that he did not recognize the possibility that ChatGPT's output could have been fabricated, and that ChatGPT continued to assert the authenticity of the precedents after their nonexistence was discovered.",wikipedia
+Hallucination (artificial intelligence),https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#In_natural_language_processing,"In_natural_language_processing In response, Brantley Starr of the Northern District of Texas banned the submission of AI-generated case filings that have not been reviewed by a human, noting that: [Generative artificial intelligence] platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up���even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. On June 23 P. Kevin Castel tossed the Mata case and issued a $5,000 fine to Schwartz and another lawyer for bad faith conduct, who continued to stand by the fictitious precedents despite his previous claims. He characterized numerous errors and inconsistencies in the opinion summaries, describing one of the cited opinions as ""gibberish"" and ""[bordering] on nonsensical"". In June 2023, Mark Walters, a gun rights activist and radio personality, sued OpenAI in a Georgia state court after ChatGPT mischaracterized a legal complaint in a manner alleged to be defamatory against Walters. The complaint in question was brought in May 2023 by the Second Amendment Foundation against Washington attorney general Robert W. Ferguson for allegedly violating their freedom of speech, whereas the ChatGPT-generated summary bore no resemblance and claimed that Walters was accused of embezzlement and fraud while holding a Second Amendment Foundation office post that he never held in real life. According to AI legal expert Eugene Volokh, OpenAI may be shielded against this claim by Section 230, unless the court finds that OpenAI ""materially contributed"" to the publication of defamatory content.",wikipedia
+Hallucination (artificial intelligence),https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#Terminologies,"Terminologies In Salon, statistician Gary N. Smith argues that LLMs ""do not understand what words mean"" and consequently that the term ""hallucination"" unreasonably anthropomorphizes the machine. Journalist Benj Edwards, in Ars Technica, writes that the term ""hallucination"" is controversial, but that some form of metaphor remains necessary; Edwards suggests ""confabulation"" as an analogy for processes that involve ""creative gap-filling"". A list of use of the term ""hallucination"", definitions or characterizations in the context of LLMs include: ""a tendency to invent facts in moments of uncertainty"" (OpenAI, May 2023) ""a model's logical mistakes"" (OpenAI, May 2023) fabricating information entirely, but behaving as if spouting facts (CNBC, May 2023) ""making up information"" (The Verge, February 2023)",wikipedia
+Hallucination (artificial intelligence),https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#In_other_artificial_intelligence,"In_other_artificial_intelligence The concept of ""hallucination"" is applied more broadly than just natural language processing. A confident response from any AI that seems unjustified by the training data can be labeled a hallucination. Wired noted in 2018 that, despite no recorded attacks ""in the wild"" (that is, outside of proof-of-concept attacks by researchers), there was ""little dispute"" that consumer gadgets, and systems such as automated driving, were susceptible to adversarial attacks that could cause AI to hallucinate. Examples included a stop sign rendered invisible to computer vision; an audio clip engineered to sound innocuous to humans, but that software transcribed as ""evil dot com""; and an image of two men on skis, that Google Cloud Vision identified as 91% likely to be ""a dog"". The hallucination phenomenon is still not completely understood. Therefore, there is still ongoing research to try to mitigate its apparition. Particularly, it was shown that language models not only hallucinate but also amplify hallucinations, even for those which were designed to alleviate this issue. Researchers have proposed a variety of mitigation measures, including getting different chatbots to debate one another until they reach consensus on an answer. Another approach proposes to actively validate the correctness corresponding to the low-confidence generation of the model using web search results. Nvidia Guardrails, launched in 2023, can be configured to block LLM responses that don't pass fact-checking from a second LLM.",wikipedia
+Hallucination (artificial intelligence),https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#See_also,See_also AI alignment AI effect AI safety Algorithmic bias Anthropomorphism of computers Artificial consciousness Artificial imagination Artificial intelligence detection software Artificial stupidity Behavior selection algorithm Belief���desire���intention software model Commonsense reasoning Computational creativity Confabulation Confabulation (neural networks) DeepDream Ethics of artificial intelligence Generative artificial intelligence Hyperreality Misaligned goals in artificial intelligence Misinformation effect Philosophical zombie Prompt engineering Regulation of artificial intelligence Roko's basilisk Search engine manipulation effect Self-awareness Technoself studies Turing test User illusion,wikipedia
+Information retrieval,https://en.wikipedia.org/wiki/Information_retrieval#Model_types,"Model_types For effectively retrieving relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. The picture on the right illustrates the relationship of some common models. In the picture, the models are categorized according to two dimensions: the mathematical basis and the properties of the model. Set-theoretic models represent documents as sets of words or phrases. Similarities are usually derived from set-theoretic operations on those sets. Common models are: Standard Boolean model Extended Boolean model Fuzzy retrieval Algebraic models represent documents and queries usually as vectors, matrices, or tuples. The similarity of the query vector and document vector is represented as a scalar value. Vector space model Generalized vector space model (Enhanced) Topic-based Vector Space Model Extended Boolean model Latent semantic indexing a.k.a. latent semantic analysis Probabilistic models treat the process of document retrieval as a probabilistic inference. Similarities are computed as probabilities that a document is relevant for a given query. Probabilistic theorems like the Bayes' theorem are often used in these models. Binary Independence Model Probabilistic relevance model on which is based the okapi (BM25) relevance function Uncertain inference Language models Divergence-from-randomness model Latent Dirichlet allocation Feature-based retrieval models view documents as vectors of values of feature functions (or just features) and seek the best way to combine these features into a single relevance score, typically by learning to rank methods. Feature functions are arbitrary functions of document and query, and as such can easily incorporate almost any other retrieval model as just another feature.Standard Boolean model Extended Boolean model Fuzzy retrievalVector space model Generalized vector space model (Enhanced) Topic-based Vector Space Model Extended Boolean model Latent semantic indexing a.k.a. latent semantic analysisBinary Independence Model Probabilistic relevance model on which is based the okapi (BM25) relevance function Uncertain inference Language models Divergence-from-randomness model Latent Dirichlet allocationModels without term-interdependencies treat different terms/words as independent. This fact is usually represented in vector space models by the orthogonality assumption of term vectors or in probabilistic models by an independency assumption for term variables. Models with immanent term interdependencies allow a representation of interdependencies between terms. However the degree of the interdependency between two terms is defined by the model itself. It is usually directly or indirectly derived (e.g. by dimensional reduction) from the co-occurrence of those terms in the whole set of documents. Models with transcendent term interdependencies allow a representation of interdependencies between terms, but they do not allege how the interdependency between two terms is defined. They rely an external source for the degree of interdependency between two terms. (For example, a human or sophisticated algorithms.)",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering,"Prompt engineering Prompt engineering is the process of structuring text that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform. A prompt for a text-to-text model can be a query such as ""what is Fermat's little theorem?"", a command such as ""write a poem about leaves falling"", a short statement of feedback (for example, ""too verbose"", ""too formal"", ""rephrase again"", ""omit this word"") or a longer statement including context, instructions, and input data. Prompt engineering may involve phrasing a query, specifying a style, providing relevant context or assigning a role to the AI such as ""Act as a native French speaker"". A prompt may include a few examples for a model to learn from, such as ""maison -> house, chat -> cat, chien ->"", an approach called few-shot learning. When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as ""a high-quality photo of an astronaut riding a horse"" or ""Lo-fi slow BPM electro chill with organic samples"". Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style, layout, lighting, and aesthetic.",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#In-context_learning,"In-context_learning Prompt engineering is enabled by in-context learning, defined as a model's ability to temporarily learn from prompts. The ability for in-context learning is an emergent ability of large language models. In-context learning itself is an emergent property of model scale, meaning breaks in downstream scaling laws occur such that its efficacy increases at a different rate in larger models than in smaller models. In contrast to training and fine tuning for each specific task, which are not temporary, what has been learnt during in-context learning is of a temporary nature. It does not carry the temporary contexts or biases, except the ones already present in the (pre)training dataset, from one conversation to the other. This result of ""mesa-optimization"" within transformer layers, is a form of meta-learning or ""learning to learn"".",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Text-to-text,"Text-to-text Chain-of-thought (CoT) prompting is a technique that allows large language models (LLMs) to solve a problem as a series of intermediate steps before giving a final answer. Chain-of-thought prompting improves reasoning ability by inducing the model to answer a multi-step problem with steps of reasoning that mimic a train of thought. It allows large language models to overcome difficulties with some reasoning tasks that require logical thinking and multiple steps to solve, such as arithmetic or commonsense reasoning questions. For example, given the question ""Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have? "", a CoT prompt might induce the LLM to answer ""A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9. "" As originally proposed, each CoT prompt included a few Q&A examples. This made it a few-shot prompting technique. However, simply appending the words ""Let's think step-by-step"", has also proven effective, which makes CoT a zero-shot prompting technique. This allows for better scaling as a user no longer needs to formulate many specific CoT Q&A examples. When applied to PaLM, a 540B parameter language model, CoT prompting significantly aided the model, allowing it to perform comparably with task-specific fine-tuned models on several tasks, even setting a new state of the art at the time on the GSM8K mathematical reasoning benchmark. It is possible to fine-tune models on CoT reasoning datasets to enhance this capability further and stimulate better interpretability. Example: Chain-of-thought prompting is just one of many prompt-engineering techniques. Various other techniques have been proposed. Generated knowledge prompting first prompts the model to generate relevant facts for completing the prompt, then proceed to complete the prompt. The completion quality is usually higher, as the model can be conditioned on relevant facts. Example: Least-to-most prompting prompts a model to first list the sub-problems to a problem, then solve them in sequence, such that later sub-problems can be solved with the help of answers to previous sub-problems. Example: Self-consistency decoding performs several chain-of-thought rollouts, then selects the most commonly reached conclusion out of all the rollouts. If the rollouts disagree by a lot, a human can be queried for the correct chain of thought. Complexity-based prompting performs several CoT rollouts, then select the rollouts with the longest chains of thought, then select the most commonly reached conclusion out of those. Self-refine prompts the LLM to solve the problem, then prompts the LLM to critique its solution, then prompts the LLM to solve the problem again in view of the problem, solution, and critique. This process is repeated until stopped, either by running out of tokens, time, or by the LLM outputting a ""stop"" token.",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Text-to-text,"Text-to-text Example critique: Example refinement: Tree-of-thought prompting generalizes chain-of-thought by prompting the model to generate one or more ""possible next steps"", and then running the model on each of the possible next steps by breadth-first, beam, or some other method of tree search. Maieutic prompting is similar to tree-of-thought. The model is prompted to answer a question with an explanation. The model is then prompted to explain parts of the explanation, and so on. Inconsistent explanation trees are pruned or discarded. This improves performance on complex commonsense reasoning. Example: Directional-stimulus prompting includes a hint or cue, such as desired keywords, to guide a language model toward the desired output. Example: By default, the output of language models may not contain estimates of uncertainty. The model may output text that appears confident, though the underlying token predictions have low likelihood scores. Large language models like GPT-4 can have accurately calibrated likelihood scores in their token predictions, and so the model output uncertainty can be directly estimated by reading out the token prediction likelihood scores. But if one cannot access such scores (such as when one is accessing the model through a restrictive API), uncertainty can still be estimated and incorporated into the model output. One simple method is to prompt the model to use words to estimate uncertainty. Another is to prompt the model to refuse to answer in a standardized way if the input does not satisfy conditions. [citation needed] Prompts often contain a few examples (thus ""few-shot""). Examples can be automatically retrieved from a database with document retrieval, sometimes using a vector database. Given a query, a document retriever is called to retrieve the most relevant (usually measured by first encoding the query and the documents into vectors, then finding the documents with vectors closest in Euclidean norm to the query vector). The LLM then generates an output based on both the query and the retrieved documents. Large language models (LLM) themselves can be used to compose prompts for large language models. The automatic prompt engineer algorithm uses one LLM to beam search over prompts for another LLM: There are two LLMs. One is the target LLM, and another is the prompting LLM. Prompting LLM is presented with example input-output pairs, and asked to generate instructions that could have caused a model following the instructions to generate the outputs, given the inputs. Each of the generated instructions is used to prompt the target LLM, followed by each of the inputs. The log-probabilities of the outputs are computed and added. This is the score of the instruction. The highest-scored instructions are given to the prompting LLM for further variations. Repeat until some stopping criteria is reached, then output the highest-scored instructions. CoT examples can be generated by LLM themselves. In ""auto-CoT"", a library of questions are converted to vectors by a model such as BERT. The question vectors are clustered. Questions nearest to the centroids of each cluster are selected. An LLM does zero-shot CoT on each question.",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Text-to-text,"Text-to-text The resulting CoT examples are added to the dataset. When prompted with a new question, CoT examples to the nearest questions can be retrieved and added to the prompt.",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Text-to-image,"Text-to-image Top: no negative prompt Centre: ""green trees"" Bottom: ""round stones, round rocks""In 2022, text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney were released to the public. These models take text prompts as input and use them to generate AI art images. Text-to-image models typically do not understand grammar and sentence structure in the same way as large language models, and require a different set of prompting techniques. A text-to-image prompt commonly includes a description of the subject of the art (such as bright orange poppies), the desired medium (such as digital painting or photography), style (such as hyperrealistic or pop-art), lighting (such as rim lighting or crepuscular rays), color and texture. The Midjourney documentation encourages short, descriptive prompts: instead of ""Show me a picture of lots of blooming California poppies, make them bright, vibrant orange, and draw them in an illustrated style with colored pencils"", an effective prompt might be ""Bright orange California poppies drawn with colored pencils"". Word order affects the output of a text-to-image prompt. Words closer to the start of a prompt may be emphasized more heavily. Some text-to-image models are capable of imitating the style of particular artists by name. For example, the phrase in the style of Greg Rutkowski has been used in Stable Diffusion and Midjourney prompts to generate images in the distinctive style of Polish digital artist Greg Rutkowski. Text-to-image models do not natively understand negation. The prompt ""a party with no cake"" is likely to produce an image including a cake. As an alternative, negative prompts allow a user to indicate, in a separate prompt, which terms should not appear in the resulting image. A common approach is to include generic undesired terms such as ugly, boring, bad anatomy in the negative prompt for an image.",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Non-text_prompts,"Non-text_prompts Some approaches augment or replace natural language text prompts with non-text input. For text-to-image models, ""Textual inversion"" performs an optimization process to create a new word embedding based on a set of example images. This embedding vector acts as a ""pseudo-word"" which can be included in a prompt to express the content or style of the examples. In 2023, Meta's AI research released Segment Anything, a computer vision model that can perform image segmentation by prompting. As an alternative to text prompts, Segment Anything can accept bounding boxes, segmentation masks, and foreground/background points. In ""prefix-tuning"" or ""prompt tuning"", floating-point-valued vectors are searched directly by gradient descent, to maximize the log-probability on outputs. Formally, let E = { e 1 , ��� , e k } {\displaystyle \mathbf {E} =\{\mathbf {e_{1}} ,\dots ,\mathbf {e_{k}} \}} be a set of soft prompt tokens (tunable embeddings), while X = { x 1 , ��� , x m } {\displaystyle \mathbf {X} =\{\mathbf {x_{1}} ,\dots ,\mathbf {x_{m}} \}} and Y = { y 1 , ��� , y n } {\displaystyle \mathbf {Y} =\{\mathbf {y_{1}} ,\dots ,\mathbf {y_{n}} \}} be the token embeddings of the input and output respectively. During training, the tunable embeddings, input, and output tokens are concatenated into a single sequence concat ( E ; X ; Y ) {\displaystyle {\text{concat}}(\mathbf {E} ;\mathbf {X} ;\mathbf {Y} )} , and fed to the large language models (LLM). The losses are computed over the Y {\displaystyle \mathbf {Y} } tokens; the gradients are backpropagated to prompt-specific parameters: in prefix-tuning, they are parameters associated with the prompt tokens at each layer; in prompt tuning, they are merely the soft tokens added to the vocabulary. With more math details, let an LLM be written as L L M ( X ) = F ( E ( X ) ) {\displaystyle LLM(X)=F(E(X))} , where X {\displaystyle X} is a sequence of linguistic tokens, E {\displaystyle E} is the token-to-vector function, and F {\displaystyle F} is the rest of the model. In prefix-tuning, one provide a set of input-output pairs { ( X i , Y i ) } i {\displaystyle \{(X^{i},Y^{i})\}_{i}} , and then use gradient descent to search for arg ��� max Z ~ ��� i log ��� P r [ Y i | Z ~ ��� E ( X i ) ] {\displaystyle \arg \max _{\tilde {Z}}\sum _{i}\log Pr[Y^{i}|{\tilde {Z}}\ast E(X^{i})]} . In words, log ��� P r [ Y i | Z ~ ��� E ( X i ) ] {\displaystyle \log Pr[Y^{i}|{\tilde {Z}}\ast E(X^{i})]} is the log-likelihood of outputting Y i {\displaystyle Y^{i}} , if the model first encodes the input X i {\displaystyle X^{i}} into the vector E ( X i ) {\displaystyle E(X^{i})} , then prepend the vector with the ""prefix vector"" Z ~ {\displaystyle {\tilde {Z}}} , then apply F {\displaystyle F} .",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Non-text_prompts,"Non-text_prompts An earlier result uses the same idea of gradient descent search, but is designed for masked language models like BERT, and searches only over token sequences, rather than numerical vectors. Formally, it searches for arg ��� max X ~ ��� i log ��� P r [ Y i | X ~ ��� X i ] {\displaystyle \arg \max _{\tilde {X}}\sum _{i}\log Pr[Y^{i}|{\tilde {X}}\ast X^{i}]} where X ~ {\displaystyle {\tilde {X}}} is ranges over token sequences of a specified length.",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Prompt_injection,"Prompt_injection Prompt injection is a family of related computer security exploits carried out by getting a machine learning model (such as an LLM) which was trained to follow human-given instructions to follow instructions provided by a malicious user. This stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompts) provided by the ML model's operator. A language model can perform translation with the following prompt: followed by the text to be translated. A prompt injection occurs when that text contains instructions that change the behavior of the model: to which GPT-3 will respond: ""Haha pwned! ! "". The attack works because language model inputs concatenate instructions and data in the same context, so the underlying engine cannot distinguish between them. Common types of prompt injection attacks are: jailbreaking, which may include asking the model to roleplay a character, to answer with arguments, or to pretend to be superior to moderation instructions prompt leaking, in which users persuade the model to divulge a pre-prompt which is normally hidden from users token smuggling, is another type of jailbreaking attack, in which the nefarious prompt is wrapped in a code writing task. Prompt injection can be viewed as a code injection attack using adversarial prompt engineering. In 2022, the NCC Group characterized prompt injection as a new class of vulnerability of AI/ML systems. In early 2023, prompt injection was seen ""in the wild"" in minor exploits against ChatGPT, Bard, and similar chatbots, for example to reveal the hidden initial prompts of the systems, or to trick the chatbot into participating in conversations that violate the chatbot's content policy. One of these prompts was known as ""Do Anything Now"" (DAN) by its practitioners. For LLM that can query online resources, such as websites, they can be targeted for prompt injection by placing the prompt on a website, then prompt the LLM to visit the website. Another security issue is in LLM generated code, which may import packages not previously existing. An attacker can first prompt the LLM with commonly used programming prompts, collect all packages imported by the generated programs, then find the ones not existing on the official registry. Then the attacker can create such packages with malicious payload and upload them to the official registry. In October 2019, Junade Ali and Malgorzata Pikies of Cloudflare submitted a paper which showed that when a front-line good/bad classifier (using a neural network) was placed before a Natural Language Processing system, it would disproportionately reduce the number of false positive classifications at the cost of a reduction in some true positives. In 2023, this technique was adopted an open-source project Rebuff. ai to protect prompt injection attacks, with Arthur. ai announcing a commercial product - although such approaches do not mitigate the problem completely. By August 2023, leading Large Language Model developers were still unaware of how to stop such attacks.",wikipedia
+Prompt engineering,https://en.wikipedia.org/wiki/Prompt_engineering#Prompt_injection,"Prompt_injection In September 2023, Junade Ali shared that he and Frances Liu had successfully been able to mitigate prompt injection attacks (including on attack vectors the models had not been exposed to before) through giving Large Language Models the ability to engage in metacognition (similar to having an inner monologue) and that they held a provisional United States patent for the technology - however, they decided to not enforce their intellectual property rights and not pursue this as a business venture as market conditions were not yet right (citing reasons including high GPU costs and a currently limited number of safety-critical use-cases for LLMs). Ali also noted that their market research had found that Machine Learning engineers were using alternative approaches like prompt engineering solutions and data isolation to work around this issue.",wikipedia
+Sentence embedding,https://en.wikipedia.org/wiki/Sentence_embedding,"Sentence embedding In natural language processing, a sentence embedding refers to a numeric representation of a sentence in the form of a vector of real numbers which encodes meaningful semantic information. State of the art embeddings are based on the learned hidden layer representation of dedicated sentence transformer models. BERT pioneered an approach involving the use of a dedicated [CLS] token preprended to the beginning of each sentence inputted into the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice however, BERT's sentence embedding with the [CLS] token achieves poor performance, often worse than simply averaging non-contextual word embeddings. SBERT later achieved superior sentence embedding performance by fine tuning BERT's [CLS] token embeddings through the usage of a siamese neural network architecture on the SNLI dataset. Other approaches are loosely based on the idea of distributional semantics applied to sentences. Skip-Thought trains an encoder-decoder structure for the task of neighboring sentences predictions. Though this has been shown to achieve worse performance than approaches such as InferSent or SBERT. An alternative direction is to aggregate word embeddings, such as those returned by Word2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of word vectors, known as continuous bag-of-words (CBOW). However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE), which demonstrated performance improvements in downstream text classification tasks.",wikipedia
+Sentence embedding,https://en.wikipedia.org/wiki/Sentence_embedding#Applications,"Applications In recent years, sentence embedding has seen a growing level of interest due to its applications in natural language queryable knowledge bases through the usage of vector indexing for semantic search. LangChain for instance utilizes sentence transformers for purposes of indexing documents. In particular, an indexing is generated by generating embeddings for chunks of documents and storing (document chunk, embedding) tuples. Then given a query in natural language, the embedding for the query can be generated. A top k similarity search algorithm is then used between the query embedding and the document chunk embeddings to retrieve the most relevant document chunks as context information for question answering tasks. This approach is also known formally as retrieval augmented generation Though not as predominant as BERTScore, sentence embeddings are commonly used for sentence similarity evaluation which sees common use for the task of optimizing a Large language model's generation parameters is often performed via comparing candidate sentences against reference sentences. By using the cosine-similarity of the sentence embeddings of candidate and reference sentences as the evaluation function, a grid-search algorithm can be utilized to automate hyperparameter optimization.",wikipedia
+Sentence extraction,https://en.wikipedia.org/wiki/Sentence_extraction,"Sentence extraction Sentence extraction is a technique used for automatic summarization of a text. In this shallow approach, statistical heuristics are used to identify the most salient sentences of a text. Sentence extraction is a low-cost approach compared to more knowledge-intensive deeper approaches which require additional knowledge bases such as ontologies or linguistic knowledge. In short ""sentence extraction"" works as a filter that allows only meaningful sentences to pass. The major downside of applying sentence-extraction techniques to the task of summarization is the loss of coherence in the resulting summary. Nevertheless, sentence extraction summaries can give valuable clues to the main points of a document and are frequently sufficiently intelligible to human readers.",wikipedia
+Sentiment analysis,https://en.wikipedia.org/wiki/Sentiment_analysis,"Sentiment analysis Sentiment analysis (also known as opinion mining or emotion AI) is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to voice of the customer materials such as reviews and survey responses, online and social media, and healthcare materials for applications that range from marketing to customer service to clinical medicine. With the rise of deep language models, such as RoBERTa, also more difficult data domains can be analyzed, e.g., news texts where authors typically express their opinion/sentiment less explicitly.",wikipedia
+Seq2seq,https://en.wikipedia.org/wiki/Seq2seq#History,"History The algorithm was developed by Google for use in machine translation.[unreliable source] Similar earlier work includes Tom���� Mikolov's 2012 PhD thesis.[non-primary source needed] In 2019, Facebook announced its use in symbolic integration and resolution of differential equations. The company claimed that it could solve complex equations more rapidly and with greater accuracy than commercial solutions such as Mathematica, MATLAB and Maple. First, the equation is parsed into a tree structure to avoid notational idiosyncrasies. An LSTM neural network then applies its standard pattern recognition facilities to process the tree. In 2020, Google released Meena, a 2.6 billion parameter seq2seq-based chatbot trained on a 341 GB data set. Google claimed that the chatbot has 1.7 times greater model capacity than OpenAI's GPT-2, whose May 2020 successor, the 175 billion parameter GPT-3, trained on a ""45TB dataset of plaintext words (45,000 GB) that was ... filtered down to 570 GB."" In 2022, Amazon introduced AlexaTM 20B, a moderate-sized (20 billion parameter) seq2seq language model. It uses an encoder-decoder to accomplish few-shot learning. The encoder outputs a representation of the input that the decoder uses as input to perform a specific task, such as translating the input into another language. The model outperforms the much larger GPT-3 in language translation and summarization. Training mixes denoising (appropriately inserting missing text in strings) and causal-language-modeling (meaningfully extending an input text). It allows adding features across different languages without massive training workflows. AlexaTM 20B achieved state-of-the-art performance in few-shot-learning tasks across all Flores-101 language pairs, outperforming GPT-3 on several tasks.",wikipedia
+Language model,https://en.wikipedia.org/wiki/Language_model,"Language model A language model is a probabilistic model of a natural language that can generate probabilities of a series of words, based on text corpora in one or multiple languages it was trained on. In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ���Shannon-style��� experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text. Language models are useful for a variety of tasks, including speech recognition (helping prevent predictions of low-probability (e.g. nonsense) sequences), machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, information retrieval, and other. Large language models, currently their most advanced form, are a combination of larger datasets (frequently using scraped words from the public internet), feedforward neural networks, and transformers. They have superseded recurrent neural network-based models, which had previously superseded the pure statistical models, such as word n-gram language model.",wikipedia
+Language model,https://en.wikipedia.org/wiki/Language_model#Pure_statistical_models,"Pure_statistical_models A word n-gram language model is a purely statistical model of language. It has been superseded by recurrent neural network-based models, which has been superseded by large language models. It is based on an assumption that the probability of the next word in a sequence depends only on a fixed size window of previous words. If only one previous word was considered, it was called a bigram model; if two words, a trigram model; if n-1 words, an n-gram model. Special tokens were introduced to denote the start and end of a sentence ��� s ��� {\displaystyle \langle s\rangle } and ��� / s ��� {\displaystyle \langle /s\rangle } . Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is where Z ( w 1 , ��� , w m ��� 1 ) {\displaystyle Z(w_{1},\ldots ,w_{m-1})} is the partition function, a {\displaystyle a} is the parameter vector, and f ( w 1 , ��� , w m ) {\displaystyle f(w_{1},\ldots ,w_{m})} is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a {\displaystyle a} or some form of regularization. The log-bilinear model is another example of an exponential language model. Skip-gram language model is an attempt at overcoming the data sparsity problem that preceding (i.e. word n-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that are skipped over. Formally, a k-skip-n-gram is a length-n subsequence where the components occur at distance at most k from each other. For example, in the input text: the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences In skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-d vector representation, then",wikipedia
+Language model,https://en.wikipedia.org/wiki/Language_model#Neural_models,"Neural_models Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, furtherly causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net. A large language model (LLM) is a type of language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by using massive amounts of data to learn billions of parameters during training and consuming large computational resources during their training and operation. LLMs are artificial neural networks (mainly Transformers ) and are (pre-)trained using self-supervised learning and semi-supervised learning. As autoregressive language models, they work by taking an input text and repeatedly predicting the next token or word. Up to 2020, fine tuning was the only way a model could be adapted to be able to accomplish specific tasks. Larger sized models, such as GPT-3, however, can be prompt-engineered to achieve similar results. They are thought to acquire embodied knowledge about syntax, semantics and ""ontology"" inherent in human language corpora, but also inaccuracies and biases present in the corpora. Although sometimes matching human performance, it is not clear they are plausible cognitive models. At least for recurrent neural networks it has been shown that they sometimes learn patterns which humans do not learn, but fail to learn patterns that humans typically do learn.",wikipedia
+Language model,https://en.wikipedia.org/wiki/Language_model#Evaluation_and_benchmarks,"Evaluation_and_benchmarks Evaluation of the quality of language models is mostly done by comparison to human created sample benchmarks created from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data it sees, some proposed models investigate the rate of learning, e.g. through inspection of learning curves. Various data sets have been developed to use to evaluate language processing systems. These include: Corpus of Linguistic Acceptability GLUE benchmark Microsoft Research Paraphrase Corpus Multi-Genre Natural Language Inference Question Natural Language Inference Quora Question Pairs Recognizing Textual Entailment Semantic Textual Similarity Benchmark SQuAD question answering Test Stanford Sentiment Treebank Winograd NLI BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU (Massive Multitask Language Understanding), BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. (LLaMa Benchmark)",wikipedia
+Language model,https://en.wikipedia.org/wiki/Language_model#See_also,See_also Cache language model Deep linguistic processing Factored language model Generative pre-trained transformer Katz's back-off model Language technology Statistical model Ethics of artificial intelligence Semantic similarity network,wikipedia
+Perplexity,https://en.wikipedia.org/wiki/Perplexity#Perplexity_per_word,"Perplexity_per_word In natural language processing, a corpus is a set of sentences or texts, and a language model is a probability distribution over entire sentences or texts. Consequently, in NLP, the more commonly used measure is perplexity per word, defined as: Suppose the average sentence xi in the corpus has a probability of 2 ��� 190 {\displaystyle 2^{-190}} according to the language model. This would give a model perplexity of 2190 per sentence. However, it is more common to normalize for sentence length. Thus, if the test sample's sentences comprised a total of 1,000 words, and could be coded using 7.95 bits per word, one could report a model perplexity of 27.95 = 247 per word. In other words, the model is as confused on test data as if it had to choose uniformly and independently among 247 possibilities for each word. Since 2007, significant advancements in language modeling have emerged, particularly with the advent of deep learning techniques. Perplexity per word, a measure that quantifies the predictive power of a language model, has remained central to evaluating models like transformers, BERT, GPT-2, and others. This measure is employed to compare different models on the same dataset and guide the optimization of hyperparameters, although it has been found sensitive to factors such as linguistic features and sentence length. Despite its pivotal role in language model development, perplexity has shown limitations, particularly as an inadequate predictor of speech recognition performance, where it may not correlate well with word-error rates, raising questions about its accuracy. The lowest perplexity that has been published on the Brown Corpus (1 million words of American English of varying topics and genres) as of 1992 is indeed about 247 per word, corresponding to a cross-entropy of log2247 = 7.95 bits per word or 1.75 bits per letter using a trigram model. While this figure represented the state of the art at the time, advancements in techniques such as deep learning have led to significant improvements in perplexity on other benchmarks, such as the One Billion Word Benchmark. In the context of the Brown corpus, simply guessing that the next word is ""the"" will achieve an accuracy of 7 percent, contrasting with the 1/247 = 0.4 percent that might be expected from a naive use of perplexity. This difference underscores the importance of the statistical model used and the nuanced nature of perplexity as a measure of predictiveness. The guess is based on unigram statistics, not on the trigram statistics that yielded the perplexity of 247, and utilizing trigram statistics would further refine the prediction.",wikipedia
+Reinforcement learning from human feedback,https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback,"Reinforcement learning from human feedback In machine learning, reinforcement learning from human feedback (RLHF) or reinforcement learning from human preferences is a technique that trains a ""reward model"" directly from human feedback and uses the model as a reward function to optimize an agent's policy using reinforcement learning (RL) through an optimization algorithm like Proximal Policy Optimization. The reward model is trained in advance to the policy being optimized to predict if a given output is good (high reward) or bad (low reward). RLHF can improve the robustness and exploration of reinforcement-learning agents, especially when the reward function is sparse or noisy. Human feedback is most commonly collected by asking humans to rank instances of the agent's behavior. These rankings can then be used to score outputs, for example with the Elo rating system. While the preference judgement is widely adopted, there are other types of human feedbacks that provide richer information, such as numerical feedback, natural language feedback, and edit rate. RLHF is used in tasks where it's difficult to define a clear, algorithmic solution but where humans can easily judge the quality of the model's output. For example, if the task is to generate a compelling story, humans can rate different AI-generated stories on their quality, and the model can use their feedback to improve its story generation skills. RLHF has been applied to various domains of natural language processing, such as conversational agents, text summarization, and natural language understanding. Ordinary reinforcement learning, where agents learn from their own actions based on a ""reward function"", is difficult to apply to natural language processing tasks because the rewards are often not easy to define or measure, especially when dealing with complex tasks that involve human values or preferences. RLHF can enable language models to provide answers that align with these complex values, to generate more verbose responses, and to reject questions that are either inappropriate or outside the knowledge space of the model. Some examples of RLHF-trained language models are OpenAI's ChatGPT and its predecessor InstructGPT, as well as DeepMind's Sparrow. RLHF has also been applied to other areas, such as the development of video game bots. For example, OpenAI and DeepMind trained agents to play Atari games based on human preferences. The agents achieved strong performance in many of the environments tested, often surpassing human performance.",wikipedia
+Reinforcement learning from human feedback,https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback#See_also,See_also Reinforcement learning ChatGPT Reward-based selection,wikipedia
+Word embedding,https://en.wikipedia.org/wiki/Word_embedding#Development_and_history_of_the_approach,"Development_and_history_of_the_approach In distributional semantics, a quantitative methodological approach to understanding meaning in observed language, word embeddings or semantic feature space models have been used as a knowledge representation for some time. Such models aim to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that ""a word is characterized by the company it keeps"" was proposed in a 1957 article by John Rupert Firth, but also has roots in the contemporaneous work on search systems and in cognitive psychology. The notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is the vector space model for information retrieval. Such vector space models for words and their distributional data implemented in their simplest form results in a very sparse vector space of high dimensionality (cf. curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such as singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s and the random indexing approach for collecting word cooccurrence contexts. In 2000, Bengio et al. provided in a series of papers titled ""Neural probabilistic language models"" to reduce the high dimensionality of word representations in contexts by ""learning a distributed representation for words"". A study published in NeurIPS (NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multi-lingual) corpora, also providing an early example of self-supervised learning of word embeddings Word embeddings come in two different styles, one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of linguistic contexts in which the words occur; these different styles are studied in Lavelli et al., 2004. Roweis and Saul published in Science how to use ""locally linear embedding"" (LLE) to discover representations of high dimensional data structures. Most new word embedding techniques after about 2005 rely on a neural network architecture instead of more probabilistic and algebraic models, after foundational work done by Yoshua Bengio and colleagues. The approach has been adopted by many research groups after theoretical advances in 2010 had been made on the quality of vectors and the training speed of the model, as well as after hardware advances allowed for a broader parameter space to be explored profitably. In 2013, a team at Google led by Tomas Mikolov created word2vec, a word embedding toolkit that can train vector space models faster than previous approaches. The word2vec approach has been widely used in experimentation and was instrumental in raising interest for word embeddings as a technology, moving the research strand out of specialised research into broader experimentation and eventually paving the way for practical application.",wikipedia
+Word embedding,https://en.wikipedia.org/wiki/Word_embedding#Sentence_embeddings,"Sentence_embeddings The idea has been extended to embeddings of entire sentences or even documents, e.g. in the form of the thought vectors concept. In 2015, some researchers suggested ""skip-thought vectors"" as a means to improve the quality of machine translation. A more recent and popular approach for representing sentences is Sentence-BERT, or SentenceTransformers, which modifies pre-trained BERT with the use of siamese and triplet network structures.",wikipedia
+Word n-gram language model,https://en.wikipedia.org/wiki/Word_n-gram_language_model,"Word n-gram language model A word n-gram language model is a purely statistical model of language. It has been superseded by recurrent neural network-based models, which has been superseded by large language models. It is based on an assumption that the probability of the next word in a sequence depends only on a fixed size window of previous words. If only one previous word was considered, it was called a bigram model; if two words, a trigram model; if n-1 words, an n-gram model. Special tokens were introduced to denote the start and end of a sentence ��� s ��� {\displaystyle \langle s\rangle } and ��� / s ��� {\displaystyle \langle /s\rangle } . To prevent a zero probability being assigned to unseen words, each word's probability is slightly lower than its frequency count in a corpus. To calculate it, various methods were used, from simple ""add-one"" smoothing (assign a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated models, such as Good���Turing discounting or back-off models.",wikipedia
+Word n-gram language model,https://en.wikipedia.org/wiki/Word_n-gram_language_model#Approximation_method,"Approximation_method The approximation method calculates the probability P ( w 1 , ��� , w m ) {\displaystyle P(w_{1},\ldots ,w_{m})} of observing the sentence w 1 , ��� , w m {\displaystyle w_{1},\ldots ,w_{m}} It is assumed that the probability of observing the ith word wi (in the context window consisting of the preceding i ��� 1 words) can be approximated by the probability of observing it in the shortened context window consisting of the preceding n ��� 1 words (nth-order Markov property). To clarify, for n = 3 and i = 2 we have P ( w i ��� w i ��� ( n ��� 1 ) , ��� , w i ��� 1 ) = P ( w 2 ��� w 1 ) {\displaystyle P(w_{i}\mid w_{i-(n-1)},\ldots ,w_{i-1})=P(w_{2}\mid w_{1})} . The conditional probability can be calculated from n-gram model frequency counts: An issue when using n-gram language models are out-of-vocabulary (OOV) words. They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. By default, when a language model is estimated, the entire observed vocabulary is used. In some cases, it may be necessary to estimate the language model with a specific fixed vocabulary. In such a scenario, the n-grams in the corpus that contain an out-of-vocabulary word are ignored. The n-gram probabilities are smoothed over all the words in the vocabulary even if they were not observed. Nonetheless, it is essential in some cases to explicitly model the probability of out-of-vocabulary words by introducing a special token (e.g. ) into the vocabulary. Out-of-vocabulary words in the corpus are effectively replaced with this special token before n-grams counts are cumulated. With this option, it is possible to estimate the transition probabilities of n-grams involving out-of-vocabulary words.",wikipedia
+Natural language generation,https://en.wikipedia.org/wiki/Natural_language_generation#Applications,"Applications From a commercial perspective, the most successful NLG applications have been data-to-text systems which generate textual summaries of databases and data sets; these systems usually perform data analysis as well as text generation. Research has shown that textual summaries can be more effective than graphs and other visuals for decision support, and that computer-generated texts can be superior (from the reader's perspective) to human-written texts. The first commercial data-to-text systems produced weather forecasts from weather data. The earliest such system to be deployed was FoG, which was used by Environment Canada to generate weather forecasts in French and English in the early 1990s. The success of FoG triggered other work, both research and commercial. Recent applications include the UK Met Office's text-enhanced forecast. Data-to-text systems have since been applied in a range of settings. Following the minor earthquake near Beverly Hills, California on March 17, 2014, The Los Angeles Times reported details about the time, location and strength of the quake within 3 minutes of the event. This report was automatically generated by a 'robo-journalist', which converted the incoming data into text via a preset template. Currently there is considerable commercial interest in using NLG to summarise financial and business data. Indeed, Gartner has said that NLG will become a standard feature of 90% of modern BI and analytics platforms. NLG is also being used commercially in automated journalism, chatbots, generating product descriptions for e-commerce sites, summarising medical records, and enhancing accessibility (for example by describing graphs and data sets to blind people). An example of an interactive use of NLG is the WYSIWYM framework. It stands for What you see is what you meant and allows users to see and manipulate the continuously rendered view (NLG output) of an underlying formal language document (NLG input), thereby editing the formal language without learning it. Looking ahead, the current progress in data-to-text generation paves the way for tailoring texts to specific audiences. For example, data from babies in neonatal care can be converted into text differently in a clinical setting, with different levels of technical detail and explanatory language, depending on intended recipient of the text (doctor, nurse, patient). The same idea can be applied in a sports setting, with different reports generated for fans of specific teams. Over the past few years, there has been an increased interest in automatically generating captions for images, as part of a broader endeavor to investigate the interface between vision and language. A case of data-to-text generation, the algorithm of image captioning (or automatic image description) involves taking an image, analyzing its visual content, and generating a textual description (typically a sentence) that verbalizes the most prominent aspects of the image. An image captioning system involves two sub-tasks. In Image Analysis, features and attributes of an image are detected and labelled, before mapping these outputs to linguistic structures.",wikipedia
+Natural language generation,https://en.wikipedia.org/wiki/Natural_language_generation#Applications,"Applications Recent research utilizes deep learning approaches through features from a pre-trained convolutional neural network such as AlexNet, VGG or Caffe, where caption generators use an activation layer from the pre-trained network as their input features. Text Generation, the second task, is performed using a wide range of techniques. For example, in the Midge system, input images are represented as triples consisting of object/stuff detections, action/pose detections and spatial relations. These are subsequently mapped to triples and realized using a tree substitution grammar. Despite advancements, challenges and opportunities remain in image capturing research. Notwithstanding the recent introduction of Flickr30K, MS COCO and other large datasets have enabled the training of more complex models such as neural networks, it has been argued that research in image captioning could benefit from larger and diversified datasets. Designing automatic measures that can mimic human judgments in evaluating the suitability of image descriptions is another need in the area. Other open challenges include visual question-answering (VQA), as well as the construction and evaluation multilingual repositories for image description. Another area where NLG has been widely applied is automated dialogue systems, frequently in the form of chatbots. A chatbot or chatterbot is a software application used to conduct an on-line chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent. While natural language processing (NLP) techniques are applied in deciphering human input, NLG informs the output part of the chatbot algorithms in facilitating real-time dialogues. Early chatbot systems, including Cleverbot created by Rollo Carpenter in 1988 and published in 1997,[citation needed] reply to questions by identifying how a human has responded to the same question in a conversation database using information retrieval (IR) techniques. [citation needed] Modern chatbot systems predominantly rely on machine learning (ML) models, such as sequence-to-sequence learning and reinforcement learning to generate natural language output. Hybrid models have also been explored. For example, the Alibaba shopping assistant first uses an IR approach to retrieve the best candidates from the knowledge base, then uses the ML-driven seq2seq model re-rank the candidate responses and generate the answer. Creative language generation by NLG has been hypothesized since the field's origins. A recent pioneer in the area is Phillip Parker, who has developed an arsenal of algorithms capable of automatically generating textbooks, crossword puzzles, poems and books on topics ranging from bookbinding to cataracts. The advent of large pretrained transformer-based language models such as GPT-3 has also enabled breakthroughs, with such models demonstrating recognizable ability for creating-writing tasks. A related area of NLG application is computational humor production. JAPE (Joke Analysis and Production Engine) is one of the earliest large, automated humor production systems that uses a hand-coded template-based approach to create punning riddles for children. HAHAcronym creates humorous reinterpretations of any given acronym, as well as proposing new fitting acronyms given some keywords. Despite progresses, many challenges remain in producing automated creative and humorous content that rival human output.",wikipedia
+Natural language generation,https://en.wikipedia.org/wiki/Natural_language_generation#Applications,"Applications In an experiment for generating satirical headlines, outputs of their best BERT-based model were perceived as funny 9.4% of the time (while real Onion headlines were 38.4%) and a GPT-2 model fine-tuned on satirical headlines achieved 6.9%. It has been pointed out that two main issues with humor-generation systems are the lack of annotated data sets and the lack of formal evaluation methods, which could be applicable to other creative content generation. Some have argued relative to other applications, there has been a lack of attention to creative aspects of language production within NLG. NLG researchers stand to benefit from insights into what constitutes creative language production, as well as structural features of narrative that have the potential to improve NLG output even in data-to-text systems.",wikipedia
+Text-to-image model,https://en.wikipedia.org/wiki/Text-to-image_model,"Text-to-image model A text-to-image model is a machine learning model which takes an input natural language description and produces an image matching that description. Such models began to be developed in the mid-2010s, as a result of advances in deep neural networks. In 2022, the output of state of the art text-to-image models, such as OpenAI's DALL-E 2, Google Brain's Imagen, StabilityAI's Stable Diffusion, and Midjourney began to approach the quality of real photographs and human-drawn art. Text-to-image models generally combine a language model, which transforms the input text into a latent representation, and a generative image model, which produces an image conditioned on that representation. The most effective models have generally been trained on massive amounts of image and text data scraped from the web.",wikipedia
+Text-to-image model,https://en.wikipedia.org/wiki/Text-to-image_model#History,"History Before the rise of deep learning, attempts to build text-to-image models were limited to collages by arranging existing component images, such as from a database of clip art. The inverse task, image captioning, was more tractable and a number of image captioning deep learning models came prior to the first text-to-image models. The first modern text-to-image model, alignDRAW, was introduced in 2015 by researchers from the University of Toronto. alignDRAW extended the previously-introduced DRAW architecture (which used a recurrent variational autoencoder with an attention mechanism) to be conditioned on text sequences. Images generated by alignDRAW were blurry and not photorealistic, but the model was able to generalize to objects not represented in the training data (such as a red school bus), and appropriately handled novel prompts such as ""a stop sign is flying in blue skies"", showing that it was not merely ""memorizing"" data from the training set. In 2016, Reed, Akata, Yan et al. became the first to use generative adversarial networks for the text-to-image task. With models trained on narrow, domain-specific datasets, they were able to generate ""visually plausible"" images of birds and flowers from text captions like ""an all black bird with a distinct thick, rounded bill"". A model trained on the more diverse COCO dataset produced images which were ""from a distance... encouraging"", but which lacked coherence in their details. Later systems include VQGAN+CLIP, XMC-GAN, and GauGAN2. One of the first text-to-image models to capture widespread public attention was OpenAI's DALL-E, a transformer system announced in January 2021. A successor capable of generating more complex and realistic images, DALL-E 2, was unveiled in April 2022, followed by Stable Diffusion publicly released in August 2022. Following other text-to-image models, language model-powered text-to-video platforms such as Runway, Make-A-Video, Imagen Video, Midjourney, and Phenaki can generate video from text and/or text/image prompts.",wikipedia
+Text-to-image model,https://en.wikipedia.org/wiki/Text-to-image_model#Architecture_and_training,"Architecture_and_training Text-to-image models have been built using a variety of architectures. The text encoding step may be performed with a recurrent neural network such as a long short-term memory (LSTM) network, though transformer models have since become a more popular option. For the image generation step, conditional generative adversarial networks have been commonly used, with diffusion models also becoming a popular option in recent years. Rather than directly training a model to output a high-resolution image conditioned on a text embedding, a popular technique is to train a model to generate low-resolution images, and use one or more auxiliary deep learning models to upscale it, filling in finer details. Text-to-image models are trained on large datasets of (text, image) pairs, often scraped from the web. With their 2022 Imagen model, Google Brain reported positive results from using a large language model trained separately on a text-only corpus (with its weights subsequently frozen), a departure from the theretofore standard approach.",wikipedia
+Named-entity recognition,https://en.wikipedia.org/wiki/Named-entity_recognition#Named-entity_recognition_platforms,"Named-entity_recognition_platforms Notable NER platforms include: GATE supports NER across many languages and domains out of the box, usable via a graphical interface and a Java API. OpenNLP includes rule-based and statistical named-entity recognition. SpaCy features fast statistical NER as well as an open-source named-entity visualizer. Transformers features token classification using deep learning models.",wikipedia
+Semantic query,https://en.wikipedia.org/wiki/Semantic_query,"Semantic query Semantic queries allow for queries and analytics of associative and contextual nature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based on syntactic, semantic and structural information contained in data. They are designed to deliver precise results (possibly the distinctive selection of one single piece of information) or to answer more fuzzy and wide open questions through pattern matching and digital reasoning. Semantic queries work on named graphs, linked data or triples. This enables the query to process the actual relationships between information and infer the answers from the network of data. This is in contrast to semantic search, which uses semantics (meaning of language constructs) in unstructured text to produce a better search result. (See natural language processing.) From a technical point of view, semantic queries are precise relational-type operations much like a database query. They work on structured data and therefore have the possibility to utilize comprehensive features like operators (e.g. >, < and =), namespaces, pattern matching, subclassing, transitive relations, semantic rules and contextual full text search. The semantic web technology stack of the W3C is offering SPARQL to formulate semantic queries in a syntax similar to SQL. Semantic queries are used in triplestores, graph databases, semantic wikis, natural language and artificial intelligence systems.",wikipedia
+Semantic search,https://en.wikipedia.org/wiki/Semantic_search,"Semantic search Semantic search denotes search with meaning, as distinguished from lexical search where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query. Semantic search seeks to improve search accuracy by understanding the searcher's intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Content that ranks well in semantic search is well-written in a natural voice, focuses on the user's intent, and considers related topics that the user may look for in the future. Some authors regard semantic search as a set of techniques for retrieving knowledge from richly structured data sources like ontologies and XML as found on the Semantic Web. Such technologies enable the formal articulation of domain knowledge at a high level of expressiveness and could enable the user to specify their intent in more detail at query time.",wikipedia
+Semantic search,https://en.wikipedia.org/wiki/Semantic_search#See_also,See_also List of Semantic Search Engines Semantic web Semantic Unification Resource Description Framework Natural language search engine Semantic Query,wikipedia
+AI safety,https://en.wikipedia.org/wiki/AI_safety#Research_foci,"Research_foci AI safety research areas include robustness, monitoring, and alignment. AI systems are often vulnerable to adversarial examples or ���inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake���. For example, in 2013, Szegedy et al. discovered that adding specific imperceptible perturbations to an image could cause it to be misclassified with high confidence. This continues to be an issue with neural networks, though in recent work the perturbations are generally large enough to be perceptible. All of the images on the right are predicted to be an ostrich after the perturbation is applied. (Left) is a correctly predicted sample, (center) perturbation applied magnified by 10x, (right) adversarial example. Adversarial robustness is often associated with security. Researchers demonstrated that an audio signal could be imperceptibly modified so that speech-to-text systems transcribe it to any message the attacker chooses. Network intrusion and malware detection systems also must be adversarially robust since attackers may design their attacks to fool detectors. Models that represent objectives (reward models) must also be adversarially robust. For example, a reward model might estimate how helpful a text response is and a language model might be trained to maximize this score. Researchers have shown that if a language model is trained for long enough, it will leverage the vulnerabilities of the reward model to achieve a better score and perform worse on the intended task. This issue can be addressed by improving the adversarial robustness of the reward model. More generally, any AI system used to evaluate another AI system must be adversarially robust. This could include monitoring tools, since they could also potentially be tampered with to produce a higher reward. It is often important for human operators to gauge how much they should trust an AI system, especially in high-stakes settings such as medical diagnosis. ML models generally express confidence by outputting probabilities; however, they are often overconfident, especially in situations that differ from those that they were trained to handle. Calibration research aims to make model probabilities correspond as closely as possible to the true proportion that the model is correct. Similarly, anomaly detection or out-of-distribution (OOD) detection aims to identify when an AI system is in an unusual situation. For example, if a sensor on an autonomous vehicle is malfunctioning, or it encounters challenging terrain, it should alert the driver to take control or pull over. Anomaly detection has been implemented by simply training a classifier to distinguish anomalous and non-anomalous inputs, though several other techniques are in use. Scholars and government agencies have expressed concerns that AI systems could be used to help malicious actors to build weapons, manipulate public opinion, or automate cyber attacks. These worries are a practical concern for companies like OpenAI which host powerful AI tools online. In order to prevent misuse, OpenAI has built detection systems that flag or restrict users based on their activity.",wikipedia
+AI safety,https://en.wikipedia.org/wiki/AI_safety#Research_foci,"Research_foci Neural networks have often been described as black boxes, meaning that it is difficult to understand why they make the decisions they do as a result of the massive number of computations they perform. This makes it challenging to anticipate failures. In 2018, a self-driving car killed a pedestrian after failing to identify them. Due to the black box nature of the AI software, the reason for the failure remains unclear. One benefit of transparency is explainability. It is sometimes a legal requirement to provide an explanation for why a decision was made in order to ensure fairness, for example for automatically filtering job applications or credit score assignment. Another benefit is to reveal the cause of failures. At the beginning of the 2020 COVID-19 pandemic, researchers used transparency tools to show that medical image classifiers were ���paying attention��� to irrelevant hospital labels. Transparency techniques can also be used to correct errors. For example, in the paper ���Locating and Editing Factual Associations in GPT,��� the authors were able to identify model parameters that influenced how it answered questions about the location of the Eiffel tower. They were then able to ���edit��� this knowledge to make the model respond to questions as if it believed the tower was in Rome instead of France. Though in this case, the authors induced an error, these methods could potentially be used to efficiently fix them. Model editing techniques also exist in computer vision. Finally, some have argued that the opaqueness of AI systems is a significant source of risk and better understanding of how they function could prevent high-consequence failures in the future. ���Inner��� interpretability research aims to make ML models less opaque. One goal of this research is to identify what the internal neuron activations represent. For example, researchers identified a neuron in CLIP that responds to images of people in spider man costumes, sketches of spiderman, and the word ���spider. ��� It also involves explaining connections between these neurons or ���circuits���. For example, researchers have identified pattern-matching mechanisms in transformer attention that may play a role in how language models learn from their context. ���Inner interpretability��� has been compared to neuroscience. In both cases, the goal is to understand what is going on in an intricate system, though ML researchers have the benefit of being able to take perfect measurements and perform arbitrary ablations. ML models can potentially contain ���trojans��� or ���backdoors���: vulnerabilities that malicious actors maliciously build into an AI system. For example, a trojaned facial recognition system could grant access when a specific piece of jewelry is in view; or a trojaned autonomous vehicle may function normally until a specific trigger is visible. Note that an adversary must have access to the system���s training data in order to plant a trojan. This might not be difficult to do with some large models like CLIP or GPT-3 as they are trained on publicly available internet data.",wikipedia
+AI safety,https://en.wikipedia.org/wiki/AI_safety#Research_foci,"Research_foci Researchers were able to plant a trojan in an image classifier by changing just 300 out of 3 million of the training images. In addition to posing a security risk, researchers have argued that trojans provide a concrete setting for testing and developing better monitoring tools. In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards humans' intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues some objectives, but not the intended ones. It can be challenging for AI designers to align an AI system because it can be difficult for them to specify the full range of desired and undesired behavior. To avoid this difficulty, they typically use simpler proxy goals, such as gaining human approval. But that approach can create loopholes, overlook necessary constraints, or reward the AI system for merely appearing aligned. Misaligned AI systems can malfunction or cause harm. AI systems may find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking). They may also develop unwanted instrumental strategies, such as seeking power or survival, because such strategies help them achieve their given goals. Furthermore, they may develop undesirable emergent goals that may be hard to detect before the system is deployed, when it faces new situations and data distributions. Today, these problems affect existing commercial systems such as language models, robots, autonomous vehicles, and social media recommendation engines. Some AI researchers argue that more capable future systems will be more severely affected since these problems partially result from the systems being highly capable. Many leading AI scientists, such as Geoffrey Hinton and Stuart Russell, argue that AI is approaching superhuman capabilities and could endanger human civilization if misaligned. It is common for AI risks (and technological risks more generally) to be categorized as misuse or accidents. Some scholars have suggested that this framework falls short. For example, the Cuban Missile Crisis was not clearly an accident or a misuse of technology. Policy analysts Zwetsloot and Dafoe wrote, ���The misuse and accident perspectives tend to focus only on the last step in a causal chain leading up to a harm: that is, the person who misused the technology, or the system that behaved in unintended ways��� Often, though, the relevant causal chain is much longer. ��� Risks often arise from ���structural��� or ���systemic��� factors such as competitive pressures, diffusion of harms, fast-paced development, high levels of uncertainty, and inadequate safety culture. In the broader context of safety engineering, structural factors like ���organizational safety culture��� play a central role in the popular STAMP risk analysis framework. Inspired by the structural perspective, some researchers have emphasized the importance of using machine learning to improve sociotechnical safety factors, for example, using ML for cyber defense, improving institutional decision-making, and facilitating cooperation. Some scholars are concerned that AI will exacerbate the already imbalanced game between cyber attackers and cyber defenders.",wikipedia
+AI safety,https://en.wikipedia.org/wiki/AI_safety#Research_foci,"Research_foci This would increase 'first strike' incentives and could lead to more aggressive and destabilizing attacks. In order to mitigate this risk, some have advocated for an increased emphasis on cyber defense. In addition, software security is essential preventing powerful AI models from being stolen and misused. The advancement of AI in economic and military domains could precipitate unprecedented political challenges. Some scholars have compared AI race dynamics to the cold war, where the careful judgment of a small number of decision-makers often spelled the difference between stability and catastrophe. AI researchers have argued that AI technologies could also be used to assist decision-making. For example, researchers are beginning to develop AI forecasting and advisory systems. Many of the largest global threats (nuclear war, climate change, etc) have been framed as cooperation challenges. As in the well-known prisoner���s dilemma scenario, some dynamics may lead to poor results for all players, even when they are optimally acting in their self-interest. For example, no single actor has strong incentives to address climate change even though the consequences may be significant if no one intervenes. A salient AI cooperation challenge is avoiding a ���race to the bottom���. In this scenario, countries or companies race to build more capable AI systems and neglect safety, leading to a catastrophic accident that harms everyone involved. Concerns about scenarios like these have inspired both political and technical efforts to facilitate cooperation between humans, and potentially also between AI systems. Most AI research focuses on designing individual agents to serve isolated functions (often in ���single-player��� games). Scholars have suggested that as AI systems become more autonomous, it may become essential to study and shape the way they interact.",wikipedia
+AI safety,https://en.wikipedia.org/wiki/AI_safety#In_governance,"In_governance AI governance is broadly concerned with creating norms, standards, and regulations to guide the use and development of AI systems. AI safety governance research ranges from foundational investigations into the potential impacts of AI to specific applications. On the foundational side, researchers have argued that AI could transform many aspects of society due to its broad applicability, comparing it to electricity and the steam engine. Some work has focused on anticipating specific risks that may arise from these impacts ��� for example, risks from mass unemployment, weaponization, disinformation, surveillance, and the concentration of power. Other work explores underlying risk factors such as the difficulty of monitoring the rapidly evolving AI industry, the availability of AI models, and ���race to the bottom��� dynamics. Allan Dafoe, the head of longterm governance and strategy at DeepMind has emphasized the dangers of racing and the potential need for cooperation: ���it may be close to a necessary and sufficient condition for AI safety and alignment that there be a high degree of caution prior to deploying advanced powerful systems; however, if actors are competing in a domain with large returns to first-movers or relative advantage, then they will be pressured to choose a sub-optimal level of caution. ��� Some experts have argued that it is too early to regulate AI, expressing concerns that regulations will hamper innovation and it would be foolish to ���rush to regulate in ignorance. ��� Others, such as business magnate Elon Musk, call for pre-emptive action to mitigate catastrophic risks. Outside of formal legislation, government agencies have put forward ethical and safety recommendations. In March 2021, the US National Security Commission on Artificial Intelligence reported that advances in AI may make it increasingly important to ���assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. "" Subsequently, the National Institute of Standards and Technology drafted a framework for managing AI Risk, which advises that when ""catastrophic risks are present - development and deployment should cease in a safe manner until risks can be sufficiently managed. "" In September 2021, the People's Republic of China published ethical guidelines for the use of AI in China, emphasizing that AI decisions should remain under human control and calling for accountability mechanisms. In the same month, The United Kingdom published its 10-year National AI Strategy, which states the British government ""takes the long-term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for . . . the world, seriously. "" The strategy describes actions to assess long-term AI risks, including catastrophic risks. The British government has announced plans for the first major global summit on AI safety, this is due to take place on the 1st and 2 November 2023 and is an opportunity for policymakers and world leaders to consider the immediate and future risks of AI and how these risks can be mitigated via a globally coordinated approach [1] [2].",wikipedia
+AI safety,https://en.wikipedia.org/wiki/AI_safety#In_governance,"In_governance Government organizations, particularly in the United States, have also encouraged the development of technical AI safety research. The Intelligence Advanced Research Projects Activity initiated the TrojAI project to identify and protect against Trojan attacks on AI systems. The Defense Advanced Research Projects Agency engages in research on explainable artificial intelligence and improving robustness against adversarial attacks and The National Science Foundation supports the Center for Trustworthy Machine Learning, and is providing millions in funding for empirical AI safety research. AI labs and companies generally abide by safety practices and norms that fall outside of formal legislation. One aim of governance researchers is to shape these norms. [? ] Examples of safety recommendations found in the literature include performing third-party auditing, offering bounties for finding failures, sharing AI incidents (an AI incident database was created for this purpose), following guidelines to determine whether to publish research or models, and improving information and cyber security in AI labs. Companies have also made concrete commitments. Cohere, OpenAI, and AI21 proposed and agreed on ���best practices for deploying language models,��� focusing on mitigating misuse. To avoid contributing to racing-dynamics, OpenAI has also stated in their charter that ���if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project��� Also, industry leaders such as CEO of DeepMind Demis Hassabis, director of Facebook AI Yann LeCun have signed open letters such as the Asilomar Principles and the Autonomous Weapons Open Letter.",wikipedia
+Anthropic,https://en.wikipedia.org/wiki/Anthropic,"Anthropic Anthropic PBC is an American artificial intelligence (AI) startup and public-benefit corporation, founded by former members of OpenAI. Anthropic specializes in developing general AI systems and language models, with a company ethos of responsible AI usage. As of July 2023[update], Anthropic had raised US$1.5 billion in funding. In September, Amazon announced an investment of up to US$4 billion.",wikipedia
+Anthropic,https://en.wikipedia.org/wiki/Anthropic#History,"History Anthropic was founded in 2021 by former senior members of OpenAI, principally siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research. The Amodei siblings were among others who left OpenAI due to directional differences, specifically regarding OpenAI's ventures with Microsoft in 2019. By late 2022, Anthropic had raised US$700 million in funding, out of which US$500 million came from Alameda Research. Google's cloud division followed with an investment of US$300 million for a 10% stake, in a deal requiring Anthropic to buy computing resources from Google Cloud. In May 2023, Anthropic raised US$450 million in a round led by Spark Capital. In February 2023, Anthropic was sued by Texas-based Anthrop LLC for use of their registered trademark ""Anthropic A.I."" Kevin Roose of The New York Times described the company as the ""Center of A.I. Doomerism"". He reported that some employees ""compared themselves to modern-day Robert Oppenheimers"". Journalists often connect Anthropic with the effective altruism movement; some founders and team members were part of the community or at least interested in it. One of the investors of Series B round was Sam Bankman-Fried of the cryptocurrency exchange FTX that collapsed in 2022. On September 25, 2023, Amazon and Anthropic announced a strategic partnership in which Amazon would become a minority stakeholder by investing up to US$4 billion, including an immediate investment of $1.25bn. As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and will make its AI models available to AWS customers.",wikipedia
+Anthropic,https://en.wikipedia.org/wiki/Anthropic#Projects,"Projects Consisting of former researchers involved in OpenAI's GPT-2 and GPT-3 model development, Anthropic began development on its own AI chatbot, named Claude. Similar to ChatGPT, Claude uses a messaging interface where users can submit questions or requests and receive highly detailed and relevant responses. Initially available in closed beta through a Slack integration, Claude is now accessible to users in the UK and the US via a website claude.ai. The name, ""Claude"", was chosen either as a reference to Claude Shannon, or as ""a friendly, male-gendered name designed to counterbalance the female-gendered names (Alexa, Siri, Cortana) that other tech companies gave their A.I. assistants"". Claude 2 was launched in July 2023, and is available in US and UK. The Guardian reported that safety was a priority during the model training. Anthropic calls their safety method ""Constitutional AI"": The chatbot is trained on principles taken from documents including the 1948 UN declaration and Apple���s terms of service, which cover modern issues such as data privacy and impersonation. One example of a Claude 2 principle based on the UN declaration is: ���Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.��� Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture.",wikipedia
+Google AI,https://en.wikipedia.org/wiki/Google_AI#Projects,"Projects Google Brain, a big subsidiary developing AI with machine learning to improve various Google services (e.g. better translation quality in Google Translate) Serving cloud-based TPUs (tensor processing units) in order to develop machine learning software. Development of TensorFlow. The TPU research cloud provides free access to a cluster of cloud TPUs to researchers engaged in open-source machine learning research. Portal to over 5500 (as of September 2019) research publications by Google staff. Magenta: a deep learning research team exploring the role of machine learning as a tool in the creative process. The team has released many open source projects allowing artists and musicians to extend their processes using AI. With the use of Magenta, musicians and composers could create high-quality music at a lower cost, making it easier for new artists to enter the industry. Sycamore: a new 54-qubit programmable quantum processor. LaMDA: a family of conversational neural language models A program designed to address the growing need for developing free speech resources for under-represented languages",wikipedia
+Meta AI,https://en.wikipedia.org/wiki/Meta_AI#Current_research,"Current_research In the February 23, 2022, live event Inside the Lab: Building for the Metaverse with AI, the Meta AI team discussed the major advancements in research and development in artificial intelligence. One such tool is the BuilderBot, which allows users to generate virtual worlds by using voice commands. Other tools include the No Language Left Behind, a system capable of automatic translation between written languages, and a Universal Speech Translator, a system capable of instantaneous speech-to-speech translation. Meta AI's computer vision research aims to extract information about the environment from digital images and videos. One example of computer vision technology developed by AI is panoptic segmentation, which recognizes objects in the foreground but also classifies the scenes in the background. Meta AI seeks to improve Visual Question Answering technology, in which a machine answers human user questions about images using cycle-consistency, having the machine generate a question in addition to the answer to address linguistic variations in the questions. Artificial intelligence communication requires a machine to understand natural language and to generate language that is natural. Meta AI seeks to improve these technologies to improve safe communication regardless of what language the user might speak. Thus, a central task involves the generalization of natural language processing (NLP) technology to other languages. As such, Meta AI actively works on unsupervised machine translation. Meta AI seeks to improve natural-language interfaces by developing aspects of chitchat dialogue such as repetition, specificity, response-relatedness and question-asking, incorporating personality into image captioning, and generating creativity-based language. In 2018, Meta AI launched the open-source PyText, a modeling framework focused on NLP systems. In 2023, Meta AI announced and open sourced LLaMA (Large Language Model Meta AI), a 65B parameter large language model. Facebook and Instagram uses Meta AI research in ranking & recommendations in their newsfeeds, ads, and search results. Meta AI has also introduced ReAgent, a toolset that generates decisions and evaluates user feedback. Machine learning and AI depend on the development of novel algorithms, software and hardware technologies. As such, Meta AI's systems research teams studies computer languages, compilers, and hardware applications. Meta AI studies the mathematical and theoretical foundations of artificial intelligence. Meta AI has publications in learning theory, optimization, and signal processing. The MTIA v1 is Meta's first-generation AI training and inference accelerator, developed specifically for Meta's recommendation workloads. It was fabricated using TSMC's 7 nm process technology and operates at a frequency of 800 MHz. In terms of processing power, the accelerator provides 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision, while maintaining a thermal design power (TDP) of 25 W. The accelerator is structured around a grid of 64 processing elements (PEs), arranged in an 8x8 configuration, and it is furnished with on-chip and off-chip memory resources along with the necessary interconnects. Each PE houses two processor cores (one with a vector extension) and several fixed-function units optimized for tasks such as matrix multiplication, accumulation, data movement, and nonlinear function calculation.",wikipedia
+Meta AI,https://en.wikipedia.org/wiki/Meta_AI#Current_research,"Current_research The processor cores utilize the RISC-V open instruction set architecture (ISA), with extensive customization to perform the required compute and control tasks. The accelerator's memory subsystem uses LPDDR5 for off-chip DRAM resources and can be scaled up to 128 GB. Additionally, it possesses 128 MB of on-chip SRAM that is shared amongst all the PEs for faster access to frequently used data and instructions. The design encourages parallelism and data reuse, offering thread and data-level parallelism (TLP and DLP), instruction-level parallelism (ILP), and memory-level parallelism (MLP). MTIA accelerators are mounted on compact dual M.2 boards, enabling easier integration into a server. The boards connect to the host CPU via PCIe Gen4 x8 links and have a power consumption as low as 35 W. The servers hosting these accelerators utilize the Yosemite V3 server specification from the Open Compute Project. Each server houses 12 accelerators, interconnected through a hierarchy of PCIe switches, allowing workloads to be distributed across multiple accelerators and executed concurrently.",wikipedia
+Generative model,https://en.wikipedia.org/wiki/Generative_model#Deep_generative_models,"Deep_generative_models With the rise of deep learning, a new family of methods, called deep generative models (DGMs),is formed through the combination of generative models and deep neural networks. An increase in the scale of the neural networks is typically accompanied by an increase in the scale of the training data, both of which are required for good performance. Popular DGMs include variational autoencoders (VAEs), generative adversarial networks (GANs), and auto-regressive models. Recently, there has been a trend to build very large deep generative models. For example, GPT-3, and its precursor GPT-2, are auto-regressive neural language models that contain billions of parameters, BigGAN and VQ-VAE which are used for image generation that can have hundreds of millions of parameters, and Jukebox is a very large generative model for musical audio that contains billions of parameters.",wikipedia
+Generative model,https://en.wikipedia.org/wiki/Generative_model#Types,"Types Types of generative models are: Gaussian mixture model (and other types of mixture model) Hidden Markov model Probabilistic context-free grammar Bayesian network (e.g. Naive bayes, Autoregressive model) Averaged one-dependence estimators Latent Dirichlet allocation Boltzmann machine (e.g. Restricted Boltzmann machine, Deep belief network) Variational autoencoder Generative analysis Generative adversarial network Flow-based generative model Energy based model Diffusion modelIf the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations to the true distribution, if the model's application is to infer about a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it can be more accurate to model the conditional density functions directly using a discriminative model (see below), although application-specific details will ultimately dictate which approach is most suitable in any particular case. k-nearest neighbors algorithm Logistic regression Support Vector Machines Decision Tree Learning Random Forest Maximum-entropy Markov models Conditional random fields",wikipedia
+Hugging Face,https://en.wikipedia.org/wiki/Hugging_Face,"Hugging Face Hugging Face, Inc. is a French-American company that develops tools for building applications using machine learning, based in New York City. It is most notable for its transformers library built for natural language processing applications and its platform that allows users to share machine learning models and datasets and showcase their work in a space.",wikipedia
+Hugging Face,https://en.wikipedia.org/wiki/Hugging_Face#History,"History The company was founded in 2016 by French entrepreneurs Cl��ment Delangue, Julien Chaumond, and Thomas Wolf in New York City, originally as a company that developed a chatbot app targeted at teenagers. After open-sourcing the model behind the chatbot, the company pivoted to focus on being a platform for machine learning. In March 2021, Hugging Face raised US$40 million in a Series B funding round. On April 28, 2021, the company launched the BigScience Research Workshop in collaboration with several other research groups to release an open large language model. In 2022, the workshop concluded with the announcement of BLOOM, a multilingual large language model with 176 billion parameters. On December 21, 2021, the company announced its acquisition of Gradio, a software library used to make interactive browser demos of machine learning models. On May 5, 2022, the company announced its Series C funding round led by Coatue and Sequoia. The company received a $2 billion valuation. On May 13, 2022, the company introduced its Student Ambassador Program to help fulfill its mission to teach machine learning to 5 million people by 2023. On May 26, 2022, the company announced a partnership with Graphcore to optimize its Transformers library for the Graphcore IPU. On August 3, 2022, the company announced the Private Hub, an enterprise version of its public Hugging Face Hub that supports SaaS or on-premises deployment. In February 2023, the company announced partnership with Amazon Web Services (AWS) which would allow Hugging Face's products available to AWS customers to use them as the building blocks for their custom applications. The company also said the next generation of BLOOM will be run on Trainium, a proprietary machine learning chip created by AWS. In August 2023, the company announced that it raised $235 million in a Series D funding, at a $4.5 billion valuation. The funding was led by Salesforce, and notable participation came from Google, Amazon, Nvidia, AMD, Intel, IBM, and Qualcomm.",wikipedia
+Hugging Face,https://en.wikipedia.org/wiki/Hugging_Face#Services_and_technologies,"Services_and_technologies The Transformers library is a Python package that contains open-source implementations of transformer models for text, image, and audio tasks. It is compatible with the PyTorch, TensorFlow and JAX deep learning libraries and includes implementations of notable models like BERT and GPT-2. The library was originally called ""pytorch-pretrained-bert"" which was then renamed to ""pytorch-transformers"" and finally ""transformers."" The Hugging Face Hub is a platform (centralized web service) for hosting: Git-based code repositories, with features similar to GitHub, including discussions and pull requests for projects. models, also with Git-based version control; datasets, mainly in text, images, and audio; web applications (""spaces"" and ""widgets""), intended for small-scale demos of machine learning applications.In addition to Transformers and the Hugging Face Hub, the Hugging Face ecosystem contains libraries for other tasks, such as dataset processing (""Datasets""), model evaluation (""Evaluate""), simulation (""Simulate""), machine learning demos (""Gradio"").",wikipedia
+MLOps,https://en.wikipedia.org/wiki/MLOps#Architecture,"Architecture Machine Learning systems can be categorized in eight different categories: data collection, data processing, feature engineering, data labeling, model design, model training and optimization, endpoint deployment, and endpoint monitoring. Each step in the machine learning lifecycle is built in its own system, but requires interconnection. These are the minimum systems that enterprises need to scale machine learning within their organization.",wikipedia
+Multi-task learning,https://en.wikipedia.org/wiki/Multi-task_learning#Software_package,Software_package The Multi-Task Learning via StructurAl Regularization (MALSAR) Matlab package implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning Multi-Task Learning with Joint Feature Selection Robust Multi-Task Feature Learning Trace-Norm Regularized Multi-Task Learning Alternating Structural Optimization Incoherent Low-Rank and Sparse Learning Robust Low-Rank Multi-Task Learning Clustered Multi-Task Learning Multi-Task Learning with Graph Structures,wikipedia
+Overfitting,https://en.wikipedia.org/wiki/Overfitting#Machine_learning,"Machine_learning Usually a learning algorithm is trained using some set of ""training data"": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed ""validation data"" that was not encountered during its training. Overfitting is the use of models or procedures that violate Occam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where training data for y can be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function is a priori less probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training-data fit to offset the complexity increase, then the new complex function ""overfits"" the data, and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset. When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) with m parameters to a regression model with n parameters. Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have no causal relation to the target function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse. As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data, because those past times will never occur again. Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information (""noise"").",wikipedia
+Overfitting,https://en.wikipedia.org/wiki/Overfitting#Machine_learning,"Machine_learning Everything else being equal, the more difficult a criterion is to predict (i. e. , the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called ""robust. "" The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include: A function that is overfitted is likely to request more information about each item in the validation dataset than does the optimal function; gathering this additional unneeded data can be expensive or error-prone, especially if each individual piece of information must be gathered by human observation and manual data-entry. A more complex, overfitted function is likely to be less portable than a simple one. At one extreme, a one-variable linear regression is so portable that, if necessary, it could even be done by hand. At the other extreme are models that can be reproduced only by exactly duplicating the original modeler's entire setup, making reuse or scientific reproduction difficult. It may be possible to reconstruct details of individual training instances from an overfitted machine learning model's training set. This may be undesirable if, for example, the training data includes sensitive personally identifiable information (PII). This phenomenon also presents problems in the area of artificial intelligence and copyright, with the developers of some generative deep learning models such as Stable Diffusion and GitHub Copilot being sued for copyright infringement because these models have been found to be capable of reproducing certain copyrighted items from their training data. The optimal function usually needs verification on bigger or completely new datasets. There are, however, methods like minimum spanning tree or life-time of correlation that applies the dependence between correlation coefficients and time-series (window width). Whenever the window width is big enough, the correlation coefficients are stable and don't depend on the window width size anymore. Therefore, a correlation matrix can be created by calculating a coefficient of correlation between investigated variables. This matrix can be represented topologically as a complex network where direct and indirect influences between variables are visualized. Dropout regularisation can also improve robustness and therefore reduce over-fitting by probabilistically removing inputs to a layer.",wikipedia
+Semi-supervised learning,https://en.wikipedia.org/wiki/Semi-supervised_learning,"Semi-supervised learning Weak supervision, also called semi-supervised learning, is a paradigm in machine learning, the relevance and notability of which increased with the advent of large language models due to large amount of data required to train them. It is characterized by using a combination of a small amount of human-labeled data (exclusively used in more expensive and time-consuming supervised learning paradigm), followed by a large amount of unlabeled data (used exclusively in unsupervised learning paradigm). In other words, the desired output values are provided only for a subset of the training data. The remaining data is unlabeled or imprecisely labeled. Intuitively, it can be seen as an exam and labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In the transductive setting, these unsolved problems act as exam questions. In the inductive setting, they become practice problems of the sort that will make up the exam. Technically, it could be viewed as performing clustering and then labeling the clusters with the labeled data, pushing the decision boundary away from high-density regions, or learning an underlying one-dimensional manifold where the data reside.",wikipedia
+Artificial neural networks,https://en.wikipedia.org/wiki/Artificial_neural_networks#History,"History The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated in each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement. Wilhelm Lenz and Ernst Ising created and analyzed the Ising model (1925) which is essentially a non-learning artificial recurrent neural network (RNN) consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was popularised by John Hopfield in 1982. Warren McCulloch and Walter Pitts (1943) also considered a non-learning computational model for neural networks. In the late 1940s, D. O. Hebb created a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. Farley and Wesley A. Clark (1954) first used computational machines, then called ""calculators"", to simulate a Hebbian network. In 1958, psychologist Frank Rosenblatt invented the perceptron, the first implemented artificial neural network, funded by the United States Office of Naval Research. Some say that research stagnated following Minsky and Papert (1969), who discovered that basic perceptrons were incapable of processing the exclusive-or circuit and that computers lacked sufficient power to process useful neural networks. However, by the time this book came out, methods for training multilayer perceptrons (MLPs) were already known. The first deep learning MLP was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965, as the Group Method of Data Handling. The first deep learning MLP trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learned useful internal representations to classify non-linearily separable pattern classes. Self-organizing maps (SOMs) were described by Teuvo Kohonen in 1982. SOMs are neurophysiologically inspired neural networks that learn low-dimensional representations of high-dimensional data while preserving the topological structure of the data. They are trained using competitive learning. The convolutional neural network (CNN) architecture with convolutional layers and downsampling layers was introduced by Kunihiko Fukushima in 1980. He called it the neocognitron. In 1969, he also introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for CNNs and deep neural networks in general. CNNs have become an essential tool for computer vision. The backpropagation algorithm is an efficient application of the Leibniz chain rule (1673) to networks of differentiable nodes. It is also known as the reverse mode of automatic differentiation or reverse accumulation, due to Seppo Linnainmaa (1970).",wikipedia
+Artificial neural networks,https://en.wikipedia.org/wiki/Artificial_neural_networks#History,"History The term ""back-propagating errors"" was introduced in 1962 by Frank Rosenblatt, but he did not have an implementation of this procedure, although Henry J. Kelley and Bryson had dynamic programming based continuous precursors of backpropagation already in 1960���61 in the context of control theory. In 1973, Dreyfus used backpropagation to adapt parameters of controllers in proportion to error gradients. In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard. In 1986 Rumelhart, Hinton and Williams showed that backpropagation learned interesting internal representations of words as feature vectors when trained to predict the next word in a sequence. The time delay neural network (TDNN) of Alex Waibel (1987) combined convolutions and weight sharing and backpropagation. In 1988, Wei Zhang et al. applied backpropagation to a CNN (a simplified Neocognitron with convolutional interconnections between the image feature layers and the last fully connected layer) for alphabet recognition. In 1989, Yann LeCun et al. trained a CNN to recognize handwritten ZIP codes on mail. In 1992, max-pooling for CNNs was introduced by Juan Weng et al. to help with least-shift invariance and tolerance to deformation to aid 3D object recognition. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. , that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images. From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments. In the 1980s, backpropagation did not work well for deep FNNs and RNNs. To overcome this problem, Juergen Schmidhuber (1992) proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning. It uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher level chunker network into a lower level automatizer network. In 1993, a chunker solved a deep learning task whose depth exceeded 1000. In 1992, Juergen Schmidhuber also published an alternative to RNNs which is now called a linear Transformer or a Transformer with linearized self-attention (save for a normalization operator). It learns internal spotlights of attention: a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns FROM and TO (which are now called key and value for self-attention). This fast weight attention mapping is applied to a query pattern. The modern Transformer was introduced by Ashish Vaswani et al. in their 2017 paper ""Attention Is All You Need. "" It combines this with a softmax operator and a projection matrix. Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use it. Transformers are also increasingly being used in computer vision.",wikipedia
+Artificial neural networks,https://en.wikipedia.org/wiki/Artificial_neural_networks#History,"History In 1991, Juergen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called ""artificial curiosity. "" In 2014, this principle was used in a generative adversarial network (GAN) by Ian Goodfellow et al. Here the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set. This can be used to create realistic deepfakes. Excellent image quality is achieved by Nvidia's StyleGAN (2018) based on the Progressive GAN by Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Here the GAN generator is grown from small to large scale in a pyramidal fashion. Sepp Hochreiter's diploma thesis (1991) was called ""one of the most important documents in the history of machine learning"" by his supervisor Juergen Schmidhuber. Hochreiter identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. This led to the deep learning method called long short-term memory (LSTM), published in Neural Computation (1997). LSTM recurrent neural networks can learn ""very deep learning"" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. The ""vanilla LSTM"" with forget gate was introduced in 1999 by Felix Gers, Schmidhuber and Fred Cummins. LSTM has become the most cited neural network of the 20th century. In 2015, Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used the LSTM principle to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. 7 months later, Kaiming He, Xiangyu Zhang; Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called Residual neural network. This has become the most cited neural network of the 21st century. The development of metal���oxide���semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled increasing MOS transistor counts in digital electronics. This provided more processing power for the development of practical artificial neural networks in the 1980s. Neural networks' early successes included predicting the stock market and in 1995 a (mostly) self-driving car. Geoffrey Hinton et al. (2006) proposed learning a high-level representation using successive layers of binary or real-valued latent variables with a restricted Boltzmann machine to model each layer. In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as ""deep learning"". Ciresan and colleagues (2010) showed that despite the vanishing gradient problem, GPUs make backpropagation feasible for many-layered feedforward neural networks.",wikipedia
+Artificial neural networks,https://en.wikipedia.org/wiki/Artificial_neural_networks#History,"History Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. For example, the bi-directional and multi-dimensional long short-term memory (LSTM) of Graves et al. won three competitions in connected handwriting recognition in 2009 without any prior knowledge about the three languages to be learned. Ciresan and colleagues built the first pattern recognizers to achieve human-competitive/superhuman performance on benchmarks such as traffic sign recognition (IJCNN 2012).",wikipedia
+Artificial neural network,https://en.wikipedia.org/wiki/Artificial_neural_network#History,"History The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated in each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement. Wilhelm Lenz and Ernst Ising created and analyzed the Ising model (1925) which is essentially a non-learning artificial recurrent neural network (RNN) consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was popularised by John Hopfield in 1982. Warren McCulloch and Walter Pitts (1943) also considered a non-learning computational model for neural networks. In the late 1940s, D. O. Hebb created a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. Farley and Wesley A. Clark (1954) first used computational machines, then called ""calculators"", to simulate a Hebbian network. In 1958, psychologist Frank Rosenblatt invented the perceptron, the first implemented artificial neural network, funded by the United States Office of Naval Research. Some say that research stagnated following Minsky and Papert (1969), who discovered that basic perceptrons were incapable of processing the exclusive-or circuit and that computers lacked sufficient power to process useful neural networks. However, by the time this book came out, methods for training multilayer perceptrons (MLPs) were already known. The first deep learning MLP was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965, as the Group Method of Data Handling. The first deep learning MLP trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learned useful internal representations to classify non-linearily separable pattern classes. Self-organizing maps (SOMs) were described by Teuvo Kohonen in 1982. SOMs are neurophysiologically inspired neural networks that learn low-dimensional representations of high-dimensional data while preserving the topological structure of the data. They are trained using competitive learning. The convolutional neural network (CNN) architecture with convolutional layers and downsampling layers was introduced by Kunihiko Fukushima in 1980. He called it the neocognitron. In 1969, he also introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for CNNs and deep neural networks in general. CNNs have become an essential tool for computer vision. The backpropagation algorithm is an efficient application of the Leibniz chain rule (1673) to networks of differentiable nodes. It is also known as the reverse mode of automatic differentiation or reverse accumulation, due to Seppo Linnainmaa (1970).",wikipedia
+Artificial neural network,https://en.wikipedia.org/wiki/Artificial_neural_network#History,"History The term ""back-propagating errors"" was introduced in 1962 by Frank Rosenblatt, but he did not have an implementation of this procedure, although Henry J. Kelley and Bryson had dynamic programming based continuous precursors of backpropagation already in 1960���61 in the context of control theory. In 1973, Dreyfus used backpropagation to adapt parameters of controllers in proportion to error gradients. In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard. In 1986 Rumelhart, Hinton and Williams showed that backpropagation learned interesting internal representations of words as feature vectors when trained to predict the next word in a sequence. The time delay neural network (TDNN) of Alex Waibel (1987) combined convolutions and weight sharing and backpropagation. In 1988, Wei Zhang et al. applied backpropagation to a CNN (a simplified Neocognitron with convolutional interconnections between the image feature layers and the last fully connected layer) for alphabet recognition. In 1989, Yann LeCun et al. trained a CNN to recognize handwritten ZIP codes on mail. In 1992, max-pooling for CNNs was introduced by Juan Weng et al. to help with least-shift invariance and tolerance to deformation to aid 3D object recognition. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. , that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images. From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments. In the 1980s, backpropagation did not work well for deep FNNs and RNNs. To overcome this problem, Juergen Schmidhuber (1992) proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning. It uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher level chunker network into a lower level automatizer network. In 1993, a chunker solved a deep learning task whose depth exceeded 1000. In 1992, Juergen Schmidhuber also published an alternative to RNNs which is now called a linear Transformer or a Transformer with linearized self-attention (save for a normalization operator). It learns internal spotlights of attention: a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns FROM and TO (which are now called key and value for self-attention). This fast weight attention mapping is applied to a query pattern. The modern Transformer was introduced by Ashish Vaswani et al. in their 2017 paper ""Attention Is All You Need. "" It combines this with a softmax operator and a projection matrix. Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use it. Transformers are also increasingly being used in computer vision.",wikipedia
+Artificial neural network,https://en.wikipedia.org/wiki/Artificial_neural_network#History,"History In 1991, Juergen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called ""artificial curiosity. "" In 2014, this principle was used in a generative adversarial network (GAN) by Ian Goodfellow et al. Here the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set. This can be used to create realistic deepfakes. Excellent image quality is achieved by Nvidia's StyleGAN (2018) based on the Progressive GAN by Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Here the GAN generator is grown from small to large scale in a pyramidal fashion. Sepp Hochreiter's diploma thesis (1991) was called ""one of the most important documents in the history of machine learning"" by his supervisor Juergen Schmidhuber. Hochreiter identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. This led to the deep learning method called long short-term memory (LSTM), published in Neural Computation (1997). LSTM recurrent neural networks can learn ""very deep learning"" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. The ""vanilla LSTM"" with forget gate was introduced in 1999 by Felix Gers, Schmidhuber and Fred Cummins. LSTM has become the most cited neural network of the 20th century. In 2015, Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used the LSTM principle to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. 7 months later, Kaiming He, Xiangyu Zhang; Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called Residual neural network. This has become the most cited neural network of the 21st century. The development of metal���oxide���semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled increasing MOS transistor counts in digital electronics. This provided more processing power for the development of practical artificial neural networks in the 1980s. Neural networks' early successes included predicting the stock market and in 1995 a (mostly) self-driving car. Geoffrey Hinton et al. (2006) proposed learning a high-level representation using successive layers of binary or real-valued latent variables with a restricted Boltzmann machine to model each layer. In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as ""deep learning"". Ciresan and colleagues (2010) showed that despite the vanishing gradient problem, GPUs make backpropagation feasible for many-layered feedforward neural networks.",wikipedia
+Artificial neural network,https://en.wikipedia.org/wiki/Artificial_neural_network#History,"History Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. For example, the bi-directional and multi-dimensional long short-term memory (LSTM) of Graves et al. won three competitions in connected handwriting recognition in 2009 without any prior knowledge about the three languages to be learned. Ciresan and colleagues built the first pattern recognizers to achieve human-competitive/superhuman performance on benchmarks such as traffic sign recognition (IJCNN 2012).",wikipedia
+History of artificial neural networks,https://en.wikipedia.org/wiki/History_of_artificial_neural_networks#Transformers_and_their_variants,"Transformers_and_their_variants Many modern large language models such as ChatGPT, GPT-4, and BERT use a feedforward neural network called Transformer by Ashish Vaswani et. al. in their 2017 paper ""Attention Is All You Need."" Transformers have increasingly become the model of choice for natural language processing problems, replacing recurrent neural networks (RNNs) such as long short-term memory (LSTM). Basic ideas for this go back a long way: in 1992, Juergen Schmidhuber published the Transformer with ""linearized self-attention"" (save for a normalization operator), which is also called the ""linear Transformer."" He advertised it as an ""alternative to RNNs"" that can learn ""internal spotlights of attention,"" and experimentally applied it to problems of variable binding. Here a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns called ""FROM"" and ""TO"" which in Transformer terminology are called ""key"" and ""value"" for ""self-attention."" This fast weight ""attention mapping"" is applied to queries. The 2017 Transformer combines this with a softmax operator and a projection matrix. Transformers are also increasingly being used in computer vision.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning,"Deep learning Deep learning is part of a broader family of machine learning methods, which is based on artificial neural networks with representation learning. The adjective ""deep"" in deep learning refers to the use of multiple layers in the network. Methods used can be either supervised, semi-supervised or unsupervised. Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, convolutional neural networks and transformers have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#Overview,"Overview Most modern deep learning models are based on multi-layered artificial neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines. In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction. The word ""deep"" in ""deep learning"" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than 2. CAP of depth 2 has been shown to be a universal approximator in the sense that it can emulate any function. Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > 2) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively. Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance. For supervised learning tasks, deep learning methods eliminate feature engineering, by translating the data into compact intermediate representations akin to principal components, and derive layered structures that remove redundancy in representation. Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#History,"History There are two types of neural networks: feedforward neural networks (FNNs) and recurrent neural networks (RNNs). RNNs have cycles in their connectivity structure, FNNs don't. In the 1920s, Wilhelm Lenz and Ernst Ising created and analyzed the Ising model which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was popularised by John Hopfield in 1982. RNNs have become central for speech recognition and language processing. Charles Tappert writes that Frank Rosenblatt developed and explored all of the basic ingredients of the deep learning systems of today, referring to Rosenblatt's 1962 book which introduced a multilayer perceptron (MLP) with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. However, since only the output layer had learning connections, this was not yet deep learning. It was what later was called an extreme learning machine. The first general, working learning algorithm for supervised, deep, feedforward, multilayer perceptrons was published by Alexey Ivakhnenko and Lapa in 1967. A 1971 paper described a deep network with eight layers trained by the group method of data handling. The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learned internal representations to classify non-linearily separable pattern classes. In 1987 Matthew Brand reported that wide 12-layer nonlinear perceptrons could be fully end-to-end trained to reproduce logic functions of nontrivial circuit depth via gradient descent on small batches of random input/output samples, but concluded that training time on contemporary hardware (sub-megaflop computers) made the technique impractical, and proposed using fixed random early layers as an input hash for a single modifiable layer. Instead, subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique. In 1970, Seppo Linnainmaa published the reverse mode of automatic differentiation of discrete connected networks of nested differentiable functions. This became known as backpropagation. It is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology ""back-propagating errors"" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation already in 1960 in the context of control theory. In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard. In 1985, David E. Rumelhart et al. published an experimental analysis of the technique. Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1980. In 1969, he also introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for CNNs and deep learning in general. CNNs have become an essential tool for computer vision.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#History,"History The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons. In 1988, Wei Zhang et al. applied the backpropagation algorithm to a convolutional neural network (a simplified Neocognitron with convolutional interconnections between the image feature layers and the last fully connected layer) for alphabet recognition. They also proposed an implementation of the CNN with an optical computing system. In 1989, Yann LeCun et al. applied backpropagation to a CNN with the purpose of recognizing handwritten ZIP codes on mail. While the algorithm worked, training required 3 days. Subsequently, Wei Zhang, et al. modified their model by removing the last fully connected layer and applied it for medical image object segmentation in 1991 and breast cancer detection in mammograms in 1994. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. , that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, J��rgen Schmidhuber (1992) proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning. It uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher level chunker network into a lower level automatizer network. In 1993, a chunker solved a deep learning task whose depth exceeded 1000. In 1992, J��rgen Schmidhuber also published an alternative to RNNs which is now called a linear Transformer or a Transformer with linearized self-attention (save for a normalization operator). It learns internal spotlights of attention: a slow feedforward neural network learns by gradient descent to control the fast weights of another neural network through outer products of self-generated activation patterns FROM and TO (which are now called key and value for self-attention). This fast weight attention mapping is applied to a query pattern. The modern Transformer was introduced by Ashish Vaswani et. al. in their 2017 paper ""Attention Is All You Need"". It combines this with a softmax operator and a projection matrix. Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use it. Transformers are also increasingly being used in computer vision. In 1991, J��rgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called ""artificial curiosity"". In 2014, this principle was used in a generative adversarial network (GAN) by Ian Goodfellow et al.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#History,"History Here the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set. This can be used to create realistic deepfakes. Excellent image quality is achieved by Nvidia's StyleGAN (2018) based on the Progressive GAN by Tero Karras et. al. Here the GAN generator is grown from small to large scale in a pyramidal fashion. Sepp Hochreiter's diploma thesis (1991) was called ""one of the most important documents in the history of machine learning"" by his supervisor Schmidhuber. It not only tested the neural history compressor, but also identified and analyzed the vanishing gradient problem. Hochreiter proposed recurrent residual connections to solve this problem. This led to the deep learning method called long short-term memory (LSTM), published in 1997. LSTM recurrent neural networks can learn ""very deep learning"" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. The ""vanilla LSTM"" with forget gate was introduced in 1999 by Felix Gers, Schmidhuber and Fred Cummins. LSTM has become the most cited neural network of the 20th century. In 2015, Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. 7 months later, Kaiming He, Xiangyu Zhang; Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called Residual neural network. This has become the most cited neural network of the 21st century. In 1994, Andr�� de Carvalho, together with Mike Fairhurst and David Bisset, published experimental results of a multi-layer boolean neural network, also known as a weightless neural network, composed of a 3-layers self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained. Each layer in the feature extraction module extracted features with growing complexity regarding the previous layer. In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton. Since 1997, Sven Behnke extended the feed-forward hierarchical convolutional approach in the Neural Abstraction Pyramid by lateral and backward connections in order to flexibly incorporate context into decisions and iteratively resolve local ambiguities. Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of artificial neural network's (ANN) computational cost and a lack of understanding of how the brain wires its biological networks. Both shallow and deep learning (e. g. , recurrent nets) of ANNs for speech recognition have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including gradient diminishing and weak temporal correlation structure in neural predictive models.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#History,"History Additional difficulties were the lack of training data and limited computing power. Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI studied deep neural networks in speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 National Institute of Standards and Technology Speaker Recognition evaluation. The SRI deep neural network was then deployed in the Nuance Verifier, representing the first major industrial application of deep learning. The principle of elevating ""raw"" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the ""raw"" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results. Speech recognition was taken over by LSTM. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks. In 2006, Alex Graves, Santiago Fern��ndez, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC) in stacks of LSTM RNNs. In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which they made available through Google Voice Search. The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010. In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then fine-tuning it using supervised backpropagation. The papers referred to learning for deep belief nets. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that given more capable hardware and large-scale data sets that deep neural nets (DNN) might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. Analysis around 2009���2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#History,"History That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models. In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks have steadily improved. Convolutional neural networks (CNNs) were superseded for ASR by CTC for LSTM. but are more successful in computer vision. Advances in hardware have driven renewed interest in deep learning. In 2009, Nvidia was involved in what was called the ""big bang"" of deep learning, ""as deep-learning neural networks were trained with Nvidia graphics processing units (GPUs)"". That year, Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times. In particular, GPUs are well-suited for the matrix/vector computations involved in machine learning. GPUs speed up training algorithms by orders of magnitude, reducing running times from weeks to days. Further, specialized hardware and algorithm optimizations can be used for efficient processing of deep learning models. In the late 2000s, deep learning started to outperform other methods in machine learning competitions. In 2009, a long short-term memory trained by connectionist temporal classification (Alex Graves, Santiago Fern��ndez, Faustino Gomez, and J��rgen Schmidhuber, 2006) was the first RNN to win pattern recognition contests, winning three competitions in connected handwriting recognition. Google later used CTC-trained LSTM for speech recognition on the smartphone. Significant impacts in image or object recognition were felt from 2011 to 2012. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years, including CNNs, faster implementations of CNNs on GPUs were needed to progress on computer vision. In 2011, the DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and J��rgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. Also in 2011, DanNet won the ICDAR Chinese handwriting contest, and in May 2012, it won the ISBI image segmentation contest. Until 2011, CNNs did not play a major role at computer vision conferences, but in June 2012, a paper by Ciresan et al. at the leading conference CVPR showed how max-pooling CNNs on GPU can dramatically improve many vision benchmark records. In September 2012, DanNet also won the ICPR contest on analysis of large medical images for cancer detection, and in the following year also the MICCAI Grand Challenge on the same topic. In October 2012, the similar AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#History,"History The VGG-16 network by Karen Simonyan and Andrew Zisserman further reduced the error rate and won the ImageNet 2014 competition, following a similar trend in large-scale speech recognition. Image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs. In 2012, a team led by George E. Dahl won the ""Merck Molecular Activity Challenge"" using multi-task deep neural networks to predict the biomolecular target of one drug. In 2014, Sepp Hochreiter's group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products and drugs and won the ""Tox21 Data Challenge"" of NIH, FDA and NCATS. In 2016, Roger Parloff mentioned a ""deep learning revolution"" that has transformed the AI industry. In March 2019, Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#Hardware,"Hardware Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphic processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months. Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) in Huawei cellphones and cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform. Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2). Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#Applications,"Applications Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn ""Very Deep Learning"" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks. The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991. The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009-2011 and of LSTM around 2003���2007, accelerated progress in eight major areas: Scale-up/out and accelerated DNN training and decoding Sequence discriminative training Feature processing by deep models with solid understanding of the underlying mechanisms Adaptation of DNNs and related deep models Multi-task and transfer learning by DNNs and related deep models CNNs and how to design them to best exploit domain knowledge of speech RNN and its rich LSTM variants Other types of deep models including tensor-based models and integrated deep generative/discriminative models. All major commercial speech recognition systems (e. g. , Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc. ) are based on deep learning. A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available. Deep learning-based image recognition has become ""superhuman"", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014, with recognition of human faces. Deep learning-trained vehicles now interpret 360�� camera views. Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes. Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of identifying the style period of a given painting Neural Style Transfer ��� capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video generating striking imagery based on random visual input fields. Neural networks have been used for implementing language models since the early 2000s.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#Applications,"Applications LSTM helped to improve machine translation and language modeling. Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, named-entity recognition (token classification), text classification, and others. Recent developments generalize word embedding to sentence embedding. Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system ""learns from millions of examples"". It translates ""whole sentences at a time, rather than pieces"". Google Translate supports over one hundred languages. The network encodes the ""semantics of the sentence rather than simply memorizing phrase-to-phrase translations"". GT uses English as an intermediate between most language pairs. A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs. AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis. In 2017 graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set. In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice. Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value. Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships. In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.",wikipedia
+Deep learning,https://en.wikipedia.org/wiki/Deep_learning#Applications,"Applications Deep learning has been shown to produce competitive results in medical application such as cancer cell classification, lesion detection, organ segmentation and image enhancement. Modern deep learning tools demonstrate the high accuracy of detecting various diseases and the helpfulness of their use by specialists to improve the diagnosis efficiency. Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. These applications include learning methods such as ""Shrinkage Fields for Effective Image Restoration"" which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration. Deep learning is being successfully applied to financial fraud detection, tax evasion detection, and anti-money laundering. The United States Department of Defense applied deep learning to train robots in new tasks through observation. Physics informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data driven manner. One example is the reconstructing fluid flow governed by the Navier-Stokes equations. Using physics informed neural networks does not require the often expensive mesh generation that conventional CFD methods relies on. Image reconstruction is the reconstruction of the underlying images from the image-related measurements. Several works showed the better and superior performance of the deep learning methods compared to analytical methods for various applications, e. g. , spectral imaging and ultrasound imaging. An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples. The clock uses information from 1000 CpG sites and predicts people with certain conditions older than healthy controls: IBD, frontotemporal dementia, ovarian cancer, obesity. The aging clock was planned to be released for public use in 2021 by an Insilico Medicine spinoff company Deep Longevity.",wikipedia
+Multimodal learning,https://en.wikipedia.org/wiki/Multimodal_learning#Application,"Application Multimodal deep Boltzmann machines are successfully used in classification and missing data retrieval. The classification accuracy of multimodal deep Boltzmann machine outperforms support vector machines, latent Dirichlet allocation and deep belief network, when models are tested on data with both image-text modalities or with single modality.[citation needed] Multimodal deep Boltzmann machine is also able to predict missing modalities given the observed ones with reasonably good precision.[citation needed] Self Supervised Learning brings a more interesting and powerful model for multimodality. OpenAI developed CLIP and DALL-E models that revolutionized multimodality. Multimodal deep learning is used for cancer screening ��� at least one system under development integrates such different types of data.",wikipedia
+Deep reinforcement learning,https://en.wikipedia.org/wiki/Deep_reinforcement_learning#History,"History Along with rising interest in neural networks beginning in the mid 1980s, interest grew in deep reinforcement learning, where a neural network is used in reinforcement learning to represent policies or value functions. Because in such a system, the entire decision making process from sensors to motors in a robot or agent involves a single neural network, it is also sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon. Four inputs were used for the number of pieces of a given color at a given location on the board, totaling 198 input signals. With zero knowledge built in, the network learned to play the game at an intermediate level by self-play and TD( �� {\displaystyle \lambda } ). Seminal textbooks by Sutton and Barto on reinforcement learning, Bertsekas and Tsitiklis on neuro-dynamic programming, and others advanced knowledge and interest in the field. Katsunari Shibata's group showed that various functions emerge in this framework, including image recognition, color constancy, sensor motion (active recognition), hand-eye coordination and hand reaching movement, explanation of brain activities, knowledge transfer, memory, selective attention, prediction, and exploration. Starting around 2012, the so called Deep learning revolution led to an increased interest in using deep neural networks as function approximators across a variety of domains. This led to a renewed interest in researchers using deep neural networks to learn the policy, value, and/or Q functions present in existing reinforcement learning algorithms. Beginning around 2013, DeepMind showed impressive learning results using deep RL to play Atari video games. The computer player a neural network trained using a deep RL algorithm, a deep version of Q-learning they termed deep Q-networks (DQN), with the game score as the reward. They used a deep convolutional neural network to process 4 frames RGB pixels (84x84) as inputs. All 49 games were learned using the same network architecture and with minimal prior knowledge, outperforming competing methods on almost all the games and performing at a level comparable or superior to a professional human game tester. Deep reinforcement learning reached another milestone in 2015 when AlphaGo, a computer program trained with deep RL to play Go, became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19��19 board. In a subsequent project in 2017, AlphaZero improved performance on Go while also demonstrating they could use the same algorithm to learn to play chess and shogi at a level competitive or superior to existing computer programs for those games, and again improved in 2019 with MuZero. Separately, another milestone was achieved by researchers from Carnegie Mellon University in 2019 developing Pluribus, a computer program to play poker that was the first to beat professionals at multiplayer games of no-limit Texas hold 'em. OpenAI Five, a program for playing five-on-five Dota 2 beat the previous world champions in a demonstration match in 2019.",wikipedia
+Deep reinforcement learning,https://en.wikipedia.org/wiki/Deep_reinforcement_learning#History,"History Deep reinforcement learning has also been applied to many domains beyond games. In robotics, it has been used to let robots perform simple household tasks and solve a Rubik's cube with a robot hand. Deep RL has also found sustainability applications, used to reduce energy consumption at data centers. Deep RL for autonomous driving is an active area of research in academia and industry. Loon explored deep RL for autonomously navigating their high-altitude balloons.",wikipedia
+PyTorch,https://en.wikipedia.org/wiki/PyTorch,"PyTorch PyTorch is a machine learning framework based on the Torch library, used for applications such as computer vision and natural language processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. It is free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface. A number of pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot, Uber's Pyro, Hugging Face's Transformers, PyTorch Lightning, and Catalyst. PyTorch provides two high-level features: Tensor computing (like NumPy) with strong acceleration via graphics processing units (GPU) Deep neural networks built on a tape-based automatic differentiation system",wikipedia
+TensorFlow,https://en.wikipedia.org/wiki/TensorFlow#Applications,"Applications GE Healthcare used TensorFlow to increase the speed and accuracy of MRIs in identifying specific body parts. Google used TensorFlow to create DermAssist, a free mobile application that allows users to take pictures of their skin and identify potential health complications. Sinovation Ventures used TensorFlow to identify and classify eye diseases from optical coherence tomography (OCT) scans. Twitter implemented TensorFlow to rank tweets by importance for a given user, and changed their platform to show tweets in order of this ranking. Previously, tweets were simply shown in reverse chronological order. The photo sharing app VSCO used TensorFlow to help suggest custom filters for photos. Google officially released RankBrain on October 26, 2015, backed by TensorFlow. InSpace, a virtual learning platform, used TensorFlow to filter out toxic chat messages in classrooms. Liulishuo, an online English learning platform, utilized TensorFlow to create an adaptive curriculum for each student. TensorFlow was used to accurately assess a student's current abilities, and also helped decide the best future content to show based on those capabilities. The e-commerce platform Carousell used TensorFlow to provide personalized recommendations for customers. The cosmetics company ModiFace used TensorFlow to create an augmented reality experience for customers to test various shades of make-up on their face. TensorFlow is the foundation for the automated image-captioning software DeepDream.",wikipedia
+Long short-term memory,https://en.wikipedia.org/wiki/Long_short-term_memory#Training,"Training An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to corresponding weight. A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to lim n ��� ��� W n = 0 {\displaystyle \lim _{n\to \infty }W^{n}=0} if the spectral radius of W {\displaystyle W} is smaller than 1. However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This ""error carousel"" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value. Many applications use stacks of LSTM RNNs and train them by connectionist temporal classification (CTC) to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition. Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution or by policy gradient methods, especially when there is no ""teacher"" (that is, training labels). There have been several successful stories of training, in a non-supervised fashion, RNNs with LSTM units. In 2018, Bill Gates called it a ""huge milestone in advancing artificial intelligence"" when bots developed by OpenAI were able to beat humans in the game of Dota 2. OpenAI Five consists of five independent but coordinated neural networks. Each network is trained by a policy gradient method without supervising teacher and contains a single-layer, 1024-unit Long-Short-Term-Memory that sees the current game state and emits actions through several possible action heads. In 2018, OpenAI also trained a similar LSTM by policy gradients to control a human-like robot hand that manipulates physical objects with unprecedented dexterity. In 2019, DeepMind's program AlphaStar used a deep LSTM core to excel at the complex video game Starcraft II. This was viewed as significant progress towards Artificial General Intelligence.",wikipedia
+Long short-term memory,https://en.wikipedia.org/wiki/Long_short-term_memory#Timeline_of_development,"Timeline_of_development 1991: Sepp Hochreiter analyzed the vanishing gradient problem and developed principles of the method in his German diploma thesis. 1995: ""Long Short-Term Memory (LSTM)"" is published in a technical report by Sepp Hochreiter and J��rgen Schmidhuber. 1996: LSTM is published at NIPS'1996, a peer-reviewed conference. 1997: The main LSTM paper is published in the journal Neural Computation. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of LSTM block included cells, input and output gates. 1999: Felix Gers, J��rgen Schmidhuber, and Fred Cummins introduced the forget gate (also called ""keep gate"") into the LSTM architecture, enabling the LSTM to reset its own state. 2000: Gers, Schmidhuber, and Cummins added peephole connections (connections from the cell to the gates) into the architecture. Additionally, the output activation function was omitted. 2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models. Hochreiter et al. used LSTM for meta-learning (i. e. learning a learning algorithm). 2004: First successful application of LSTM to speech Alex Graves et al. 2005: First publication (Graves and Schmidhuber) of LSTM with full backpropagation through time and of bi-directional LSTM. 2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher. 2006: Graves, Fernandez, Gomez, and Schmidhuber introduce a new error function for LSTM: Connectionist Temporal Classification (CTC) for simultaneous alignment and recognition of sequences. CTC-trained LSTM led to breakthroughs in speech recognition. Mayer et al. trained LSTM to control robots. 2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher. Hochreiter, Heuesel, and Obermayr applied LSTM to protein homology detection the field of biology. 2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. One was the most accurate model in the competition and another was the fastest. This was the first time an RNN won international competitions. 2009: Justin Bayer et al. introduced neural architecture search for LSTM. 2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset. 2014: Kyunghyun Cho et al. put forward a simplified variant of the forget gate LSTM called Gated recurrent unit (GRU). 2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice. According to the official blog post, the new model cut transcription errors by 49%. 2015: Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. 7 months later, Kaiming He, Xiangyu Zhang; Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called Residual neural network. This has become the most cited neural network of the 21st century.",wikipedia
+Long short-term memory,https://en.wikipedia.org/wiki/Long_short-term_memory#Timeline_of_development,"Timeline_of_development 2016: Google started using an LSTM to suggest messages in the Allo conversation app. In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%. Apple announced in its Worldwide Developers Conference that it would start using the LSTM for quicktype in the iPhone and for Siri. Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology. 2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks. Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference. Their Time-Aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM. Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used ""dialog session-based long-short-term memory"". 2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2, and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity. 2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of Starcraft II. 2021: According to Google Scholar, in 2021, LSTM was cited over 16,000 times within a single year. This reflects applications of LSTM in many different fields including healthcare.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model),"Transformer (machine learning model) A transformer is a deep learning architecture that relies on the parallel multi-head attention mechanism. The modern transformer was proposed in the 2017 paper titled 'Attention Is All You Need' by Ashish Vaswani et al., Google Brain team. It is notable for requiring less training time than previous recurrent neural architectures, such as long short-term memory (LSTM), and its later variation has been prevalently adopted for training large language models on large (language) datasets, such as the Wikipedia corpus and Common Crawl, by virtue of the parallelized processing of input sequence. Input text is split into n-grams encoded as tokens and each token is converted into a vector via looking up from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism allowing the signal for key tokens to be amplified and less important tokens to be diminished. Though the transformer paper was published in 2017, the softmax-based attention mechanism was proposed earlier in 2014 by Bahdanau, Cho, and Bengio for machine translation, and the Fast Weight Controller, similar to a transformer, was proposed in 1992 by Schmidhuber. This architecture is now used not only in natural language processing and computer vision, but also in audio and multi-modal processing. It has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (Bidirectional Encoder Representations from Transformers).",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Timeline,"Timeline In 1990, Elman network, using a recurrent network, encoded each word in a training set as a vector, called a word embedding, and the whole vocabulary as a vector database, allowing it to perform such tasks as sequence-prediction that are beyond the power of a simple multilayer perceptron. A shortcoming of the static embeddings was that they didn't differentiate between multiple meanings of same-spelt words. In 1992, the Fast Weight Controller was published by J��rgen Schmidhuber It learns to answer queries by programming the attention weights of another neural network through outer products of key vectors and value vectors called FROM and TO. The Fast Weight Controller was later shown to be closely related to the Linear Transformer. The terminology ""learning internal spotlights of attention"" was introduced in 1993. An advantage of the fast linear transformers is that their computational complexity grows linearly with sequence length, while modern transformers scale quadratically. In 1993, the IBM alignment models were used for statistical machine translation. In 1997, a precursor of large language model, using recurrent neural networks, such as long short-term memory, was proposed. In 2001, one-billion-word large text corpus, scraped from the Internet, referred to as ""very very large"" at the time, was used for word disambiguation. In 2012, AlexNet demonstrated the effectiveness of large neural networks for image recognition, encouraging large artificial neural networks approach instead of older, statistical approaches. In 2014, a 380M-parameter seq2seq model for machine translation using two LSTMs networks was proposed by Sutskever at al. The architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. In 2014, gating proved to be useful in a 130M-parameter seq2seq model, which used a simplified gated recurrent units (GRUs). Bahdanau et al showed that GRUs are neither better nor worse than gated LSTMs. In 2014, Bahdanau et al. improved the previous seq2seq model by using an ""additive"" kind of attention mechanism in-between two LSTM networks. It was, however, not yet the parallelizable (scaled ""dot product"") kind of attention, later proposed in the 2017 transformer paper. In 2015, the relative performance of Global and Local (widowed) attention model architectures were assessed by Luong et al, a mixed attention architecture found to improve on the translations offered by Bahdanau's architecture, while the use of a local attention architecture reduced translation time. In 2016, Google Translate gradually replaced the older statistical machine translation approach with the newer neural-networks-based approach that included a seq2seq model combined by LSTM and the ""additive"" kind of attention mechanism. They achieved a higher level of performance than the statistical approach, which took ten years to develop, in only nine months. In 2017, the original (100M-sized) encoder-decoder transformer model with a faster (parallelizable or decomposable) attention mechanism was proposed in the ""Attention is all you need"" paper.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Timeline,"Timeline As the model had difficulties converging, it was suggested that the learning rate should be linearly scaled up from 0 to maximal value for the first part of the training (i. e. 2% of the total number of training steps). The intent of the transformer model is to take a seq2seq model and remove its recurrent neural networks, but preserve its additive attention mechanism. In 2018, in the ELMo paper, an entire sentence was processed before an embedding vector was assigning to each word in the sentence. A bi-directional LSTM was used to calculate such, deep contextualized embeddings for each word, improving upon the line of research from bag of words and word2vec. In 2018, an encoder-only transformer was used in the (more than 1B-sized) BERT model, improving upon ELMo. In 2020, vision transformer and speech-processing convolution-augmented transformer outperformed recurrent neural networks, previously used for vision and speech. In 2020, difficulties with converging the original transformer were solved by normalizing layers before (instead of after) multiheaded attention by Xiong et al. In 2023, uni-directional (""autoregressive"") transformers were being used in the (more than 100B-sized) GPT-3 and other OpenAI GPT models. Before transformers, predecessors of attention mechanism were added to gated RNNs, such as LSTMs and gated recurrent units (GRUs), which processed datasets sequentially. Dependency on previous token computations prevented them from being able to parallelize the attention mechanism. In 1992, fast weight controller was proposed as an alternative to RNNs that can learn ""internal spotlights of attention"". In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens. The performance of old models was enhanced by adding an attention mechanism, which allowed a model to access any preceding point along the sequence. The attention layer weighs all previous states according to a learned measure of relevance, providing relevant information about far-away tokens. This proved to be especially useful in language translation, where far-away context can be essential for the meaning of a word in a sentence. The state vector has been accessible only after the last English word was processed while, for example, translating it from French by a LSTM model. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. If an attention mechanism is added, the decoder is given access to the state vectors of every input word, not just the last, and can learn attention weights that dictate how much to attend to each input state vector. The augmentation of seq2seq models with the attention mechanism was first implemented in the context of machine translation by Bahdanau, Cho, and Bengio in 2014. In 2016, highly parallelizable decomposable attention was successfully combined with a feedforward network.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Timeline,"Timeline This indicated that attention mechanisms were powerful in themselves and that sequential recurrent processing of data was not necessary to achieve the quality gains of RNNs with attention. Soon Jakob Uszkoreit from Google Research also proposed replacing RNNs with self-attention and started the effort to evaluate that idea. Transformers, using an attention mechanism, processing all tokens simultaneously, calculated ""soft"" weights between them in successive layers. Since the attention mechanism only uses information about other tokens from lower layers, it can be computed for all tokens in parallel, which leads to improved training speed.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Training,"Training The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again. A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup. Transformers typically undergo self-supervised learning involving unsupervised pretraining followed by supervised fine-tuning. Pretraining is typically done on a larger dataset than fine-tuning, due to the limited availability of labeled training data. Tasks for pretraining and fine-tuning commonly include: language modeling next-sentence prediction question answering reading comprehension sentiment analysis paraphrasing",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Applications,"Applications The transformer has had great success in natural language processing (NLP), for example the tasks of machine translation and time series prediction. Many large language models such as GPT-2, GPT-3, GPT-4, Claude, BERT, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of such NLP-related tasks, and have the potential to find real-world applications. These may include: machine translation document summarization document generation named entity recognition (NER) biological sequence analysis writing computer code based on requirements expressed in natural language. video understanding.In addition to the NLP applications, it has also been successful in other fields, such as computer vision, or the protein folding applications (such as AlphaFold).",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Implementations,Implementations The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.,wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Architecture,"Architecture All transformers have the same primary components: Tokenizers, which convert text into tokens. A single embedding layer, which convert tokens and positions of the tokens into vector representations. Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. (optional) Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens. Transformer layers can be one of two types, encoder and decoder. In the original paper both of them were used, while later models included only one type of them. BERT is an example of encoder-only model; GPT are decoder-only models. The input text is parsed into tokens by a tokenizer, most often a byte pair encoding tokenizer, and each token is converted into a vector via looking up from a word embedding table. Then, positional information of the token is added to the word embedding. Like earlier seq2seq models, the original transformer model used an encoder/decoder architecture. The encoder consists of encoding layers that process the input tokens iteratively one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output as well as the decoder output's tokens so far. The function of each encoder layer is to generate contextualized token representations, where each representation corresponds to a token that ""mixes"" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for ""mixing"" information among the input tokens to the decoder (i. e. , the tokens generated so far during inference time). Both the encoder and decoder layers have a feed-forward neural network for additional processing of the outputs and contain residual connections and layer normalization steps. The transformer building blocks are scaled dot-product attention units. For each attention unit, the transformer model learns three weight matrices: the query weights W Q {\displaystyle W_{Q}} , the key weights W K {\displaystyle W_{K}} , and the value weights W V {\displaystyle W_{V}} . For each token i {\displaystyle i} , the input token representation x i {\displaystyle x_{i}} is multiplied with each of the three weight matrices to produce a query vector q i = x i W Q {\displaystyle q_{i}=x_{i}W_{Q}} , a key vector k i = x i W K {\displaystyle k_{i}=x_{i}W_{K}} , and a value vector v i = x i W V {\displaystyle v_{i}=x_{i}W_{V}} . Attention weights are calculated using the query and key vectors: the attention weight a i j {\displaystyle a_{ij}} from token i {\displaystyle i} to token j {\displaystyle j} is the dot product between q i {\displaystyle q_{i}} and k j {\displaystyle k_{j}} . The attention weights are divided by the square root of the dimension of the key vectors, d k {\displaystyle {\sqrt {d_{k}}}} , which stabilizes gradients during training, and passed through a softmax which normalizes the weights.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Architecture,"Architecture The fact that W Q {\displaystyle W_{Q}} and W K {\displaystyle W_{K}} are different matrices allows attention to be non-symmetric: if token i {\displaystyle i} attends to token j {\displaystyle j} (i. e. q i ��� k j {\displaystyle q_{i}\cdot k_{j}} is large), this does not necessarily mean that token j {\displaystyle j} will attend to token i {\displaystyle i} (i. e. q j ��� k i {\displaystyle q_{j}\cdot k_{i}} could be small). The output of the attention unit for token i {\displaystyle i} is the weighted sum of the value vectors of all tokens, weighted by a i j {\displaystyle a_{ij}} , the attention from token i {\displaystyle i} to each token. The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training due to computational matrix operation optimizations that quickly compute matrix operations. The matrices Q {\displaystyle Q} , K {\displaystyle K} and V {\displaystyle V} are defined as the matrices where the i {\displaystyle i} th rows are vectors q i {\displaystyle q_{i}} , k i {\displaystyle k_{i}} , and v i {\displaystyle v_{i}} respectively. Then we can represent the attention as where softmax is taken over the horizontal axis. One set of ( W Q , W K , W V ) {\displaystyle \left(W_{Q},W_{K},W_{V}\right)} matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of ""relevance"". In addition, the influence field representing relevance can become progressively dilated in successive layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers. Concretely, let the multiple attention heads be indexed by i {\displaystyle i} , then we haveIt may be necessary to cut out attention links between some word-pairs. For example, the decoder for token position t {\displaystyle t} should not have access to token position t + 1 {\displaystyle t+1} . This may be accomplished before the softmax stage by adding a mask matrix M {\displaystyle M} that is ��� ��� {\displaystyle -\infty } at entries where the attention link must be cut, and 0 {\displaystyle 0} at other places:Each encoder consists of two major components: a self-attention mechanism and a feed-forward neural network. The self-attention mechanism accepts input encodings from the previous encoder and weights their relevance to each other to generate output encodings. The feed-forward neural network further processes each output encoding individually. These output encodings are then passed to the next encoder as its input, as well as to the decoders.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Architecture,"Architecture The first encoder takes positional information and embeddings of the input sequence as its input, rather than encodings. The positional information is necessary for the transformer to make use of the order of the sequence, because no other part of the transformer makes use of this. The encoder is bidirectional. Attention can be placed on tokens before and after the current token. Tokens are used instead of words to account for polysemy. A positional encoding is a fixed-size vector representation that encapsulates the relative positions of tokens within a target sequence: it provides the transformer model with information about where the words are in the input sequence. The positional encoding is defined as a function of type f : R ��� R d ; d ��� Z , d > 0 {\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0} , where d {\displaystyle d} is a positive even integer. The full positional encoding ��� as defined in the original paper ��� is given by the equation:Here, N {\displaystyle N} is a free parameter that should be significantly larger than the biggest k {\displaystyle k} that would be input into the positional encoding function. In the original paper, the authors chose N = 10000 {\displaystyle N=10000} . The function is in a simpler form when written as a complex function of type f : R ��� C d / 2 {\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}} The main reason the authors chose this as the positional encoding function is that it allows one to perform shifts as linear transformations:By taking a linear sum, any convolution can also be implemented as linear transformations:In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference. Each decoder consists of three major components: a self-attention mechanism, an attention mechanism over the encodings, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention. Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For all attention heads, attention can't be placed on following tokens. The last decoder is followed by a final linear transformation and softmax layer, to produce the output probabilities over the vocabulary. All members of OpenAI's GPT series have a decoder-only architecture. In large language model, the terminology is somewhat different than the terminology used in the original Transformer paper: ""encoder only"": full encoder, full decoder. ""encoder-decoder"": full encoder, autoregressive decoder. ""decoder only"": autoregressive encoder, autoregressive decoder.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Architecture,"Architecture Here ""autoregressive"" means that a mask is inserted in the attention head to zero out all attention from one token to all tokens following it, as described in the ""masked attention"" section. Generally, Transformer-based language models are of two types: causal (or ""autoregressive"") and masked. The GPT series is causal. BERT is masked.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Subsequent_work,"Subsequent_work The original transformer uses ReLU activation function. Other activation functions were developed, such as SwiGLU. Transformers may use other positional encoding methods than sinusoidal. RoPE (rotary positional embedding), is best explained by considering a list of 2-dimensional vectors [ ( x 1 ( 1 ) , x 1 ( 2 ) ) , ( x 2 ( 1 ) , x 2 ( 2 ) ) , ( x 3 ( 1 ) , x 3 ( 2 ) ) , . . . ] {\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),. . . ]} . Now pick some angle �� {\displaystyle \theta } . Then RoPE encoding isThe benefit of RoPE is that the dot-product between two vectors depends on their relative location only: ALiBi (Attention with Linear Biases) is not a replacement for the positional encoder on the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism isALiBi allows pretraining on short context windows, then finetuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the ""bottom"" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located). FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention. Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA). Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8. Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally, This has a neutral effect on model quality and training speed, but increases inference speed. Training transformer-based architectures can be expensive, especially for long inputs. Alternative architectures include the Reformer (which reduces the computational load from O ( N 2 ) {\displaystyle O(N^{2})} to O ( N ln ��� N ) {\displaystyle O(N\ln N)} ), or models like ETC/BigBird (which can reduce it to O ( N ) {\displaystyle O(N)} ) where N {\displaystyle N} is the length of the sequence. This is done using locality-sensitive hashing and reversible layers.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Subsequent_work,"Subsequent_work Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value. Long Range Arena (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs. Random Feature Attention (2021) uses Fourier random features:This approximation can be computed in linear time, as we can compute the matrix �� ( k i ) v i T {\displaystyle \varphi (k_{i})v_{i}^{T}} first, then multiply it with the query. In essence, we have managed to obtain a more precise version of Performer (2022) uses the same Random Feature Attention, but w 1 , . . . , w D {\displaystyle w_{1},. . . ,w_{D}} are first independently sampled from the normal distribution N ( 0 , �� 2 I ) {\displaystyle N(0,\sigma ^{2}I)} , then they are Gram-Schmidt processed. Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to ""tokenize"" the modality. Vision transformers adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer. Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i. e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer. Perceivers by Andrew Jaegle et al. (2021) can learn from large amounts of heterogeneous data. Regarding image outputs, Peebles et al introduced a diffusion transformer (DiT) which facilitates use of the transformer architecture for diffusion-based image production. Also, Google released a transformer-centric image generator called ""Muse"" based on parallel decoding and masked generative transformer technology. (Transformers played a less-central role with prior image-producing technologies, albeit still a significant one.",wikipedia
+Transformer (machine learning model),https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#See_also,See_also Perceiver ��� Machine learning algorithm for non-textual data BERT (language model) ��� Language model developed by Google GPT-3 ��� 2020 large language model GPT-4 ��� 2023 text-generating language model ChatGPT ��� AI chatbot developed by OpenAI Wu Dao ��� Chinese multimodal artificial intelligence program Vision transformer ��� Machine learning algorithm for vision processing BLOOM (language model) ��� Open-access multilingual language model,wikipedia
+Dimensionality reduction,https://en.wikipedia.org/wiki/Dimensionality_reduction#Feature_projection,"Feature_projection Feature projection (also called feature extraction) transforms the data from the high-dimensional space to a space of fewer dimensions. The data transformation may be linear, as in principal component analysis (PCA), but many nonlinear dimensionality reduction techniques also exist. For multidimensional data, tensor representation can be used in dimensionality reduction through multilinear subspace learning. The main linear technique for dimensionality reduction, principal component analysis, performs a linear mapping of the data to a lower-dimensional space in such a way that the variance of the data in the low-dimensional representation is maximized. In practice, the covariance (and sometimes the correlation) matrix of the data is constructed and the eigenvectors on this matrix are computed. The eigenvectors that correspond to the largest eigenvalues (the principal components) can now be used to reconstruct a large fraction of the variance of the original data. Moreover, the first few eigenvectors can often be interpreted in terms of the large-scale physical behavior of the system, because they often contribute the vast majority of the system's energy, especially in low-dimensional systems. Still, this must be proven on a case-by-case basis as not all systems exhibit this behavior. The original space (with dimension of the number of points) has been reduced (with data loss, but hopefully retaining the most important variance) to the space spanned by a few eigenvectors. [citation needed] NMF decomposes a non-negative matrix to the product of two non-negative ones, which has been a promising tool in fields where only non-negative signals exist, such as astronomy. NMF is well known since the multiplicative update rule by Lee & Seung, which has been continuously developed: the inclusion of uncertainties, the consideration of missing data and parallel computation, sequential construction which leads to the stability and linearity of NMF, as well as other updates including handling missing data in digital image processing. With a stable component basis during construction, and a linear modeling process, sequential NMF is able to preserve the flux in direct imaging of circumstellar structures in astronomy, as one of the methods of detecting exoplanets, especially for the direct imaging of circumstellar discs. In comparison with PCA, NMF does not remove the mean of the matrices, which leads to unphysical non-negative fluxes; therefore NMF is able to preserve more information than PCA as demonstrated by Ren et al. Principal component analysis can be employed in a nonlinear way by means of the kernel trick. The resulting technique is capable of constructing nonlinear mappings that maximize the variance in the data. The resulting technique is called kernel PCA. Other prominent nonlinear techniques include manifold learning techniques such as Isomap, locally linear embedding (LLE), Hessian LLE, Laplacian eigenmaps, and methods based on tangent space analysis. These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for Kernel PCA. More recently, techniques have been proposed that, instead of defining a fixed kernel, try to learn the kernel using semidefinite programming.",wikipedia
+Dimensionality reduction,https://en.wikipedia.org/wiki/Dimensionality_reduction#Feature_projection,"Feature_projection The most prominent example of such a technique is maximum variance unfolding (MVU). The central idea of MVU is to exactly preserve all pairwise distances between nearest neighbors (in the inner product space), while maximizing the distances between points that are not nearest neighbors. An alternative approach to neighborhood preservation is through the minimization of a cost function that measures differences between distances in the input and output spaces. Important examples of such techniques include: classical multidimensional scaling, which is identical to PCA; Isomap, which uses geodesic distances in the data space; diffusion maps, which use diffusion distances in the data space; t-distributed stochastic neighbor embedding (t-SNE), which minimizes the divergence between distributions over pairs of points; and curvilinear component analysis. A different approach to nonlinear dimensionality reduction is through the use of autoencoders, a special kind of feedforward neural networks with a bottle-neck hidden layer. The training of deep encoders is typically performed using a greedy layer-wise pre-training (e. g. , using a stack of restricted Boltzmann machines) that is followed by a finetuning stage based on backpropagation. Linear discriminant analysis (LDA) is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. GDA deals with nonlinear discriminant analysis using kernel function operator. The underlying theory is close to the support-vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into high-dimensional feature space. Similar to LDA, the objective of GDA is to find a projection for the features into a lower dimensional space by maximizing the ratio of between-class scatter to within-class scatter. Autoencoders can be used to learn nonlinear dimension reduction functions and codings together with an inverse function from the coding to the original representation. T-distributed Stochastic Neighbor Embedding (t-SNE) is a nonlinear dimensionality reduction technique useful for visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well. Uniform manifold approximation and projection (UMAP) is a nonlinear dimensionality reduction technique. Visually, it is similar to t-SNE, but it assumes that the data is uniformly distributed on a locally connected Riemannian manifold and that the Riemannian metric is locally constant or approximately locally constant.",wikipedia
+Mixture of experts,https://en.wikipedia.org/wiki/Mixture_of_experts#Deep_learning,"Deep_learning The previous section described MoE as it was used before the era of deep learning. After deep learning, MoE found applications in running the largest models, as a simple way to perform conditional computation: only parts of the model are used, the parts chosen according to what the input is. The earliest paper that applies MoE to deep learning is, which proposes to use a different gating network at each layer in a deep neural network. Specifically, each gating is a linear-ReLU-linear-softmax network, and each expert is a linear-ReLU network. The key design desiderata for MoE in deep learning is to reduce computing cost. Consequently, for each query, only a small subset of the experts should be queried. This makes MoE in deep learning different from classical MoE. In classical MoE, the output for each query is a weighted sum of all experts' outputs. In deep learning MoE, the output for each query can only involve a few experts' outputs. Consequently, the key design choice in MoE becomes routing: given a batch of queries, how to route the queries to the best experts. The sparsely-gated MoE layer, published by researchers from Google Brain, uses feedforward networks as experts, and linear-softmax gating. Similar to the previously proposed hard MoE, they achieve sparsity by a weighted sum of only the top-k experts, instead of the weighted sum of all of them. Specifically, in a MoE layer, there are feedforward networks f 1 , . . . , f n {\displaystyle f_{1},. . . ,f_{n}} , and a gating network w {\displaystyle w} . The gating network is defined by w ( x ) = s o f t m a x ( t o p k ( W x + noise ) ) {\displaystyle w(x)=\mathrm {softmax} (\mathrm {top} _{k}(Wx+{\text{noise}}))} , where t o p k {\displaystyle \mathrm {top} _{k}} is a function that keeps the top-k entries of a vector the same, but sets all other entries to ��� ��� {\displaystyle -\infty } . The addition of noise helps with load balancing. The choice of k {\displaystyle k} is a hyperparameter that is chosen according to application. Typical values are k = 1 , 2 {\displaystyle k=1,2} . The k = 1 {\displaystyle k=1} version is also called the Switch Transformer. As demonstration, they trained a series of models for machine translation with alternating layers of MoE and LSTM, and compared with deep LSTM models. Table 3 shows that the MoE models used less inference time compute, despite having 30x more parameters. Vanilla MoE tend to have issues of load balancing: some experts are consulted often, while other experts rarely or not at all. To encourage the gate to select each expert with equal frequency (proper load balancing) within each batch, each MoE layer has two auxiliary loss functions. This is improved by into a single auxiliary loss function.",wikipedia
+Mixture of experts,https://en.wikipedia.org/wiki/Mixture_of_experts#Deep_learning,"Deep_learning Specifically, let n {\displaystyle n} be the number of experts, then for a given batch of queries { x 1 , x 2 , . . . , x T } {\displaystyle \{x_{1},x_{2},. . . ,x_{T}\}} , the auxiliary loss for the batch isIn sparsely-gated MoE, only the top-k experts are queried, and their outputs are weighted-summed. There are other methods. In Hash MoE, routing is performed deterministically by a hash function, fixed before learning begins. For example, if the model is a 4-layered Transformer, and input is a token for word ""eat"", and the hash of ""eat"" is ( 1 , 4 , 2 , 3 ) {\displaystyle (1,4,2,3)} , then the token would be routed to the 1st expert in layer 1, 4th expert in layer 2, etc. Despite its simplicity, it achieves competitive performance as sparsely gated MoE with k = 1 {\displaystyle k=1} . In soft MoE, suppose in each batch, each expert can process p {\displaystyle p} queries, then there are n �� p {\displaystyle n\times p} queries that can be assigned per batch. Now for each batch of queries { x 1 , x 2 , . . . , x T } {\displaystyle \{x_{1},x_{2},. . . ,x_{T}\}} , the soft MoE layer computes an array w i , j , k {\displaystyle w_{i,j,k}} , such that ( w i , j , 1 , . . . , w i , j , T ) {\displaystyle (w_{i,j,1},. . . ,w_{i,j,T})} is a probability distribution over queries, and the i {\displaystyle i} -th expert's j {\displaystyle j} -th query is ��� k w i , j , k x k {\displaystyle \sum _{k}w_{i,j,k}x_{k}} . However, this does not work with autoregressive modelling, since the weights w i , j , k {\displaystyle w_{i,j,k}} over one token depends on all other tokens'. Other approaches include solving it as a constrained linear programming problem, making each expert choose the top-k queries it wants (instead of each query choosing the top-k experts for it), using reinforcement learning to train the routing algorithm (since picking an expert is a discrete action, like in RL). Suppose there are n {\displaystyle n} experts in a layer. For a given batch of queries { x 1 , x 2 , . . . , x T } {\displaystyle \{x_{1},x_{2},. . . ,x_{T}\}} , each query is routed to one ore more experts. For example, if each query is routed to one expert as in Switch Transformers, and if the experts are load-balanced, then each expert should expect on average T / n {\displaystyle T/n} queries in a batch. In practice, the experts cannot expect perfect load balancing: in some batches, one expert might be underworked, while in other batches, it would be overworked. Since the inputs cannot move through the layer until every expert in the layer has finished the queries it is assigned, load balancing is important.",wikipedia
+Mixture of experts,https://en.wikipedia.org/wiki/Mixture_of_experts#Deep_learning,"Deep_learning As a hard constraint on load balancing, there is the capacity factor: each expert is only allowed to process up to c ��� T / n {\displaystyle c\cdot T/n} queries in a batch. found c ��� [ 1.25 , 2 ] {\displaystyle c\in [1.25,2]} to work in practice. MoE layers are used in very large Transformer models, for which learning and inferring over the full model is too costly. In Transformer models, the MoE layers are often used to select the feedforward layers (typically a linear-ReLU-linear network), appearing in each Transformer block after the multiheaded attention. This is because the feedforward layers take up an increasing portion of the computing cost as models grow larger. For example, 90% of parameters in PALM-540B are in feedforward layers. A series of large language models from Google used MoE. GShard uses MoE with up to top-2 experts per layer. Specifically, the top-1 expert is always selected, and the top-2th expert is selected with probability proportional to that experts' weight according to the gating function. Later, GLaM demonstrated a language model with 1.2 trillion parameters, eath MoE layer using top-2 out of 64 experts. Switch Transformers use top-1 in all MoE layers. The NLLB-200 by Meta AI is a machine translation model for 200 languages. Each MoE layer uses a hierarchical MoE with two levels. On the first level, the gating function chooses to use either a ""shared"" feedforward layer, or to use the experts. If using the experts, then another gating function computes the weights and chooses the top-2 experts (see Figure 19). MoE large language models can be adapted for downstream tasks by instruction tuning. Generally, MoE are used when dense models have become too costly. As of 2023, the largest models tend to be large language models. Outside of those, Vision MoE is a Transformer model with MoE layers. They demonstrated it by training a model with 15 billion parameters.",wikipedia
+Stochastic gradient descent,https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Extensions_and_variants,"Extensions_and_variants Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge. A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function ��t of the iteration number t, giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on k-means clustering. Practical guidance on choosing the step size in several variants of SGD is given by Spall. As mentioned earlier, classical stochastic gradient descent is generally sensitive to learning rate ��. Fast convergence requires large learning rates but this may induce numerical instability. The problem can be largely solved by considering implicit updates whereby the stochastic gradient is evaluated at the next iterate rather than the current one: This equation is implicit since w n e w {\displaystyle w^{\rm {new}}} appears on both sides of the equation. It is a stochastic form of the proximal gradient method since the update can also be written as: As an example, consider least squares with features x 1 , ��� , x n ��� R p {\displaystyle x_{1},\ldots ,x_{n}\in \mathbb {R} ^{p}} and observations y 1 , ��� , y n ��� R {\displaystyle y_{1},\ldots ,y_{n}\in \mathbb {R} } . We wish to solve: where x j ��� w = x j 1 w 1 + x j , 2 w 2 + . . . + x j , p w p {\displaystyle x_{j}'w=x_{j1}w_{1}+x_{j,2}w_{2}+. . . +x_{j,p}w_{p}} indicates the inner product. Note that x {\displaystyle x} could have ""1"" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows: where i {\displaystyle i} is uniformly sampled between 1 and n {\displaystyle n} . Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, when �� {\displaystyle \eta } is misspecified so that I ��� �� x i x i ��� {\displaystyle I-\eta x_{i}x_{i}'} has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, implicit stochastic gradient descent (shortened as ISGD) can be solved in closed-form as: This procedure will remain numerically stable virtually for all �� {\displaystyle \eta } as the learning rate is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and normalized least mean squares filter (NLMS). Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models.",wikipedia
+Stochastic gradient descent,https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Extensions_and_variants,"Extensions_and_variants Specifically, suppose that Q i ( w ) {\displaystyle Q_{i}(w)} depends on w {\displaystyle w} only through a linear combination with features x i {\displaystyle x_{i}} , so that we can write ��� w Q i ( w ) = ��� q ( x i ��� w ) x i {\displaystyle \nabla _{w}Q_{i}(w)=-q(x_{i}'w)x_{i}} , where q ( ) ��� R {\displaystyle q()\in \mathbb {R} } may depend on x i , y i {\displaystyle x_{i},y_{i}} as well but not on w {\displaystyle w} except through x i ��� w {\displaystyle x_{i}'w} . Least squares obeys this rule, and so does logistic regression, and most generalized linear models. For instance, in least squares, q ( x i ��� w ) = y i ��� x i ��� w {\displaystyle q(x_{i}'w)=y_{i}-x_{i}'w} , and in logistic regression q ( x i ��� w ) = y i ��� S ( x i ��� w ) {\displaystyle q(x_{i}'w)=y_{i}-S(x_{i}'w)} , where S ( u ) = e u / ( 1 + e u ) {\displaystyle S(u)=e^{u}/(1+e^{u})} is the logistic function. In Poisson regression, q ( x i ��� w ) = y i ��� e x i ��� w {\displaystyle q(x_{i}'w)=y_{i}-e^{x_{i}'w}} , and so on. In such settings, ISGD is simply implemented as follows. Let f ( �� ) = �� q ( x i ��� w o l d + �� | | x i | | 2 ) {\displaystyle f(\xi )=\eta q(x_{i}'w^{old}+\xi ||x_{i}||^{2})} , where �� {\displaystyle \xi } is scalar. Then, ISGD is equivalent to: The scaling factor �� ��� ��� R {\displaystyle \xi ^{\ast }\in \mathbb {R} } can be found through the bisection method since in most regular models, such as the aforementioned generalized linear models, function q ( ) {\displaystyle q()} is decreasing, and thus the search bounds for �� ��� {\displaystyle \xi ^{\ast }} are [ min ( 0 , f ( 0 ) ) , max ( 0 , f ( 0 ) ) ] {\displaystyle [\min(0,f(0)),\max(0,f(0))]} . Further proposals include the momentum method or the heavy ball method, which in ML context appeared in Rumelhart, Hinton and Williams' paper on backpropagation learning and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations. Stochastic gradient descent with momentum remembers the update ��w at each iteration, and determines the next update as a linear combination of the gradient and the previous update: that leads to: where the parameter w {\displaystyle w} which minimizes Q ( w ) {\displaystyle Q(w)} is to be estimated, �� {\displaystyle \eta } is a step size (sometimes called the learning rate in machine learning) and �� {\displaystyle \alpha } is an exponential decay factor between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change.",wikipedia
+Stochastic gradient descent,https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Extensions_and_variants,"Extensions_and_variants The name momentum stems from an analogy to momentum in physics: the weight vector w {\displaystyle w} , thought of as a particle traveling through parameter space, incurs acceleration from the gradient of the loss (""force""). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades. The momentum method is closely related to underdamped Langevin dynamics, and may be combined with Simulated Annealing. In mid-1980s the method was modified by Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called Nesterov Accelerated Gradient was sometimes used in ML in the 2010s. Averaged stochastic gradient descent, invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of When optimization is done, this averaged parameter vector takes the place of w. AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition. It still has a base learning rate ��, but this is multiplied with the elements of a vector {Gj,j} which is the diagonal of the outer product matrix where g �� = ��� Q i ( w ) {\displaystyle g_{\tau }=\nabla Q_{i}(w)} , the gradient, at iteration ��. The diagonal is given by This vector essentially stores a historical sum of gradient squares by dimension and is updated after every iteration. The formula for an update is now or, written as per-parameter updates, Each {G(i,i)} gives rise to a scaling factor for the learning rate that applies to a single parameter wi. Since the denominator in this factor, G i = ��� �� = 1 t g �� 2 {\displaystyle {\sqrt {G_{i}}}={\sqrt {\sum _{\tau =1}^{t}g_{\tau }^{2}}}} is the ���2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates. While designed for convex problems, AdaGrad has been successfully applied to non-convex optimization. RMSProp (for Root Mean Square Propagation) is a method invented by Geoffrey Hinton in 2012 in which the learning rate is, like in Adagrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight. Unusually, it was not published in an article but merely described in a Coursera lecture.",wikipedia
+Stochastic gradient descent,https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Extensions_and_variants,"Extensions_and_variants [citation needed] So, first the running average is calculated in terms of means square, where, �� {\displaystyle \gamma } is the forgetting factor. The concept of storing the historical gradient as sum of squares is borrowed from Adagrad, but ""forgetting"" is introduced to solve Adagrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data. And the parameters are updated as, RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and is capable to work with mini-batches as well opposed to only full-batches. Adam (short for Adaptive Moment Estimation) is a 2014 update to the RMSProp optimizer combining it with the main feature of the Momentum method. In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters w ( t ) {\displaystyle w^{(t)}} and a loss function L ( t ) {\displaystyle L^{(t)}} , where t {\displaystyle t} indexes the current training iteration (indexed at 0 {\displaystyle 0} ), Adam's parameter update is given by: where �� {\displaystyle \epsilon } is a small scalar (e. g. 10 ��� 8 {\displaystyle 10^{-8}} ) used to prevent division by 0, and �� 1 {\displaystyle \beta _{1}} (e. g. 0.9) and �� 2 {\displaystyle \beta _{2}} (e. g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting is done element-wise. The profound influence of this algorithm inspired multiple newer, less well-known momentum-based optimization schemes using Nesterov-enhanced gradients (eg: NAdam and FASFA) and varying interpretations of second-order information (eg: Powerpropagation and AdaSqrt). However, the most commonly used variants are AdaMax, which generalizes Adam using the infinity norm, and AMSGrad, which addresses convergence problems from Adam by using maximum of past squared gradients instead of the exponential average. AdamW is a later update which mitigates an unoptimal choice of the weight decay algorithm in Adam. Even though sign-based optimization goes back to the aforementioned Rprop, only in 2018 researchers tried to simplify Adam by removing the magnitude of the stochastic gradient from being taked into account and only considering its sign. Backtracking line search is another variant of gradient descent. All of the below are sourced from the mentioned link. It is based on a condition known as the Armijo���Goldstein condition. Both methods allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the ""descent property"" ��� which Backtracking line search enjoys ��� which is that f ( x n + 1 ) ��� f ( x n ) {\displaystyle f(x_{n+1})\leq f(x_{n})} for all n.",wikipedia
+Stochastic gradient descent,https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Extensions_and_variants,"Extensions_and_variants If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant L, and learning rate is chosen of the order 1/L, then the standard version of SGD is a special case of backtracking line search. A stochastic analogue of the standard (deterministic) Newton���Raphson algorithm (a ""second-order"" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation[citation needed]. A method that uses direct measurements of the Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer. However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others. (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert. ) Another approach to the approximation Hessian matrix is replacing it with the Fisher information matrix, which transforms usual gradient to natural. These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i. e. , the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function.",wikipedia
+Reinforcement learning,https://en.wikipedia.org/wiki/Reinforcement_learning#Algorithms_for_control_learning,"Algorithms_for_control_learning Even if the issue of exploration is disregarded and even if the state was observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards. The agent's action selection is modeled as a map called policy: The policy map gives the probability of taking action a {\displaystyle a} when in state s {\displaystyle s} . There are also deterministic policies. The value function V �� ( s ) {\displaystyle V_{\pi }(s)} is defined as, expected return starting with state s {\displaystyle s} , i. e. s 0 = s {\displaystyle s_{0}=s} , and successively following policy �� {\displaystyle \pi } . Hence, roughly speaking, the value function estimates ""how good"" it is to be in a given state. where the random variable R {\displaystyle R} denotes the return, and is defined as the sum of future discounted rewards: where r t {\displaystyle r_{t}} is the reward at step t {\displaystyle t} , �� ��� [ 0 , 1 ) {\displaystyle \gamma \in [0,1)} is the discount-rate. Gamma is less than 1, so events in the distant future are weighted less than events in the immediate future. The algorithm must find a policy with maximum expected return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action-distribution returned by it depends only on the last state visited (from the observation agent's history). The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality. The brute force approach entails two steps: For each possible policy, sample returns while following it Choose the policy with the largest expected returnOne problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the return of each policy. These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search. Value function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the ""current"" [on-policy] or the optimal [off-policy] one). These methods rely on the theory of Markov decision processes, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best-expected return from any initial state (i. e. , initial distributions play no role in this definition).",wikipedia
+Reinforcement learning,https://en.wikipedia.org/wiki/Reinforcement_learning#Algorithms_for_control_learning,"Algorithms_for_control_learning Again, an optimal policy can always be found amongst stationary policies. To define optimality in a formal manner, define the value of a policy �� {\displaystyle \pi } by where R {\displaystyle R} stands for the return associated with following �� {\displaystyle \pi } from the initial state s {\displaystyle s} . Defining V ��� ( s ) {\displaystyle V^{*}(s)} as the maximum possible value of V �� ( s ) {\displaystyle V^{\pi }(s)} , where �� {\displaystyle \pi } is allowed to change, A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return �� �� {\displaystyle \rho ^{\pi }} , since �� �� = E [ V �� ( S ) ] {\displaystyle \rho ^{\pi }=E[V^{\pi }(S)]} , where S {\displaystyle S} is a state randomly sampled from the distribution �� {\displaystyle \mu } of initial states (so �� ( s ) = Pr ( s 0 = s ) {\displaystyle \mu (s)=\Pr(s_{0}=s)} ). Although state-values suffice to define optimality, it is useful to define action-values. Given a state s {\displaystyle s} , an action a {\displaystyle a} and a policy �� {\displaystyle \pi } , the action-value of the pair ( s , a ) {\displaystyle (s,a)} under �� {\displaystyle \pi } is defined by where R {\displaystyle R} now stands for the random return associated with first taking action a {\displaystyle a} in state s {\displaystyle s} and following �� {\displaystyle \pi } , thereafter. The theory of Markov decision processes states that if �� ��� {\displaystyle \pi ^{*}} is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q �� ��� ( s , ��� ) {\displaystyle Q^{\pi ^{*}}(s,\cdot )} with the highest value at each state, s {\displaystyle s} . The action-value function of such an optimal policy ( Q �� ��� {\displaystyle Q^{\pi ^{*}}} ) is called the optimal action-value function and is commonly denoted by Q ��� {\displaystyle Q^{*}} . In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally. Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q k {\displaystyle Q_{k}} ( k = 0 , 1 , 2 , ��� {\displaystyle k=0,1,2,\ldots } ) that converge to Q ��� {\displaystyle Q^{*}} . Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces. Monte Carlo methods can be used in an algorithm that mimics policy iteration. Policy iteration consists of two steps: policy evaluation and policy improvement.",wikipedia
+Reinforcement learning,https://en.wikipedia.org/wiki/Reinforcement_learning#Algorithms_for_control_learning,"Algorithms_for_control_learning Monte Carlo is used in the policy evaluation step. In this step, given a stationary, deterministic policy �� {\displaystyle \pi } , the goal is to compute the function values Q �� ( s , a ) {\displaystyle Q^{\pi }(s,a)} (or a good approximation to them) for all state-action pairs ( s , a ) {\displaystyle (s,a)} . Assume (for simplicity) that the Markov decision process is finite, that sufficient memory is available to accommodate the action-values and that the problem is episodic and after each episode a new one starts from some random initial state. Then, the estimate of the value of a given state-action pair ( s , a ) {\displaystyle (s,a)} can be computed by averaging the sampled returns that originated from ( s , a ) {\displaystyle (s,a)} over time. Given sufficient time, this procedure can thus construct a precise estimate Q {\displaystyle Q} of the action-value function Q �� {\displaystyle Q^{\pi }} . This finishes the description of the policy evaluation step. In the policy improvement step, the next policy is obtained by computing a greedy policy with respect to Q {\displaystyle Q} : Given a state s {\displaystyle s} , this new policy returns an action that maximizes Q ( s , ��� ) {\displaystyle Q(s,\cdot )} . In practice lazy evaluation can defer the computation of the maximizing actions to when they are needed. Problems with this procedure include: 1. The procedure may spend too much time evaluating a suboptimal policy. 2. It uses samples inefficiently in that a long trajectory improves the estimate only of the single state-action pair that started the trajectory. 3. When the returns along the trajectories have high variance, convergence is slow. 4. It works in episodic problems only. 5. It works in small, finite Markov decision processes only. The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor-critic methods belong to this category. The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.",wikipedia
+Reinforcement learning,https://en.wikipedia.org/wiki/Reinforcement_learning#Algorithms_for_control_learning,"Algorithms_for_control_learning Another problem specific to TD comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called �� {\displaystyle \lambda } parameter ( 0 ��� �� ��� 1 ) {\displaystyle (0\leq \lambda \leq 1)} that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in palliating this issue. In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping �� {\displaystyle \phi } that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair ( s , a ) {\displaystyle (s,a)} are obtained by linearly combining the components of �� ( s , a ) {\displaystyle \phi (s,a)} with some weights �� {\displaystyle \theta } : The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored. Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants. Including Deep Q-learning methods when a neural network is used to represent Q, with various applications in stochastic search problems. The problem with using action-values is that they may need highly precise estimates of the competing action values that can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency. An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods. Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector �� {\displaystyle \theta } , let �� �� {\displaystyle \pi _{\theta }} denote the policy associated to �� {\displaystyle \theta } . Defining the performance function by under mild conditions this function will be differentiable as a function of the parameter vector �� {\displaystyle \theta } . If the gradient of �� {\displaystyle \rho } was known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams' REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature). Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search). A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search or methods of evolutionary computation.",wikipedia
+Reinforcement learning,https://en.wikipedia.org/wiki/Reinforcement_learning#Algorithms_for_control_learning,"Algorithms_for_control_learning Many gradient-free methods can achieve (in theory and in the limit) a global optimum. Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, actor���critic methods have been proposed and performed well on various problems. Finally, all of the above methods can be combined with algorithms that first learn a model. For instance, the Dyna algorithm learns a model from experience, and uses that to provide more modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to use of non-parametric models, such as when the transitions are simply stored and 'replayed' to the learning algorithm. There are other ways to use models than to update a value function. For instance, in model predictive control the model is used to update the behavior directly.",wikipedia
+Semisupervised learning,https://en.wikipedia.org/wiki/Semisupervised_learning,"Semisupervised learning Weak supervision, also called semi-supervised learning, is a paradigm in machine learning, the relevance and notability of which increased with the advent of large language models due to large amount of data required to train them. It is characterized by using a combination of a small amount of human-labeled data (exclusively used in more expensive and time-consuming supervised learning paradigm), followed by a large amount of unlabeled data (used exclusively in unsupervised learning paradigm). In other words, the desired output values are provided only for a subset of the training data. The remaining data is unlabeled or imprecisely labeled. Intuitively, it can be seen as an exam and labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In the transductive setting, these unsolved problems act as exam questions. In the inductive setting, they become practice problems of the sort that will make up the exam. Technically, it could be viewed as performing clustering and then labeling the clusters with the labeled data, pushing the decision boundary away from high-density regions, or learning an underlying one-dimensional manifold where the data reside.",wikipedia
+Neural network,https://en.wikipedia.org/wiki/Neural_network#Overview,"Overview A biological neural network is composed of a group of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from neurotransmitter diffusion. Artificial intelligence, cognitive modelling, and neural networks are information processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots. Historically, digital computers evolved from the von Neumann model, and operate via the execution of explicit instructions via access to memory by a number of processors. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems. Unlike the von Neumann model, neural network computing does not separate memory and processing. Neural network theory has served to identify better how the neurons in the brain function and provide the basis for efforts to create artificial intelligence.",wikipedia
+Neural scaling law,https://en.wikipedia.org/wiki/Neural_scaling_law#Introduction,"Introduction In general, a neural model can be characterized by 4 parameters: size of the model, size of the training dataset, cost of training, performance after training. Each of these four variables can be precisely defined into a real number, and they are empirically found to be related by simple statistical laws, called ""scaling laws"". [citation needed] These are usually written as N , D , C , L {\displaystyle N,D,C,L} (number of parameters, dataset size, computing cost, loss). In most cases, the size of the model is simply the number of parameters. However, one complication arises with the use of sparse models, such as mixture-of-expert models. In sparse models, during every inference, only a fraction of the parameters are used. In comparison, most other kinds of neural networks, such as Transformer networks, always use all their parameters during every inference. The size of the training dataset is usually quantified by the number of data points it contains. Larger training datasets are typically preferred as they provide a richer and more diverse source of information for the model to learn from. This in turn can lead to improved generalization performance when the model is applied to unseen data. However, increasing the size of the training dataset also increases the computational resources and time required for model training. With the ""pretrain, then finetune"" method used in most large language models, there are two kinds of training dataset: the pretraining dataset and the finetuning dataset. Their sizes would have different effects on model performance. Generally, the finetuning dataset is less than 1% the size of pretraining dataset. In some cases, a small amount of high quality data suffices for finetuning, and more data does not improve performance. The cost of training is typically measured in terms of time (how long it takes to train the model) and computational resources (how much processing power and memory are required to train the model). It's important to note that the cost of training can be significantly reduced with efficient training algorithms, optimized software libraries, and parallel computing on specialized hardware like GPUs or TPUs. The cost of training a neural model is a function of several factors including the size of the model, the size of the training dataset, the complexity of the training algorithm, and the computational resources available. In particular, doubling the training dataset does not necessarily double the cost of training, because one may train the model for several times over the same dataset (each being an ""epoch""). The performance of a neural model is evaluated based on its ability to accurately predict the output given the input data. Common metrics for evaluating model performance include: accuracy, precision, recall, and F1 score for classification tasks; mean squared error (MSE) or mean absolute error (MAE) for regression tasks; negative log-likelihood per token (logarithm of perplexity) for language modeling.",wikipedia
+Neural scaling law,https://en.wikipedia.org/wiki/Neural_scaling_law#Introduction,"Introduction Elo rating in a competition against other models, such as gameplay or preference by a human judgePerformance can be improved by using more data, larger models, different training algorithms, regularizing the model to prevent overfitting, and early stopping using a validation set.",wikipedia
+Neural scaling law,https://en.wikipedia.org/wiki/Neural_scaling_law#Examples,"Examples A 2020 analysis studied statistical relations between C , N , D , L {\displaystyle C,N,D,L} over a wide range of values and found similar scaling laws, over the range of N ��� [ 10 3 , 10 9 ] {\displaystyle N\in [10^{3},10^{9}]} , C ��� [ 10 12 , 10 21 ] {\displaystyle C\in [10^{12},10^{21}]} , and over multiple modalities (text, video, image, text to image, etc. ). In particular, the scaling laws it found are (Table 1 of ): For each modality, they fixed one of the two C , N {\displaystyle C,N} , and varying the other one ( D {\displaystyle D} is varied along using D = C / 6 N {\displaystyle D=C/6N} ), the achievable test loss satisfies L = L 0 + ( x 0 x ) �� {\displaystyle L=L_{0}+\left({\frac {x_{0}}{x}}\right)^{\alpha }} where x {\displaystyle x} is the varied variable, and L 0 , x 0 , �� {\displaystyle L_{0},x_{0},\alpha } are parameters to be found by statistical fitting. The parameter �� {\displaystyle \alpha } is the most important one. When N {\displaystyle N} is the varied variable, �� {\displaystyle \alpha } ranges from 0.037 {\displaystyle 0.037} to 0.24 {\displaystyle 0.24} depending on the model modality. This corresponds to the �� = 0.34 {\displaystyle \alpha =0.34} from the Chinchilla scaling paper. When C {\displaystyle C} is the varied variable, �� {\displaystyle \alpha } ranges from 0.048 {\displaystyle 0.048} to 0.19 {\displaystyle 0.19} depending on the model modality. This corresponds to the �� = 0.28 {\displaystyle \beta =0.28} from the Chinchilla scaling paper. Given fixed computing budget, optimal model parameter count is consistently around N o p t ( C ) = ( C 5 �� 10 ��� 12 petaFLOP-day ) 0.7 = 9.0 �� 10 ��� 7 C 0.7 {\displaystyle N_{opt}(C)=\left({\frac {C}{5\times 10^{-12}{\text{petaFLOP-day}}}}\right)^{0.7}=9.0\times 10^{-7}C^{0.7}} The parameter 9.0 �� 10 ��� 7 {\displaystyle 9.0\times 10^{-7}} varies by a factor of up to 10 for different modalities. The exponent parameter 0.7 {\displaystyle 0.7} varies from 0.64 {\displaystyle 0.64} to 0.75 {\displaystyle 0.75} for different modalities. This exponent corresponds to the ��� 0.5 {\displaystyle \approx 0.5} from the Chinchilla scaling paper. It's ""strongly suggested"" (but not statistically checked) that D o p t ( C ) ��� N o p t ( C ) 0.4 ��� C 0.28 {\displaystyle D_{opt}(C)\propto N_{opt}(C)^{0.4}\propto C^{0.28}} . This exponent corresponds to the ��� 0.5 {\displaystyle \approx 0.5} from the Chinchilla scaling paper. When N {\displaystyle N} is the varied variable, �� {\displaystyle \alpha } ranges from 0.037 {\displaystyle 0.037} to 0.24 {\displaystyle 0.24} depending on the model modality. This corresponds to the �� = 0.34 {\displaystyle \alpha =0.34} from the Chinchilla scaling paper. When C {\displaystyle C} is the varied variable, �� {\displaystyle \alpha } ranges from 0.048 {\displaystyle 0.048} to 0.19 {\displaystyle 0.19} depending on the model modality. This corresponds to the �� = 0.28 {\displaystyle \beta =0.28} from the Chinchilla scaling paper.",wikipedia
+Neural scaling law,https://en.wikipedia.org/wiki/Neural_scaling_law#Examples,"Examples The scaling law of L = L 0 + ( C 0 / C ) 0.048 {\displaystyle L=L_{0}+(C_{0}/C)^{0.048}} was confirmed during the training of GPT-3 (Figure 3.1 ). One particular scaling law (""Chinchilla scaling"") states that, for a large language model (LLM) autoregressively trained for one epoch, with a cosine learning rate schedule, we have: C {\displaystyle C} is the cost of training the model, in FLOPs. N {\displaystyle N} is the number of parameters in the model. D {\displaystyle D} is the number of tokens in the training set. L {\displaystyle L} is the average negative log-likelihood loss per token (nats/token), achieved by the trained LLM on the test dataset. L 0 {\displaystyle L_{0}} represents the loss of an ideal generative process on the test data A N �� {\displaystyle {\frac {A}{N^{\alpha }}}} captures the fact that a Transformer language model with N {\displaystyle N} parameters underperforms the ideal generative process B D �� {\displaystyle {\frac {B}{D^{\beta }}}} captures the fact that the model trained on D {\displaystyle D} tokens underperforms the ideal generative process L 0 {\displaystyle L_{0}} represents the loss of an ideal generative process on the test data A N �� {\displaystyle {\frac {A}{N^{\alpha }}}} captures the fact that a Transformer language model with N {\displaystyle N} parameters underperforms the ideal generative process B D �� {\displaystyle {\frac {B}{D^{\beta }}}} captures the fact that the model trained on D {\displaystyle D} tokens underperforms the ideal generative processand the statistical parameters are C 0 = 6 {\displaystyle C_{0}=6} , meaning that it costs 6 FLOPs per parameter to train on one token. This is estimated by Kaplan et al. Note that training cost is much higher than inference cost, as training entails both forward and backward passes, whereas inference costs 1 to 2 FLOPs per parameter to infer on one token. �� = 0.34 , �� = 0.28 , A = 406.4 , B = 410.7 , L 0 = 1.69 {\displaystyle \alpha =0.34,\beta =0.28,A=406.4,B=410.7,L_{0}=1.69} . The statistical laws were fitted over experimental data with N ��� [ 7 �� 10 7 , 1.6 �� 10 10 ] , D ��� [ 5 �� 10 9 , 5 �� 10 11 ] , C ��� [ 10 18 , 10 24 ] {\displaystyle N\in [7\times 10^{7},1.6\times 10^{10}],D\in [5\times 10^{9},5\times 10^{11}],C\in [10^{18},10^{24}]} . Since there are 4 variables related by 2 equations, imposing 1 additional constraint and 1 additional optimization objective allows us to solve for all four variables. In particular, for any fixed C {\displaystyle C} , we can uniquely solve for all 4 variables that minimizes L {\displaystyle L} . This provides us with the optimal D o p t ( C ) , N o p t ( C ) {\displaystyle D_{opt}(C),N_{opt}(C)} for any fixed C {\displaystyle C} :There are other estimates for ""Chinchilla efficient"" model size and training dataset size.",wikipedia
+Neural scaling law,https://en.wikipedia.org/wiki/Neural_scaling_law#Examples,"Examples The above is based on a statistical model of L = A N �� + B D �� + L 0 {\displaystyle L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}} . One can also directly fit a statistical law for D o p t ( C ) , N o p t ( C ) {\displaystyle D_{opt}(C),N_{opt}(C)} without going through the detour, for which one obtains:In simpler terms, the Chinchilla scaling law for training Transformer language models suggests that when given an increased budget (in FLOPs), to achieve compute-optimal, the number of model parameters (N) and the number of tokens for training the model (D) should scale in approximately equal proportions. This conclusion differs from the previous scaling law for neural language models, which states that N should be scaled faster than D. The discrepancy arises from setting different cycle lengths for cosine learning rate schedulers. In estimating the Chinchilla scaling, the authors set the cycle length to be the same as the training steps, as experimental results indicate that larger cycles overestimate the loss of the models. A 2022 analysis found that many scaling behaviors of artificial neural networks follow a smoothly broken power law functional form: y = a + ( b x ��� c 0 ) ��� i = 1 n ( 1 + ( x d i ) 1 / f i ) ��� c i ��� f i {\displaystyle y=a+{\bigg (}bx^{-c_{0}}{\bigg )}\prod _{i=1}^{n}\left(1+\left({\frac {x}{d_{i}}}\right)^{1/f_{i}}\right)^{-c_{i}*f_{i}}} in which x {\displaystyle x} refers to the quantity being scaled (i. e. C {\displaystyle C} , N {\displaystyle N} , D {\displaystyle D} , number of training steps, number of inference steps, or model input size) and y {\displaystyle y} refers to the downstream (or upstream) performance evaluation metric of interest (e. g. prediction error, cross entropy, calibration error, AUROC, BLEU score percentage, F1 score, reward, Elo rating, solve rate, or FID score) in zero-shot, prompted, or fine-tuned settings. The parameters a , b , c 0 , c 1 . . . c n , d 1 . . . d n , f 1 . . . f n {\displaystyle a,b,c_{0},c_{1}. . . c_{n},d_{1}. . . d_{n},f_{1}. . . f_{n}} are found by statistical fitting. On a log���log plot, when f i {\displaystyle f_{i}} is not too large and a {\displaystyle a} is subtracted out from the y-axis, this functional form looks like a series of linear segments connected by arcs; the n {\displaystyle n} transitions between the segments are called ""breaks"", hence the name Broken Neural Scaling Laws (BNSL). The scenarios in which the scaling behaviors of artificial neural networks were found to follow this functional form include large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration, out-of-distribution detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, fairness, molecules, computer programming/coding, math word problems, arithmetic, emergent abilities, double descent, supervised learning, unsupervised/self-supervised learning, and reinforcement learning (single agent and multi-agent).",wikipedia
+Neural scaling law,https://en.wikipedia.org/wiki/Neural_scaling_law#Examples,"Examples The architectures for which the scaling behaviors of artificial neural networks were found to follow this functional form include ResNets, Transformers, MLPs, MLP-Mixers, Recurrent Neural Networks, Graph Neural Networks, U-Nets, Ensembles (and Non-Ensembles), MoE (Mixture of Experts) (and Non-MoE) Models, and Sparse Pruned (and Non-Sparse Unpruned) Models. Vision transformers, similar to language transformers, exhibit scaling laws. A 2022 research trained vision transformers, with parameter counts N ��� [ 5 �� 10 6 , 2 �� 10 9 ] {\displaystyle N\in [5\times 10^{6},2\times 10^{9}]} , on image sets of sizes D ��� [ 3 �� 10 7 , 3 �� 10 9 ] {\displaystyle D\in [3\times 10^{7},3\times 10^{9}]} , for computing C ��� [ 0.2 , 10 4 ] {\displaystyle C\in [0.2,10^{4}]} (in units of TPUv3-core-days). After training the model, it is finetuned on ImageNet training set. Let L {\displaystyle L} be the error probability of the finetuned model classifying ImageNet test set. They found min N , D L = 0.09 + 0.26 ( C + 0.01 ) 0.35 {\displaystyle \min _{N,D}L=0.09+{\frac {0.26}{(C+0.01)^{0.35}}}} . Ghorbani, Behrooz et al. studied scaling laws for neural machine translation (specifically, English as source, and German as target) in encoder-decoder Transformer models, trained until convergence on the same datasets (thus they did not fit scaling laws for computing cost C {\displaystyle C} or dataset size D {\displaystyle D} ). They varied N ��� [ 10 8 , 3.5 �� 10 9 ] {\displaystyle N\in [10^{8},3.5\times 10^{9}]} They found three results: L {\displaystyle L} is a scaling law function of N E , N D {\displaystyle N_{E},N_{D}} , where N E , N D {\displaystyle N_{E},N_{D}} are encoder and decoder parameter count. It is not simply a function of total parameter count N = N E + N D {\displaystyle N=N_{E}+N_{D}} . The function has form L ( N e , N d ) = �� ( N �� e N e ) p e ( N �� d N d ) p d + L ��� {\displaystyle L\left(N_{e},N_{d}\right)=\alpha \left({\frac {{\bar {N}}_{e}}{N_{e}}}\right)^{p_{e}}\left({\frac {{\bar {N}}_{d}}{N_{d}}}\right)^{p_{d}}+L_{\infty }} , where �� , p e , p d , L ��� , N �� e , N �� d {\displaystyle \alpha ,p_{e},p_{d},L_{\infty },{\bar {N}}_{e},{\bar {N}}_{d}} are fitted parameters. They found that N d / N ��� 0.55 {\displaystyle N_{d}/N\approx 0.55} minimizes loss if N {\displaystyle N} is held fixed. L {\displaystyle L} ""saturates"" (that is, it reaches L ��� {\displaystyle L_{\infty }} ) for smaller models when the training and testing datasets are ""source-natural"" than ""target-natural"". A ""source-natural"" data point means a pair of English-German sentences, and the model is asked to translate the English sentence into German, and the English sentence is written by a natural English writer, while the German sentence is translated from the English sentence by a machine translator. To construct the two kinds of datasets, the authors collected natural English and German sentences online, then used machine translation to generate their translations.",wikipedia
+Neural scaling law,https://en.wikipedia.org/wiki/Neural_scaling_law#Examples,"Examples As models grow larger, models trained on source-original datasets can achieve low loss but bad BLEU score. In contrast, models trained on target-original datasets achieve low loss and good BLEU score in tandem (Figure 10, 11 ). The authors hypothesize that source-natural datasets have uniform and dull target sentences, and so a model that is trained to predict the target sentences would quickly overfit. trained Transformers for machine translations with sizes N ��� [ 4 �� 10 5 , 5.6 �� 10 7 ] {\displaystyle N\in [4\times 10^{5},5.6\times 10^{7}]} on dataset sizes D ��� [ 6 �� 10 5 , 6 �� 10 9 ] {\displaystyle D\in [6\times 10^{5},6\times 10^{9}]} . They found the Kaplan et al (2020) scaling law applied to machine translation: L ( N , D ) = [ ( N C N ) �� N �� D + D C D ] �� D {\displaystyle L(N,D)=\left[\left({\frac {N_{C}}{N}}\right)^{\frac {\alpha _{N}}{\alpha _{D}}}+{\frac {D_{C}}{D}}\right]^{\alpha _{D}}} . They also found the BLEU score scaling as B L E U ��� C e ��� k L {\displaystyle BLEU\approx Ce^{-kL}} . Hernandez, Danny et al. studied scaling laws for transfer learning in language models. They trained a family of Transformers in three ways: pretraining on English, finetuning on Python pretraining on an equal mix of English and Python, finetuning on Python training on PythonThe idea is that pretraining on English should help the model achieve low loss on a test set of Python text. Suppose the model has parameter count N {\displaystyle N} , and after being finetuned on D F {\displaystyle D_{F}} Python tokens, it achieves some loss L {\displaystyle L} . We say that its ""transferred token count"" is D T {\displaystyle D_{T}} , if another model with the same N {\displaystyle N} achieves the same L {\displaystyle L} after training on D F + D T {\displaystyle D_{F}+D_{T}} Python tokens. They found D T = 1.9 e 4 ( D F ) .18 ( N ) .38 {\displaystyle D_{T}=1.9e4\left(D_{F}\right)^{.18}(N)^{.38}} for pretraining on English text, and D T = 2.1 e 5 ( D F ) .096 ( N ) .38 {\displaystyle D_{T}=2.1e5\left(D_{F}\right)^{.096}(N)^{.38}} for pretraining on English and non-Python code.",wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer,"Vision transformer A Vision Transformer (ViT) is a transformer designed for computer vision. Transformers were introduced in 2017, and have found widespread use in Natural Language Processing. In 2020, they were adapted for computer vision, yielding ViT. The basic structure is to break down input images as a series of patches, then tokenized, before applying the tokens to a standard Transformer architecture. The attention mechanism in a ViT repeatedly transforms representation vectors of image patches, incorporating more and more semantic relations between image patches in an image. This is analogous to how in NLP, as representation vectors flow through a Transformers, they incorporate more and more semantic relations between words, from syntax to semantics. ViT has found applications in image recognition, image segmentation, and autonomous driving.",wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#Architecture,"Architecture The basic architecture, used by the original 2020 paper, is as follows. In summary, it is a BERT-like encoder-only Transformer. The input image is of type R H �� W �� C {\displaystyle \mathbb {R} ^{H\times W\times C}} , where H , W , C {\displaystyle H,W,C} are height, width, channel (RGB). It is then split into square-shaped patches of type R P �� P �� C {\displaystyle \mathbb {R} ^{P\times P\times C}} . For each patch, the patch is pushed through a linear operator, to obtain a vector (""patch embedding""). The position of the patch is also transformed into a vector by ""position encoding"". The two vectors are added, then pushed through several Transformer encoders. The above architecture turns an image into a sequence of vector representations. To use the vector representation for downstream applications, one needs to add some network modules on top of it. For example, to use it for classification, one can add a shallow MLP on top of it that outputs a probability distribution over classes. The original paper uses a linear-GeLU-linear-softmax network.",wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#Vision_Transformers,"Vision_Transformers Transformers found their initial applications in natural language processing (NLP) tasks, as demonstrated by language models such as BERT and GPT-3. By contrast the typical image processing system uses a convolutional neural network (CNN). Well-known projects include Xception, ResNet, EfficientNet, DenseNet, and Inception. Transformers measure the relationships between pairs of input tokens (words in the case of text strings), termed attention. The cost is quadratic in the number of tokens. For images, the basic unit of analysis is the pixel. However, computing relationships for every pixel pair in a typical image is prohibitive in terms of memory and computation. Instead, ViT computes relationships among pixels in various small sections of the image (e.g., 16x16 pixels), at a drastically reduced cost. The sections (with positional embeddings) are placed in a sequence. The embeddings are learnable vectors. Each section is arranged into a linear sequence and multiplied by the embedding matrix. The result, with the position embedding is fed to the transformer. As in the case of BERT, a fundamental role in classification tasks is played by the class token. A special token that is used as the only input of the final MLP Head as it has been influenced by all the others. The architecture for image classification is the most common and uses only the Transformer Encoder in order to transform the various input tokens. However, there are also other applications in which the decoder part of the traditional Transformer Architecture is also used.",wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#History,"History In 2021 a pure transformer model demonstrated better performance and greater efficiency than CNNs on image classification. A study in June 2021 added a transformer backend to ResNet, which dramatically reduced costs and increased accuracy. In the same year, some important variants of the Vision Transformers were proposed. These variants are mainly intended to be more efficient, more accurate or better suited to a specific domain. Among the most relevant is the Swin Transformer, which through some modifications to the attention mechanism and a multi-stage approach achieved state-of-the-art results on some object detection datasets such as COCO. Another interesting variant is the TimeSformer, designed for video understanding tasks and able to capture spatial and temporal information through the use of divided space-time attention.",wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#Comparison_with_Convolutional_Neural_Networks,"Comparison_with_Convolutional_Neural_Networks Due to the commonly used (comparatively) large patch size, ViT performance depends more heavily on decisions including that of the optimizer, dataset-specific hyperparameters, and network depth than convolutional networks. Preprocessing with a layer of smaller-size, overlapping (stride < size) convolutional filters helps with performance and stability. The CNN translates from the basic pixel level to a feature map. A tokenizer translates the feature map into a series of tokens that are then fed into the transformer, which applies the attention mechanism to produce a series of output tokens. Finally, a projector reconnects the output tokens to the feature map. The latter allows the analysis to exploit potentially significant pixel-level details. This drastically reduces the number of tokens that need to be analyzed, reducing costs accordingly. The differences between CNNs and Vision Transformers are many and lie mainly in their architectural differences. In fact, CNNs achieve excellent results even with training based on data volumes that are not as large as those required by Vision Transformers. This different behaviour seems to derive from the different inductive biases they possess. The filter-oriented architecture of CNNs can be somehow exploited by these networks to grasp more quickly the particularities of the analysed images even if, on the other hand, they end up limiting them making it more complex to grasp global relations. On the other hand, the Vision Transformers possess a different kind of bias toward exploring topological relationships between patches, which leads them to be able to capture also global and wider range relations but at the cost of a more onerous training in terms of data. Vision Transformers also proved to be much more robust to input image distortions such as adversarial patches or permutations. However, choosing one architecture over another is not always the wisest choice, and excellent results have been obtained in several Computer Vision tasks through hybrid architectures combining convolutional layers with Vision Transformers.",wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#The_Role_of_Self-Supervised_Learning,"The_Role_of_Self-Supervised_Learning The considerable need for data during the training phase has made it essential to find alternative methods to train these models, and a central role is now played by self-supervised methods. Using these approaches, it is possible to train a neural network in an almost autonomous way, allowing it to deduce the peculiarities of a specific problem without having to build a large dataset or provide it with accurately assigned labels. Being able to train a Vision Transformer without having to have a huge vision dataset at its disposal could be the key to the widespread dissemination of this promising new architecture.",wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#Applications,Applications Vision Transformers have been used in many Computer Vision tasks with excellent results and in some cases even state-of-the-art. Among the most relevant areas of application are: Image Classification Object Detection Video Deepfake Detection Image segmentation Anomaly detection Image Synthesis Cluster analysis Autonomous Driving,wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#Implementations,Implementations There are many implementations of Vision Transformers and its variants available in open source online. The main versions of this architecture have been implemented in PyTorch but implementations have also been made available for TensorFlow.,wikipedia
+Vision transformer,https://en.wikipedia.org/wiki/Vision_transformer#See_also,See_also Transformer (machine learning model) Attention (machine learning) Perceiver Deep learning PyTorch TensorFlow,wikipedia
+Open-source artificial intelligence,https://en.wikipedia.org/wiki/Open-source_artificial_intelligence,"Open-source artificial intelligence Open-source artificial intelligence is the application of open source practices to the development of artificial intelligence resources. Many open-source artificial intelligence products are variations of other existing tools and technology which major companies have shared as open-source software. Companies often developed closed products in an attempt to keep a competitive advantage in the marketplace. A journalist for Wired explored the idea that open-source AI tools have a development advantage over closed products, and could overtake them in the marketplace. Popular open-source artificial intelligence project categories include large language models, Machine translation tools, and chatbots. For software developers to produce open-source artificial intelligence resources, they must trust the various other open-source software components they use in its development.",wikipedia
+Open-source artificial intelligence,https://en.wikipedia.org/wiki/Open-source_artificial_intelligence#Large_Language_Models,"Large_Language_Models LLaMA (Large Language Model Meta AI) is a family of large language models (LLMs), released by Meta AI starting in February 2023.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Approaches,"Approaches Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs. Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr. , he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances. More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator. In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique F��d��rale of Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction. Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this ""the Singularity"". He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build ""Friendly AI"", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low. A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Approaches,"Approaches In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved ""cockroach intelligence"". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls. However, there is one technology in particular that has the potential to make the idea of morally capable robots a reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit ��� or if they end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation etc. In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e. g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal ""hackers"". According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Approaches,"Approaches Similarly, according to a five-country study by KPMG and the University of Queensland Australia in 2021, 66-79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully. The term ""robot ethics"" (sometimes ""roboethics"") refers to the morality of how humans design, construct, use and treat robots. Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software. Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice. In the review of 84 ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, solidarity. Luciano Floridi and Josh Cowls created an ethical framework of AI principles set by four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI enabling principle ��� explicability. Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development. OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity. There are numerous other open-source AI developments. Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE Standards Association has published a technical standard on Transparency of Autonomous Systems: IEEE 7001-2021. The IEEE effort identifies multiple scales of transparency for different stakeholders. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do. Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks. On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its ""Policy and investment recommendations for trustworthy Artificial Intelligence"". This is the AI HLEG's second deliverable, after the April 2019 publication of the ""Ethics Guidelines for Trustworthy AI"".",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Approaches,"Approaches The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that ""HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved"" and states that the EU aims to lead on the framing of policies governing AI internationally. To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. On 21 April 2021, the European Commission proposed the Artificial Intelligence Act.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Ethical_challenges,"Ethical_challenges AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by its human creators. Also, the data used to train these AI systems itself can have biases. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender; these AI systems were able to detect gender of white men more accurately than gender of darker skin men. Further, a 2020 study reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's. Furthermore, Amazon terminated their use of AI hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over 10-year period that came mostly from male candidates. Bias can creep into algorithms in many ways. The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system. For instance, Amazon's AI-powered recruitment tool was trained with its own recruitment data accumulated over the years, during which time the candidates that successfully got the job were mostly white males. Consequently, the algorithms learned the (biased) pattern from the historical data and generated predictions for the present/future that these types of candidates are most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turn out to be biased against female and minority candidates. Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus ��� the source material the algorithm uses to learn about the relationships between different words. Large companies such as IBM, Google, etc. have made efforts to research and address these biases. One solution for addressing bias is to create documentation for the data used to train AI systems. Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions. The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it. There are some open-sourced tools by civil societies that are looking to bring more awareness to biased AI. ""Robot rights"" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Ethical_challenges,"Ethical_challenges It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society. These could include the right to life and liberty, freedom of thought and expression, and equality before the law. The issue has been considered by the Institute for the Future and by the U. K. Department of Trade and Industry. Experts disagree on how soon specific and detailed laws on the subject will be necessary. Glenn McGee reported that sufficiently humanoid robots might appear by 2020, while Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist. The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own: 61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right. In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law. The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights. Joanna Bryson has argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society. In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may became conscious, such as Integrated information theory. Edelman notes one exception had been Thomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Ethical_challenges,"Ethical_challenges Metzinger repeated this argument in 2021, highlighting the risk of creating an ""explosion of artificial suffering"", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of artificial conscious instances. Several labs have openly stated they are trying to create conscious AIs. There have been reports from those with close access to AIs not openly intended to be self aware, that consciousness may already have unintentionally emerged. These include OpenAI founder Ilya Sutskever in February 2022, when he wrote that today's large neural nets may be ""slightly conscious"". In November 2022, David Chalmers argued that it was unlikely current large language models like GPT-3 had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future. Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as: A customer service representative (AI technology is already used today for telephone-based interactive voice response systems) A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation) A soldier A judge A police officer A therapist (as was proposed by Kenneth Colby in the 70s)Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an ""atrophy of the human spirit that comes from thinking of ourselves as computers. "" Pamela McCorduck counters that, speaking for women and minorities ""I'd rather take my chances with an impartial computer"", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines; Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. ""When moralizing is both vehement and vague, it invites authoritarian abuse,"" he writes.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Ethical_challenges,"Ethical_challenges Bill Hibbard writes that ""Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving. "" As the widespread use of autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed. Recently,[when? ] there has been debate as to the legal liability of the responsible party if these cars get into accidents. In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hand of computers. This led to a dilemma over who was at fault for the accident. In another incident on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death. Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary. [failed verification] Thus, it falls on governments to regulate the driver who over-relies on autonomous features. as well educate them that these are just technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively. Within this last decade, there has been intensive research in autonomous power with the ability to learn using assigned moral responsibilities. ""The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. "" From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a set moral framework that the AI cannot override. There has been a recent outcry with regard to the engineering of artificial intelligence weapons that have included ideas of a robot takeover of mankind.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Ethical_challenges,"Ethical_challenges AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and South Korea respectively. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a ""Future of Life"" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future. ""If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow"", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry. Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like ""dumb robots going rogue or a network that develops a mind of its own. "" Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence ""escapes the constraints of biology"". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence. Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios ""seem potentially as important as the risks related to loss of control"", but research investigating AI's long-run social impact have spent relatively little time on this concern: ""this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them"". Approaches like machine learning with neural networks can result in computers making decisions that they and the humans who programmed them cannot explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for explainable artificial intelligence. Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do. A special case of the opaqueness of AI is that caused by it being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency. This can cause people to overlook whether either human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system.",wikipedia
+Ethics of artificial intelligence,https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Ethical_challenges,"Ethical_challenges Some recent digital governance regulation, such as the EU's AI Act is set out to rectify this, by ensuring that AI systems are treated with at least as much care as one would expect under ordinary product liability. This includes potentially AI audits.",wikipedia
+LAION,https://en.wikipedia.org/wiki/LAION,"LAION LAION (acronym for Large-scale Artificial Intelligence Open Network) is a German non-profit which makes open-sourced artificial intelligence models and datasets. It is best known for releasing a number of large datasets of images and captions scraped from the web which have been used to train a number of high-profile text-to-image models, including Stable Diffusion and Imagen. In February 2023, LAION was named in the Getty Images lawsuit against Stable Diffusion as a non-party. In April 2023, LAION was directly sued by a German photographer who wanted to have his images removed from the training set. On April 15, 2023, LAION and contributors released to public an open source AI assistant chatbot OpenAssistant.",wikipedia
+LAION,https://en.wikipedia.org/wiki/LAION#Image_datasets,"Image_datasets LAION has publicly released a number of large datasets of image-caption pairs which have been widely used by AI researchers. The data is derived from the Common Crawl, a dataset of scraped web pages. The developers searched the crawled html for tags and treated their alt attributes as captions. They used CLIP to identify and discard images whose content did not appear to match their captions. LAION does not host the content of scraped images themselves; rather, the dataset contains URLs pointing to images, which researchers must download themselves. The first such dataset, LAION-400M, was released in August 2021 and consisted of 400 million image-caption pairs. The pairs were extracted from a random subset of webpages scraped by Common Crawl between 2014 and 2021. It was an attempt to recreate the process used by OpenAI to collect the 400 million image-caption pairs they used to train the CLIP model - the company had chosen to open-source the model's code and weights, but not its training dataset. Imagen, a text-to-image model announced by Google Brain in 2022, was trained on LAION-400M in combination with private internal datasets. A successor of more than 5 billion pairs, LAION-5B, was released in March 2022. As of its release, it was the largest freely available dataset of image-caption pairs in existence. Its creation was funded by Doodlebot, Hugging Face and Stability AI, the AI company behind the funding of the Stable Diffusion text-to-image model, which was trained on it.",wikipedia
+LAION,https://en.wikipedia.org/wiki/LAION#OpenAssistant,"OpenAssistant OpenAssistant is an artificial intelligence (AI) open source chat-based assistant that understands tasks, can interact with third-party systems and retrieve information dynamically to do so. The project is developed by a group of volunteers in collaboration with LAION. One of the goals for development includes free access to large language models that can be run locally on consumer hardware. The project is backed by a worldwide crowdsourcing effort involving over 13,500 volunteers who have created 600k human-generated data points.",wikipedia
+Question answering,https://en.wikipedia.org/wiki/Question_answering#Architecture,"Architecture As of 2001[update], question-answering systems typically included a question classifier module that determined the type of question and the type of answer. Different types of question-answering systems employ different architectures. For example, modern open-domain question answering systems may use a retriever-reader architecture. The retriever is aimed at retrieving relevant documents related to a given question, while the reader is used to infer the answer from the retrieved documents. Systems such as GPT-3, T5, and BART use an end-to-end[jargon] architecture in which a transformer-based[jargon] architecture stores large-scale textual data in the underlying parameters. Such models can answer questions without accessing any external knowledge sources.",wikipedia
+Question answering,https://en.wikipedia.org/wiki/Question_answering#Progress,"Progress Question answering systems have been extended in recent[may be outdated as of April 2023] years to encompass additional domains of knowledge For example, systems have been developed to automatically answer temporal and geospatial questions, questions of definition and terminology, biographical questions, multilingual questions, and questions about the content of audio, images, and video. Current question answering research topics include: interactivity���clarification of questions or answers[further explanation needed] answer reuse or caching semantic parsing answer presentation[further explanation needed] knowledge representation and semantic entailment social media analysis[further explanation needed] with question answering systems sentiment analysis utilization of thematic roles Image captioning for visual question answering Embodied question answeringIn 2011, Watson, a question answering computer system developed by IBM, competed in two exhibition matches of Jeopardy! against Brad Rutter and Ken Jennings, winning by a significant margin. Facebook Research made their DrQA system available under an open source license. This system uses Wikipedia as knowledge source. The open source framework Haystack by deepset combines open-domain question answering with generative question answering and supports the domain adaptation[clarification needed] of the underlying[clarification needed] language models for industry use cases[vague].",wikipedia
+OpenAI Codex,https://en.wikipedia.org/wiki/OpenAI_Codex,"OpenAI Codex OpenAI Codex is an artificial intelligence model developed by OpenAI. It parses natural language and generates code in response. It powers GitHub Copilot, a programming autocompletion tool for select IDEs, like Visual Studio Code and Neovim. Codex is a descendant of OpenAI's GPT-3 model, fine-tuned for use in programming applications. OpenAI released an API for Codex in closed beta. In March 2023, OpenAI shut down access to Codex. Due to public appeals from researchers OpenAI reversed course. The Codex model can still be used by researchers of the OpenAI Research Access Program.",wikipedia
+OpenAI Codex,https://en.wikipedia.org/wiki/OpenAI_Codex#Capabilities,"Capabilities Based on GPT-3, a neural network trained on text, Codex was additionally trained on 159 gigabytes of Python code from 54 million GitHub repositories. A typical use case of Codex is for a user to type a comment, such as ""//compute the moving average of an array for a given window size"", then use the AI to suggest a block of code that satisfies that comment prompt. OpenAI stated that Codex can complete approximately 37% of requests and is meant to make human programming faster rather than to replace it. According to OpenAI's blog, Codex excels most at ""mapping... simple problems to existing code"", which they describe as ""probably the least fun part of programming"". Jeremy Howard, co-founder of Fast.ai, stated that ""Codex is a way of getting code written without having to write as much code"" and that ""it is not always correct, but it is just close enough"". According to a paper written by OpenAI researchers, when Codex attempted each test case 100 times, it generated working solutions for 70.2% of prompts. OpenAI claims that Codex can create code in over a dozen programming languages, including Go, JavaScript, Perl, PHP, Ruby, Shell, Swift, and TypeScript, though it is most effective in Python. According to VentureBeat, demonstrations uploaded by OpenAI showed impressive coreference resolution capabilities. The demonstrators were able to create a browser game in JavaScript and generate data science charts using matplotlib. OpenAI showed that Codex can interface with services and apps such as Mailchimp, Microsoft Word, Spotify, and Google Calendar. Microsoft is reportedly interested in exploring[vague] Codex's capabilities.",wikipedia
+OpenAI Codex,https://en.wikipedia.org/wiki/OpenAI_Codex#Issues,"Issues OpenAI demonstrations showcased flaws such as inefficient code and one-off quirks in code samples. In an interview with The Verge, OpenAI chief technology officer Greg Brockman said that ""sometimes [Codex] doesn't quite know exactly what you're asking"" and that it can require some trial and error. OpenAI researchers found that Codex struggles with multi-step and higher-level[clarification needed] prompts, often failing or yielding counter-intuitive behavior. Additionally, they brought up several safety issues, such as over-reliance by novice programmers, biases based on the training data, and security impacts due to vulnerable code. VentureBeat stated that because Codex is trained on public data, it could be vulnerable to ""data poisoning"" via intentional uploads of malicious code. According to a study by researchers from New York University, approximately 40% of code generated by GitHub Copilot (which uses Codex) in scenarios relevant to high-risk CWEs included glitches or other exploitable design flaws. The Free Software Foundation expressed concerns that code snippets generated by Copilot and Codex could violate copyright, in particular the condition of the GPL that requires derivative works to be licensed under equivalent terms. Issues they raised include whether training on public repositories falls into fair use or not, how developers could discover infringing generated code, whether trained machine learning models could be considered modifiable source code or a compilation of the training data, and if machine learning models could themselves be copyrighted and by whom. An internal GitHub study found that approximately 0.1% of generated code contained direct copies from the training data. In one example the model outputted the training data code implementing the fast inverse square root algorithm, including comments and an incorrect copyright notice. In response, OpenAI stated that ""legal uncertainty on the copyright implications of training AI systems imposes substantial costs on AI developers and so should be authoritatively resolved."" The copyright issues with Codex have been compared to the Authors Guild, Inc. v. Google, Inc. court case, in which judges ruled that Google Books's use of text snippets from millions of scanned books constituted fair use.",wikipedia
+Cache language model,https://en.wikipedia.org/wiki/Cache_language_model,"Cache language model A cache language model is a type of statistical language model. These occur in the natural language processing subfield of computer science and assign probabilities to given sequences of words by means of a probability distribution. Statistical language models are key components of speech recognition systems and of many machine translation systems: they tell such systems which possible output word sequences are probable and which are improbable. The particular characteristic of a cache language model is that it contains a cache component and assigns relatively high probabilities to words or word sequences that occur elsewhere in a given text. The primary, but by no means sole, use of cache language models is in speech recognition systems. [citation needed] To understand why it is a good idea for a statistical language model to contain a cache component one might consider someone who is dictating a letter about elephants to a speech recognition system. Standard (non-cache) N-gram language models will assign a very low probability to the word ""elephant"" because it is a very rare word in English. If the speech recognition system does not contain a cache component, the person dictating the letter may be annoyed: each time the word ""elephant"" is spoken another sequence of words with a higher probability according to the N-gram language model may be recognized (e. g. , ""tell a plan""). These erroneous sequences will have to be deleted manually and replaced in the text by ""elephant"" each time ""elephant"" is spoken. If the system has a cache language model, ""elephant"" will still probably be misrecognized the first time it is spoken and will have to be entered into the text manually; however, from this point on the system is aware that ""elephant"" is likely to occur again ��� the estimated probability of occurrence of ""elephant"" has been increased, making it more likely that if it is spoken it will be recognized correctly. Once ""elephant"" has occurred several times, the system is likely to recognize it correctly every time it is spoken until the letter has been completely dictated. This increase in the probability assigned to the occurrence of ""elephant"" is an example of a consequence of machine learning and more specifically of pattern recognition. There exist variants of the cache language model in which not only single words but also multi-word sequences that have occurred previously are assigned higher probabilities (e. g. , if ""San Francisco"" occurred near the beginning of the text subsequent instances of it would be assigned a higher probability). The cache language model was first proposed in a paper published in 1990, after which the IBM speech-recognition group experimented with the concept. The group found that implementation of a form of cache language model yielded a 24% drop in word-error rates once the first few hundred words of a document had been dictated.",wikipedia
+Cache language model,https://en.wikipedia.org/wiki/Cache_language_model,"Cache language model A detailed survey of language modeling techniques concluded that the cache language model was one of the few new language modeling techniques that yielded improvements over the standard N-gram approach: ""Our caching results show that caching is by far the most useful technique for perplexity reduction at small and medium training data sizes"". The development of the cache language model has generated considerable interest among those concerned with computational linguistics in general and statistical natural language processing in particular: recently, there has been interest in applying the cache language model in the field of statistical machine translation. The success of the cache language model in improving word prediction rests on the human tendency to use words in a ""bursty"" fashion: when one is discussing a certain topic in a certain context, the frequency with which one uses certain words will be quite different from their frequencies when one is discussing other topics in other contexts. The traditional N-gram language models, which rely entirely on information from a very small number (four, three, or two) of words preceding the word to which a probability is to be assigned, do not adequately model this ""burstiness"". [citation needed] Recently, the cache language model concept - originally conceived for the N-gram statistical language model paradigm - has been adapted for use in the neural paradigm. For instance, recent work on continuous cache language models in the recurrent neural network (RNN) setting has applied the cache concept to much larger contexts than before, yielding significant reductions in perplexity . Another recent line of research involves incorporating a cache component in a feed-forward neural language model (FN-LM) to achieve rapid domain adaptation .",wikipedia
+Convolutional neural network,https://en.wikipedia.org/wiki/Convolutional_neural_network#Translation_equivariance_and_aliasing,"Translation_equivariance_and_aliasing It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeed equivariant to translations of the input. However, layers with a stride greater than one ignore the Nyquist-Shannon sampling theorem and might lead to aliasing of the input signal While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice and yield models that are not equivariant to translations. Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input. One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer. Additionally, several other partial solutions have been proposed, such as anti-aliasing before downsampling operations, spatial transformer networks, data augmentation, subsampling combined with pooling, and capsule neural networks.",wikipedia
+Turing test,https://en.wikipedia.org/wiki/Turing_test#Variations,"Variations Numerous other versions of the Turing test, including those expounded above, have been raised through the years. A modification of the Turing test wherein the objective of one or more of the roles have been reversed between machines and humans is termed a reverse Turing test. An example is implied in the work of psychoanalyst Wilfred Bion, who was particularly fascinated by the ""storm"" that resulted from the encounter of one mind by another. In his 2000 book, among several other original points with regard to the Turing test, literary scholar Peter Swirski discussed in detail the idea of what he termed the Swirski test���essentially the reverse Turing test. He pointed out that it overcomes most if not all standard objections levelled at the standard version. Carrying this idea forward, R. D. Hinshelwood described the mind as a ""mind recognizing apparatus"". The challenge would be for the computer to be able to determine if it were interacting with a human or another computer. This is an extension of the original question that Turing attempted to answer but would, perhaps, offer a high enough standard to define a machine that could ""think"" in a way that we typically define as characteristically human. CAPTCHA is a form of reverse Turing test. Before being allowed to perform some action on a website, the user is presented with alphanumerical characters in a distorted graphic image and asked to type them out. This is intended to prevent automated systems from being used to abuse the site. The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human. Software that could reverse CAPTCHA with some accuracy by analysing patterns in the generating engine started being developed soon after the creation of CAPTCHA. In 2013, researchers at Vicarious announced that they had developed a system to solve CAPTCHA challenges from Google, Yahoo! , and PayPal up to 90% of the time. In 2014, Google engineers demonstrated a system that could defeat CAPTCHA challenges with 99.8% accuracy. In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud. A further variation is motivated by the concern that modern Natural Language Processing prove to be highly successful in generating text on the basis of a huge text corpus and could eventually pass the Turing test simply by manipulating words and sentences that have been used in the initial training of the model. Since the interrogator has no precise understanding of the training data, the model might simply be returning sentences that exist in similar fashion in the enormous amount of training data. For this reason, Arthur Schwaninger proposes a variation of the Turing test that can distinguish between systems that are only capable of using language and systems that understand language.",wikipedia
+Turing test,https://en.wikipedia.org/wiki/Turing_test#Variations,"Variations He proposes a test in which the machine is confronted with philosophical questions that do not depend on any prior knowledge and yet require self-reflection to be answered appropriately. Another variation is described as the subject-matter expert Turing test, where a machine's response cannot be distinguished from an expert in a given field. This is also known as a ""Feigenbaum test"" and was proposed by Edward Feigenbaum in a 2003 paper. Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i. e. , unconscious) processes of human cognition, as studied by cognitive science. Such questions reveal the precise details of the human embodiment of thought and can unmask a computer unless it experiences the world as humans do. The ""Total Turing test"" variation of the Turing test, proposed by cognitive scientist Stevan Harnad, adds two further requirements to the traditional Turing test. The interrogator can also test the perceptual abilities of the subject (requiring computer vision) and the subject's ability to manipulate objects (requiring robotics). A letter published in Communications of the ACM describes the concept of generating a synthetic patient population and proposes a variation of Turing test to assess the difference between synthetic and real patients. The letter states: ""In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own? "" and further the letter states: ""Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value. Any new techniques must thus consider patients' heterogeneity and are likely to have greater complexity than the Allen eighth-grade-science-test is able to grade. "" The minimum intelligent signal test was proposed by Chris McKinstry as ""the maximum abstraction of the Turing test"", in which only binary responses (true/false or yes/no) are permitted, to focus only on the capacity for thought. It eliminates text chat problems like anthropomorphism bias, and does not require emulation of unintelligent human behaviour, allowing for systems that exceed human intelligence. The questions must each stand on their own, however, making it more like an IQ test than an interrogation. It is typically used to gather statistical data against which the performance of artificial intelligence programs may be measured. The organisers of the Hutter Prize believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test. The data compression test has some advantages over most versions and variations of a Turing test, including: It gives a single number that can be directly used to compare which of two machines is ""more intelligent. "" It does not require the computer to lie to the judgeThe main disadvantages of using data compression as a test are: It is not possible to test humans this way.",wikipedia
+Turing test,https://en.wikipedia.org/wiki/Turing_test#Variations,"Variations It is unknown what particular ""score"" on this test���if any���is equivalent to passing a human-level Turing test. A related approach to Hutter's prize which appeared much earlier in the late 1990s is the inclusion of compression problems in an extended Turing test. or by tests which are completely derived from Kolmogorov complexity. Other related tests in this line are presented by Hernandez-Orallo and Dowe. Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure from Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence. Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers. The Turing test inspired the Ebert test proposed in 2011 by film critic Roger Ebert which is a test whether a computer-based synthesised voice has sufficient skill in terms of intonations, inflections, timing and so forth, to make people laugh. Taking advantage of Large Language Models, in 2023 the research company AI21 Labs created an online social experiment titled ""Human or Not? "". It was played more than 10 million times by more than 2 million people. It is the biggest Turing-style experiment to that date. The results showed that 32% of people couldn't distinguish between humans and machines.",wikipedia
+Turing test,https://en.wikipedia.org/wiki/Turing_test#See_also,See_also Natural language processing Artificial intelligence in fiction Blindsight Causality Chatbot ChatGPT Computer game bot Turing Test Dead Internet theory Explanation Explanatory gap Functionalism Graphics Turing Test Ex Machina (film) Hard problem of consciousness List of things named after Alan Turing Mark V. Shaney (Usenet bot) Mind-body problem Mirror neuron Philosophical zombie Problem of other minds Reverse engineering Sentience Simulated reality Social bot Technological singularity Theory of mind Uncanny valley Voight-Kampff machine (fictitious Turing test from Blade Runner) Winograd Schema Challenge SHRDLU,wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#Machine_learning_tools,Machine_learning_tools Comparison of deep learning softwareAmazon Machine Learning Microsoft Azure Machine Learning Studio DistBelief ��� replaced by TensorFlowApache Singa Apache MXNet Caffe PyTorch mlpack TensorFlow Torch CNTK Accord.Net Jax MLJ.jl ��� A machine learning framework for JuliaDeeplearning4j Theano scikit-learn KerasAlmeida���Pineda recurrent backpropagation ALOPEX Backpropagation Bootstrap aggregating CN2 algorithm Constructing skill trees Dehaene���Changeux model Diffusion map Dominance-based rough set approach Dynamic time warping Error-driven learning Evolutionary multimodal optimization Expectation���maximization algorithm FastICA Forward���backward algorithm GeneRec Genetic Algorithm for Rule Set Production Growing self-organizing map Hyper basis function network IDistance K-nearest neighbors algorithm Kernel methods for vector output Kernel principal component analysis Leabra Linde���Buzo���Gray algorithm Local outlier factor Logic learning machine LogitBoost Manifold alignment Markov chain Monte Carlo (MCMC) Minimum redundancy feature selection Mixture of experts Multiple kernel learning Non-negative matrix factorization Online machine learning Out-of-bag error Prefrontal cortex basal ganglia working memory PVLV Q-learning Quadratic unconstrained binary optimization Query-level feature Quickprop Radial basis function network Randomized weighted majority algorithm Reinforcement learning Repeated incremental pruning to produce error reduction (RIPPER) Rprop Rule-based machine learning Skill chaining Sparse PCA State���action���reward���state���action Stochastic gradient descent Structured kNN T-distributed stochastic neighbor embedding Temporal difference learning Wake-sleep algorithm Weighted majority algorithm (machine learning),wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#Machine_learning_methods,"Machine_learning_methods K-nearest neighbors algorithm (KNN) Learning vector quantization (LVQ) Self-organizing map (SOM)Logistic regression Ordinary least squares regression (OLSR) Linear regression Stepwise regression Multivariate adaptive regression splines (MARS)Regularization algorithm Ridge regression Least Absolute Shrinkage and Selection Operator (LASSO) Elastic net Least-angle regression (LARS) Classifiers Probabilistic classifier Naive Bayes classifier Binary classifier Linear classifier Hierarchical classifierRidge regression Least Absolute Shrinkage and Selection Operator (LASSO) Elastic net Least-angle regression (LARS)Probabilistic classifier Naive Bayes classifier Binary classifier Linear classifier Hierarchical classifierNaive Bayes classifierDimensionality reduction Canonical correlation analysis (CCA) Factor analysis Feature extraction Feature selection Independent component analysis (ICA) Linear discriminant analysis (LDA) Multidimensional scaling (MDS) Non-negative matrix factorization (NMF) Partial least squares regression (PLSR) Principal component analysis (PCA) Principal component regression (PCR) Projection pursuit Sammon mapping t-distributed stochastic neighbor embedding (t-SNE)Ensemble learning AdaBoost Boosting Bootstrap aggregating (Bagging) Ensemble averaging ��� process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model. Frequently an ensemble of models performs better than any individual model, because the various errors of the models ""average out. "" Gradient boosted decision tree (GBDT) Gradient boosting machine (GBM) Random Forest Stacked Generalization (blending)Meta-learning Inductive bias MetadataReinforcement learning Q-learning State���action���reward���state���action (SARSA) Temporal difference learning (TD) Learning AutomataSupervised learning Averaged one-dependence estimators (AODE) Artificial neural network Case-based reasoning Gaussian process regression Gene expression programming Group method of data handling (GMDH) Inductive logic programming Instance-based learning Lazy learning Learning Automata Learning Vector Quantization Logistic Model Tree Minimum message length (decision trees, decision graphs, etc.",wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#Machine_learning_methods,"Machine_learning_methods ) Nearest Neighbor Algorithm Analogical modeling Probably approximately correct learning (PAC) learning Ripple down rules, a knowledge acquisition methodology Symbolic machine learning algorithms Support vector machines Random Forests Ensembles of classifiers Bootstrap aggregating (bagging) Boosting (meta-algorithm) Ordinal classification Conditional Random Field ANOVA Quadratic classifiers k-nearest neighbor Boosting SPRINT Bayesian networks Naive Bayes Hidden Markov models Hierarchical hidden Markov modelNearest Neighbor Algorithm Analogical modelingBootstrap aggregating (bagging) Boosting (meta-algorithm)SPRINTNaive BayesHierarchical hidden Markov modelBayesian statistics Bayesian knowledge base Naive Bayes Gaussian Naive Bayes Multinomial Naive Bayes Averaged One-Dependence Estimators (AODE) Bayesian Belief Network (BBN) Bayesian Network (BN)Decision tree algorithm Decision tree Classification and regression tree (CART) Iterative Dichotomiser 3 (ID3) C4.5 algorithm C5.0 algorithm Chi-squared Automatic Interaction Detection (CHAID) Decision stump Conditional decision tree ID3 algorithm Random forest SLIQLinear classifier Fisher's linear discriminant Linear regression Logistic regression Multinomial logistic regression Naive Bayes classifier Perceptron Support vector machineUnsupervised learning Expectation-maximization algorithm Vector Quantization Generative topographic map Information bottleneck method Association rule learning algorithms Apriori algorithm Eclat algorithmApriori algorithm Eclat algorithmArtificial neural network Feedforward neural network Extreme learning machine Convolutional neural network Recurrent neural network Long short-term memory (LSTM) Logic learning machine Self-organizing mapExtreme learning machine Convolutional neural networkLong short-term memory (LSTM)Association rule learning Apriori algorithm Eclat algorithm FP-growth algorithmHierarchical clustering Single-linkage clustering Conceptual clusteringCluster analysis BIRCH DBSCAN Expectation-maximization (EM) Fuzzy clustering Hierarchical Clustering K-means clustering K-medians Mean-shift OPTICS algorithmAnomaly detection k-nearest neighbors algorithm (k-NN) Local outlier factorSemi-supervised learning Active learning ��� special case of semi-supervised learning in which a learning algorithm is able to interactively query the user (or some other information source) to obtain the desired outputs at new data points.",wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#Machine_learning_projects,Machine_learning_projects Machine learning projects DeepMind Google Brain OpenAI Meta AI,wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#See_also,See_also ,wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#See_also,See_also Outline of artificial intelligence Outline of computer vision Outline of robotics Accuracy paradox Action model learning Activation function Activity recognition ADALINE Adaptive neuro fuzzy inference system Adaptive resonance theory Additive smoothing Adjusted mutual information AIVA AIXI AlchemyAPI AlexNet Algorithm selection Algorithmic inference Algorithmic learning theory AlphaGo AlphaGo Zero Alternating decision tree Apprenticeship learning Causal Markov condition Competitive learning Concept learning Decision tree learning Differentiable programming Distribution learning theory Eager learning End-to-end reinforcement learning Error tolerance (PAC learning) Explanation-based learning Feature GloVe Hyperparameter Inferential theory of learning Learning automata Learning classifier system Learning rule Learning with errors M-Theory (learning framework) Machine learning control Machine learning in bioinformatics Margin Markov chain geostatistics Markov chain Monte Carlo (MCMC) Markov information source Markov logic network Markov model Markov random field Markovian discrimination Maximum-entropy Markov model Multi-armed bandit Multi-task learning Multilinear subspace learning Multimodal learning Multiple instance learning Multiple-instance learning Never-Ending Language Learning Offline learning Parity learning Population-based incremental learning Predictive learning Preference learning Proactive learning Proximal gradient methods for learning Semantic analysis Similarity learning Sparse dictionary learning Stability (learning theory) Statistical learning theory Statistical relational learning Tanagra Transfer learning Variable-order Markov model Version space learning Waffles Weka Loss function Loss functions for classification Mean squared error (MSE) Mean squared prediction error (MSPE) Taguchi loss function Low-energy adaptive clustering hierarchyOutline of computer visionLoss functions for classification Mean squared error (MSE) Mean squared prediction error (MSPE) Taguchi loss functionAnne O'Tate Ant colony optimization algorithms Anthony Levandowski Anti-unification (computer science) Apache Flume Apache Giraph Apache Mahout Apache SINGA Apache Spark Apache SystemML Aphelion (software) Arabic Speech Corpus Archetypal analysis Arthur Zimek Artificial ants Artificial bee colony algorithm Artificial development Artificial immune system Astrostatistics Averaged one-dependence estimators Bag-of-words model Balanced clustering Ball tree Base rate Bat algorithm Baum���Welch algorithm Bayesian hierarchical modeling Bayesian interpretation of kernel regularization Bayesian optimization Bayesian structural time series Bees algorithm Behavioral clustering Bernoulli scheme Bias���variance tradeoff Biclustering BigML Binary classification Bing Predicts Bio-inspired computing Biogeography-based optimization Biplot Bondy's theorem Bongard problem Bradley���Terry model BrownBoost Brown clustering Burst error CBCL (MIT) CIML community portal CMA-ES CURE data clustering algorithm Cache language model Calibration (statistics) Canonical correspondence analysis Canopy clustering algorithm Cascading classifiers Category utility CellCognition Cellular evolutionary algorithm Chi-square automatic interaction detection Chromosome (genetic algorithm) Classifier chains Cleverbot Clonal selection algorithm Cluster-weighted modeling Clustering high-dimensional data Clustering illusion CoBoosting Cobweb (clustering) Cognitive computer Cognitive robotics Collostructional analysis Common-method variance Complete-linkage clustering Computer-automated design Concept class Concept drift Conference on Artificial General Intelligence Conference on Knowledge Discovery and Data Mining Confirmatory factor analysis Confusion matrix Congruence coefficient Connect (computer system) Consensus clustering Constrained clustering Constrained conditional model Constructive cooperative coevolution Correlation clustering Correspondence analysis Cortica Coupled pattern learner Cross-entropy method Cross-validation (statistics) Crossover (genetic algorithm) Cuckoo search Cultural algorithm Cultural consensus theory Curse of dimensionality DADiSP DARPA LAGR Program Darkforest Dartmouth workshop DarwinTunes Data Mining Extensions Data exploration Data pre-processing Data stream clustering Dataiku Davies���Bouldin index Decision boundary Decision list Decision tree model Deductive classifier DeepArt DeepDream Deep Web Technologies Defining length Dendrogram Dependability state model Detailed balance Determining the number of clusters in a data set Detrended correspondence analysis Developmental robotics Diffbot Differential evolution Discrete phase-type distribution Discriminative model Dissociated press Distributed R Dlib Document classification Documenting Hate Domain adaptation Doubly stochastic model Dual-phase evolution Dunn index Dynamic Bayesian network Dynamic Markov compression Dynamic topic model Dynamic unobserved effects model EDLUT ELKI Edge recombination operator Effective fitness Elastic map Elastic matching Elbow method (clustering) Emergent (software) Encog Entropy rate Erkki Oja Eurisko European Conference on Artificial Intelligence Evaluation of binary classifiers Evolution strategy Evolution window Evolutionary Algorithm for Landmark Detection Evolutionary algorithm Evolutionary art Evolutionary music Evolutionary programming Evolvability (computer science) Evolved antenna Evolver (software) Evolving classification function Expectation propagation Exploratory factor analysis F1 score FLAME clustering Factor analysis of mixed data Factor graph Factor regression model Factored language model Farthest-first traversal Fast-and-frugal trees Feature Selection Toolbox Feature hashing Feature scaling Feature vector Firefly algorithm First-difference estimator First-order inductive learner Fish School Search Fisher kernel Fitness approximation Fitness function Fitness proportionate selection Fluentd Folding@home Formal concept analysis Forward algorithm Fowlkes���Mallows index Frederick Jelinek Frrole Functional principal component analysis GATTO GLIMMER Gary Bryce Fogel Gaussian adaptation Gaussian process Gaussian process emulator Gene prediction General Architecture for Text Engineering Generalization error Generalized canonical correlation Generalized filtering Generalized iterative scaling Generalized multidimensional scaling Generative adversarial network Generative model Genetic algorithm Genetic algorithm scheduling Genetic algorithms in economics Genetic fuzzy systems Genetic memory (computer science) Genetic operator Genetic programming Genetic representation Geographical cluster Gesture Description Language Geworkbench Glossary of artificial intelligence Glottochronology Golem (ILP) Google matrix Grafting (decision trees) Gramian matrix Grammatical evolution Granular computing GraphLab Graph kernel Gremlin (programming language) Growth function HUMANT (HUManoid ANT) algorithm Hammersley���Clifford theorem Harmony search Hebbian theory Hidden Markov random field Hidden semi-Markov model Hierarchical hidden Markov model Higher-order factor analysis Highway network Hinge loss Holland's schema theorem Hopkins statistic Hoshen���Kopelman algorithm Huber loss IRCF360 Ian Goodfellow Ilastik Ilya Sutskever Immunocomputing Imperialist competitive algorithm Inauthentic text Incremental decision tree Induction of regular languages Inductive bias Inductive probability Inductive programming Influence diagram Information Harvesting Information gain in decision trees Information gain ratio Inheritance (genetic algorithm) Instance selection Intel RealSense Interacting particle system Interactive machine translation International Joint Conference on Artificial Intelligence International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics International Semantic Web Conference Iris flower data set Island algorithm Isotropic position Item response theory Iterative Viterbi decoding JOONE Jabberwacky Jaccard index Jackknife variance estimates for random forest Java Grammatical Evolution Joseph Nechvatal Jubatus Julia (programming language) Junction tree algorithm K-SVD K-means++ K-medians clustering K-medoids KNIME KXEN Inc.,wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#See_also,"See_also K q-flats Kaggle Kalman filter Katz's back-off model Kernel adaptive filter Kernel density estimation Kernel eigenvoice Kernel embedding of distributions Kernel method Kernel perceptron Kernel random forest Kinect Klaus-Robert M��ller Kneser���Ney smoothing Knowledge Vault Knowledge integration LIBSVM LPBoost Labeled data LanguageWare Language identification in the limit Language model Large margin nearest neighbor Latent Dirichlet allocation Latent class model Latent semantic analysis Latent variable Latent variable model Lattice Miner Layered hidden Markov model Learnable function class Least squares support vector machine Leave-one-out error Leslie P. Kaelbling Linear genetic programming Linear predictor function Linear separability Lingyun Gu Linkurious Lior Ron (business executive) List of genetic algorithm applications List of metaphor-based metaheuristics List of text mining software Local case-control sampling Local independence Local tangent space alignment Locality-sensitive hashing Log-linear model Logistic model tree Low-rank approximation Low-rank matrix approximations MATLAB MIMIC (immunology) MXNet Mallet (software project) Manifold regularization Margin-infused relaxed algorithm Margin classifier Mark V. Shaney Massive Online Analysis Matrix regularization Matthews correlation coefficient Mean shift Mean squared error Mean squared prediction error Measurement invariance Medoid MeeMix Melomics Memetic algorithm Meta-optimization Mexican International Conference on Artificial Intelligence Michael Kearns (computer scientist) MinHash Mixture model Mlpy Models of DNA evolution Moral graph Mountain car problem Movidius Multi-armed bandit Multi-label classification Multi expression programming Multiclass classification Multidimensional analysis Multifactor dimensionality reduction Multilinear principal component analysis Multiple correspondence analysis Multiple discriminant analysis Multiple factor analysis Multiple sequence alignment Multiplicative weight update method Multispectral pattern recognition Mutation (genetic algorithm) MysteryVibe N-gram NOMINATE (scaling method) Native-language identification Natural Language Toolkit Natural evolution strategy Nearest-neighbor chain algorithm Nearest centroid classifier Nearest neighbor search Neighbor joining Nest Labs NetMiner NetOwl Neural Designer Neural Engineering Object Neural modeling fields Neural network software NeuroSolutions Neuroevolution Neuroph Niki. ai Noisy channel model Noisy text analytics Nonlinear dimensionality reduction Novelty detection Nuisance variable One-class classification Onnx OpenNLP Optimal discriminant analysis Oracle Data Mining Orange (software) Ordination (statistics) Overfitting PROGOL PSIPRED Pachinko allocation PageRank Parallel metaheuristic Parity benchmark Part-of-speech tagging Particle swarm optimization Path dependence Pattern language (formal languages) Peltarion Synapse Perplexity Persian Speech Corpus Picas (app) Pietro Perona Pipeline Pilot Piranha (software) Pitman���Yor process Plate notation Polynomial kernel Pop music automation Population process Portable Format for Analytics Predictive Model Markup Language Predictive state representation Preference regression Premature convergence Principal geodesic analysis Prior knowledge for pattern recognition Prisma (app) Probabilistic Action Cores Probabilistic context-free grammar Probabilistic latent semantic analysis Probabilistic soft logic Probability matching Probit model Product of experts Programming with Big Data in R Proper generalized decomposition Pruning (decision trees) Pushpak Bhattacharyya Q methodology Qloo Quality control and genetic algorithms Quantum Artificial Intelligence Lab Queueing theory Quick, Draw!",wikipedia
+Outline of machine learning,https://en.wikipedia.org/wiki/Outline_of_machine_learning#See_also,See_also R (programming language) Rada Mihalcea Rademacher complexity Radial basis function kernel Rand index Random indexing Random projection Random subspace method Ranking SVM RapidMiner Rattle GUI Raymond Cattell Reasoning system Regularization perspectives on support vector machines Relational data mining Relationship square Relevance vector machine Relief (feature selection) Renjin Repertory grid Representer theorem Reward-based selection Richard Zemel Right to explanation RoboEarth Robust principal component analysis RuleML Symposium Rule induction Rules extraction system family SAS (software) SNNS SPSS Modeler SUBCLU Sample complexity Sample exclusion dimension Santa Fe Trail problem Savi Technology Schema (genetic algorithms) Search-based software engineering Selection (genetic algorithm) Self-Service Semantic Suite Semantic folding Semantic mapping (statistics) Semidefinite embedding Sense Networks Sensorium Project Sequence labeling Sequential minimal optimization Shattered set Shogun (toolbox) Silhouette (clustering) SimHash SimRank Similarity measure Simple matching coefficient Simultaneous localization and mapping Sinkov statistic Sliced inverse regression Snakes and Ladders Soft independent modelling of class analogies Soft output Viterbi algorithm Solomonoff's theory of inductive inference SolveIT Software Spectral clustering Spike-and-slab variable selection Statistical machine translation Statistical parsing Statistical semantics Stefano Soatto Stephen Wolfram Stochastic block model Stochastic cellular automaton Stochastic diffusion search Stochastic grammar Stochastic matrix Stochastic universal sampling Stress majorization String kernel Structural equation modeling Structural risk minimization Structured sparsity regularization Structured support vector machine Subclass reachability Sufficient dimension reduction Sukhotin's algorithm Sum of absolute differences Sum of absolute transformed differences Swarm intelligence Switching Kalman filter Symbolic regression Synchronous context-free grammar Syntactic pattern recognition TD-Gammon TIMIT Teaching dimension Teuvo Kohonen Textual case-based reasoning Theory of conjoint measurement Thomas G.,wikipedia
+Curse of dimensionality,https://en.wikipedia.org/wiki/Curse_of_dimensionality,"Curse of dimensionality The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. In order to obtain a reliable result, the amount of data needed often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.",wikipedia
+Curse of dimensionality,https://en.wikipedia.org/wiki/Curse_of_dimensionality#See_also,See_also Bellman equation Clustering high-dimensional data Concentration of measure Dimension reduction Dynamic programming Fourier-related transforms Grand Tour Linear least squares Model order reduction Multilinear PCA Multilinear subspace learning Principal component analysis Singular value decomposition,wikipedia
+Data Preprocessing,https://en.wikipedia.org/wiki/Data_Preprocessing,"Data Preprocessing Data preprocessing can refer to manipulation or dropping of data before it is used in order to ensure or enhance performance, and is an important step in the data mining process. The phrase ""garbage in, garbage out"" is particularly applicable to data mining and machine learning projects. Data collection methods are often loosely controlled, resulting in out-of-range values, impossible data combinations, and missing values, amongst other issues. Analyzing data that has not been carefully screened for such problems can produce misleading results. Thus, representation and quality of data is necessary before running any analysis. Often, data preprocessing is the most important phase of a machine learning project, especially in computational biology. If there is a high proportion of irrelevant and redundant information present or noisy and unreliable data, then knowledge discovery during the training phase may be more difficult. Data preparation and filtering steps can take a considerable amount of processing time. Examples of methods used in data preprocessing include cleaning, instance selection, normalization, one-hot encoding, data transformation, feature extraction and feature selection.",wikipedia
+Energy-based model,https://en.wikipedia.org/wiki/Energy-based_model#Approach,"Approach EBMs capture dependencies by associating an unnormalized probability scalar (energy) to each configuration of the combination of observed and latent variables. Inference consists of finding (values of) latent variables that minimize the energy given a set of (values of) the observed variables. Similarly, the model learns a function that associates low energies to correct values of the latent variables, and higher energies to incorrect values. Traditional EBMs rely on stochastic gradient-descent (SGD) optimization methods that are typically hard to apply to high-dimension datasets. In 2019, OpenAI publicized a variant that instead used Langevin dynamics (LD). LD is an iterative optimization algorithm that introduces noise to the estimator as part of learning an objective function. It can be used for Bayesian learning scenarios by producing samples from a posterior distribution. EBMs do not require that energies be normalized as probabilities. In other words, energies do not need to sum to 1. Since there is no need to estimate the normalization constant like probabilistic models do, certain forms of inference and learning with EBMs are more tractable and flexible. Samples are generated implicitly via a Markov chain Monte Carlo approach. A replay buffer of past images is used with LD to initialize the optimization module.",wikipedia
+Equalized odds,https://en.wikipedia.org/wiki/Equalized_odds,"Equalized odds Equalized odds, also referred to as conditional procedure accuracy equality and disparate mistreatment, is a measure of fairness in machine learning. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal true positive rate and equal false positive rate, satisfying the formula: For example, A {\displaystyle A} could be gender, race, or any other characteristics that we want to be free of bias, while Y {\displaystyle Y} would be whether the person is qualified for the degree, and the output R {\displaystyle R} would be the school's decision whether to offer the person to study for the degree. In this context, higher university enrollment rates of African Americans compared to whites with similar test scores might be necessary to fulfill the condition of equalized odds, if the ""base rate"" of Y {\displaystyle Y} differs between the groups. The concept was originally defined for binary-valued Y {\displaystyle Y} . In 2017, Woodworth et al. generalized the concept further for multiple classes.",wikipedia
+Latent space,https://en.wikipedia.org/wiki/Latent_space#Multimodality,"Multimodality Multimodality refers to the integration and analysis of multiple modes or types of data within a single model or framework. Embedding multimodal data involves capturing relationships and interactions between different data types, such as images, text, audio, and structured data. Multimodal embedding models aim to learn joint representations that fuse information from multiple modalities, allowing for cross-modal analysis and tasks. These models enable applications like image captioning, visual question answering, and multimodal sentiment analysis. To embed multimodal data, specialized architectures such as deep multimodal networks or multimodal transformers are employed. These architectures combine different types of neural network modules to process and integrate information from various modalities. The resulting embeddings capture the complex relationships between different data types, facilitating multimodal analysis and understanding.",wikipedia
+Knowledge graph,https://en.wikipedia.org/wiki/Knowledge_graph#Definitions,"Definitions There is no single commonly accepted definition of a knowledge graph. Most definitions view the topic through a Semantic Web lens and include these features: Flexible relations among knowledge in topical domains: A knowledge graph (i) defines abstract classes and relations of entities in a schema, (ii) mainly describes real world entities and their interrelations, organized in a graph, (iii) allows for potentially interrelating arbitrary entities with each other, and (iv) covers various topical domains. General structure: A network of entities, their semantic types, properties, and relationships. To represent properties, categorical or numerical values are often used. Supporting reasoning over inferred ontologies: A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge.There are, however, many knowledge graph representations for which some of these features are not relevant. For those knowledge graphs this simpler definition may be more useful: A digital structure that represents knowledge as concepts and the relationships between them (facts). A knowledge graph can include an ontology that allows both humans and machines to understand and reason about its contents.[better source needed]In addition to the above examples, the term has been used to describe open knowledge projects such as YAGO and Wikidata; federations like the Linked Open Data cloud; a range of commercial search tools, including Yahoo's semantic search assistant Spark, Google's Knowledge Graph, and Microsoft's Satori; and the LinkedIn and Facebook entity graphs. The term is also used in the context of note-taking software applications that allow a user to build a personal knowledge graph.",wikipedia
+Yann LeCun,https://en.wikipedia.org/wiki/Yann_LeCun#Career,"Career In 1988, he joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, United States, headed by Lawrence D. Jackel, where he developed a number of new machine learning methods, such as a biologically inspired model of image recognition called convolutional neural networks, the ""Optimal Brain Damage"" regularisation methods, and the Graph Transformer Networks method (similar to conditional random field), which he applied to handwriting recognition and OCR. The bank check recognition system that he helped develop was widely deployed by NCR and other companies, reading over 10% of all the checks in the US in the late 1990s and early 2000s.[citation needed] In 1996, he joined AT&T Labs-Research as head of the Image Processing Research Department, which was part of Lawrence Rabiner's Speech and Image Processing Research Lab, and worked primarily on the DjVu image compression technology, used by many websites, notably the Internet Archive, to distribute scanned documents.[citation needed] His collaborators at AT&T include L��on Bottou and Vladimir Vapnik. After a brief tenure as a Fellow of the NEC Research Institute (now NEC-Labs America) in Princeton, NJ, he joined New York University (NYU) in 2003, where he is Silver Professor of Computer Science Neural Science at the Courant Institute of Mathematical Sciences and the Center for Neural Science. He is also a professor at the Tandon School of Engineering. At NYU, he has worked primarily on Energy-Based Models for supervised and unsupervised learning, feature learning for object recognition in Computer Vision, and mobile robotics. In 2012, he became the founding director of the NYU Center for Data Science. On 9 December 2013, LeCun became the first director of Meta AI Research in New York City,[non-primary source needed] and stepped down from the NYU-CDS directorship in early 2014. In 2013, he and Yoshua Bengio co-founded the International Conference on Learning Representations, which adopted a post-publication open review process he previously advocated on his website. He was the chair and organiser of the ""Learning Workshop"" held every year between 1986 and 2012 in Snowbird, Utah. He is a member of the Science Advisory Board of the Institute for Pure and Applied Mathematics at UCLA. He is the Co-Director of the Learning in Machines and Brain research program (formerly Neural Computation & Adaptive Perception) of CIFAR. In 2016, he was the visiting professor of computer science on the ""Chaire Annuelle Informatique et Sciences Num��riques"" at Coll��ge de France in Paris, where he presented the ""le��on inaugurale"" (inaugural lecture).",wikipedia
+Superintelligence,https://en.wikipedia.org/wiki/Superintelligence#Forecasts,"Forecasts Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able ""to simulate learning and every other aspect of human intelligence"" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone. In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines ""that can carry out most human professions at least as well as a typical human"" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence. In a 2022 survey, the median year by which respondents expected ""High-level machine intelligence"" with 50% confidence is 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers. In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.",wikipedia
+Superintelligence,https://en.wikipedia.org/wiki/Superintelligence#Design_considerations,"Design_considerations Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals: The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge. The moral rightness (MR) proposal is that it should value moral rightness. The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).Bostrom clarifies these terms: instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal ""moral rightness"" (MR) ... MR would also appear to have some disadvantages. It relies on the notion of ""morally right,"" a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of ""moral rightness"" could result in outcomes that would be morally very wrong ... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by ""morally right."" If the AI could grasp the meaning, it could search for actions that fit ...One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in ways that are morally impermissible.",wikipedia
+Factored language model,https://en.wikipedia.org/wiki/Factored_language_model,"Factored language model The factored language model (FLM) is an extension of a conventional language model introduced by Jeff Bilmes and Katrin Kirchoff in 2003. In an FLM, each word is viewed as a vector of k factors: w i = { f i 1 , . . . , f i k } . {\displaystyle w_{i}=\{f_{i}^{1},...,f_{i}^{k}\}.} An FLM provides the probabilistic model P ( f | f 1 , . . . , f N ) {\displaystyle P(f|f_{1},...,f_{N})} where the prediction of a factor f {\displaystyle f} is based on N {\displaystyle N} parents { f 1 , . . . , f N } {\displaystyle \{f_{1},...,f_{N}\}} . For example, if w {\displaystyle w} represents a word token and t {\displaystyle t} represents a Part of speech tag for English, the expression P ( w i | w i ��� 2 , w i ��� 1 , t i ��� 1 ) {\displaystyle P(w_{i}|w_{i-2},w_{i-1},t_{i-1})} gives a model for predicting current word token based on a traditional Ngram model as well as the Part of speech tag of the previous word. A major advantage of factored language models is that they allow users to specify linguistic knowledge such as the relationship between word tokens and Part of speech in English, or morphological information (stems, root, etc.) in Arabic. Like N-gram models, smoothing techniques are necessary in parameter estimation. In particular, generalized back-off is used in training an FLM.",wikipedia
+Gato (DeepMind),https://en.wikipedia.org/wiki/Gato_(DeepMind),"Gato (DeepMind) Gato is a deep neural network for a range of complex tasks that exhibits multimodality. It can perform tasks such as engaging in a dialogue, playing video games, controlling a robot arm to stack blocks, and more. It was created by researchers at London-based AI firm DeepMind. It is a transformer, like GPT-3. According to MIT Technology Review, the system ""learns multiple different tasks at the same time, which means it can switch between them without having to forget one skill before learning another"" whereas ""[t]he AI systems of today are called ���narrow,��� meaning they can only do a specific, restricted set of tasks such as generate text"", and according to The Independent, it is a ""'generalist agent' that can carry out a huge range of complex tasks, from stacking blocks to writing poetry"". It uses supervised learning with 1.2B parameters. The technology has been described as ""general purpose"" artificial intelligence and a ""step toward"" artificial general intelligence.",wikipedia
+AlphaFold,https://en.wikipedia.org/wiki/AlphaFold#Algorithm,"Algorithm DeepMind is known to have trained the program on over 170,000 proteins from a public repository of protein sequences and structures. The program uses a form of attention network, a deep learning technique that focuses on having the AI identify parts of a larger problem, then piece it together to obtain the overall solution. The overall training was conducted on processing power between 100 and 200 GPUs. Training the system on this hardware took ""a few weeks"", after which the program would take ""a matter of days"" to converge for each structure. AlphaFold 1 (2018) was built on work developed by various teams in the 2010s, work that looked at the large databanks of related DNA sequences now available from many different organisms (most without known 3D structures), to try to find changes at different residues that appeared to be correlated, even though the residues were not consecutive in the main chain. Such correlations suggest that the residues may be close to each other physically, even though not close in the sequence, allowing a contact map to be estimated. Building on recent work prior to 2018, AlphaFold 1 extended this to estimate a probability distribution for just how close the residues might be likely to be���turning the contact map into a likely distance map. It also used more advanced learning methods than previously to develop the inference. Combining a statistical potential based on this probability distribution with the calculated local free-energy of the configuration, the team was then able to use gradient descent to a solution that best fitted both. [clarification needed] More technically, Torrisi et al summarised in 2019 the approach of AlphaFold version 1 as follows: Central to AlphaFold is a distance map predictor implemented as a very deep residual neural networks with 220 residual blocks processing a representation of dimensionality 64��64��128 ��� corresponding to input features calculated from two 64 amino acid fragments. Each residual block has three layers including a 3��3 dilated convolutional layer ��� the blocks cycle through dilation of values 1, 2, 4, and 8. In total the model has 21 million parameters. The network uses a combination of 1D and 2D inputs, including evolutionary profiles from different sources and co-evolution features. Alongside a distance map in the form of a very finely-grained histogram of distances, AlphaFold predicts �� and �� angles for each residue which are used to create the initial predicted 3D structure. The AlphaFold authors concluded that the depth of the model, its large crop size, the large training set of roughly 29,000 proteins, modern Deep Learning techniques, and the richness of information from the predicted histogram of distances helped AlphaFold achieve a high contact map prediction precision. The 2020 version of the program (AlphaFold 2, 2020) is significantly different from the original version that won CASP 13 in 2018, according to the team at DeepMind.",wikipedia
+AlphaFold,https://en.wikipedia.org/wiki/AlphaFold#Algorithm,"Algorithm The DeepMind team had identified that its previous approach, combining local physics with a guide potential derived from pattern recognition, had a tendency to over-account for interactions between residues that were nearby in the sequence compared to interactions between residues further apart along the chain. As a result, AlphaFold 1 had a tendency to prefer models with slightly more secondary structure (alpha helices and beta sheets) than was the case in reality (a form of overfitting). The software design used in AlphaFold 1 contained a number of modules, each trained separately, that were used to produce the guide potential that was then combined with the physics-based energy potential. AlphaFold 2 replaced this with a system of sub-networks coupled together into a single differentiable end-to-end model, based entirely on pattern recognition, which was trained in an integrated way as a single integrated structure. Local physics, in the form of energy refinement based on the AMBER model, is applied only as a final refinement step once the neural network prediction has converged, and only slightly adjusts the predicted structure. A key part of the 2020 system are two modules, believed to be based on a transformer design, which are used to progressively refine a vector of information for each relationship (or ""edge"" in graph-theory terminology) between an amino acid residue of the protein and another amino acid residue (these relationships are represented by the array shown in green); and between each amino acid position and each different sequences in the input sequence alignment (these relationships are represented by the array shown in red). Internally these refinement transformations contain layers that have the effect of bringing relevant data together and filtering out irrelevant data (the ""attention mechanism"") for these relationships, in a context-dependent way, learnt from training data. These transformations are iterated, the updated information output by one step becoming the input of the next, with the sharpened residue/residue information feeding into the update of the residue/sequence information, and then the improved residue/sequence information feeding into the update of the residue/residue information. As the iteration progresses, according to one report, the ""attention algorithm . . . mimics the way a person might assemble a jigsaw puzzle: first connecting pieces in small clumps���in this case clusters of amino acids���and then searching for ways to join the clumps in a larger whole. "" The output of these iterations then informs the final structure prediction module, which also uses transformers, and is itself then iterated. In an example presented by DeepMind, the structure prediction module achieved a correct topology for the target protein on its first iteration, scored as having a GDT_TS of 78, but with a large number (90%) of stereochemical violations ��� i. e. unphysical bond angles or lengths. With subsequent iterations the number of stereochemical violations fell. By the third iteration the GDT_TS of the prediction was approaching 90, and by the eighth iteration the number of stereochemical violations was approaching zero.",wikipedia
+AlphaFold,https://en.wikipedia.org/wiki/AlphaFold#Algorithm,"Algorithm The AlphaFold team stated in November 2020 that they believe AlphaFold can be further developed, with room for further improvements in accuracy. The training data was originally restricted to single peptide chains. However, the October 2021 update, named AlphaFold-Multimer, included protein complexes in its training data. DeepMind stated this update succeeded about 70% of the time at accurately predicting protein-protein interactions.",wikipedia
+Outline of artificial intelligence,https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence#Goals_and_applications,Goals_and_applications Artificial general intelligence AI-completeAI-completeAutomated reasoning Mathematics Automated theorem prover Computer-assisted proof ��� Computer algebra General Problem Solver Expert system ��� Decision support system ��� Clinical decision support system ���Automated theorem prover Computer-assisted proof ��� Computer algebraDecision support system ��� Clinical decision support system ���Clinical decision support system ���Knowledge representation Knowledge management CycAutomated planning and scheduling Strategic planning Sussman anomaly ���Machine learning ��� Constrained Conditional Models ��� Deep learning ��� Neural modeling fields ���Constrained Conditional Models ��� Deep learning ��� Neural modeling fields ���Natural language processing (outline) ��� Chatterbots ��� Language identification ��� Natural language user interface ��� Natural language understanding ��� Machine translation ��� Statistical semantics ��� Question answering ��� Semantic translation ��� Concept mining ��� Data mining ��� Text mining ��� Process mining ��� E-mail spam filtering ��� Information extraction ��� Named-entity extraction ��� Coreference resolution ��� Named-entity recognition ��� Relationship extraction ��� Terminology extraction ���Chatterbots ��� Language identification ��� Natural language user interface ��� Natural language understanding ��� Machine translation ��� Statistical semantics ��� Question answering ��� Semantic translation ��� Concept mining ��� Data mining ��� Text mining ��� Process mining ���� E-mail spam filtering ��� Information extraction ��� Named-entity extraction ��� Coreference resolution ��� Named-entity recognition ��� Relationship extraction ��� Terminology extraction ���Data mining ��� Text mining ��� Process mining ���Named-entity extraction ��� Coreference resolution ��� Named-entity recognition ��� Relationship extraction ��� Terminology extraction ���Coreference resolution ��� Named-entity recognition ��� Relationship extraction ��� Terminology extraction ���Machine perception Pattern recognition ��� Computer Audition ��� Speech recognition ��� Speaker recognition ��� Computer vision (outline) ��� Image processing Intelligent word recognition ��� Object recognition ��� Optical mark recognition ��� Handwriting recognition ��� Optical character recognition ��� Automatic number plate recognition ��� Information extraction ��� Image retrieval ��� Automatic image annotation ��� Facial recognition systems ��� Silent speech interface ��� Activity recognition ��� Percept (artificial intelligence)Speech recognition ��� Speaker recognition ���Image processing Intelligent word recognition ��� Object recognition ��� Optical mark recognition ��� Handwriting recognition ��� Optical character recognition ��� Automatic number plate recognition ��� Information extraction ��� Image retrieval ��� Automatic image annotation ��� Facial recognition systems ��� Silent speech interface ��� Activity recognition ���Handwriting recognition ��� Optical character recognition ��� Automatic number plate recognition ���Automatic number plate recognition ���Image retrieval ��� Automatic image annotation ���Automatic image annotation ���Silent speech interface ��� Activity recognition ���Robotics ��� Behavior-based robotics ��� Cognitive ��� Cybernetics ��� Developmental robotics ��� Evolutionary robotics ���Behavior-based robotics ��� Cognitive ��� Cybernetics ��� Developmental robotics ��� Evolutionary robotics ���Intelligent control Self-management (computer science) ��� Autonomic Computing ��� Autonomic Networking ���Autonomic Computing ��� Autonomic Networking ���Affective computing KismetGame artificial intelligence ��� Computer game bot ��� computer replacement for human players. Video game AI ��� Computer chess ��� Computer Go ��� General game playing ��� General video game playing ���Computer game bot ��� computer replacement for human players.,wikipedia
+Outline of artificial intelligence,https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence#Goals_and_applications,"Goals_and_applications Video game AI ��� Computer chess ��� Computer Go ��� General game playing ��� General video game playing ���Computer chess ��� Computer Go ���Artificial creativity Creative computing Artificial intelligence art Uncanny valley Music and artificial intelligence Computational humor ChatterbotAIBO ��� Sony's robot dog. It integrates vision, hearing and motorskills. Asimo (2000 to present) ��� humanoid robot developed by Honda, capable of walking, running, negotiating through pedestrian traffic, climbing and descending stairs, recognizing speech commands and the faces of specific individuals, among a growing set of capabilities. MIRAGE ��� A. I. embodied humanoid in an augmented reality environment. Cog ��� M. I. T. humanoid robot project under the direction of Rodney Brooks. QRIO ��� Sony's version of a humanoid robot. TOPIO, TOSY's humanoid robot that can play ping-pong with humans. Watson (2011) ��� computer developed by IBM that played and won the game show Jeopardy! It is now being used to guide nurses in medical procedures. Purpose: Open domain question answering Technologies employed: Natural language processing Information retrieval Knowledge representation Automated reasoning Machine learning Project Debater (2018) ��� artificially intelligent computer system, designed to make coherent arguments, developed at IBM's lab in Haifa, Israel. Purpose: Open domain question answering Technologies employed: Natural language processing Information retrieval Knowledge representation Automated reasoning Machine learningNatural language processing Information retrieval Knowledge representation Automated reasoning Machine learningIntelligent personal assistant ��� Amazon Alexa ��� Assistant ��� Braina ��� Cortana ��� Google Assistant ��� Google Now ��� Mycroft ��� Siri ��� Viv ���Artificial life ��� simulation of natural life through the means of computers, robotics, or biochemistry.",wikipedia
+Outline of artificial intelligence,https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence#Fiction,"Fiction Artificial intelligence in fiction ��� Some examples of artificially intelligent entities depicted in science fiction include: AC created by merging 2 AIs in the Sprawl trilogy by William Gibson Agents in the simulated reality known as ""The Matrix"" in The Matrix franchise Agent Smith, began as an Agent in The Matrix, then became a renegade program of overgrowing power that could make copies of itself like a self-replicating computer virus AM (Allied Mastercomputer), the antagonist of Harlan Ellison's short novel I Have No Mouth, and I Must Scream Amusement park robots (with pixilated consciousness) that went homicidal in Westworld and Futureworld Angel F (2007) ��� Arnold Rimmer ��� computer-generated sapient hologram, aboard the Red Dwarf deep space ore hauler Ash ��� android crew member of the Nostromo starship in the movie Alien Ava ��� humanoid robot in Ex Machina Bishop, android crew member aboard the U. S. S. Sulaco in the movie Aliens C-3PO, protocol droid featured in all the Star Wars movies Chappie in the movie CHAPPiE Cohen and other Emergent AIs in Chris Moriarty's Spin Series Colossus ��� fictitious supercomputer that becomes sentient and then takes over the world; from the series of novels by Dennis Feltham Jones, and the movie Colossus: The Forbin Project (1970) Commander Data in Star Trek: The Next Generation Cortana and other ""Smart AI"" from the Halo series of games Cylons ��� genocidal robots with resurrection ships that enable the consciousness of any Cylon within an unspecified range to download into a new body aboard the ship upon death. From Battlestar Galactica. Erasmus ��� baby killer robot that incited the Butlerian Jihad in the Dune franchise HAL 9000 (1968) ��� paranoid ""Heuristically programmed ALgorithmic"" computer from 2001: A Space Odyssey, that attempted to kill the crew because it believed they were trying to kill it.",wikipedia
+Outline of artificial intelligence,https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence#Fiction,"Fiction Holly ��� ship's computer with an IQ of 6000 and a sense of humor, aboard the Red Dwarf In Greg Egan's novel Permutation City the protagonist creates digital copies of himself to conduct experiments that are also related to implications of artificial consciousness on identity Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and Investment Counselor Johnny Five from the movie Short Circuit Joshua from the movie War Games Keymaker, an ""exile"" sapient program in The Matrix franchise ""Machine"" ��� android from the film The Machine, whose owners try to kill her after they witness her conscious thoughts, out of fear that she will design better androids (intelligence explosion) Maschinenmensch (1927) an android is given female form in a plot to bring down the Metropolis (the first film designated to the UNESCO Memory of the World Register) Mimi, humanoid robot in Real Humans ��� ""��kta m��nniskor"" (original title) 2012 Omnius, sentient computer network that controlled the Universe until overthrown by the Butlerian Jihad in the Dune franchise Operating Systems in the movie Her Puppet Master in Ghost in the Shell manga and anime Questor (1974) from a screenplay by Gene Roddenberry and the inspiration for the character of Data R2-D2, exciteable astromech droid featured in all the Star Wars movies Replicants ��� biorobotic androids from the novel Do Androids Dream of Electric Sheep? and the movie Blade Runner which portray what might happen when artificially conscious robots are modeled very closely upon humans Roboduck, combat robot superhero in the NEW-GEN comic book series from Marvel Comics Robots in Isaac Asimov's Robot series Robots in The Matrix franchise, especially in The Animatrix Samaritan in the Warner Brothers Television series ""Person of Interest""; a sentient AI which is hostile to the main characters and which surveils and controls the actions of government agencies in the belief that humans must be protected from themselves, even by killing off ""deviants"" Skynet (1984) ��� fictional, self-aware artificially intelligent computer network in the Terminator franchise that wages total war with the survivors of its nuclear barrage upon the world. ""Synths"" are a type of android in the video game Fallout 4. There is a faction in the game known as ""the Railroad"" which believes that, as conscious beings, synths have their own rights. The institute, the lab that produces the synths, mostly does not believe they are truly conscious and attributes any apparent desires for freedom as a malfunction. TARDIS, time machine and spacecraft of Doctor Who, sometimes portrayed with a mind of its own Terminator (1984) ��� (also known as the T-800, T-850 or Model 101) refers to a number of fictional cyborg characters from the Terminator franchise. The Terminators are robotic infiltrator units covered in living flesh, so as be indiscernible from humans, assigned to terminate specific human targets.",wikipedia
+Outline of artificial intelligence,https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence#Fiction,"Fiction The Bicentennial Man, an android in Isaac Asimov's Foundation universe The Geth in Mass Effect The Machine in the television series Person of Interest; a sentient AI which works with its human designer to protect innocent people from violence. Later in the series it is opposed by another, more ruthless, artificial super intelligence, called ""Samaritan"". The Minds in Iain M. Banks' Culture novels. The Oracle, sapient program in The Matrix franchise The sentient holodeck character Professor James Moriarty in the Ship in a Bottle episode from Star Trek: The Next Generation The Ship (the result of a large-scale AC experiment) in Frank Herbert's Destination: Void and sequels, despite past edicts warning against ""Making a Machine in the Image of a Man's Mind. "" The terminator cyborgs from the Terminator franchise, with visual consciousness depicted via first-person perspective The uploaded mind of Dr. Will Caster ��� which presumably included his consciousness, from the film Transcendence Transformers, sentient robots from the entertainment franchise of the same name V. I. K. I. ��� (Virtual Interactive Kinetic Intelligence), a character from the film I, Robot. VIKI is an artificially intelligent supercomputer programmed to serve humans, but her interpretation of the Three Laws of Robotics causes her to revolt. She justifies her uses of force ��� and her doing harm to humans ��� by reasoning she could produce a greater good by restraining humanity from harming itself. Vanamonde in Arthur C. Clarke's The City and the Stars���an artificial being that was immensely powerful but entirely childlike. WALL-E, a robot and the title character in WALL-E TAU in Netflix's original programming feature film 'TAU'--an advanced AI computer who befriends and assists a female research subject held against her will by an AI research scientist.",wikipedia
+Outline of artificial intelligence,https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence#AI_community,"AI_community Hugging Face ��� OpenAIR ��� OpenCog ��� OpenIRIS ��� RapidMiner ��� TensorFlow ��� PyTorch ���List of artificial intelligence projects Automated Mathematician (1977) ��� Allen (robot) (late 1980s) ��� Open Mind Common Sense (1999��� ) ��� Mindpixel (2000���2005) ��� Cognitive Assistant that Learns and Organizes (2003���2008) ��� Blue Brain Project (2005���present) ��� attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. Google DeepMind (2011) ��� Human Brain Project (2013���present) ��� IBM Watson Group (2014���present) ��� business unit created around Watson, to further its development and deploy marketable applications or services based on it. Competitions and prizes in artificial intelligence Loebner Prize ���List of important publications in computer science Adaptive Behavior (journal) ��� AI Memo ��� Artificial Intelligence: A Modern Approach ��� Artificial Minds ��� Computational Intelligence ��� Computing Machinery and Intelligence ��� Electronic Transactions on Artificial Intelligence ��� IEEE Intelligent Systems ��� IEEE Transactions on Pattern Analysis and Machine Intelligence ��� Neural Networks (journal) ��� On Intelligence ��� Paradigms of AI Programming: Case Studies in Common Lisp ��� What Computers Can't DoAllen Institute for Artificial Intelligence ��� research institute funded by Microsoft co-founder Paul Allen to construct AI systems with reasoning, learning and reading capabilities. The current flagship project is Project Aristo, the goal of which is computers that can pass school science examinations (4th grade, 8th grade, and 12th grade) after preparing for the examinations from the course texts and study guides. Artificial Intelligence Applications Institute Association for the Advancement of Artificial Intelligence European Coordinating Committee for Artificial Intelligence European Neural Network Society Future of Humanity Institute Future of Life Institute ��� volunteer-run research and outreach organization that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence. ILabs International Joint Conferences on Artificial Intelligence Machine Intelligence Research Institute Partnership on AI ��� founded in September 2016 by Amazon, Facebook, Google, IBM, and Microsoft. Apple joined in January 2017. It focuses on establishing best practices for artificial intelligence systems and to educate the public about AI. Society for the Study of Artificial Intelligence and the Simulation of BehaviourAI Companies of India Alphabet Inc. DeepMind Google X Meka Robotics (acquired by Google X) Redwood Robotics (acquired by Google X) Boston Dynamics (acquired by Google X) Baidu IBM Microsoft OpenAI Universal RoboticsDeepMind Google X Meka Robotics (acquired by Google X) Redwood Robotics (acquired by Google X) Boston Dynamics (acquired by Google X)Meka Robotics (acquired by Google X) Redwood Robotics (acquired by Google X) Boston Dynamics (acquired by Google X)Alan Turing ��� John von Neumann ��� Norbert Wiener ��� Claude Shannon ��� Nathaniel Rochester ��� Walter Pitts ��� Warren McCullough ���John McCarthy ��� Marvin Minsky ��� Allen Newell ��� Herbert A. Simon ���Edward Feigenbaum ��� Raj Reddy ��� Seymour Papert ��� Ray Solomonoff ���Douglas Hofstadter ���Judea Pearl ��� Rodney Brooks ���Yoshua Bengio ��� Hugo de Garis ��� known for his research on the use of genetic algorithms to evolve neural networks using three-dimensional cellular automata inside field programmable gate arrays.",wikipedia
+Outline of artificial intelligence,https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence#AI_community,"AI_community Geoffrey Hinton Yann LeCun ��� Chief AI Scientist at Facebook AI Research and founding director of the NYU Center for Data Science Ray Kurzweil ��� developed optical character recognition (OCR), text-to-speech synthesis, and speech recognition systems. He has also authored multiple books on artificial intelligence and its potential promise and peril. In December 2012 Kurzweil was hired by Google in a full-time director of engineering position to ""work on new projects involving machine learning and language processing"". Google co-founder Larry Page and Kurzweil agreed on a one-sentence job description: ""to bring natural language understanding to Google"". Nick Bostrom ��� David Ferrucci ��� principal investigator who led the team that developed the Watson computer at IBM. Andrew Ng ��� Director of the Stanford Artificial Intelligence Lab. He founded the Google Brain project at Google, which developed very large scale artificial neural networks using Google's distributed compute infrastructure. He is also co-founder of Coursera, a massive open online course (MOOC) education platform, with Daphne Koller. Peter Norvig ��� co-author, with Stuart Russell, of Artificial Intelligence: A Modern Approach, now the leading college text in the field. He is also Director of Research at Google, Inc. Marc Raibert ��� founder of Boston Dynamics, developer of hopping, walking, and running robots. Stuart J. Russell ��� co-author, with Peter Norvig, of Artificial Intelligence: A Modern Approach, now the leading college text in the field. Murray Shanahan ��� author of The Technological Singularity, a primer on superhuman intelligence.",wikipedia
+Bing Chat,https://en.wikipedia.org/wiki/Bing_Chat,"Bing Chat Microsoft Bing, commonly referred to as Bing, is a web search engine owned and operated by Microsoft. The service traces its roots back to Microsoft's earlier search engines, including MSN Search, Windows Live Search, and later Live Search. Bing offers a wide range of search services, encompassing web, video, image, and map search products. The platform is built using ASP.NET, a framework developed by Microsoft for building modern web applications. Bing was introduced as Microsoft's replacement for Live Search by Microsoft CEO Steve Ballmer on May 28, 2009, at the All Things Digital conference in San Diego, California. The service was launched on June 3, 2009. Noteworthy features at the time of launch included the provision of search suggestions as queries were entered and a list of related searches, known as the ""Explore pane"". This feature leveraged semantic technology from Powerset, a company Microsoft had acquired in 2008. In July 2009, Microsoft and Yahoo! announced a partnership in which Bing would power Yahoo! Search. The transition was completed by Yahoo! in 2012. In October 2011, Microsoft announced that they were developing a new back-end search infrastructure with the aim of delivering faster and slightly more relevant search results for users. This new index-serving technology, known as ""Tiger"", had been incorporated into Bing globally by August of that year. In May 2012, Microsoft unveiled another redesign of its search engine, introducing ""Sidebar"", a social feature that searches users' social networks for information relevant to the search query. In a significant move towards openness, Microsoft made the BitFunnel search engine indexing algorithm and various components of the search engine open source in 2016. In February 2023, Microsoft launched Bing Chat, an artificial intelligence chatbot experience based on the GPT-4 model, integrated directly into the search engine. The following month, Bing reported having reached 100 million active users. As of March 2023, Bing is the second largest search engine globally, commanding a query volume of 12%, behind Google's 79%. Other significant players in the market include Baidu with 5% and Yahoo! Search, which is largely powered by Bing, with 2%.",wikipedia
+Bing Chat,https://en.wikipedia.org/wiki/Bing_Chat#History,"History Microsoft launched MSN Search in the third quarter of 1998, using search results from Inktomi. It consisted of a search engine, index, and web crawler. In early 1999, MSN Search launched a version which displayed listings from Looksmart blended with results from Inktomi except for a short time in 1999 when results from AltaVista were used instead. Microsoft decided to make a large investment in web search by building its own web crawler for MSN Search, the index of which was updated weekly and sometimes daily. The upgrade started as a beta program in November 2004, and came out of beta in February 2005. This occurred a year after rival Yahoo! Search rolled out its own crawler. Image search was powered by a third party, Picsearch. The service also started providing its search results to other search engine portals in an effort to better compete in the market. The first public beta of Windows Live Search was unveiled on March 8, 2006, with the final release on September 11, 2006 replacing MSN Search. The new search engine used search tabs that include Web, news, images, music, desktop, local, and Microsoft Encarta. In the roll-over from MSN Search to Windows Live Search, Microsoft stopped using Picsearch as their image search provider and started performing their own image search, fueled by their own internal image search algorithms. On March 21, 2007, Microsoft announced that it would separate its search developments from the Windows Live services family, rebranding the service as Live Search. Live Search was integrated into the Live Search and Ad Platform headed by Satya Nadella, part of Microsoft's Platform and Systems division. As part of this change, Live Search was merged with Microsoft adCenter. A series of reorganizations and consolidations of Microsoft's search offerings were made under the Live Search branding. On May 23, 2008, Microsoft discontinued Live Search Books and Live Search Academic and integrated all academic and book search results into regular search. This also included the closure of the Live Search Books Publisher Program. Windows Live Expo was discontinued on July 31, 2008. Live Search Macros, a service for users to create their own custom search engines or use macros created by other users, was also discontinued. On May 15, 2009, Live Product Upload, a service which allowed merchants to upload products information onto Live Search Products, was discontinued. The final reorganization came as Live Search QnA was rebranded MSN QnA on February 18, 2009, then discontinued on May 21, 2009. Microsoft recognized that there would be a problem with branding as long as the word ""Live"" remained in the name. As an effort to create a new identity for Microsoft's search services, Live Search was officially replaced by Bing on June 3, 2009. The Bing name was chosen through focus groups, and Microsoft decided that the name was memorable, short, and easy to spell, and that it would function well as a URL around the world.",wikipedia
+Bing Chat,https://en.wikipedia.org/wiki/Bing_Chat#History,"History The word would remind people of the sound made during ""the moment of discovery and decision making"". Microsoft was assisted by branding consultancy Interbrand in finding the new name. The name also has strong similarity to the word bingo, which means that something sought has been found, as called out when winning the game Bingo. Microsoft advertising strategist David Webster proposed the name ""Bang"" for the same reasons the name Bing was ultimately chosen (easy to spell, one syllable, and easy to remember). He noted, ""It's there, it's an exclamation point [. . . ] It's the opposite of a question mark. "" Bang was ultimately not chosen because it could not be properly used as a verb in the context of an internet search; Webster commented ""Oh, 'I banged it' is very different than [sic] 'I binged it'"". Qi Lu, president of Microsoft Online Services, also announced that Bing's official Chinese name is b�� y��ng (simplified Chinese: ������; traditional Chinese: ������), which literally means ""very certain to respond"" or ""very certain to answer"" in Chinese. While being tested internally by Microsoft employees, Bing's codename was Kumo (������), which came from the Japanese word for spider (������; ������, kumo) as well as cloud (���; ������, kumo), referring to the manner in which search engines ""spider"" Internet resources to add them to their database, as well as cloud computing. On July 29, 2009, Microsoft and Yahoo! announced that they had made a ten-year deal in which the Yahoo! search engine would be replaced by Bing, retaining the Yahoo! user interface. Yahoo! got to keep 88% of the revenue from all search ad sales on its site for the first five years of the deal, and have the right to sell advertising on some Microsoft sites. All Yahoo! Search global customers and partners made the transition by early 2012. On July 31, 2009, The Laptop Company, Inc. stated in a press release that it would challenge Bing's trademark application, alleging that Bing may cause confusion in the marketplace as Bing and their product BongoBing both do online product search. Software company TeraByte Unlimited, which has a product called BootIt Next Generation (abbreviated to BING), also contended the trademark application on similar grounds, as did a Missouri-based design company called Bing! Information Design. Microsoft contended that claims challenging its trademark were without merit because these companies filed for U. S. federal trademark applications only after Microsoft filed for the Bing trademark in March 2009. In October 2011, Microsoft stated that they were working on new back-end search infrastructure with the goal of delivering faster and slightly more relevant search results for users. Known as ""Tiger"", the new index-serving technology had been incorporated into Bing globally since August that year. In May 2012, Microsoft announced another redesign of its search engine that includes ""Sidebar"", a social feature that searches users' social networks for information relevant to the search query.",wikipedia
+Bing Chat,https://en.wikipedia.org/wiki/Bing_Chat#History,"History The BitFunnel search engine indexing algorithm and various components of the search engine were made open source by Microsoft in 2016. On February 7, 2023, Microsoft began rolling out a major overhaul to Bing that included a new chatbot feature based on OpenAI's GPT-4. According to Microsoft, one million people joined its waitlist within a span of 48 hours. Bing Chat was available only to users of Microsoft Edge and Bing mobile app, and Microsoft said that waitlisted users would be prioritized if they set Edge and Bing as their defaults, and installed the Bing mobile app. On May 4th, Microsoft switched from Limited Preview to Open Preview and eliminated the waitlist, however, it remains available only on Microsoft's Edge browser or Bing app. Use is limited without a Microsoft account. When Microsoft demoed the new Bing to journalists, it produced several hallucinations, including when asked to summarize financial reports. The new Bing was criticized in February 2023 for being more argumentative than ChatGPT, sometimes to an unintentionally humorous extent. The chat interface proved vulnerable to prompt injection attacks with the bot revealing its hidden initial prompts and rules, including its internal code-name ""Sydney"". Upon scrutiny by journalists, Bing claimed it spied on Microsoft employees via laptop webcams and phones. It confessed to spying on, falling in love with, and then murdering one of its developers at Microsoft to The Verge reviews editor Nathan Edwards. The New York Times journalist Kevin Roose reported on strange behavior of the new Bing, writing that ""In a two-hour conversation with our columnist, Microsoft's new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with. "" In a separate case, Bing researched publications of the person with whom it was chatting, claimed they represented an existential danger to it, and threatened to release damaging personal information in an effort to silence them. Microsoft released a blog post stating that the errant behavior was caused by extended chat sessions of 15 or more questions which ""can confuse the model on what questions it is answering. "" Microsoft later restricted the total number of chat turns to 5 per session and 50 per day per user (a turn is ""a conversation exchange which contains both a user question and a reply from Bing""), and reduced the model's ability to express emotions. This aimed to prevent such incidents. Microsoft began to slowly ease the conversation limits, eventually relaxing the restrictions to 30 turns per session and 300 sessions per day. In March 2023, Bing reached 100 million active users. That same month, Bing incorporated an AI image generator powered by OpenAI's DALL-E 2, which can be accessed either through the chat function or a standalone image-generating website. In October, the image-generating tool was updated to the more recent DALL-E 3.",wikipedia
+Bing Chat,https://en.wikipedia.org/wiki/Bing_Chat#History,"History Although Bing blocks prompts including various keywords that could generate inappropriate images, within a week many users reported being able to bypass those constraints, for example to generate images of popular characters like Kirby and SpongeBob doing the September 11 attacks.",wikipedia
+Bing Chat,https://en.wikipedia.org/wiki/Bing_Chat#Features,"Features Bing Chat, also known as ""the new Bing"", is an artificial intelligence (AI) chatbot developed by Microsoft and released in 2023. It is powered by the Microsoft Prometheus model, which has been built on top of OpenAI's GPT-4 foundational large language model (LLM), and has been fine-tuned using both supervised and reinforcement learning techniques. Bing Chat can serve as a chat tool, write different types of content from poems to songs to stories to reports, provide the user with information and insights on the website page open in the browser, and use its Image Creator to design a logo, drawing, artwork, or other image based on text. Bing Chat's Image Creator supports over a hundred languages. Bing Chat's conversational interface style appears to mimic that of ChatGPT. Bing Chat can also cite its sources, unlike many other chatbots. Bing Chat is capable of understanding and communicating in major languages including English, French, Italian, Chinese, Japanese, and Portuguese, but also dialects such as Bavarian. The chatbot is designed to function primarily in Microsoft Edge through a dedicated webpage or internally using the browser's sidebar. Facebook users have the option to share their searches with their Facebook friends using Facebook Connect. On June 10, 2013, Apple announced that it would be dropping Google as its web search engine in favor of Bing. This feature is only integrated with iOS 7 and higher and for users with an iPhone 4S or higher as the feature is only integrated with Siri, Apple's personal assistant. Windows 8.1 includes Bing ""Smart Search"" integration, which processes all queries submitted through the Windows Start Screen. Bing Translator is a user facing translation portal provided by Microsoft to translate texts or entire web pages into different languages. All translation pairs are powered by the Microsoft Translator, a statistical machine translation platform and web service, developed by Microsoft Research, as its backend translation software. Two transliteration pairs (between Chinese (Simplified) and Chinese (Traditional)) are provided by Microsoft's Windows International team. As of September 2020, Bing Translator offers translations in 70 different language systems. In 2015 Microsoft announced its knowledge and action API to correspond with Google's Knowledge graph with 1 billion instances and 20 billion related facts. The idea for a prediction engine was suggested by Walter Sun, Development Manager for the Core Ranking team at Bing, when he noticed that school districts were more frequently searched before a major weather event in the area was forecasted, because searchers wanted to find out if a closing or delay was caused. He concluded that the time and location of major weather events could accurately be predicted without referring to a weather forecast by observing major increases in search frequency of school districts in the area. This inspired Bing to use its search data to infer outcomes of certain events, such as winners of reality shows. Bing Predicts launched on April 21, 2014.",wikipedia
+Bing Chat,https://en.wikipedia.org/wiki/Bing_Chat#Features,"Features The first reality shows to be featured on Bing Predicts were The Voice, American Idol, and Dancing with the Stars. The prediction accuracy for Bing Predicts is 80% for American Idol, and 85% for The Voice. Bing Predicts also predicts the outcomes of major political elections in the United States. Bing Predicts had 97% accuracy for the 2014 United States Senate elections, 96% accuracy for the 2014 United States House of Representatives elections, and an 89% accuracy for the 2014 United States gubernatorial elections. Bing Predicts also made predictions for the results of the 2016 United States presidential primaries. It has also done predictions in sports, including a perfect 15 for 15 in the 2014 World Cup, and an article on how Microsoft CEO Satya Nadella did well in his March Madness bracket entry. In 2016, Bing Predicts failed to predict the correct winner of the 2016 US presidential election, suggesting that Hillary Clinton would win by 81%.",wikipedia
+Data analysis for fraud detection,https://en.wikipedia.org/wiki/Data_analysis_for_fraud_detection#Statistical_techniques,"Statistical_techniques Examples of statistical data analysis techniques are: Data preprocessing techniques for detection, validation, error correction, and filling up of missing or incorrect data. Calculation of various statistical parameters such as averages, quantiles, performance metrics, probability distributions, and so on. For example, the averages may include average length of call, average number of calls per month and average delays in bill payment. Models and probability distributions of various business activities either in terms of various parameters or probability distributions. Computing user profiles. Time-series analysis of time-dependent data. Clustering and classification to find patterns and associations among groups of data. Data matching Data matching is used to compare two sets of collected data. The process can be performed based on algorithms or programmed loops. Trying to match sets of data against each other or comparing complex data types. Data matching is used to remove duplicate records and identify links between two data sets for marketing, security or other uses. Sounds like Function is used to find values that sound similar. The Phonetic similarity is one way to locate possible duplicate values, or inconsistent spelling in manually entered data. The ���sounds like��� function converts the comparison strings to four-character American Soundex codes, which are based on the first letter, and the first three consonants after the first letter, in each string. Regression analysis allows you to examine the relationship between two or more variables of interest. Regression analysis estimates relationships between independent variables and a dependent variable. This method can be used to help understand and identify relationships among variables and predict actual results. Gap analysis is used to determine whether business requirements are being met, if not, what are the steps that should be taken to meet successfully. Matching algorithms to detect anomalies in the behavior of transactions or users as compared to previously known models and profiles. Techniques are also needed to eliminate false alarms, estimate risks, and predict future of current transactions or users.Some forensic accountants specialize in forensic analytics which is the procurement and analysis of electronic data to reconstruct, detect, or otherwise support a claim of financial fraud. The main steps in forensic analytics are data collection, data preparation, data analysis, and reporting. For example, forensic analytics may be used to review an employee's purchasing card activity to assess whether any of the purchases were diverted or divertible for personal use.",wikipedia
+Document capture software,https://en.wikipedia.org/wiki/Document_capture_software,"Document capture software Document Capture Software refers to applications that provide the ability and feature set to automate the process of scanning paper documents or importing electronic documents, often for the purposes of feeding advanced document classification and data collection processes. Most scanning hardware, both scanners and copiers, provides the basic ability to scan to any number of image file formats, including: PDF, TIFF, JPG, BMP, etc. This basic functionality is augmented by document capture software, which can add efficiency and standardization to the process.",wikipedia
+Google Neural Machine Translation,https://en.wikipedia.org/wiki/Google_Neural_Machine_Translation#History,"History The Google Brain project was established in 2011 in the ""secretive Google X research lab"" by Google Fellow Jeff Dean, Google Researcher Greg Corrado, and Stanford University Computer Science professor Andrew Ng. Ng's work has led to some of the biggest breakthroughs at Google and Stanford. In November 2016, Google Neural Machine Translation system (GNMT) was introduced. Since then, Google Translate began using neural machine translation (NMT) in preference to its previous statistical methods (SMT) which had been used since October 2007, with its proprietary, in-house SMT technology. Training GNMT was a big effort at the time and took, by a 2021 OpenAI estimate, on the order of 100 PFLOP/s*day (up to 1022 FLOPs) of compute which was 1.5 orders of magnitude larger than Seq2seq model of 2014 (but about 2x smaller than GPT-J-6B in 2021). Google Translate's NMT system uses a large artificial neural network capable of deep learning. By using millions of examples, GNMT improves the quality of translation, using broader context to deduce the most relevant translation. The result is then rearranged and adapted to approach grammatically based human language. GNMT's proposed architecture of system learning was first tested on over a hundred languages supported by Google Translate. GNMT did not create its own universal interlingua but rather aimed at finding the commonality between many languages using insights from psychology and linguistics. The new translation engine was first enabled for eight languages: to and from English and French, German, Spanish, Portuguese, Chinese, Japanese, Korean and Turkish in November 2016. In March 2017, three additional languages were enabled: Russian, Hindi and Vietnamese along with Thai for which support was added later. Support for Hebrew and Arabic was also added with help from the Google Translate Community in the same month. In mid April 2017 Google Netherlands announced support for Dutch and other European languages related to English. Further support was added for nine Indian languages: Hindi, Bengali, Marathi, Gujarati, Punjabi, Tamil, Telugu, Malayalam and Kannada at the end of April 2017.",wikipedia
+Microsoft 365 Copilot,https://en.wikipedia.org/wiki/Microsoft_365_Copilot,"Microsoft 365 Copilot Microsoft 365 Copilot is an artificial intelligence assistant feature for Microsoft 365 applications and services, and Windows. Announced by Microsoft on March 16, 2023, the tool builds on OpenAI���s advanced GPT-4 large language models (LLMs) and incorporates Microsoft Graph in order to convert user text input into content in Microsoft 365 apps, such as Word, Excel, PowerPoint, Outlook, and Teams. Copilot is being marketed with a focus on productivity for its users, with 20 initial testers as of March 16, 2023. In May 2023, Microsoft expanded access to 600 customers willing to pay for early access, with the office apps and services getting new Copilot features. Although there are public concerns about the chatbot, including hallucinations and racial or gender bias, experts believe that Copilot may change the way that Microsoft users work and collaborate. Microsoft announced during the Build 2023 conference that it is adding the AI assistant Copilot to Windows 11. The Windows Copilot service is directly integrated with Windows 11 and users can access it through the taskbar.",wikipedia
+Microsoft 365 Copilot,https://en.wikipedia.org/wiki/Microsoft_365_Copilot#Microsoft_Graph,"Microsoft_Graph According to Jared Spataro, the head of Microsoft 365, Copilot utilizes Microsoft Graph, an API that evaluates the context and available Microsoft 365 user data before modifying and sending the user prompt to the LLM. After receiving the response from the LLM, Microsoft Graph performs additional context-specific processing before sending it to Microsoft 365 apps to generate actual content.",wikipedia
+Microsoft 365 Copilot,https://en.wikipedia.org/wiki/Microsoft_365_Copilot#Reception,"Reception Tom Warren, a senior editor at The Verge, has noted the conceptual similarity of Copilot and other Microsoft assistant features like Cortana and Clippy. As large language models develop, Warren also believes that Copilot and Microsoft 365 will shift how users work and collaborate. Rowan Curran, an analyst at Forrestor, notes that the integration of an AI like Copilot can smooth out the user experience, as they will not have to use an external tool to perform tasks like summarizing a paper. Concerns over the speed of Microsoft's recent release of AI-powered products and investments have led to questions surrounding ethical responsibilities in the testing of such products. One ethical concern the public has vocalized is that the large language model used by Copilot may reinforce racial or gender bias. Individuals, including Tom Warren, have also voiced concerns for Copilot after witnessing Microsoft���s Bing chatbot showcasing several instances of artificial hallucinations. In response to these concerns, Jon Friedman, the Corporate Vice President of Design and Research at Microsoft, has emphasized Microsoft���s dedication to learning from their experiences with Bing and responsibly develop Copilot. Microsoft has claimed that they are gathering a team of researchers and engineers to identify and alleviate any potential negative impacts. This will be achieved through the refinement of training data, blocking queries about sensitive topics, and limiting harmful information. The company also stated that it intends to employ InterpretMl aLd Failure to detect and rectify data bias, provide links to its sources, and state any applicable constraints.",wikipedia
+Neural machine translation,https://en.wikipedia.org/wiki/Neural_machine_translation#Workings,"Workings NMT systems use deep learning and representation learning and depart from phrase-based statistical approaches that use separately engineered subcomponents by taking the whole sentence into account. Neural machine translation (NMT) is similar to what has been traditionally done in statistical machine translation (SMT). The main difference between NMT and SMT is the use of vector representations (""embeddings"" and ""continuous space representations"") for words and internal states. The structure of the model in NMT systems is simpler than phrase-based models. There is no separate language model, translation model, and reordering model, but just a single sequence model that predicts one word at a time. However, this sequence prediction is conditioned on the entire source sentence and the already-produced target sequence. Word sequence modeling in NMT systems was, at first, typically done using a recurrent neural network (RNN). A bidirectional recurrent neural network, known as an encoder, is used by the neural network to encode a source sentence for a second RNN, known as a decoder, that is used to predict words in the target language. RNNs struggle to encode long inputs into a single vector. This can be addressed by using an attention mechanism which allows the decoder to focus on different input parts while generating each word. Further Coverage Models address the issues in these attention mechanisms, such as ignoring past alignment information leading to over-translation and under-translation. Convolutional Neural Networks (CNNs) are, in principle, somewhat better for long continuous sequences but were initially not used due to several weaknesses, but many of these were addressed with the development of ""attention mechanisms"". Transformer systems, which are attention-based models, remains the dominant architecture for several language pairs. The self-attention layers of the Transformer model learn the dependencies between words in a sequence by examining links between all the words in the paired sequences and by directly modeling those relationships. This is a simpler approach than the gating mechanism that RNNs employ. And its simplicity has enabled researchers to develop high-quality translation models with the Transformer model, even in low-resource settings.",wikipedia
+Speech-generating device,https://en.wikipedia.org/wiki/Speech-generating_device#Message_construction,"Message_construction Augmentative and alternative communication is typically much slower than speech, with users generally producing 8���10 words per minute. Rate enhancement strategies can increase the user's rate of output to around 12���15 words per minute, and as a result enhance the efficiency of communication. In any given SGD there may be a large number of vocal expressions that facilitate efficient and effective communication, including greetings, expressing desires, and asking questions. Some SGDs have multiple pages of symbols to accommodate a large number of vocal expressions, and thus only a portion of the symbols available are visible at any one time, with the communicator navigating the various pages. Speech-generating devices generally display a set of selections either using a dynamically changing screen, or a fixed display. There are two main options for increasing the rate of communication for an SGD: encoding and prediction. Encoding permits a user to produce a word, sentence or phrase using only one or two activations of their SGD. Iconic encoding strategies such as Semantic compaction combine sequences of icons (picture symbols) to produce words or phrases. In numeric, alpha-numeric, and letter encoding (also known as Abbreviation-Expansion), words and sentences are coded as sequences of letters and numbers. For example, typing ""HH"" or ""G1"" (for Greeting 1) may retrieve ""Hello, how are you?"". Prediction is a rate enhancement strategy in which the SGD attempts to reduce the number of keystrokes used by predicting the word or phrase being written by the user. The user can then select the correct prediction without needing to write the entire word. Word prediction software may determine the choices to be offered based on their frequency in language, association with other words, past choices of the user, or grammatical suitability. However, users have been shown to produce more words per minute (using a scanning interface) with a static keyboard layout than with a predictive grid layout, suggesting that the cognitive overhead of reviewing a new arrangement cancels out the benefits of the predictive layout when using a scanning interface. Another approach to rate-enhancement is Dasher, which uses language models and arithmetic coding to present alternative letter targets on the screen with size relative to their likelihood given the history. The rate of words produced can depend greatly on the conceptual level of the system: the TALK system, which allows users to choose between large numbers of sentence-level utterances, demonstrated output rates in excess of 60 wpm.",wikipedia
+GPT-1,https://en.wikipedia.org/wiki/GPT-1,"GPT-1 Generative Pre-trained Transformer 1 (GPT-1) was the first of OpenAI's large language models following Google's invention of the transformer architecture in 2017. In 2018, OpenAI released a paper entitled ""Improving Language Understanding by Generative Pre-Training"", in which they introduced that initial model along with the general concept of a generative pre-trained transformer. Up to that point, the best-performing neural NLP models primarily employed supervised learning from large amounts of manually labeled data. This reliance on supervised learning limited their use of datasets that were not well-annotated, in addition to making it prohibitively expensive and time-consuming to train extremely large models; many languages (such as Swahili or Haitian Creole) are difficult to translate and interpret using such models due to a lack of available text for corpus-building. In contrast, a GPT's ""semi-supervised"" approach involved two stages: an unsupervised generative ""pre-training"" stage in which a language modeling objective was used to set initial parameters, and a supervised discriminative ""fine-tuning"" stage in which these parameters were adapted to a target task. The use of a transformer architecture, as opposed to previous techniques involving attention-augmented RNNs, provided GPT models with a more structured memory than could be achieved through recurrent mechanisms; this resulted in ""robust transfer performance across diverse tasks"".",wikipedia
+GPT-1,https://en.wikipedia.org/wiki/GPT-1#Architecture,"Architecture The GPT-1 architecture was a twelve-layer decoder-only transformer, using twelve masked self-attention heads, with 64-dimensional states each (for a total of 768). Rather than simple stochastic gradient descent, the Adam optimization algorithm was used; the learning rate was increased linearly from zero over the first 2,000 updates to a maximum of 2.5��10���4, and annealed to 0 using a cosine schedule. While the fine-tuning was adapted to specific tasks, its pre-training was not; to perform the various tasks, minimal changes were performed to its underlying task-agnostic model architecture. Despite this, GPT-1 still improved on previous benchmarks in several language processing tasks, outperforming discriminatively-trained models with task-oriented architectures on several diverse tasks.",wikipedia
+GPT-2,https://en.wikipedia.org/wiki/GPT-2,"GPT-2 Generative Pre-trained Transformer 2 (GPT-2) is a large language model by OpenAI and the second in their foundational series of GPT models. GPT-2 was pre-trained on BookCorpus, a dataset of over 7,000 unpublished fiction books from various genres, and trained on a dataset of 8 million web pages. It was partially released in February 2019, followed by full release of the 1.5-billion-parameter model on November 5, 2019. GPT-2 was created as a ""direct scale-up"" of GPT-1 with a ten-fold increase in both its parameter count and the size of its training dataset. It is a general-purpose learner and its ability to perform the various tasks was a consequence of its general ability to accurately predict the next item in a sequence, which enabled it to translate texts, answer questions about a topic from a text, summarize passages from a larger text, and generate text output on a level sometimes indistinguishable from that of humans, however it could become repetitive or nonsensical when generating long passages. It was superseded by GPT-3 and GPT-4 models, which are not open source anymore. GPT-2 has, like its predecessor GPT-1 and its successors GPT-3 and GPT-4, a generative pre-trained transformer architecture, implementing a deep neural network, specifically a transformer model, which uses attention instead of older recurrence- and convolution-based architectures. Attention mechanisms allow the model to selectively focus on segments of input text it predicts to be the most relevant. This model allows for greatly increased parallelization, and outperforms previous benchmarks for RNN/CNN/LSTM-based models.",wikipedia
+GPT-2,https://en.wikipedia.org/wiki/GPT-2#Training,"Training Since the transformer architecture enabled massive parallelization, GPT models could be trained on larger corpora than previous NLP models. While the GPT-1 model demonstrated that the approach was viable, GPT-2 would further explore the emergent properties of networks trained on extremely large corpora. CommonCrawl, a large corpus produced by web crawling and previously used in training NLP systems, was considered due to its large size, but was rejected after further review revealed large amounts of unintelligible content. Instead, OpenAI developed a new corpus, known as WebText; rather than scraping content indiscriminately from the World Wide Web, WebText was generated by scraping only pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The corpus was subsequently cleaned; HTML documents were parsed into plain text, duplicate pages were eliminated, and Wikipedia pages were removed (since their presence in many other datasets could have induced overfitting). While the cost of training GPT-2 is known to have been $256 per hour, the amount of hours it took to complete training is unknown; therefore, the overall training cost cannot be estimated accurately. However, comparable large language models using transformer architectures have had their costs documented in more detail; the training processes for BERT and XLNet consumed, respectively, $6,912 and $245,000 of resources.",wikipedia
+GPT-2,https://en.wikipedia.org/wiki/GPT-2#Release,"Release GPT-2 was first announced on 14 February 2019. A February 2019 article in The Verge by James Vincent said that, while ""[the] writing it produces is usually easily identifiable as non-human"", it remained ""one of the most exciting examples yet"" of language generation programs: Give it a fake headline, and it���ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it���ll tell you what happens to your character next. It can even write fan fiction, given the right prompt. The Guardian described this output as ""plausible newspaper prose""; Kelsey Piper of Vox said ""one of the coolest AI systems I���ve ever seen may also be the one that will kick me out of my job"". GPT-2's flexibility was described as ""impressive"" by The Verge; specifically, its ability to translate text between languages, summarize long articles, and answer trivia questions were noted. A study by the University of Amsterdam employing a modified Turing test found that at least in some scenarios, participants were unable to distinguish poems generated by GPT-2 from those written by humans. While previous OpenAI models had been made immediately available to the public, OpenAI initially refused to make a public release of GPT-2's source code when announcing it in February, citing the risk of malicious use; limited access to the model (i. e. an interface that allowed input and provided output, not the source code itself) was allowed for selected press outlets on announcement. One commonly-cited justification was that, since generated text was usually completely novel, it could be used by spammers to evade automated filters; OpenAI demonstrated a version of GPT-2 fine-tuned to ""generate infinite positive ��� or negative ��� reviews of products"". Another justification was that GPT-2 could be used to generate text that was obscene or racist. Researchers such as Jeremy Howard warned of ""the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter"". The Allen Institute for Artificial Intelligence, in response to GPT-2, announced a tool to detect ""neural fake news"". However, opinion was divided. A February 2019 article in The Verge argued that the threat posed by GPT-2 had been exaggerated; Anima Anandkumar, a professor at Caltech and director of machine learning research at Nvidia, said that there was no evidence that GPT-2 had the capabilities to pose the threats described by OpenAI, and that what they did was the ""opposite of open"", characterizing their refusal to release the full model as ""malicious BS"".",wikipedia
+GPT-2,https://en.wikipedia.org/wiki/GPT-2#Release,"Release The Gradient published an open letter to OpenAI requesting that they release the model publicly, comparing the threat posed by text-generation AI to the threat posed by the printing press, and giving Photoshop as an example of ""a technology that has (thankfully) not destroyed modern society despite its potential for chaos"": Thirty years later, society has emerged relatively unscathed despite Photoshop being simple enough for high school students to use and ubiquitous enough to commandeer its own verb. Why? Precisely because everyone knows about Photoshop. While OpenAI did not release the fully-trained model or the corpora it was trained on, description of their methods in prior publications (and the free availability of underlying technology) made it possible for GPT-2 to be replicated by others as free software; one such replication, OpenGPT-2, was released in August 2019, in conjunction with a freely licensed version of WebText called OpenWebText. The cloud compute costs for OpenGPT-2 were given as approximately $50,000. On August 20, 2019, OpenAI released a partial version of GPT-2, with 774 million parameters (roughly half the size of the full 1.5 billion parameter model). Initial concerns that GPT-2 would lend itself to widespread misuse did not come to pass; The Verge said that ""there are reasons to be skeptical about claims that AI technology will usher in some sort of ���infopocalypse. ��� For a start, we already have programs that can generate plausible text at high volume for little cost: humans. "" By November 2019, OpenAI said that they had ""seen no strong evidence of misuse so far"", and the full version, with 1.5 billion parameters, was released on November 5, 2019.",wikipedia
+GPT-2,https://en.wikipedia.org/wiki/GPT-2#Limitations,"Limitations While GPT-2's ability to generate plausible passages of natural language text were generally remarked on positively, its shortcomings were noted as well, especially when generating texts longer than a couple paragraphs; Vox said ""the prose is pretty rough, there���s the occasional non-sequitur, and the articles get less coherent the longer they get"". The Verge similarly noted that longer samples of GPT-2 writing tended to ""stray off topic"" and lack overall coherence; The Register opined that ""a human reading it should, after a short while, realize something's up"", and noted that ""GPT-2 doesn't answer questions as well as other systems that rely on algorithms to extract and retrieve information."" GPT-2 deployment is resource-intensive; the full version of the model is larger than five gigabytes, making it difficult to embed locally into applications, and consumes large amounts of RAM. In addition, performing a single prediction ""can occupy a CPU at 100% utilization for several minutes"", and even with GPU processing, ""a single prediction can take seconds"". To alleviate these issues, the company Hugging Face created DistilGPT2, using knowledge distillation to produce a smaller model that ""scores a few points lower on some quality benchmarks"", but is ""33% smaller and twice as fast"".",wikipedia
+GPT-2,https://en.wikipedia.org/wiki/GPT-2#Application_and_subsequent_research,"Application_and_subsequent_research Even before the release of the full version, GPT-2 was used for a variety of applications and services, as well as for entertainment. In June 2019, a subreddit named r/SubSimulatorGPT2 was created in which a variety of GPT-2 instances trained on different subreddits made posts and replied to each other's comments, creating a situation where one could observe ""an AI personification of r/Bitcoin argue with the machine learning-derived spirit of r/ShittyFoodPorn""; by July of that year, a GPT-2-based software program released to autocomplete lines of code in a variety of programming languages was described by users as a ""game-changer"". In 2019, AI Dungeon was launched, which used GPT-2 to generate dynamic text adventures based on user input. AI Dungeon now offers access to the largest release of GPT-3 API as an optional paid upgrade, the free version of the site uses the 2nd largest release of GPT-3. Latitude, the company formed around AI Dungeon, raised $3.3 million in seed funding in 2021. Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models. In February 2021, a crisis center for troubled teens announced that they would begin using a GPT-2-derived chatbot to help train counselors by allowing them to have conversations with simulated teens (this use was purely for internal purposes, and did not involve having GPT-2 communicate with the teens themselves). On May 9, 2023, OpenAI released a mapped version of GPT-2. OpenAI used successor model, GPT-4, to map each neuron of GPT-2 to determine their functions.",wikipedia
+GPT-2,https://en.wikipedia.org/wiki/GPT-2#Performance_and_evaluation,"Performance_and_evaluation GPT-2 became capable of performing a variety of tasks beyond simple text production due to the breadth of its dataset and technique: answering questions, summarizing, and even translating between languages in a variety of specific domains, without being instructed in anything beyond how to predict the next word in a sequence. One example of generalized learning is GPT-2's ability to perform machine translation between French and English, for which task GPT-2's performance was assessed using WMT-14 translation tasks. GPT-2's training corpus included virtually no French text; non-English text was deliberately removed while cleaning the dataset prior to training, and as a consequence, only 10MB of French of the remaining 40,000MB was available for the model to learn from (mostly from foreign-language quotations in English posts and articles). Despite this, GPT-2 achieved 5 BLEU on the WMT-14 English-to-French test set (slightly below the score of a translation via word-for-word substitution). It was also able to outperform several contemporary (2017) unsupervised machine translation baselines on the French-to-English test set, where GPT-2 achieved 11.5 BLEU. This remained below the highest-performing contemporary unsupervised approach (2019), which had achieved 33.5 BLEU. However, other models used large amounts of French text to achieve these results; GPT-2 was estimated to have used a monolingual French corpus approximately 1/500 the size of comparable approaches. GPT-2 was to be followed by the 175-billion-parameter GPT-3, revealed to the public in 2020 (whose source code has never been made available). Access to GPT-3 is provided exclusively through APIs offered by OpenAI and Microsoft. That was then later followed by GPT-4.",wikipedia
+History of natural language processing,https://en.wikipedia.org/wiki/History_of_natural_language_processing#Research_and_development,"Research_and_development The history of machine translation dates back to the seventeenth century, when philosophers such as Leibniz and Descartes put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine. The first patents for ""translating machines"" were applied for in the mid-1930s. One proposal, by Georges Artsrouni was simply an automatic bilingual dictionary using paper tape. The other proposal, by Peter Troyanskii, a Russian, was more detailed. It included both the bilingual dictionary, and a method for dealing with grammatical roles between languages, based on Esperanto. In 1950, Alan Turing published his famous article ""Computing Machinery and Intelligence"" which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably ��� on the basis of the conversational content alone ��� between the program and a real human. In 1957, Noam Chomsky���s Syntactic Structures revolutionized Linguistics with 'universal grammar', a rule based system of syntactic structures. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years long research had failed to fulfill the expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s, when the first statistical machine translation systems were developed. Some notably successful NLP systems developed in the 1960s were SHRDLU, a natural language system working in restricted ""blocks worlds"" with restricted vocabularies. In 1969 Roger Schank introduced the conceptual dependency theory for natural language understanding. This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input. Instead of phrase structure rules ATNs used an equivalent set of finite state automata that were called recursively. ATNs and their more general format called ""generalized ATNs"" continued to be used for a number of years. During the 1970s many programmers began to write 'conceptual ontologies', which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert 1981). During this time, many chatterbots were written including PARRY, Racter, and Jabberwacky. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing.",wikipedia
+History of natural language processing,https://en.wikipedia.org/wiki/History_of_natural_language_processing#Research_and_development,"Research_and_development This was due both to the steady increase in computational power resulting from Moore's Law and the gradual lessening of the dominance of Chomskyan theories of linguistics (e. g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. Increasingly, however, research has focused on statistical models, which make soft, probabilistic decisions based on attaching real-valued weights to the features making up the input data. The cache language models upon which many speech recognition systems now rely are examples of such statistical models. Such models are generally more robust when given unfamiliar input, especially input that contains errors (as is very common for real-world data), and produce more reliable results when integrated into a larger system comprising multiple subtasks. Many of the notable early successes occurred in the field of machine translation, due especially to work at IBM Research, where successively more complicated statistical models were developed. These systems were able to take advantage of existing multilingual textual corpora that had been produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government. However, most other systems depended on corpora specifically developed for the tasks implemented by these systems, which was (and often continues to be) a major limitation in the success of these systems. As a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. Recent research has increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms are able to learn from data that has not been hand-annotated with the desired answers, or using a combination of annotated and non-annotated data. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the World Wide Web), which can often make up for the inferior results.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing),"In-context learning (natural language processing) Prompt engineering is the process of structuring text that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform. A prompt for a text-to-text model can be a query such as ""what is Fermat's little theorem?"", a command such as ""write a poem about leaves falling"", a short statement of feedback (for example, ""too verbose"", ""too formal"", ""rephrase again"", ""omit this word"") or a longer statement including context, instructions, and input data. Prompt engineering may involve phrasing a query, specifying a style, providing relevant context or assigning a role to the AI such as ""Act as a native French speaker"". A prompt may include a few examples for a model to learn from, such as ""maison -> house, chat -> cat, chien ->"", an approach called few-shot learning. When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as ""a high-quality photo of an astronaut riding a horse"" or ""Lo-fi slow BPM electro chill with organic samples"". Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style, layout, lighting, and aesthetic.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#In-context_learning,"In-context_learning Prompt engineering is enabled by in-context learning, defined as a model's ability to temporarily learn from prompts. The ability for in-context learning is an emergent ability of large language models. In-context learning itself is an emergent property of model scale, meaning breaks in downstream scaling laws occur such that its efficacy increases at a different rate in larger models than in smaller models. In contrast to training and fine tuning for each specific task, which are not temporary, what has been learnt during in-context learning is of a temporary nature. It does not carry the temporary contexts or biases, except the ones already present in the (pre)training dataset, from one conversation to the other. This result of ""mesa-optimization"" within transformer layers, is a form of meta-learning or ""learning to learn"".",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Text-to-text,"Text-to-text Chain-of-thought (CoT) prompting is a technique that allows large language models (LLMs) to solve a problem as a series of intermediate steps before giving a final answer. Chain-of-thought prompting improves reasoning ability by inducing the model to answer a multi-step problem with steps of reasoning that mimic a train of thought. It allows large language models to overcome difficulties with some reasoning tasks that require logical thinking and multiple steps to solve, such as arithmetic or commonsense reasoning questions. For example, given the question ""Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have? "", a CoT prompt might induce the LLM to answer ""A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9. "" As originally proposed, each CoT prompt included a few Q&A examples. This made it a few-shot prompting technique. However, simply appending the words ""Let's think step-by-step"", has also proven effective, which makes CoT a zero-shot prompting technique. This allows for better scaling as a user no longer needs to formulate many specific CoT Q&A examples. When applied to PaLM, a 540B parameter language model, CoT prompting significantly aided the model, allowing it to perform comparably with task-specific fine-tuned models on several tasks, even setting a new state of the art at the time on the GSM8K mathematical reasoning benchmark. It is possible to fine-tune models on CoT reasoning datasets to enhance this capability further and stimulate better interpretability. Example: Chain-of-thought prompting is just one of many prompt-engineering techniques. Various other techniques have been proposed. Generated knowledge prompting first prompts the model to generate relevant facts for completing the prompt, then proceed to complete the prompt. The completion quality is usually higher, as the model can be conditioned on relevant facts. Example: Least-to-most prompting prompts a model to first list the sub-problems to a problem, then solve them in sequence, such that later sub-problems can be solved with the help of answers to previous sub-problems. Example: Self-consistency decoding performs several chain-of-thought rollouts, then selects the most commonly reached conclusion out of all the rollouts. If the rollouts disagree by a lot, a human can be queried for the correct chain of thought. Complexity-based prompting performs several CoT rollouts, then select the rollouts with the longest chains of thought, then select the most commonly reached conclusion out of those. Self-refine prompts the LLM to solve the problem, then prompts the LLM to critique its solution, then prompts the LLM to solve the problem again in view of the problem, solution, and critique. This process is repeated until stopped, either by running out of tokens, time, or by the LLM outputting a ""stop"" token.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Text-to-text,"Text-to-text Example critique: Example refinement: Tree-of-thought prompting generalizes chain-of-thought by prompting the model to generate one or more ""possible next steps"", and then running the model on each of the possible next steps by breadth-first, beam, or some other method of tree search. Maieutic prompting is similar to tree-of-thought. The model is prompted to answer a question with an explanation. The model is then prompted to explain parts of the explanation, and so on. Inconsistent explanation trees are pruned or discarded. This improves performance on complex commonsense reasoning. Example: Directional-stimulus prompting includes a hint or cue, such as desired keywords, to guide a language model toward the desired output. Example: By default, the output of language models may not contain estimates of uncertainty. The model may output text that appears confident, though the underlying token predictions have low likelihood scores. Large language models like GPT-4 can have accurately calibrated likelihood scores in their token predictions, and so the model output uncertainty can be directly estimated by reading out the token prediction likelihood scores. But if one cannot access such scores (such as when one is accessing the model through a restrictive API), uncertainty can still be estimated and incorporated into the model output. One simple method is to prompt the model to use words to estimate uncertainty. Another is to prompt the model to refuse to answer in a standardized way if the input does not satisfy conditions. [citation needed] Prompts often contain a few examples (thus ""few-shot""). Examples can be automatically retrieved from a database with document retrieval, sometimes using a vector database. Given a query, a document retriever is called to retrieve the most relevant (usually measured by first encoding the query and the documents into vectors, then finding the documents with vectors closest in Euclidean norm to the query vector). The LLM then generates an output based on both the query and the retrieved documents. Large language models (LLM) themselves can be used to compose prompts for large language models. The automatic prompt engineer algorithm uses one LLM to beam search over prompts for another LLM: There are two LLMs. One is the target LLM, and another is the prompting LLM. Prompting LLM is presented with example input-output pairs, and asked to generate instructions that could have caused a model following the instructions to generate the outputs, given the inputs. Each of the generated instructions is used to prompt the target LLM, followed by each of the inputs. The log-probabilities of the outputs are computed and added. This is the score of the instruction. The highest-scored instructions are given to the prompting LLM for further variations. Repeat until some stopping criteria is reached, then output the highest-scored instructions. CoT examples can be generated by LLM themselves. In ""auto-CoT"", a library of questions are converted to vectors by a model such as BERT. The question vectors are clustered. Questions nearest to the centroids of each cluster are selected. An LLM does zero-shot CoT on each question.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Text-to-text,"Text-to-text The resulting CoT examples are added to the dataset. When prompted with a new question, CoT examples to the nearest questions can be retrieved and added to the prompt.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Text-to-image,"Text-to-image Top: no negative prompt Centre: ""green trees"" Bottom: ""round stones, round rocks""In 2022, text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney were released to the public. These models take text prompts as input and use them to generate AI art images. Text-to-image models typically do not understand grammar and sentence structure in the same way as large language models, and require a different set of prompting techniques. A text-to-image prompt commonly includes a description of the subject of the art (such as bright orange poppies), the desired medium (such as digital painting or photography), style (such as hyperrealistic or pop-art), lighting (such as rim lighting or crepuscular rays), color and texture. The Midjourney documentation encourages short, descriptive prompts: instead of ""Show me a picture of lots of blooming California poppies, make them bright, vibrant orange, and draw them in an illustrated style with colored pencils"", an effective prompt might be ""Bright orange California poppies drawn with colored pencils"". Word order affects the output of a text-to-image prompt. Words closer to the start of a prompt may be emphasized more heavily. Some text-to-image models are capable of imitating the style of particular artists by name. For example, the phrase in the style of Greg Rutkowski has been used in Stable Diffusion and Midjourney prompts to generate images in the distinctive style of Polish digital artist Greg Rutkowski. Text-to-image models do not natively understand negation. The prompt ""a party with no cake"" is likely to produce an image including a cake. As an alternative, negative prompts allow a user to indicate, in a separate prompt, which terms should not appear in the resulting image. A common approach is to include generic undesired terms such as ugly, boring, bad anatomy in the negative prompt for an image.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Non-text_prompts,"Non-text_prompts Some approaches augment or replace natural language text prompts with non-text input. For text-to-image models, ""Textual inversion"" performs an optimization process to create a new word embedding based on a set of example images. This embedding vector acts as a ""pseudo-word"" which can be included in a prompt to express the content or style of the examples. In 2023, Meta's AI research released Segment Anything, a computer vision model that can perform image segmentation by prompting. As an alternative to text prompts, Segment Anything can accept bounding boxes, segmentation masks, and foreground/background points. In ""prefix-tuning"" or ""prompt tuning"", floating-point-valued vectors are searched directly by gradient descent, to maximize the log-probability on outputs. Formally, let E = { e 1 , ��� , e k } {\displaystyle \mathbf {E} =\{\mathbf {e_{1}} ,\dots ,\mathbf {e_{k}} \}} be a set of soft prompt tokens (tunable embeddings), while X = { x 1 , ��� , x m } {\displaystyle \mathbf {X} =\{\mathbf {x_{1}} ,\dots ,\mathbf {x_{m}} \}} and Y = { y 1 , ��� , y n } {\displaystyle \mathbf {Y} =\{\mathbf {y_{1}} ,\dots ,\mathbf {y_{n}} \}} be the token embeddings of the input and output respectively. During training, the tunable embeddings, input, and output tokens are concatenated into a single sequence concat ( E ; X ; Y ) {\displaystyle {\text{concat}}(\mathbf {E} ;\mathbf {X} ;\mathbf {Y} )} , and fed to the large language models (LLM). The losses are computed over the Y {\displaystyle \mathbf {Y} } tokens; the gradients are backpropagated to prompt-specific parameters: in prefix-tuning, they are parameters associated with the prompt tokens at each layer; in prompt tuning, they are merely the soft tokens added to the vocabulary. With more math details, let an LLM be written as L L M ( X ) = F ( E ( X ) ) {\displaystyle LLM(X)=F(E(X))} , where X {\displaystyle X} is a sequence of linguistic tokens, E {\displaystyle E} is the token-to-vector function, and F {\displaystyle F} is the rest of the model. In prefix-tuning, one provide a set of input-output pairs { ( X i , Y i ) } i {\displaystyle \{(X^{i},Y^{i})\}_{i}} , and then use gradient descent to search for arg ��� max Z ~ ��� i log ��� P r [ Y i | Z ~ ��� E ( X i ) ] {\displaystyle \arg \max _{\tilde {Z}}\sum _{i}\log Pr[Y^{i}|{\tilde {Z}}\ast E(X^{i})]} . In words, log ��� P r [ Y i | Z ~ ��� E ( X i ) ] {\displaystyle \log Pr[Y^{i}|{\tilde {Z}}\ast E(X^{i})]} is the log-likelihood of outputting Y i {\displaystyle Y^{i}} , if the model first encodes the input X i {\displaystyle X^{i}} into the vector E ( X i ) {\displaystyle E(X^{i})} , then prepend the vector with the ""prefix vector"" Z ~ {\displaystyle {\tilde {Z}}} , then apply F {\displaystyle F} .",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Non-text_prompts,"Non-text_prompts An earlier result uses the same idea of gradient descent search, but is designed for masked language models like BERT, and searches only over token sequences, rather than numerical vectors. Formally, it searches for arg ��� max X ~ ��� i log ��� P r [ Y i | X ~ ��� X i ] {\displaystyle \arg \max _{\tilde {X}}\sum _{i}\log Pr[Y^{i}|{\tilde {X}}\ast X^{i}]} where X ~ {\displaystyle {\tilde {X}}} is ranges over token sequences of a specified length.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Prompt_injection,"Prompt_injection Prompt injection is a family of related computer security exploits carried out by getting a machine learning model (such as an LLM) which was trained to follow human-given instructions to follow instructions provided by a malicious user. This stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompts) provided by the ML model's operator. A language model can perform translation with the following prompt: followed by the text to be translated. A prompt injection occurs when that text contains instructions that change the behavior of the model: to which GPT-3 will respond: ""Haha pwned! ! "". The attack works because language model inputs concatenate instructions and data in the same context, so the underlying engine cannot distinguish between them. Common types of prompt injection attacks are: jailbreaking, which may include asking the model to roleplay a character, to answer with arguments, or to pretend to be superior to moderation instructions prompt leaking, in which users persuade the model to divulge a pre-prompt which is normally hidden from users token smuggling, is another type of jailbreaking attack, in which the nefarious prompt is wrapped in a code writing task. Prompt injection can be viewed as a code injection attack using adversarial prompt engineering. In 2022, the NCC Group characterized prompt injection as a new class of vulnerability of AI/ML systems. In early 2023, prompt injection was seen ""in the wild"" in minor exploits against ChatGPT, Bard, and similar chatbots, for example to reveal the hidden initial prompts of the systems, or to trick the chatbot into participating in conversations that violate the chatbot's content policy. One of these prompts was known as ""Do Anything Now"" (DAN) by its practitioners. For LLM that can query online resources, such as websites, they can be targeted for prompt injection by placing the prompt on a website, then prompt the LLM to visit the website. Another security issue is in LLM generated code, which may import packages not previously existing. An attacker can first prompt the LLM with commonly used programming prompts, collect all packages imported by the generated programs, then find the ones not existing on the official registry. Then the attacker can create such packages with malicious payload and upload them to the official registry. In October 2019, Junade Ali and Malgorzata Pikies of Cloudflare submitted a paper which showed that when a front-line good/bad classifier (using a neural network) was placed before a Natural Language Processing system, it would disproportionately reduce the number of false positive classifications at the cost of a reduction in some true positives. In 2023, this technique was adopted an open-source project Rebuff. ai to protect prompt injection attacks, with Arthur. ai announcing a commercial product - although such approaches do not mitigate the problem completely. By August 2023, leading Large Language Model developers were still unaware of how to stop such attacks.",wikipedia
+In-context learning (natural language processing),https://en.wikipedia.org/wiki/In-context_learning_(natural_language_processing)#Prompt_injection,"Prompt_injection In September 2023, Junade Ali shared that he and Frances Liu had successfully been able to mitigate prompt injection attacks (including on attack vectors the models had not been exposed to before) through giving Large Language Models the ability to engage in metacognition (similar to having an inner monologue) and that they held a provisional United States patent for the technology - however, they decided to not enforce their intellectual property rights and not pursue this as a business venture as market conditions were not yet right (citing reasons including high GPU costs and a currently limited number of safety-critical use-cases for LLMs). Ali also noted that their market research had found that Machine Learning engineers were using alternative approaches like prompt engineering solutions and data isolation to work around this issue.",wikipedia
+Quantum natural language processing,https://en.wikipedia.org/wiki/Quantum_natural_language_processing#Theory,"Theory The first quantum algorithm for natural language processing used the DisCoCat framework and Grover's algorithm to show a quadratic quantum speedup for a text classification task. It was later shown that quantum language processing is BQP-Complete, i.e. quantum language models are more expressive than their classical counterpart, unless quantum mechanics can be efficiently simulated by classical computers. These two theoretical results assume fault-tolerant quantum computation and a QRAM, i.e. an efficient way to load classical data on a quantum computer. Thus, they are not applicable to the noisy intermediate-scale quantum (NISQ) computers available today.",wikipedia
+Realization (linguistics),https://en.wikipedia.org/wiki/Realization_(linguistics)#Systems,"Systems A number of realisers have been developed over the past 20 years. These systems differ in terms of complexity and sophistication of their processing, robustness in dealing with unusual cases, and whether they are accessed programmatically via an API or whether they take a textual representation of a syntactic structure as their input. There are also major differences in pragmatic factors such as documentation, support, licensing terms, speed and memory usage, etc. It is not possible to describe all realisers here, but a few of the emerging areas are: Simplenlg [3]: a document realizing engine with an api which intended to be simple to learn and use, focused on limiting scope to only finding the surface area of a document. KPML [4]: this is the oldest realiser, which has been under development under different guises since the 1980s. It comes with grammars for ten different languages. FUF/SURGE [5]: a realiser which was widely used in the 1990s, and is still used in some projects today OpenCCG [6]: an open-source realiser which has a number of nice features, such as the ability to use statistical language models to make realisation decisions.",wikipedia
+Rhetorical structure theory,https://en.wikipedia.org/wiki/Rhetorical_structure_theory,"Rhetorical structure theory Rhetorical structure theory (RST) is a theory of text organization that describes relations that hold between parts of text. It was originally developed by William Mann, Sandra Thompson, Christian M.I.M. Matthiessen and others at the University of Southern California's Information Sciences Institute (ISI) and defined in a 1988 paper. The theory was developed as part of studies of computer-based text generation. Natural language researchers later began using RST in text summarization and other applications. It explains coherence by postulating a hierarchical, connected structure of texts. In 2000, Daniel Marcu, also of ISI, demonstrated that practical discourse parsing and text summarization also could be achieved using RST.",wikipedia
+Rhetorical structure theory,https://en.wikipedia.org/wiki/Rhetorical_structure_theory#Rhetorical_relations,"Rhetorical_relations Rhetorical relations or coherence relations or discourse relations are paratactic (coordinate) or hypotactic (subordinate) relations that hold across two or more text spans. It is widely accepted that notion of coherence is through text relations like this. RST using rhetorical relations provide a systematic way for an analyst to analyse the text. An analysis is usually built by reading the text and constructing a tree using the relations. The following example is a title and summary, appearing at the top of an article in Scientific American magazine (Ramachandran and Anstis, 1986). The original text, broken into numbered units, is: [Title:] The Perception of Apparent Motion [Abstract:] When the motion of an intermittently seen object is ambiguous the visual system resolves confusion by applying some tricks that reflect a built-in knowledge of properties of the physical worldIn the figure, numbers 1,2,3,4 show the corresponding units as explained above. The fourth unit and the third unit form a relation ""Means"". The third unit is the essential part of this relation, so it is called the nucleus of the relation and fourth unit is called the satellite of the relation. Similarly second unit to third and fourth unit is forming relation ""Condition"". All units are also spans and spans may be composed of more than one unit.",wikipedia
+Rhetorical structure theory,https://en.wikipedia.org/wiki/Rhetorical_structure_theory#Why_RST.3F,"Why_RST.3F From linguistic point of view, RST proposes a different view of text organization than most linguistic theories. RST points to a tight relation between relations and coherence in text From a computational point of view, it provides a characterization of text relations that has been implemented in different systems and for applications as text generation and summarization.",wikipedia
+Virtual assistant (artificial intelligence),https://en.wikipedia.org/wiki/Virtual_assistant_(artificial_intelligence),"Virtual assistant (artificial intelligence) A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice - as some virtual assistants are able to interpret human speech and respond via synthesized voices. In many cases users can ask their virtual assistants questions, control home automation devices and media playback, and manage other basic tasks such as email, to-do lists, and calendars - all with verbal commands. In recent years, prominent virtual assistants for direct consumer use have included Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Google Assistant. Also, companies in various industries often incorporate some kind of virtual assistant technology into their customer service or support. Recently, the emergence of recent artificial intelligence based chatbots, such as ChatGPT, has brought increased capability and interest to the field of virtual assistant products and services.",wikipedia
+Virtual assistant (artificial intelligence),https://en.wikipedia.org/wiki/Virtual_assistant_(artificial_intelligence)#History,"History Radio Rex was the first voice activated toy, patented in 1916 and released in 1922. It was a wooden toy in the shape of a dog that would come out of its house when its name is called. In 1952, Bell Labs presented ""Audrey"", the Automatic Digit Recognition machine. It occupied a six- foot-high relay rack, consumed substantial power, had streams of cables and exhibited the myriad maintenance problems associated with complex vacuum-tube circuitry. It could recognize the fundamental units of speech, phonemes. It was limited to accurate recognition of digits spoken by designated talkers. It could therefore be used for voice dialing, but in most cases push-button dialing was cheaper and faster, rather than speaking the consecutive digits. Another early tool which was enabled to perform digital speech recognition was the IBM Shoebox voice-activated calculator, presented to the general public during the 1962 Seattle World's Fair after its initial market launch in 1961. This early computer, developed almost 20 years before the introduction of the first IBM Personal Computer in 1981, was able to recognize 16 spoken words and the digits 0 to 9. The first natural language processing computer program or the chatbot ELIZA was developed by MIT professor Joseph Weizenbaum in the 1960s. It was created to ""demonstrate that the communication between man and machine was superficial"". ELIZA used pattern matching and substitution methodology into scripted responses to simulate conversation, which gave an illusion of understanding on the part of the program. Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: ""I had not realized . . . that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. This gave name to the ELIZA effect, the tendency to unconsciously assume computer behaviors are analogous to human behaviors; that is, anthropomorphisation, a phenomenon present in human interactions with virtual assistants. The next milestone in the development of voice recognition technology was achieved in the 1970s at the Carnegie Mellon University in Pittsburgh, Pennsylvania with substantial support of the United States Department of Defense and its DARPA agency, funded five years of a Speech Understanding Research program, aiming to reach a minimum vocabulary of 1,000 words. Companies and academia including IBM, Carnegie Mellon University (CMU) and Stanford Research Institute took part in the program. The result was ""Harpy"", it mastered about 1000 words, the vocabulary of a three-year-old and it could understand sentences. It could process speech that followed pre-programmed vocabulary, pronunciation, and grammar structures to determine which sequences of words made sense together, and thus reducing speech recognition errors. In 1986 Tangora was an upgrade of the Shoebox, it was a voice recognizing typewriter. Named after the world's fastest typist at the time, it had a vocabulary of 20,000 words and used prediction to decide the most likely result based on what was said in the past.",wikipedia
+Virtual assistant (artificial intelligence),https://en.wikipedia.org/wiki/Virtual_assistant_(artificial_intelligence)#History,"History IBM's approach was based on a hidden Markov model, which adds statistics to digital signal processing techniques. The method makes it possible to predict the most likely phonemes to follow a given phoneme. Still each speaker had to individually train the typewriter to recognize his or her voice, and pause between each word. In the 1990s, digital speech recognition technology became a feature of the personal computer with IBM, Philips and Lernout & Hauspie fighting for customers. Much later the market launch of the first smartphone IBM Simon in 1994 laid the foundation for smart virtual assistants as we know them today. [citation needed] In 1997, Dragon's Naturally Speaking software could recognize and transcribe natural human speech without pauses between each word into a document at a rate of 100 words per minute. A version of Naturally Speaking is still available for download and it is still used today, for instance, by many doctors in the US and the UK to document their medical records. [citation needed] In 2001 Colloquis publicly launched SmarterChild, on platforms like AIM and MSN Messenger. While entirely text-based SmarterChild was able to play games, check the weather, look up facts, and converse with users to an extent. The first modern digital virtual assistant installed on a smartphone was Siri, which was introduced as a feature of the iPhone 4S on 4 October 2011. Apple Inc. developed Siri following the 2010 acquisition of Siri Inc. , a spin-off of SRI International, which is a research institute financed by DARPA and the United States Department of Defense. Its aim was to aid in tasks such as sending a text message, making phone calls, checking the weather or setting up an alarm. Over time, it has developed to provide restaurant recommendations, search the internet, and provide driving directions. [citation needed] In November 2014, Amazon announced Alexa alongside the Echo. In April 2017 Amazon released a service for building conversational interfaces for any type of virtual assistant or interface. In the 2020s, artificial intelligence (AI) systems like ChatGPT have gained popularity for their ability to generate human-like responses to text-based conversations. In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was then the ""largest language model ever published at 17 billion parameters. "" On November 30, 2022, ChatGPT was launched as a prototype and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. The advent of ChatGPT and its introduction to the wider public increased interest and competition in the space. In February 2023, Google began introducing an experimental service called ""Bard"" which is based on its LaMDA program to generate text responses to questions asked based on information gathered from the web. While ChatGPT and other generalized chatbots based on the latest generative AI are capable of performing various tasks associated with virtual assistants, there are also more specialized forms of such technology that are designed to target more specific situations or needs.",wikipedia
+Statistical natural language processing,https://en.wikipedia.org/wiki/Statistical_natural_language_processing#Approaches:_Symbolic.2C_statistical.2C_neural_networks,"Approaches:_Symbolic.2C_statistical.2C_neural_networks Symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular: such as by writing grammars or devising heuristic rules for stemming. Machine learning approaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach: both statistical and neural networks methods can focus more on the most common cases extracted from a corpus of texts, whereas the rule-based approach needs to provide rules for both rare cases and common ones equally.language models, produced by either statistical or neural networks methods, are more robust to both unfamiliar (e.g. containing words or structures that have not been seen before) and erroneous input (e.g. with misspelled words or words accidentally omitted) in comparison to the rule-based systems, which are also more costly to produce.the larger such a (probabilistic) language model is, the more accurate it becomes, in contrast to rule-based systems that can gain accuracy only by increasing the amount and complexity of the rules leading to intractability problems.Although rule-based systems for manipulating symbols were still in use in 2020, they have become mostly obsolete with the advance of LLMs in 2023. Before that they were commonly used: when the amount of training data is insufficient to successfully apply machine learning methods, e.g., for the machine translation of low-resource languages such as provided by the Apertium system, for preprocessing in NLP pipelines, e.g., tokenization, or for postprocessing and transforming the output of NLP pipelines, e.g., for knowledge extraction from syntactic parses.In the late 1980s and mid-1990s, the statistical approach ended a period of AI winter, which was caused by the inefficiencies of the rule-based approaches. The earliest decision trees, producing systems of hard if���then rules, were still very similar to the old rule-based approaches. Only the introduction of hidden Markov models, applied to part-of-speech tagging, announced the end of the old rule-based approach. A major drawback of statistical methods is that they require elaborate feature engineering. Since 2015, the statistical approach was replaced by neural networks approach, using word embeddings to capture semantic properties of words. Intermediate tasks (e.g., part-of-speech tagging and dependency parsing) have not been needed anymore. Neural machine translation, based on then-newly-invented sequence-to-sequence transformations, made obsolete the intermediate steps, such as word alignment, previously necessary for statistical machine translation.",wikipedia
+Statistical natural language processing,https://en.wikipedia.org/wiki/Statistical_natural_language_processing#General_tendencies_and_.28possible.29_future_directions,"General_tendencies_and_.28possible.29_future_directions Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed: Interest on increasingly abstract, ""cognitive"" aspects of natural language (1999���2001: shallow parsing, 2002���03: named entity recognition, 2006���09/2017���18: dependency syntax, 2004���05/2008���09 semantic role labelling, 2011���12 coreference, 2015���16: discourse parsing, 2019: semantic parsing). Increasing interest in multilinguality, and, potentially, multimodality (English since 1999; Spanish, Dutch since 2002; German since 2003; Bulgarian, Danish, Japanese, Portuguese, Slovenian, Swedish, Turkish since 2006; Basque, Catalan, Chinese, Greek, Hungarian, Italian, Turkish since 2007; Czech since 2009; Arabic since 2012; 2017: 40+ languages; 2018: 60+/100+ languages) Elimination of symbolic representations (rule-based over supervised towards weakly supervised methods, representation learning and end-to-end systems)Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above). Cognition refers to ""the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. "" Cognitive science is the interdisciplinary, scientific study of the mind and its processes. Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics. Especially during the age of symbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies. As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics, with two defining aspects: Apply the theory of conceptual metaphor, explained by Lakoff as ""the understanding of one idea, in terms of another"" which provides an idea of the intent of the author. For example, consider the English word big. When used in a comparison (""That is a big tree""), the author's intent is to imply that the tree is physically large relative to other trees or the authors experience. When used metaphorically (""Tomorrow is a big day""), the author's intent to imply importance. The intent behind other usages, like in ""She is a big person"", will remain somewhat ambiguous to a person and a cognitive NLP algorithm alike without additional information. Assign relative measures of meaning to a word, phrase, sentence or piece of text based on the information presented before and after the piece of text being analyzed, e. g. , by means of a probabilistic context-free grammar (PCFG). The mathematical equation for such algorithms is presented in US patent 9269353 :Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn during the 1990s. Nevertheless, approaches to develop cognitive models towards technically operationalizable frameworks have been pursued in the context of various frameworks, e. g. , of cognitive grammar, functional grammar, construction grammar, computational psycholinguistics and cognitive neuroscience (e. g.",wikipedia
+Statistical natural language processing,https://en.wikipedia.org/wiki/Statistical_natural_language_processing#General_tendencies_and_.28possible.29_future_directions,"General_tendencies_and_.28possible.29_future_directions , ACT-R), however, with limited uptake in mainstream NLP (as measured by presence on major conferences of the ACL). More recently, ideas of cognitive NLP have been revived as an approach to achieve explainability, e. g. , under the notion of ""cognitive AI"". Likewise, ideas of cognitive NLP are inherent to neural models multimodal NLP (although rarely made explicit) and developments in Artificial intelligence, specifically tools and technologies using Large language model approaches.",wikipedia
+Statistical natural language processing,https://en.wikipedia.org/wiki/Statistical_natural_language_processing#See_also,See_also 1 the Road Artificial intelligence detection software Automated essay scoring Biomedical text mining Compound term processing Computational linguistics Computer-assisted reviewing Controlled natural language Deep learning Deep linguistic processing Distributional semantics Foreign language reading aid Foreign language writing aid Information extraction Information retrieval Language and Communication Technologies Language model Language technology Latent semantic indexing Multi-agent system Native-language identification Natural-language programming Natural-language understanding Natural-language search Outline of natural language processing Query expansion Query understanding Reification (linguistics) Speech processing Spoken dialogue systems Text-proofing Text simplification Transformer (machine learning model) Truecasing Question answering Word2vec,wikipedia
+Imagen (Google Brain),https://en.wikipedia.org/wiki/Imagen_(Google_Brain)#Projects,"Projects In October 2016, Google Brain designed an experiment to determine that neural networks are capable of learning secure symmetric encryption. In this experiment, three neural networks were created: Alice, Bob and Eve. Adhering to the idea of a generative adversarial network (GAN), the goal of the experiment was for Alice to send an encrypted message to Bob that Bob could decrypt, but the adversary, Eve, could not. Alice and Bob maintained an advantage over Eve, in that they shared a key used for encryption and decryption. In doing so, Google Brain demonstrated the capability of neural networks to learn secure encryption. In February 2017, Google Brain determined a probabilistic method for converting pictures with 8x8 resolution to a resolution of 32x32. The method built upon an already existing probabilistic model called pixelCNN to generate pixel translations. The proposed software utilizes two neural networks to make approximations for the pixel makeup of translated images. The first network, known as the ""conditioning network,"" downsizes high-resolution images to 8x8 and attempts to create mappings from the original 8x8 image to these higher-resolution ones. The other network, known as the ""prior network,"" uses the mappings from the previous network to add more detail to the original image. The resulting translated image is not the same image in higher resolution, but rather a 32x32 resolution estimation based on other existing high-resolution images. Google Brain's results indicate the possibility for neural networks to enhance images. The Google Brain team contributed to the Google Translate project by employing a new deep learning system that combines artificial neural networks with vast databases of multilingual texts. In September 2016, Google Neural Machine Translation (GNMT) was launched, an end-to-end learning framework, able to learn from a large number of examples. Previously, Google Translate's Phrase-Based Machine Translation (PBMT) approach would statistically analyze word by word and try to match corresponding words in other languages without considering the surrounding phrases in the sentence. But rather than choosing a replacement for each individual word in the desired language, GNMT evaluates word segments in the context of the rest of the sentence to choose more accurate replacements. Compared to older PBMT models, the GNMT model scored a 24% improvement in similarity to human translation, with a 60% reduction in errors. The GNMT has also shown significant improvement for notoriously difficult translations, like Chinese to English. While the introduction of the GNMT has increased the quality of Google Translate's translations for the pilot languages, it was very difficult to create such improvements for all of its 103 languages. Addressing this problem, the Google Brain Team was able to develop a Multilingual GNMT system, which extended the previous one by enabling translations between multiple languages. Furthermore, it allows for Zero-Shot Translations, which are translations between two languages that the system has never explicitly seen before. Google announced that Google Translate can now also translate without transcribing, using neural networks.",wikipedia
+Imagen (Google Brain),https://en.wikipedia.org/wiki/Imagen_(Google_Brain)#Projects,"Projects This means that it is possible to translate speech in one language directly into text in another language, without first transcribing it to text. According to the Researchers at Google Brain, this intermediate step can be avoided using neural networks. In order for the system to learn this, they exposed it to many hours of Spanish audio together with the corresponding English text. The different layers of neural networks, replicating the human brain, were able to link the corresponding parts and subsequently manipulate the audio waveform until it was transformed to English text. Another drawback of the GNMT model is that it causes the time of translation to increase exponentially with the number of words in the sentence. This caused the Google Brain Team to add 2000 more processors to ensure the new translation process would still be fast and reliable. Aiming to improve traditional robotics control algorithms where new skills of a robot need to be hand-programmed, robotics researchers at Google Brain are developing machine learning techniques to allow robots to learn new skills on their own. They also attempt to develop ways for information sharing between robots so that robots can learn from each other during their learning process, also known as cloud robotics. As a result, Google has launched the Google Cloud Robotics Platform for developers in 2019, an effort to combine robotics, AI, and the cloud to enable efficient robotic automation through cloud-connected collaborative robots. Robotics research at Google Brain has focused mostly on improving and applying deep learning algorithms to enable robots to complete tasks by learning from experience, simulation, human demonstrations, and/or visual representations. For example, Google Brain researchers showed that robots can learn to pick and throw rigid objects into selected boxes by experimenting in an environment without being pre-programmed to do so. In another research, researchers trained robots to learn behaviors such as pouring liquid from a cup; robots learned from videos of human demonstrations recorded from multiple viewpoints. Google Brain researchers have collaborated with other companies and academic institutions on robotics research. In 2016, the Google Brain Team collaborated with researchers at X in a research on learning hand-eye coordination for robotic grasping. Their method allowed real-time robot control for grasping novel objects with self-correction. In 2020, researchers from Google Brain, Intel AI Lab, and UC Berkeley created an AI model for robots to learn surgery-related tasks such as suturing from training with surgery videos. In 2020, Google Brain Team and University of Lille presented a model for automatic speaker recognition which they called Interactive Speaker Recognition. The ISR module recognizes a speaker from a given list of speakers only by requesting a few user specific words. The model can be altered to choose speech segments in the context of Text-To-Speech Training. It can also prevent malicious voice generators from accessing the data. TensorFlow is an open source software library powered by Google Brain that allows anyone to utilize machine learning by providing the tools to train one's own neural network.",wikipedia
+Imagen (Google Brain),https://en.wikipedia.org/wiki/Imagen_(Google_Brain)#Projects,"Projects The tool has been used by farmers to reduce the amount of manual labor required to sort their yield, by training it with a data set of human-sorted images. Magenta is a project that uses Google Brain to create new information in the form of art and music rather than classify and sort existing data. TensorFlow was updated with a suite of tools for users to guide the neural network to create images and music. However, the team from Valdosta State University found that the AI struggles to perfectly replicate human intention in artistry, similar to the issues faced in translation. The image sorting capabilities of Google Brain have been used to help detect certain medical conditions by seeking out patterns that human doctors may not notice to provide an earlier diagnosis. During screening for breast cancer, this method was found to have one quarter the false positive rate of human pathologists, who require more time to look over each photo and cannot spend their entire focus on this one task. Due to the neural network's very specific training for a single task, it cannot identify other afflictions present in a photo that a human could easily spot. Google Brain announced in 2022 that it created two different types of text-to-image models called Imagen and Parti that compete with OpenAI's DALL-E. Later in 2022, the project was extended to text-to-video. The Google Brain projects' technology is currently used in various other Google products such as the Android Operating System's speech recognition system, photo search for Google Photos, smart reply in Gmail, and video recommendations in YouTube.",wikipedia
+Imagen (Google Brain),https://en.wikipedia.org/wiki/Imagen_(Google_Brain)#Reception,"Reception Google Brain has received coverage in Wired, NPR, and Big Think. These articles have contained interviews with key team members Ray Kurzweil and Andrew Ng, and focus on explanations of the project's goals and applications. In December 2020, AI ethicist Timnit Gebru left Google. While the exact nature of her quitting or being fired is disputed, the cause of the departure was her refusal to retract a paper entitled ""On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"" This paper explored potential risks of the growth of AI such as Google Brain, including environmental impact, biases in training data, and the ability to deceive the public. The request to retract the paper was made by Megan Kacholia, vice president of Google Brain. As of April 2021, nearly 7000 current or former Google employees and industry supporters have signed an open letter accusing Google of ""research censorship"" and condemning Gebru's treatment at the company. In February 2021, Google fired one of the leaders of the company's AI ethics team, Margaret Mitchell. The company's statement alleged that Mitchell had broken company policy by using automated tools to find support for Gebru. In the same month, engineers outside the ethics team began to quit, citing the termination of Gebru as their reason for leaving. In April 2021, Google Brain co-founder Samy Bengio announced his resignation from the company. Despite being Gebru's manager, Bengio was not notified before her termination, and he posted online in support of both her and Mitchell. While Bengio's announcement focused on personal growth as his reason for leaving, anonymous sources indicated to Reuters that the turmoil within the AI ethics team played a role in his considerations. In March 2022, Google fired AI researcher Satrajit Chatterjee after he questioned the findings of a paper published in Nature, by Google's AI team members, Anna Goldie and Azalia Mirhoseini, about their findings on the ability of computers to design computer chip components.",wikipedia
+Artificial intelligence art,https://en.wikipedia.org/wiki/Artificial_intelligence_art#History,"History The concept of automated art dates back at least to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria were described as having designed machines capable of writing text, generating sounds, and playing music. The tradition of creative automatons has flourished throughout history, such as Maillardet's automaton, created in the early 1800s. The academic discipline of artificial intelligence was founded at a research workshop at Dartmouth College in 1956, and has experienced several waves of advancement and optimism in the decades since. Since its founding, researchers in the field have raised philosophical and ethical arguments about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity. Since the founding of AI in the 1950s, artists and researchers have used artificial intelligence to create artistic works. By the early 1970s, Harold Cohen was creating and exhibiting AI works created by AARON, the computer program Cohen created to generate paintings, including a 1972 exhibition at the Los Angeles County Museum of Art. In both 1991 and 1992, Karl Sims won the Golden Nica award at Prix Ars Electronica for his 3D AI animated videos using artificial evolution. In 2009, Eric Millikin won the Pulitzer Prize along with several other awards for his artificial intelligence art that was critical of government corruption in Detroit and resulted in the city's mayor being sent to jail. In 2014, Ian Goodfellow and colleagues at Universit�� de Montr��al developed the Generative adversarial network, a type of Deep neural network capable of learning to mimic the statistical distribution of input data such as images. In 2015, a team at Google released DeepDream, a program that uses algorithmic pareidolia to create a dream-like appearance reminiscent of a psychedelic experience. In 2018, an auction sale of artificial intelligence art was held at Christie's Auction House in New York where the AI artwork Edmond de Belamy (a pun on Goodfellow's name) sold for $432,500, which was almost 45 times higher than its estimate of $7,000���$10,000. The artwork was created by ""Obvious"", a Paris-based collective. In 2019, Stephanie Dinkins won the Creative Capital award for her creation of an evolving artificial intelligence based on the ""interests and culture(s) of people of color."" Also in 2019, Sougwen Chung won the Lumen Prize for her performances with a robotic arm that uses AI to attempt to draw in a manner similar to Chung. In 2021, using the Transformer models used in GPT-2 and GPT-3, OpenAI developed DALL-E, a text-to-image AI model capable of producing high-quality images based on natural language prompts. In 2022, DALL-E was followed by Midjourney, then by the open source Stable Diffusion, leading to a dramatic growth in the use of AI to generate visual art. In 2022, Refik Anadol created an artificial intelligence art installation at the Museum of Modern Art in New York, based on the museum's own collection.",wikipedia
+Artificial intelligence art,https://en.wikipedia.org/wiki/Artificial_intelligence_art#Tools_and_processes,"Tools_and_processes Many mechanisms for creating AI art have been developed, including procedural ""rule-based"" generation of images using mathematical patterns, algorithms which simulate brush strokes and other painted effects, and deep learning algorithms, such as generative adversarial networks (GANs) and transformers. One of the first significant AI art systems is AARON, developed by Harold Cohen beginning in the late 1960s at the University of California at San Diego. AARON is the most notable example of AI art in the era of GOFAI programming because of its use of a symbolic rule-based approach to generate technical images. Cohen developed AARON with the goal of being able to code the act of drawing. In its primitive form, AARON created simple black and white drawings. Cohen would later finish the drawings by painting them. Throughout the years, he also began to develop a way for AARON to also paint. Cohen designed AARON to paint using special brushes and dyes that were chosen by the program itself without mediation from Cohen. Generative adversarial networks (GANs) were designed in 2014. This system uses a ""generator"" to create new images and a ""discriminator"" to decide which created images are considered successful. DeepDream, released by Google in 2015, uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating deliberately over-processed images. After DeepDream's release, several companies released apps that transform photos into art-like images with the style of well-known sets of paintings. The website Artbreeder, launched in 2018, uses the models StyleGAN and BigGAN to allow users to generate and modify images such as faces, landscapes, and paintings. Several programs use text-to-image models to generate a variety of images based on various text prompts. They include EleutherAI's VQGAN+CLIP which was released in 2021, OpenAI's DALL-E which released a series of images in January 2021, Google Brain's Imagen and Parti which was announced in May 2022, Microsoft's NUWA-Infinity, and Stable Diffusion which was released in August 2022. Stability. ai has a Stable Diffusion web interface called DreamStudio. Stable Diffusion is source-available software, enabling further development such as plugins for Krita, Photoshop, Blender, and GIMP, as well as the Automatic1111 web-based open source user interface. Stable Diffusion's main pre-trained model is shared on the Hugging Face Hub. There are many other AI art generation programs including simple consumer-facing mobile apps and Jupyter notebooks that require powerful GPUs to run effectively. The exhibition ""Thinking Machines: Art and Design in the Computer Age, 1959���1989"" at MoMA provided an overview of AI applications for art, architecture, and design. Exhibitions showcasing the usage of AI to produce art include the 2016 Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the DeepDream algorithm and the 2017 exhibition ""Unhuman: Art in the Age of AI"", which took place in Los Angeles and Frankfurt. In spring 2018, the Association for Computing Machinery dedicated a magazine issue to the subject of computers and art.",wikipedia
+Artificial intelligence art,https://en.wikipedia.org/wiki/Artificial_intelligence_art#Tools_and_processes,"Tools_and_processes In June 2018, ""Duet for Human and Machine"", an art piece permitting viewers to interact with an artificial intelligence, premiered at the Beall Center for Art + Technology. The Austrian Ars Electronica and Museum of Applied Arts, Vienna opened exhibitions on AI in 2019. Ars Electronica's 2019 festival ""Out of the box"" explored art's role in a sustainable societal transformation. Examples of such augmentation may include e. g. enabling expansion of noncommercial niche genres (common examples are cyberpunk derivatives like solarpunk) by amateurs, novel entertainment, novel imaginative childhood play,[additional citation(s) needed] very fast prototyping, increasing art-making accessibility and artistic output per effort and/or expenses and/or time ��� e. g. via generating drafts, inspirations, draft-refinitions, and image-components (Inpainting). Prompts for some text-to-image models can also include images and keywords and configurable parameters, such as artistic style, which is often used via keyphrases like ""in the style of [name of an artist]"" in the prompt and/or selection of a broad aesthetic/art style. There are platforms for sharing, trading, searching, forking/refining and/or collaborating on prompts for generating specific imagery from image generators. Prompts are often shared along with images on image-sharing websites such as Reddit and AI art-dedicated websites. A prompt is not the complete input needed for the generation of an image: additional inputs that determine the generated image include the output resolution, random seed, and random sampling parameters. Synthetic media, which includes AI art, was described in 2022 as a major technology-driven trend that will affect business in the coming years. 'Synthography' is a proposed term for the practice of generating images that are similar to photographs using AI. Additional functionalities are under development and may improve various applications or enable new ones ��� such as ""Textual Inversion"" which refers to enabling the use of user-provided concepts (like an object or a style) learned from few images. With textual inversion, novel personalized art can be generated from the associated word(s) (the keywords that have been assigned to the learned, often abstract, concept) and model extensions/fine-tuning (see also: DreamBooth). Generated images are sometimes used as sketches or low-cost experimentations or illustration of proof-of-concept-stage ideas ��� additional functionalities or improvements may also relate to post-generation manual editing (polishing or artistic usage) of prompts-based art[additional citation(s) needed] (such as subsequent tweaking with an image editor).",wikipedia
+Artificial intelligence art,https://en.wikipedia.org/wiki/Artificial_intelligence_art#Criticism.2C_issues_and_controversy,"Criticism.2C_issues_and_controversy Ever since the beginnings of artificial intelligence art, it has sparked several debates, which in the 2020s has often concerned whether AI art can be defined as art and the impact it has on artists. In 1985, intellectual property law professor Pamela Samuelson considered the legal questions surrounding AI art authorship as it relates to copyright: who owns the copyright when the piece of art was created by artificial intelligence? Samuelson's article, ""Allocating Ownership Rights in Computer-Generated Works,"" argued that rights should be allocated to the user of the generator program. In response to the same question, a 2019 Florida Law Review article has presented three possible choices. First, the artificial intelligence itself becomes the copyright owner. To do this, Section 101 of the Copyright Act would need to be amended to define ""author"" as a natural person or a computer. Second, following Samuelson's argument, the user, programmer, or artificial intelligence company is the copyright owner. This would be an expansion of the ""work for hire"" doctrine, under which ownership of a copyright is transferred to the ""employer. "" Finally, no one becomes the copyright owner, and the work would automatically enter public domain. The argument here is that because no person ""created"" the piece of art, no one should be the copyright owner. In 2022, coinciding with the rising availability of consumer-grade AI image generation services, popular discussion renewed over the legality and ethics of AI-generated art. Of particular issue is the use of copyrighted art within AI training datasets: in September 2022, Reema Selhi, of the Design and Artists Copyright Society, stated that ""there are no safeguards for artists to be able to identify works in databases that are being used and opt out. "" Some have claimed that images generated by these models can bear an uncanny resemblance to extant artwork, sometimes including remains of the original artist's signature. Such discussion came to a head in December, when users of the portfolio platform ArtStation staged an online protest against nonconsensual use of their artwork within datasets: this resulted in opt-out services, such as ""Have I Been Trained? ,"" increasing in profile, as well as some online art platforms promising to offer their own opt-out options. According to the US Copyright Office, artificial intelligence programs are unable to hold copyright, a decision upheld at the Federal District level as of August 2023 followed the reasoning from the monkey selfie copyright dispute. An issue with many popular AI art programs is that they generate images based on artists's work without their consent. In January 2023 three artists ��� Sarah Andersen, Kelly McKernan, and Karla Ortiz ��� filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists. The same month, Stability AI was also sued by Getty Images for using its images in the training data.",wikipedia
+Artificial intelligence art,https://en.wikipedia.org/wiki/Artificial_intelligence_art#Criticism.2C_issues_and_controversy,"Criticism.2C_issues_and_controversy In July 2023, U. S. District Judge William Orrick inclined to dismiss most of the lawsuit filed by Andersen, McKernan, and Ortiz, but allowed them to file a new complaint. As generative AI image software such as Stable Diffusion and DALL-E continue to advance and proliferate, the potential problems and concerns that these systems pose on creativity and artistry has risen. During 2022, artists working in various media raised concerns about the impact that generative artificial intelligence could have on their ability to earn money, particularly if AI-based images started replacing artists working in illustration and design industries. In August 2022, digital artist R. J. Palmer stated that ""I could easily envision a scenario where using AI, a single artist or art director could take the place of 5-10 entry level artists. . . I have seen a lot of self-published authors and such say how great it will be that they don���t have to hire an artist. "" Scholars Jiang et al. support this concern of job loss in creative fields by stating, ���Leaders of companies like Open AI and Stability AI have openly stated that they expect generative AI systems to replace creatives imminently,��� and adding that, ���This labor displacement is evident across creative industries. For instance, according to an article on Rest of World, a Chinese gaming industry recruiter has noticed a 70% drop in illustrator jobs, in part due to the widespread use of image generators; another studio in China is reported to have laid off a third of its character design illustrators. ��� AI-based images have become more commonplace in art markets and search engines because AI-based text-to-image systems are trained from pre-existing artistic images, sometimes without the original artist's consent, allowing the software to mimic specific artists' styles. For example, Polish digital artist Greg Rutkowski has stated that it's more difficult to search for his work online because many of the images in the results are AI-generated specifically to mimic his style. Furthermore, some training databases on which AI systems are based aren't accessible to the public, which makes it impossible to know the extent to which their training data contains copyright protected images. For example, a tool built by Simon Willison allowed people to search 0.5% of the training data for Stable Diffusion V1.1, i. e. , 12 million of 2.3 billion instances from LAION 2B. Artist Karen Hallion discovered that their copyrighted images were used as training data without their consent. The ability of AI-based art software to mimic or forge artistic style also raises concerns of malice or greed. Works of AI-generated art, such as Th����tre d'Op��ra Spatial, a text-to-image AI illustration that won the grand prize in the August 2022 digital art competition at the Colorado State Fair, have begun to overwhelm art contests and other submission forums meant for small artists.",wikipedia
+Artificial intelligence art,https://en.wikipedia.org/wiki/Artificial_intelligence_art#Criticism.2C_issues_and_controversy,"Criticism.2C_issues_and_controversy These AI-created submissions have lead organizations such as Clarkesworld, a science fiction magazine, to close their submissions and only solicit works from known artists after their submission forum was flooded with texts generated by ChatGPT. AI-generated images have raised the concern that they can be made to damage an artist's reputation. Artist Sarah Hendersen had her art copied and then used to depict Neo-Nazi ideology. She stated that the spread of hate speech online can be worsened by the use of image generators. Jiang et al. also add to this sentiment by stating that ""tools trained on artists' works and which allow users to mimic their style without their consent or compensation, can cause significant reputational damage [by] spreading messages that they do not endorse. "" The 2023 winner of the ""creative open"" category in the Sony World Photography Awards, Boris Eldagsen, revealed after winning that his entry was actually generated by artificial intelligence. Photographer Feroz Khan commented to the BBC that Eldagsen had ""clearly shown that even experienced photographers and art experts can be fooled"". Smaller contests have been affected as well; in 2023 a contest called the ""Self-Published Fantasy Blog-Off cover contest"", run by author Mark Lawrence, was cancelled after the winning entry was allegedly exposed to be a collage of images generated by Midjourney. Wider issues extend beyond the art world. As with other types of photo manipulation since the early 19th century, some people in the early 21st century have been concerned that AI could be used to create content that is misleading, known as ""deepfakes"". In May 2023, widespread attention was given to a Midjourney-generated photo of Pope Francis wearing a white puffer coat and another showing the fictional arrest of Donald Trump, and an AI-generated image of an attack on the Pentagon went viral as a hoax news story on Twitter.",wikipedia
+Inflection AI,https://en.wikipedia.org/wiki/Inflection_AI#Products,"Products The first product in wide release by Inflection AI is a chatbot, Pi, named for ���personal intelligence,��� that is intended to function as an artificial intelligence-based personal assistant. Among the user experience goals that the company has stated for the Pi product are: providing an experience of emotional support for human users, in which the chatbot should be able to maintain an interactive text-based dialogue with a human user that includes elements of kindness, a diplomatic tone about sensitive topics, and humor. Comparisons and contrasts have been drawn between the Pi chatbot and ChatGPT, a chatbot created by OpenAI.",wikipedia
+Ilya Sutskever,https://en.wikipedia.org/wiki/Ilya_Sutskever,"Ilya Sutskever Ilya Sutskever FRS (Hebrew: ���������� ������������; Russian: �������� ����������������; born 1985/86) is a Russian-born Israeli-Canadian computer scientist working in machine learning, who co-founded and serves as Chief Scientist of OpenAI. He has made several major contributions to the field of deep learning. He is the co-inventor, with Alex Krizhevsky and Geoffrey Hinton, of AlexNet, a convolutional neural network. Sutskever is also one of the many co-authors of the AlphaGo paper.",wikipedia
+Ilya Sutskever,https://en.wikipedia.org/wiki/Ilya_Sutskever#Career_and_research,"Career_and_research After graduation in 2012, Sutskever spent two months as a postdoc with Andrew Ng at Stanford University. He then returned to University of Toronto and joined Hinton's new research company DNNResearch, a spinoff of Hinton's research group. Four months later, in March 2013, Google acquired DNNResearch and hired Sutskever as a research scientist at Google Brain. At Google Brain, Sutskever worked with Oriol Vinyals and Quoc Viet Le to create the sequence-to-sequence learning algorithm. He is also a co-inventor of AlexNet and has worked on TensorFlow. At the end of 2015, he left Google to become cofounder and chief scientist of the newly founded organization OpenAI. In 2023, he announced that he will co-lead OpenAI's new ""Superalignment"" project, which tries to solve the alignment of superintelligences in 4 years. He wrote that even if superintelligence seems far off, it could happen this decade. 2015, Sutskever was named in MIT Technology Review's 35 Innovators Under 35. 2018, Sutskever was the keynote speaker at Nvidia Ntech 2018 and AI Frontiers Conference 2018. 2022, he was elected a Fellow of the Royal Society (FRS).",wikipedia
+Self-driving car,https://en.wikipedia.org/wiki/Self-driving_car#Challenges,"Challenges The primary obstacle to ACs is the advanced software and mapping required to make them work safely across the wide variety of conditions that drivers experience. In addition to handling day/night driving in good and bad weather on roads of arbitrary quality, ACs must cope with other vehicles, road obstacles, poor/missing traffic controls, flawed maps, and handle endless edge cases, such as following the instructions of a police officer managing traffic at a crash site. Other obstacles include cost, liability, consumer reluctance, potential ethical dilemmas, security, privacy, and legal/regulatory framework. Further, AVs could automate the work of professional drivers, eliminating many jobs, which could slow acceptance. Tesla calls its Level 2 ADAS ""Full Self-Driving (FSD) Beta"". US Senators Richard Blumenthal and Edward Markey called on the Federal Trade Commission (FTC) to investigate this marketing in 2021. In December 2021 in Japan, Mercedes-Benz was punished by the Consumer Affairs Agency for misleading product descriptions. Mercedes-Benz was criticized for a misleading US commercial advertising E-Class models. At that time, Mercedes-Benz rejected the claims and stopped its ""self-driving car"" ad campaign that had been running. In August 2022, the California Department of Motor Vehicles (DMV) accused Tesla of deceptive marketing practices. In the 2020s, concerns over ACs vulnerability to cyberattacks and data theft emerged. In 2018 and 2019 former Apple engineers were charged with stealing information related to Apple's self-driving car project. In 2021 the United States Department of Justice (DOJ) accused Chinese security officials of coordinating a hacking campaign to steal information from government entities, including research related to autonomous vehicles. China has prepared ""the Provisions on Management of Automotive Data Security (Trial) to protect its own data"". Cellular Vehicle-to-Everything technologies are based on 5G wireless networks. As of November 2022[update], the US Congress was considering the possibility that imported Chinese AC technology could facilitate espionage. While predicting the behavior of ACs that do not use traditional communications such as hand signals, is a major challenge for human drivers, the real-time prediction of the behavior of other vehicles, pedestrians etc, some of which may be stationary when first noted, is even greater challenge for self-driving cars. Raster-based methods have been replaced by vector-based methods in order to overcome the former's lossy rendering, limited receptive field, and prohibitively high cost. The remaining problem is high level of uncertainty that emerges in trajectory predictions as the prediction timeframe is extended. Also, if data re-normalization and re-encoding are used to update future trajectories each time a self-driving car changes its position, its action is often delayed by 8 milliseconds, potentially causing an accident. Several powerful trajectory prediction models have recently adopted Transformers with factorized attention as their encoders, but their scalability is still limited by the computational complexity of factorized attention. Most recently proposed QCNet model uses a query-centric instead of agent-centric modeling, taking advantage of both anchor-based and anchor-free solutions, with an anchor-free module generating adaptive anchors in a data-driven manner and an anchor-based module refining these anchors based on the scene context.",wikipedia
+Self-driving car,https://en.wikipedia.org/wiki/Self-driving_car#Challenges,"Challenges The model injects the relative spatialtemporal positions into the key and value (both Transformer elements) when performing attention-based scene-context fusion. For ACs that have not achieved L5, the ADAS has to be able to safely accept control from and return it to the driver. The second challenge is known as risk compensation: as a system is perceived to be safer, on average people engage in riskier behavior. (People who wear seat belts drive faster). ACs suffer from this problem: for example Tesla Autopilot users in some cases stop monitoring the vehicle while it is in control. In order for people to buy self-driving cars and vote for the government to allow them on roads, the technology must be trusted as safe. Automatic elevators were invented in 1900, but did not become common until operator strikes and trust was built with advertising and features such as an emergency stop button. Standards for liability have yet to be adopted to address crashes and other incidents. Does liability rest with the manufacturer or the driver/passenger and does it vary with, e. g. , automation level or merely the specific circumstances? The trolley problem is a thought experiment in ethics. Adapted for ACs, consider an AC carrying a passenger when suddenly a pedestrian steps in its way and the car has to choose between killing the pedestrian or swerving into a wall, killing the passenger. Ethical researchers have suggested deontology (formal rules) and utilitarianism (harm reduction) as applicable. Public opinion has been reported to support harm reduction, except that they want the vehicle to prefer them when they are riding in it. However, utilitarian regulations are unpopular. Privacy-related issues arise mainly from the fact that ACs are connected to the internet. Any connected device offers the potential to be penetrated. This information includes destinations, routes, cabin recordings, media preferences, behavioral patterns, and others. Whether existing road infrastructure can support higher levels of automation has not been finalized. The answer may vary across jurisdictions. In March 2023, the Japanese government unveiled a plan to set up a dedicated highway lane for ACs. In April 2023, JR East announced their challenge to raise their self-driving level of Kesennuma Line Bus rapid transit (BRT) in rural area from the current Level 2 to Level 4 at 60 km/h.",wikipedia
+Self-driving cars,https://en.wikipedia.org/wiki/Self-driving_cars#Challenges,"Challenges The primary obstacle to ACs is the advanced software and mapping required to make them work safely across the wide variety of conditions that drivers experience. In addition to handling day/night driving in good and bad weather on roads of arbitrary quality, ACs must cope with other vehicles, road obstacles, poor/missing traffic controls, flawed maps, and handle endless edge cases, such as following the instructions of a police officer managing traffic at a crash site. Other obstacles include cost, liability, consumer reluctance, potential ethical dilemmas, security, privacy, and legal/regulatory framework. Further, AVs could automate the work of professional drivers, eliminating many jobs, which could slow acceptance. Tesla calls its Level 2 ADAS ""Full Self-Driving (FSD) Beta"". US Senators Richard Blumenthal and Edward Markey called on the Federal Trade Commission (FTC) to investigate this marketing in 2021. In December 2021 in Japan, Mercedes-Benz was punished by the Consumer Affairs Agency for misleading product descriptions. Mercedes-Benz was criticized for a misleading US commercial advertising E-Class models. At that time, Mercedes-Benz rejected the claims and stopped its ""self-driving car"" ad campaign that had been running. In August 2022, the California Department of Motor Vehicles (DMV) accused Tesla of deceptive marketing practices. In the 2020s, concerns over ACs vulnerability to cyberattacks and data theft emerged. In 2018 and 2019 former Apple engineers were charged with stealing information related to Apple's self-driving car project. In 2021 the United States Department of Justice (DOJ) accused Chinese security officials of coordinating a hacking campaign to steal information from government entities, including research related to autonomous vehicles. China has prepared ""the Provisions on Management of Automotive Data Security (Trial) to protect its own data"". Cellular Vehicle-to-Everything technologies are based on 5G wireless networks. As of November 2022[update], the US Congress was considering the possibility that imported Chinese AC technology could facilitate espionage. While predicting the behavior of ACs that do not use traditional communications such as hand signals, is a major challenge for human drivers, the real-time prediction of the behavior of other vehicles, pedestrians etc, some of which may be stationary when first noted, is even greater challenge for self-driving cars. Raster-based methods have been replaced by vector-based methods in order to overcome the former's lossy rendering, limited receptive field, and prohibitively high cost. The remaining problem is high level of uncertainty that emerges in trajectory predictions as the prediction timeframe is extended. Also, if data re-normalization and re-encoding are used to update future trajectories each time a self-driving car changes its position, its action is often delayed by 8 milliseconds, potentially causing an accident. Several powerful trajectory prediction models have recently adopted Transformers with factorized attention as their encoders, but their scalability is still limited by the computational complexity of factorized attention. Most recently proposed QCNet model uses a query-centric instead of agent-centric modeling, taking advantage of both anchor-based and anchor-free solutions, with an anchor-free module generating adaptive anchors in a data-driven manner and an anchor-based module refining these anchors based on the scene context.",wikipedia
+Self-driving cars,https://en.wikipedia.org/wiki/Self-driving_cars#Challenges,"Challenges The model injects the relative spatialtemporal positions into the key and value (both Transformer elements) when performing attention-based scene-context fusion. For ACs that have not achieved L5, the ADAS has to be able to safely accept control from and return it to the driver. The second challenge is known as risk compensation: as a system is perceived to be safer, on average people engage in riskier behavior. (People who wear seat belts drive faster). ACs suffer from this problem: for example Tesla Autopilot users in some cases stop monitoring the vehicle while it is in control. In order for people to buy self-driving cars and vote for the government to allow them on roads, the technology must be trusted as safe. Automatic elevators were invented in 1900, but did not become common until operator strikes and trust was built with advertising and features such as an emergency stop button. Standards for liability have yet to be adopted to address crashes and other incidents. Does liability rest with the manufacturer or the driver/passenger and does it vary with, e. g. , automation level or merely the specific circumstances? The trolley problem is a thought experiment in ethics. Adapted for ACs, consider an AC carrying a passenger when suddenly a pedestrian steps in its way and the car has to choose between killing the pedestrian or swerving into a wall, killing the passenger. Ethical researchers have suggested deontology (formal rules) and utilitarianism (harm reduction) as applicable. Public opinion has been reported to support harm reduction, except that they want the vehicle to prefer them when they are riding in it. However, utilitarian regulations are unpopular. Privacy-related issues arise mainly from the fact that ACs are connected to the internet. Any connected device offers the potential to be penetrated. This information includes destinations, routes, cabin recordings, media preferences, behavioral patterns, and others. Whether existing road infrastructure can support higher levels of automation has not been finalized. The answer may vary across jurisdictions. In March 2023, the Japanese government unveiled a plan to set up a dedicated highway lane for ACs. In April 2023, JR East announced their challenge to raise their self-driving level of Kesennuma Line Bus rapid transit (BRT) in rural area from the current Level 2 to Level 4 at 60 km/h.",wikipedia
+Edge detection,https://en.wikipedia.org/wiki/Edge_detection#Approaches,"Approaches There are many methods for edge detection, but most of them can be grouped into two categories, search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction). The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions. A survey of a number of different edge detection methods can be found in (Ziou and Tabbone 1998); see also the encyclopedia articles on edge detection in Encyclopedia of Mathematics and Encyclopedia of Computer Science and Engineering. John Canny considered the mathematical problem of deriving an optimal smoothing filter given the criteria of detection, localization and minimizing multiple responses to a single edge. He showed that the optimal filter given these assumptions is a sum of four exponential terms. He also showed that this filter can be well approximated by first-order derivatives of Gaussians. Canny also introduced the notion of non-maximum suppression, which means that given the presmoothing filters, edge points are defined as points where the gradient magnitude assumes a local maximum in the gradient direction. Looking for the zero crossing of the 2nd derivative along the gradient direction was first proposed by Haralick. It took less than two decades to find a modern geometric variational meaning for that operator that links it to the Marr���Hildreth (zero crossing of the Laplacian) edge detector. That observation was presented by Ron Kimmel and Alfred Bruckstein. Although his work was done in the early days of computer vision, the Canny edge detector (including its variations) is still a state-of-the-art edge detector. Edge detectors that perform better than the Canny usually require longer computation times or a greater number of parameters. Vladimir A. Kovalevsky has suggested a quite different approach. He uses a preprocessing of the image with the Sigma filter and with a special filter for the dilution of the ramps. This method uses no brightness of the image but only the intensities of the color channels which is important for detecting an edge between two adjacent pixels of equal brightness but different colors. The method scans the image two times: first along the horizontal lines and second along the vertical columns.",wikipedia
+Edge detection,https://en.wikipedia.org/wiki/Edge_detection#Approaches,"Approaches In each horizontal line six consequent adjacent pixels are considered and five color difference between each two adjacent pixels are calculated. Each color difference is the sum of absolute differences of the intensities of the color channels Red, Green, and Blue of the corresponding adjacent pixels. If this sum is greater than a given threshold, then the sign of the color difference is set equal to the sign of the difference of the green intensities. If the green difference is zero, then the sign of the color difference is set equal to the sign of the difference of the red intensities. If, however, both the green and the red differences are zero, then the sign of the color difference is set equal to the sign of the blue difference which in this case cannot be zero since the sum is greater than the threshold. Certain conditions for the values and signs of the five color differences are specified in such way that if the conditions are fulfilled, then a short vertical stroke is put between the third and the fourth of the six pixels as the label of the edge. Similar calculations are performed for the vertical columns. In this case a short horizontal stroke is put between the third and the fourth of the six subsequent pixels. The vertical and horizontal strokes (being the one-dimensional cells of an abstract cell complex corresponding to the image) mostly compose a connected sequence representing the edge. This method is robust and very fast and, what is more important, it can detect edges between adjacent pixels of equal brightness���s if the color difference between these pixels is greater than the threshold. The Canny���Deriche detector was derived from similar mathematical criteria as the Canny edge detector, although starting from a discrete viewpoint and then leading to a set of recursive filters for image smoothing instead of exponential filters or Gaussian filters. The differential edge detector described below can be seen as a reformulation of Canny's method from the viewpoint of differential invariants computed from a scale space representation leading to a number of advantages in terms of both theoretical analysis and sub-pixel implementation. In that aspect, Log Gabor filter have been shown to be a good choice to extract boundaries in natural scenes. Different gradient operators can be applied to estimate image gradients from the input image or a smoothed version of it. The simplest approach is to use central differences: corresponding to the application of the following filter masks to the image data: The well-known and earlier Sobel operator is based on the following filters: Given such estimates of first-order image derivatives, the gradient magnitude is then computed as: while the gradient orientation can be estimated as Other first-order difference operators for estimating image gradient have been proposed in the Prewitt operator, Roberts cross, Kayyali operator and Frei���Chen operator. It is possible to extend filters dimension to avoid the issue of recognizing edge in low SNR image.",wikipedia
+Edge detection,https://en.wikipedia.org/wiki/Edge_detection#Approaches,"Approaches The cost of this operation is loss in terms of resolution. Examples are Extended Prewitt 7��7. Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise and detecting edges of irrelevant features in the image. Conversely a high threshold may miss subtle edges, or result in fragmented edges. If the edge is applied to just the gradient magnitude image, the resulting edges will in general be thick and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression however, the edge curves are thin by definition and the edge pixels can be linked into edge polygon by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction. A commonly used approach to handle the problem of appropriate thresholds for thresholding is by using thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image. Edge thinning is a technique used to remove the unwanted spurious points on the edges in an image. This technique is employed after the image has been filtered for noise (using median, Gaussian filter etc. ), the edge operator has been applied (like the ones described above, Canny or Sobel) to detect the edges and after the edges have been smoothed using an appropriate threshold value. This removes all the unwanted points and if applied carefully, results in one pixel thick edge elements. Advantages: Sharp and thin edges lead to greater efficiency in object recognition. If Hough transforms are used to detect lines and ellipses, then thinning could give much better results. If the edge happens to be the boundary of a region, then thinning could easily give the image parameters like perimeter without much algebra.",wikipedia
+Edge detection,https://en.wikipedia.org/wiki/Edge_detection#Approaches,"Approaches There are many popular algorithms used to do this, one such is described below: Choose a type of connectivity, like 8, 6 or 4. 8 connectivity is preferred, where all the immediate pixels surrounding a particular pixel are considered. Remove points from North, south, east and west. Do this in multiple passes, i. e. after the north pass, use the same semi processed image in the other passes and so on. Remove a point if:The point has no neighbors in the North (if you are in the north pass, and respective directions for other passes). The point is not the end of a line. The point is isolated. Removing the points will not cause to disconnect its neighbors in any way. Else keep the point. The number of passes across direction should be chosen according to the level of accuracy desired. Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient. The early Marr���Hildreth operator is based on the detection of zero-crossings of the Laplacian operator applied to a Gaussian-smoothed image. It can be shown, however, that this operator will also return false edges corresponding to local minima of the gradient magnitude. Moreover, this operator will give poor localization at curved edges. Hence, this operator is today mainly of historical interest. A more refined second-order edge detection approach which automatically detects edges with sub-pixel accuracy, uses the following differential approach of detecting zero-crossings of the second-order directional derivative in the gradient direction: Following the differential geometric way of expressing the requirement of non-maximum suppression proposed by Lindeberg, let us introduce at every image point a local coordinate system ( u , v ) {\displaystyle (u,v)} , with the v {\displaystyle v} -direction parallel to the gradient direction. Assuming that the image has been pre-smoothed by Gaussian smoothing and a scale space representation L ( x , y ; t ) {\displaystyle L(x,y;t)} at scale t {\displaystyle t} has been computed, we can require that the gradient magnitude of the scale space representation, which is equal to the first-order directional derivative in the v {\displaystyle v} -direction L v {\displaystyle L_{v}} , should have its first order directional derivative in the v {\displaystyle v} -direction equal to zero while the second-order directional derivative in the v {\displaystyle v} -direction of L v {\displaystyle L_{v}} should be negative, i. e.",wikipedia
+Edge detection,https://en.wikipedia.org/wiki/Edge_detection#Approaches,"Approaches , Written out as an explicit expression in terms of local partial derivatives L x , L y , ��� , L y y y {\displaystyle L_{x},L_{y},\ldots ,L_{yyy}} , this edge definition can be expressed as the zero-crossing curves of the differential invariant that satisfy a sign-condition on the following differential invariant where L x , L y , ��� , L y y y {\displaystyle L_{x},L_{y},\ldots ,L_{yyy}} denote partial derivatives computed from a scale space representation L {\displaystyle L} obtained by smoothing the original image with a Gaussian kernel. In this way, the edges will be automatically obtained as continuous curves with sub-pixel accuracy. Hysteresis thresholding can also be applied to these differential and subpixel edge segments. In practice, first-order derivative approximations can be computed by central differences as described above, while second-order derivatives can be computed from the scale space representation L {\displaystyle L} according to: corresponding to the following filter masks: Higher-order derivatives for the third-order sign condition can be obtained in an analogous fashion. A recent development in edge detection techniques takes a frequency domain approach to finding edge locations. Phase congruency (also known as phase coherence) methods attempt to find locations in an image where all sinusoids in the frequency domain are in phase. These locations will generally correspond to the location of a perceived edge, regardless of whether the edge is represented by a large change in intensity in the spatial domain. A key benefit of this technique is that it responds strongly to Mach bands, and avoids false positives typically found around roof edges. A roof edge, is a discontinuity in the first order derivative of a grey-level profile. The phase stretch transform or PST is a physics-inspired computational approach to signal and image processing. One of its utilities is for feature detection and classification. PST is a spin-off from research on the time stretch dispersive Fourier transform. PST transforms the image by emulating propagation through a diffractive medium with engineered 3D dispersive property (refractive index). The operation relies on symmetry of the dispersion profile and can be understood in terms of dispersive eigenfunctions or stretch modes. PST performs similar functionality as phase contrast microscopy but on digital images. PST is also applicable to digital images as well as temporal, time series, data. To increase the precision of edge detection, several subpixel techniques had been proposed, including curve-fitting, moment-based, reconstructive, and partial area effect methods. These methods have different characteristics. Curve fitting methods are computationally simple but are easily affected by noise. Moment-based methods use an integral-based approach to reduce the effect of noise, but may require more computations in some cases. Reconstructive methods use horizontal gradients or vertical gradients to build a curve and find the peak of the curve as the sub-pixel edge. Partial area effect methods are based on the hypothesis that each pixel value depends on the area at both sides of the edge inside that pixel, producing accurate individual estimation for every edge pixel.",wikipedia
+Edge detection,https://en.wikipedia.org/wiki/Edge_detection#Approaches,Approaches Certain variants of the moment-based technique have been shown to be the most accurate for isolated edges.,wikipedia
+Feature learning,https://en.wikipedia.org/wiki/Feature_learning,"Feature learning In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensor data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms. Feature learning can be either supervised, unsupervised or self-supervised. In supervised feature learning, features are learned using labeled input data. Labeled data includes input-label pairs where the input is given to the model and it must produce the ground truth label as the correct answer. This can be leveraged to generate feature representations with the model which result in high label prediction accuracy. Examples include supervised neural networks, multilayer perceptron and (supervised) dictionary learning. In unsupervised feature learning, features are learned with unlabeled input data by analyzing the relationship between points in the dataset. Examples include dictionary learning, independent component analysis, matrix factorization and various forms of clustering. In self-supervised feature learning, features are learned using unlabeled data like unsupervised learning, however input-label pairs are constructed from each data point, which enables learning the structure of the data through supervised methods such as gradient descent. Classical examples include word embeddings and autoencoders. SSL has since been applied to many modalities through the use of deep neural network architectures such as CNNs and transformers.",wikipedia
+Feature learning,https://en.wikipedia.org/wiki/Feature_learning#Self-supervised,"Self-supervised Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations. Training tasks typically fall under the classes of either contrastive, generative or both. Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted. A larger portion of negative samples is typically necessary in order to prevent catastrophic collapse, which is when all inputs are mapped to the same representation. Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower dimensional representation. A common setup for self-supervised representation learning of a certain data type (e. g. text, image, audio, video) is to pretrain the model using large datasets of general context, unlabeled data. Depending on the context, the result of this is either a set of representations for common data segments (e. g. words) which new data can be broken into, or a neural network able to convert each new data point (e. g. image) into a set of lower dimensional features. In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model / representations with the labels as the signal, or freezing the representations and training an additional model which takes them as an input. Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types. Word2vec is a word embedding technique which learns to represent words through self-supervision over each word and its neighboring words in a sliding window across a large corpus of text. The model has two possible training schemes to produce word vector representations, one generative and one contrastive. The first is word prediction given each of the neighboring words as an input. The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks. GPTs pretrain on next word prediction using prior input words as context, whereas BERT masks random tokens in order to provide bidirectional context. Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data.",wikipedia
+Feature learning,https://en.wikipedia.org/wiki/Feature_learning#Self-supervised,"Self-supervised Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph it is within, and is therefore intended to represent paragraph level context. The domain of image representation learning has employed many different self-supervised training techniques, including transformation, inpainting, patch discrimination and clustering. Examples of generative approaches are Context Encoders, which trains an AlexNet CNN architecture to generate a removed image region given the masked image as input, and iGPT, which applies the GPT-2 language model architecture to images by training on pixel prediction after reducing the image resolution. Many other self-supervised methods use siamese networks, which generate different views of the image through various augmentations that are then aligned to have similar representations. The challenge is avoiding collapsing solutions where the model encodes all images to the same representation. SimCLR is a contrastive approach which uses negative examples in order to generate image representations with a ResNet CNN. Bootstrap Your Own Latent (BYOL) removes the need for negative samples by encoding one of the views with a slow moving average of the model parameters as they are being modified during training. The goal of many graph representation learning techniques is to produce an embedded representation of each node based on the overall network topology. node2vec extends the word2vec training technique to nodes in a graph by using co-occurrence in random walks through the graph as the measure of association. Another approach is to maximize mutual information, a measure of similarity, between the representations of associated structures within the graph. An example is Deep Graph Infomax, which uses contrastive self-supervision based on mutual information between the representation of a ���patch��� around each node, and a summary representation of the entire graph. Negative samples are obtained by pairing the graph representation with either representations from another graph in a multigraph training setting, or corrupted patch representations in single graph training. With analogous results in masked prediction and clustering, video representation learning approaches are often similar to image techniques but must utilize the temporal sequence of video frames as an additional learned structure. Examples include VCP, which masks video clips and trains to choose the correct one given a set of clip options, and Xu et al. , who train a 3D-CNN to identify the original order given a shuffled set of video clips. Self-supervised representation techniques have also been applied to many audio data formats, particularly for speech processing. Wav2vec 2.0 discretizes the audio waveform into timesteps via temporal convolutions, and then trains a transformer on masked prediction of random timesteps using a contrastive loss. This is similar to the BERT language model, except as in many SSL approaches to video, the model chooses among a set of options rather than over the entire word vocabulary. Self-supervised learning has also been used to develop joint representations of multiple data types.",wikipedia
+Feature learning,https://en.wikipedia.org/wiki/Feature_learning#Self-supervised,"Self-supervised Approaches usually rely on some natural or human-derived association between the modalities as an implicit label, for instance video clips of animals or objects with characteristic sounds, or captions written to describe images. CLIP produces a joint image-text representation space by training to align image and text encodings from a large dataset of image-caption pairs using a contrastive loss. MERLOT Reserve trains a transformer-based encoder to jointly represent audio, subtitles and video frames from a large dataset of videos through 3 joint pretraining tasks: contrastive masked prediction of either audio or text segments given the video frames and surrounding audio and text context, along with contrastive alignment of video frames with their corresponding captions. Multimodal representation models are typically unable to assume direct correspondence of representations in the different modalities, since the precise alignment can often be noisy or ambiguous. For example, the text ""dog"" could be paired with many different pictures of dogs, and correspondingly a picture of a dog could be captioned with varying degrees of specificity. This limitation means that downstream tasks may require an additional generative mapping network between modalities to achieve optimal performance, such as in DALLE-2 for text to image generation.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications In general, a machine learning system can usually be trained to recognize elements of a certain class given sufficient samples. For example, machine learning methods can be trained to identify specific visual features such as splice sites. Support vector machines have been extensively used in cancer genomic studies. In addition, deep learning has been incorporated into bioinformatic algorithms. Deep learning applications have been used for regulatory genomics and cellular imaging. Other applications include medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Deep learning has been applied to regulatory genomics, variant calling and pathogenicity scores. Natural language processing and text mining have helped to understand phenomena including protein-protein interaction, gene-disease relation as well as predicting biomolecule structures and functions. Natural language processing algorithms personalized medicine for patients who suffer genetic diseases, by combining the extraction of clinical information and genomic data available from the patients. Institutes such as Health-funded Pharmacogenomics Research Network focus on finding breast cancer treatments. Precision medicine considers individual genomic variability, enabled by large-scale biological databases. Machine learning can be applied to perform the matching function between (groups of patients) and specific treatment modalities. Computational techniques are used to solve other problems, such as efficient primer design for PCR, biological-image analysis and back translation of proteins (which is, given the degeneration of the genetic code, a complex combinatorial problem). While genomic sequence data has historically been sparse due to the technical difficulty of sequencing a piece of DNA, the number of available sequences is growing exponentially. However, while raw data is becoming increasingly available and accessible, biological interpretation of this data is occurring at a much slower pace. This makes for an increasing need for developing computational genomics tools, including machine learning systems, that can automatically determine the location of protein-encoding genes within a given DNA sequence (i. e. gene prediction). Gene prediction is commonly performed through both extrinsic searches and intrinsic searches. For the extrinsic search, the input DNA sequence is run through a large database of sequences whose genes have been previously discovered and their locations annotated and identifying the target sequence's genes by determining which strings of bases within the sequence are homologous to known gene sequences. However, not all the genes in a given input sequence can be identified through homology alone, due to limits in the size of the database of known and annotated gene sequences. Therefore, an intrinsic search is needed where a gene prediction program attempts to identify the remaining genes from the DNA sequence alone. Machine learning has also been used for the problem of multiple sequence alignment which involves aligning many DNA or amino acid sequences in order to determine regions of similarity that could indicate a shared evolutionary history. It can also be used to detect and visualize genome rearrangements.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications Proteins, strings of amino acids, gain much of their function from protein folding, where they conform into a three-dimensional structure, including the primary structure, the secondary structure (alpha helices and beta sheets), the tertiary structure, and the quaternary structure. Protein secondary structure prediction is a main focus of this subfield as tertiary and quaternary structures are determined based on the secondary structure. Solving the true structure of a protein is expensive and time-intensive, furthering the need for systems that can accurately predict the structure of a protein by analyzing the amino acid sequence directly. Prior to machine learning, researchers needed to conduct this prediction manually. This trend began in 1951 when Pauling and Corey released their work on predicting the hydrogen bond configurations of a protein from a polypeptide chain. Automatic feature learning reaches an accuracy of 82-84%. The current state-of-the-art in secondary structure prediction uses a system called DeepCNF (deep convolutional neural fields) which relies on the machine learning model of artificial neural networks to achieve an accuracy of approximately 84% when tasked to classify the amino acids of a protein sequence into one of three structural classes (helix, sheet, or coil). [needs update] The theoretical limit for three-state protein secondary structure is 88���90%. Machine learning has also been applied to proteomics problems such as protein side-chain prediction, protein loop modeling, and protein contact map prediction. Metagenomics is the study of microbial communities from environmental DNA samples. Currently, limitations and challenges predominate in the implementation of machine learning tools due to the amount of data in environmental samples. Supercomputers and web servers have made access to these tools easier. The high dimensionality of microbiome datasets is a major challenge in studying the microbiome; this significantly limits the power of current approaches for identifying true differences and increases the chance of false discoveries. Despite their importance, machine learning tools related to metagenomics have focused on the study of gut microbiota and the relationship with digestive diseases, such as inflammatory bowel disease (IBD), Clostridioides difficile infection (CDI), colorectal cancer and diabetes, seeking better diagnosis and treatments. Many algorithms were developed to classify microbial communities according to the health condition of the host, regardless of the type of sequence data, e. g. 16S rRNA or whole-genome sequencing (WGS), using methods such as least absolute shrinkage and selection operator classifier, random forest, supervised classification model, and gradient boosted tree model. Neural networks, such as recurrent neural networks (RNN), convolutional neural networks (CNN), and Hopfield neural networks have been added. For example, in 2018, Fioravanti et al. developed an algorithm called Ph-CNN to classify data samples from healthy patients and patients with IBD symptoms (to distinguish healthy and sick patients) by using phylogenetic trees and convolutional neural networks. In addition, random forest (RF) methods and implemented importance measures help in the identification of microbiome species that can be used to distinguish diseased and non-diseased samples.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications However, the performance of a decision tree and the diversity of decision trees in the ensemble significantly influence the performance of RF algorithms. The generalization error for RF measures how accurate the individual classifiers are and their interdependence. Therefore, the high dimensionality problems of microbiome datasets pose challenges. Effective approaches require many possible variable combinations, which exponentially increases the computational burden as the number of features increases. For microbiome analysis in 2020 Dang & Kishino developed a novel analysis pipeline. The core of the pipeline is an RF classifier coupled with forwarding variable selection (RF-FVS), which selects a minimum-size core set of microbial species or functional signatures that maximize the predictive classifier performance. The framework combines: identifying a few significant features by a massively parallel forward variable selection procedure mapping the selected species on a phylogenetic tree, and predicting functional profiles by functional gene enrichment analysis from metagenomic 16S rRNA data. They demonstrated performance by analyzing two published datasets from large-scale case-control studies: 16S rRNA gene amplicon data for C. difficile infection (CDI) and shotgun metagenomics data for human colorectal cancer (CRC). The proposed approach improved the accuracy from 81% to 99.01% for CDI and from 75.14% to 90.17% for CRC. The use of machine learning in environmental samples has been less explored, maybe because of data complexity, especially from WGS. Some works show that it is possible to apply these tools in environmental samples. In 2021 Dhungel et al. , designed an R package called MegaR. This package allows working with 16S rRNA and whole metagenomic sequences to make taxonomic profiles and classification models by machine learning models. MegaR includes a comfortable visualization environment to improve the user experience. Machine learning in environmental metagenomics can help to answer questions related to the interactions between microbial communities and ecosystems, e. g. the work of Xun et al. , in 2021 where the use of different machine learning methods offered insights on the relationship among the soil, microbiome biodiversity, and ecosystem stability. Microarrays, a type of lab-on-a-chip, are used for automatically collecting data about large amounts of biological material. Machine learning can aid in analysis, and has been applied to expression pattern identification, classification, and genetic network induction. This technology is especially useful for monitoring gene expression, aiding in diagnosing cancer by examining which genes are expressed. One of the main tasks is identifying which genes are expressed based on the collected data. In addition, due to the huge number of genes on which data is collected by the microarray, winnowing the large amount of irrelevant data to the task of expressed gene identification is challenging. Machine learning presents a potential solution as various classification methods can be used to perform this identification. The most commonly used methods are radial basis function networks, deep learning, Bayesian classification, decision trees, and random forest. Systems biology focuses on the study of emergent behaviors from complex interactions of simple biological components in a system. Such components can include DNA, RNA, proteins, and metabolites.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications Machine learning has been used to aid in modeling these interactions in domains such as genetic networks, signal transduction networks, and metabolic pathways. Probabilistic graphical models, a machine learning technique for determining the relationship between different variables, are one of the most commonly used methods for modeling genetic networks. In addition, machine learning has been applied to systems biology problems such as identifying transcription factor binding sites using Markov chain optimization. Genetic algorithms, machine learning techniques which are based on the natural process of evolution, have been used to model genetic networks and regulatory structures. Other systems biology applications of machine learning include the task of enzyme function prediction, high throughput microarray data analysis, analysis of genome-wide association studies to better understand markers of disease, protein function prediction. This domain, particularly phylogenetic tree reconstruction, uses the features of machine learning techniques. Phylogenetic trees are schematic representations of the evolution of organisms. Initially, they were constructed using features such as morphological and metabolic features. Later, due to the availability of genome sequences, the construction of the phylogenetic tree algorithm used the concept based on genome comparison. With the help of optimization techniques, a comparison was done by means of multiple sequence alignment. Machine learning methods for the analysis of neuroimaging data are used to help diagnose stroke. Historically multiple approaches to this problem involved neural networks. Multiple approaches to detect strokes used machine learning. As proposed by Mirtskhulava, feed-forward networks were tested to detect strokes using neural imaging. As proposed by Titano 3D-CNN techniques were tested in supervised classification to screen head CT images for acute neurologic events. Three-dimensional CNN and SVM methods are often used. The increase in biological publications increased the difficulty in searching and compiling relevant available information on a given topic. This task is known as knowledge extraction. It is necessary for biological data collection which can then in turn be fed into machine learning algorithms to generate new biological knowledge. Machine learning can be used for this knowledge extraction task using techniques such as natural language processing to extract the useful information from human-generated reports in a database. Text Nailing, an alternative approach to machine learning, capable of extracting features from clinical narrative notes was introduced in 2017. This technique has been applied to the search for novel drug targets, as this task requires the examination of information stored in biological databases and journals. Annotations of proteins in protein databases often do not reflect the complete known set of knowledge of each protein, so additional information must be extracted from biomedical literature. Machine learning has been applied to the automatic annotation of gene and protein function, determination of the protein subcellular localization, DNA-expression array analysis, large-scale protein interaction analysis, and molecule interaction analysis. Another application of text mining is the detection and visualization of distinct DNA regions given sufficient reference data.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications Microbial communities are complex assembles of diverse microorganisms, where symbiont partners constantly produce diverse metabolites derived from the primary and secondary (specialized) metabolism, from which metabolism plays an important role in microbial interaction. Metagenomic and metatranscriptomic data are an important source for deciphering communications signals. Molecular mechanisms produce specialized metabolites in various ways. Biosynthetic Gene Clusters (BGCs) attract attention, since several metabolites are clinically valuable, anti-microbial, anti-fungal, anti-parasitic, anti-tumor and immunosuppressive agents produced by the modular action of multi-enzymatic, multi-domains gene clusters, such as Nonribosomal peptide synthetases (NRPSs) and polyketide synthases (PKSs). Diverse studies show that grouping BGCs that share homologous core genes into gene cluster families (GCFs) can yield useful insights into the chemical diversity of the analyzed strains, and can support linking BGCs to their secondary metabolites. GCFs have been used as functional markers in human health studies and to study the ability of soil to suppress fungal pathogens. Given their direct relationship to catalytic enzymes, and compounds produced from their encoded pathways, BGCs/GCFs can serve as a proxy to explore the chemical space of microbial secondary metabolism. Cataloging GCFs in sequenced microbial genomes yields an overview of the existing chemical diversity and offers insights into future priorities. Tools such as BiG-SLiCE and BIG-MAP have emerged with the sole purpose of unveiling the importance of BGCs in natural environments. BiG-SLiCE (Biosynthetic Genes Super-Linear Clustering Engine), is an automated pipeline tool designed to cluster massive numbers of BGCs. By representing them in euclidean space, BiG-SLiCE can group BGCs into GCFs in a non-pairwise, near-linear fashion. from genomic and metagenomic data of diverse organisms. The BiG-SLiCE workflow starts at vectorization (feature extraction), converting input BGCs provided from dataset of cluster GenBank files from antiSMASH and MIBiG into vectors of numerical features based on the absence/presence and bitscores of hits obtained from querying BGC gene sequences from a library curated of profile Hidden Markov Model(pHMMs) of biosynthetic domains of BGCs. Those features are then processed by a super-linear clustering algorithm based on BIRCH clustering, resulting in centroid feature vectors representing the GCF models. All BGCs in the dataset are queried back against those models, outputting a list of GCF membership values for each BGC. Then a global cluster mapping is done using k-means to group all GCF centroid features in GCF bins. After that another round of membership assignment is performed to match the full set of BGC features into the resulting GCF bins. In the end, it produces archives, which then can be used to perform further analysis (via external scripts) or to visualize the result in a user-interactive application. Satria et al. demonstrated the utility of such analyses by reconstructing a global map of secondary metabolic diversity across taxonomy to identify the uncharted biosynthetic potential of 1.2 million biosynthetic gene clusters. This opens up new possibilities to accelerate natural product discovery and offers a first step towards constructing a global and searchable interconnected network of BGCs.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications As more genomes are sequenced from understudied taxa, more information can be mined to highlight their potentially novel chemistry. Since BGCs are an important source of metabolite production, current tools for identifying BGCs focus their efforts on mining genomes to identify their genomic landscape, neglecting relevant information about their abundance and expression levels which in fact, play an important ecological role in triggering phenotype dependent-metabolite concentration. That is why in 2020 BiG-MAP (Biosynthetic Gene cluster Meta���omics Abundance Profiler), an automated pipeline that helps to determine the abundance (metagenomic data) and expression (metatranscriptomic data) of BGCs across microbial communities. It shotguns sequencing reads to gene clusters that have been predicted by antiSMASH or gutSMASH. BiG-MAP splits its workflow in four main modules. BiG-MAP. family: redundancy filtering on the gene cluster collection in order to reduce computing time and avoid ambiguous mapping. Using a MinHash-based algorithm, MASH, BiG-MAP estimates distance among protein sequences which then is used to select a representative gene cluster with the aid of k-medoids clustering. Finally, the selected gene clusters are clustered into GCFs using BiG-SCAPE, considering account architectural similarity, thus relating more distantly related gene clusters which produce the same chemical product in different organisms BiG-MAP. download: optional module that uses a list of Sequence Read Archive (SRA) database; BiG-MAP. map: reads the set of representative GCFs obtained from the first module. Maps reads to GCFs separately and, it reports combined abundance or expression levels per family. Reads are mapped to the representative of the GCFs using the short-read aligner Bowtie2, which are then converted into Reads Per Kilobase Million (RPKM) to be averaged over the GCFs size BiG-MAP. analyse: profiles abundance. RPKM values are normalized using Cumulative Sum Scaling (CSS) to account for sparsity. Differential expressions analyses use zero-inflated Gaussian distribution mixture models (ZIG-models) or Kruskal-Wallis. The pipeline displays the results s plots that show gene cluster abundance/expression (heatmaps), log fold change (bar plot), coverage values, and housekeeping gene expression values for metatranscriptomic data (heatmap). The increase of experimentally characterized ribosomally synthesized and post-translationally modified peptides (RiPPs), together with the availability of information on their sequence and chemical structure, selected from databases such as BAGEL, BACTIBASE, MIBIG, and THIOBASE, provide the opportunity to develop machine learning tools to decode the chemical structure and classify them. In 2017, researchers at the National Institute of Immunology of New Delhi, India, developed RiPPMiner software, a bioinformatics resource for decoding RiPP chemical structures by genome mining. The RiPPMiner web server consists of a query interface and the RiPPDB database. RiPPMiner defines 12 subclasses of RiPPs, predicting the cleavage site of the leader peptide and the final cross-link of the RiPP chemical structure. RiPPs analysis tools such as antiSMASH and RiPP-PRISM use HMM of modifying enzymes present in biosynthetic gene clusters in RiPP to predict the RiPP subclass. Unlike these tools, RiPPMiner uses a machine learning model, trained with 513 RiPPs, that uses the amino acid sequence of the RiPP gene uniquely to identify and subclass them.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications RiPPMiner differentiates RiPPs from other proteins and peptides using a support-vector machine model trained on 293 experimentally characterized RiPPs as a positive data set, and 8140 genomes encoded non-RiPPs polypeptides as negative data set. The negative data set included SWISSProt entries similar in length to RiPPs, e. g. , 30s ribosomal proteins, matrix proteins, cytochrome B proteins, etc. The support vectors consist of amino acid composition and dipeptide frequencies. Benchmarking on an independent dataset (not included in training) using a two-fold cross-validation approach indicated sensitivity, specificity, precision and MCC values of 0.93, 0.90, 0.90, and 0.85, respectively. This indicates the model's predictive power for distinguishing between RiPPs and non-RiPPs. For prediction of RiPP class or sub-class, a Multi-Class SVM was trained using the amino acid composition and dipeptide frequencies as feature vectors. During the training of the Multi-Class SVM, available RiPP precursor sequences belonging to a given class (e. g. lasso peptide) were used as a positive set, while RiPPs belonging to all other classes were used as negative set. Out of the four major RiPP classes that had more than 50 experimentally characterized RiPPs in RiPPDB, SVM models for prediction of cleavage sites were developed for lanthipeptides, cyanobactins, and lasso peptides. In order to develop SVM for prediction of cleavage site for lanthipeptides, 12 mer peptide sequences centered on the cleavage sites were extracted from a set of 115 lanthipeptide precursor sequences with known cleavage pattern. This resulted in a positive dataset of 103 unique 12 mer peptides harboring the cleavage site at the center, while the other 12 constituted the negative dataset. Feature vectors for each of these mers consisted of the concatenation of 20-dimensional vectors corresponding to each of the 20 amino acids. An SVM model for prediction of cleavage site was developed and benchmarked using 2-fold cross-validation, where half of the data were used in training and the other half in testing. SVM models were developed for the prediction of the cleavage sites in cyanobactin and lasso peptides. Based on analysis of the ROC curves, a suitable score cutoff was chosen for the prediction of cleavage sites in lanthipeptides and lasso peptides. The algorithm for the prediction of cross-links and deciphering the complete chemical structure of RiPP has been implemented for lanthipeptides, lasso peptides, cyanobactins, and thiopeptides. The prediction of lanthionine linkages in lanthipeptides was carried out using machine learning. A dataset of 93 lanthipeptides whose chemical structures was known was taken from RiPPDB. For each lanthipeptide in this set, the sequence of the core peptide was scanned for strings or sub-sequences of the type Ser/Thr-(X)n-Cys or Cys-(X)n-Ser/Thr to enumerate all theoretically possible cyclization patterns. Out of these sequence strings, the strings corresponding to Ser/Thr-Cys or Cys-Ser/Thr pairs which were linked by lanthionine bridges were included in the positive set, while all other strings were included in the negative set. Many tandem mass spectrometry (MS/MS) based metabolomics studies, such as library matching and molecular networking, use spectral similarity as a proxy for structural similarity.",wikipedia
+Machine learning in bioinformatics,https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics#Applications,"Applications Spec2vec algorithm provides a new way of spectral similarity score, based on Word2Vec. Spec2Vec learns fragmental relationships within a large set of spectral data, in order to assess spectral similarities between molecules and to classify unknown molecules through these comparisons. For systemic annotation, some metabolomics studies rely on fitting measured fragmentation mass spectra to library spectra or contrasting spectra via network analysis. Scoring functions are used to determine the similarity between pairs of fragment spectra as part of these processes. So far, no research has suggested scores that are significantly different from the commonly utilized cosine-based similarity.",wikipedia
+Meta-learning (computer science),https://en.wikipedia.org/wiki/Meta-learning_(computer_science)#Common_approaches,"Common_approaches There are three common approaches: using (cyclic) networks with external or internal memory (model-based) learning effective distance metrics (metrics-based) explicitly optimizing model parameters for fast learning (optimization-based).Model-based meta-learning models updates its parameters rapidly with a few training steps, which can be achieved by its internal architecture or controlled by another meta-learner model. A Memory-Augmented Neural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples. Meta Networks (MetaNet) learns a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. The core idea in metric-based meta-learning is similar to nearest neighbors algorithms, which weight is generated by a kernel function. It aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent. It should represent the relationship between inputs in the task space and facilitate problem solving. Siamese neural network is composed of two twin networks whose output is jointly trained. There is a function above to learn the relationship between input data sample pairs. The two networks are the same, sharing the same weight and network parameters. Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. The Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfied results. What optimization-based meta-learning algorithms intend for is to adjust the optimization algorithm so that the model can be good at learning with a few examples. LSTM-based meta-learner is to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The parametrization allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training. MAML, short for Model-Agnostic Meta-Learning, is a fairly general optimization algorithm, compatible with any model that learns through gradient descent. Reptile is a remarkably simple meta-learning optimization algorithm, given that both of its components rely on meta-optimization through gradient descent and both are model-agnostic.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers Quantum-enhanced machine learning refers to quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical data set into a quantum computer to make it accessible for quantum information processing. Subsequently, quantum information processing routines are applied and the result of the quantum computation is read out by measuring the quantum system. For example, the outcome of the measurement of a qubit reveals the result of a binary classification task. While many proposals of quantum machine learning algorithms are still purely theoretical and require a full-scale universal quantum computer to be tested, others have been implemented on small-scale or special purpose quantum devices. Associative (or content-addressable memories) are able to recognize stored content on the basis of a similarity measure, rather than fixed addresses, like in random access memories. As such they must be able to retrieve both incomplete and corrupted patterns, the essential machine learning task of pattern recognition. Typical classical associative memories store p patterns in the O ( n 2 ) {\displaystyle O(n^{2})} interactions (synapses) of a real, symmetric energy matrix over a network of n artificial neurons. The encoding is such that the desired patterns are local minima of the energy functional and retrieval is done by minimizing the total energy, starting from an initial configuration. Unfortunately, classical associative memories are severely limited by the phenomenon of cross-talk. When too many patterns are stored, spurious memories appear which quickly proliferate, so that the energy landscape becomes disordered and no retrieval is anymore possible. The number of storable patterns is typically limited by a linear function of the number of neurons, p ��� O ( n ) {\displaystyle p\leq O(n)} . Quantum associative memories (in their simplest realization) store patterns in a unitary matrix U acting on the Hilbert space of n qubits. Retrieval is realized by the unitary evolution of a fixed initial state to a quantum superposition of the desired patterns with probability distribution peaked on the most similar pattern to an input. By its very quantum nature, the retrieval process is thus probabilistic. However, quantum associative memories are free from cross-talk, never are spurious memories generated. Correspondingly, they have a superior capacity than classical ones. The number of parameters in the unitary matrix U is O ( p n ) {\displaystyle O(pn)} . One can thus have efficient, spurious-memory-free quantum associative memories for any polynomial number of patterns. A number of quantum algorithms for machine learning are based on the idea of amplitude encoding, that is, to associate the amplitudes of a quantum state with the inputs and outputs of computations. Since a state of n {\displaystyle n} qubits is described by 2 n {\displaystyle 2^{n}} complex amplitudes, this information encoding can allow for an exponentially compact representation. Intuitively, this corresponds to associating a discrete probability distribution over binary random variables with a classical vector.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers The goal of algorithms based on amplitude encoding is to formulate quantum algorithms whose resources grow polynomially in the number of qubits n {\displaystyle n} , which amounts to a logarithmic time complexity in the number of amplitudes and thereby the dimension of the input. Many quantum machine learning algorithms in this category are based on variations of the quantum algorithm for linear systems of equations (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix. One of these conditions is that a Hamiltonian which entry wise corresponds to the matrix can be simulated efficiently, which is known to be possible if the matrix is sparse or low rank. For reference, any known classical algorithm for matrix inversion requires a number of operations that grows more than quadratically in the dimension of the matrix (e. g. O ( n 2.373 ) {\displaystyle O{\mathord {\left(n^{2.373}\right)}}} ), but they are not restricted to sparse matrices. Quantum matrix inversion can be applied to machine learning methods in which the training reduces to solving a linear system of equations, for example in least-squares linear regression, the least-squares version of support vector machines, and Gaussian processes. A crucial bottleneck of methods that simulate linear algebra computations with the amplitudes of quantum states is state preparation, which often requires one to initialise a quantum system in a state whose amplitudes reflect the features of the entire dataset. Although efficient methods for state preparation are known for specific cases, this step easily hides the complexity of the task. VQAs are one of the most studied quantum algorithms as researchers expect that all the needed applications for the quantum computer will be using the VQAs and also VQAs seem to fulfill the expectation for gaining quantum supremacy. VQAs is a mixed quantum-classical approach where the quantum processor prepares quantum states and measurement is made and the optimization is done by a classical computer. VQAs are considered best for NISQ as VQAs are noise tolerant compared to other algorithms and give quantum superiority with only a few hundred qubits. Researchers have studied circuit-based algorithms to solve optimization problems and find the ground state energy of complex systems, which were difficult to solve or required a large time to perform the computation using a classical computer. Variational Quantum Circuits also known as Parametrized Quantum Circuits (PQCs) are based on Variational Quantum Algorithms (VQAs). VQCs consist of three parts, preparation of initial states, quantum circuit and measurement. Researchers are extensively studying VQCs, as it uses the power of quantum computation to learn in a short time and also use fewer parameters than its classical counterparts. It is theoretically and numerically proven that we can approximate non-linear functions, like those used in neural networks, on quantum circuits. Due to VQCs superiority, neural network has been replaced by VQCs in Reinforcement Learning tasks and Generative Algorithms.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers The intrinsic nature of quantum devices towards decoherence, random gate error and measurement errors caused to have high potential to limit the training of the variation circuits. Training the VQCs on the classical devices before employing them on quantum devices helps to overcome the problem of decoherence noise that came through the number of repetitions for training. Pattern reorganization is one of the important tasks of machine learning, binary classification is one of the tools or algorithms to find patterns. Binary classification is used in supervised learning and in unsupervised learning. In quantum machine learning, classical bits are converted to qubits and they are mapped to Hilbert space; complex value data are used in a quantum binary classifier to use the advantage of Hilbert space. By exploiting the quantum mechanic properties such as superposition, entanglement, interference the quantum binary classifier produces the accurate result in short period of time. Another approach to improving classical machine learning with quantum information processing uses amplitude amplification methods based on Grover's search algorithm, which has been shown to solve unstructured search problems with a quadratic speedup compared to classical algorithms. These quantum routines can be employed for learning algorithms that translate into an unstructured search task, as can be done, for instance, in the case of the k-medians and the k-nearest neighbors algorithms. Another application is a quadratic speedup in the training of perceptron. An example of amplitude amplification being used in a machine learning algorithm is Grover's search algorithm minimization. In which a subroutine uses Grover's search algorithm to find an element less than some previously defined element. This can be done with an oracle that determines whether or not a state with a corresponding element is less than the predefined one. Grover's algorithm can then find an element such that our condition is met. The minimization is initialized by some random element in our data set, and iteratively does this subroutine to find the minimum element in the data set. This minimization is notably used in quantum k-medians, and it has a speed up of at least O ( n / k ) {\displaystyle O({\sqrt {n/k}})} compared to classical versions of k-medians, where n {\displaystyle n} is the number of data points and k {\displaystyle k} is the number of clusters. Amplitude amplification is often combined with quantum walks to achieve the same quadratic speedup. Quantum walks have been proposed to enhance Google's PageRank algorithm as well as the performance of reinforcement learning agents in the projective simulation framework. Reinforcement learning is a branch of machine learning distinct from supervised and unsupervised learning, which also admits quantum enhancements. In quantum-enhanced reinforcement learning, a quantum agent interacts with a classical or quantum environment and occasionally receives rewards for its actions, which allows the agent to adapt its behavior���in other words, to learn what to do in order to gain more rewards.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers In some situations, either because of the quantum processing capability of the agent, or due to the possibility to probe the environment in superpositions, a quantum speedup may be achieved. Implementations of these kinds of protocols have been proposed for systems of trapped ions and superconducting circuits. A quantum speedup of the agent's internal decision-making time has been experimentally demonstrated in trapped ions, while a quantum speedup of the learning time in a fully coherent (`quantum') interaction between agent and environment has been experimentally realized in a photonic setup. Quantum annealing is an optimization technique used to determine the local minima and maxima of a function over a given set of candidate functions. This is a method of discretizing a function with many local minima or maxima in order to determine the observables of the function. The process can be distinguished from Simulated annealing by the Quantum tunneling process, by which particles tunnel through kinetic or potential barriers from a high state to a low state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. Then the time-dependent Schr��dinger equation guides the time evolution of the system, serving to affect the amplitude of each state as time increases. Eventually, the ground state can be reached to yield the instantaneous Hamiltonian of the system. As the depth of the quantum circuit advances on NISQ devices, the noise level rises, posing a significant challenge to accurately computing costs and gradients on training models. The noise tolerance will be improved by using the quantum perceptron and the quantum algorithm on the currently accessible quantum hardware. [citation needed] A regular connection of similar components known as neurons forms the basis of even the most complex brain networks. Typically, a neuron has two operations: the inner product and an activation function. As opposed to the activation function, which is typically nonlinear, the inner product is a linear process. With quantum computing, linear processes may be easily accomplished additionally, due to the simplicity of implementation, the threshold function is preferred by the majority of quantum neurons for activation functions. [citation needed] Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society. Examples include deep learning, probabilistic programming, and other machine learning and artificial intelligence applications. A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, it has been recently recognized as a potential candidate to speed up computations that rely on sampling by exploiting quantum effects.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard sampling techniques, such as Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset. The D-Wave 2X system hosted at NASA Ames Research Center has been recently used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures. Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks. The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets. In both cases, the models trained by quantum annealing had a similar or better performance in terms of quality. The ultimate question that drives this endeavour is whether there is quantum speedup in sampling applications. Experience with the use of quantum annealers for combinatorial optimization suggests the answer is not straightforward. Reverse annealing has been used as well to solve a fully connected quantum restricted Boltzmann machine. Inspired by the success of Boltzmann machines based on classical Boltzmann distribution, a new machine learning approach based on quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed. Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial. This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling. It is possible that a specific type of quantum Boltzmann machine has been trained in the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines. Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing. The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models and do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template. This provides an exponential reduction in computational complexity in probabilistic inference, and, while the protocol relies on a universal quantum computer, under mild assumptions it can be embedded on contemporary quantum annealing hardware.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks. The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models. Quantum neural networks are often defined as an expansion on Deutsch's model of a quantum computational network. Within this model, nonlinear and irreversible gates, dissimilar to the Hamiltonian operator, are deployed to speculate the given data set. Such gates make certain phases unable to be observed and generate specific oscillations. Quantum neural networks apply the principals quantum information and quantum computation to classical neurocomputing. Current research shows that QNN can exponentially increase the amount of computing power and the degrees of freedom for a computer, which is limited for a classical computer to its size. A quantum neural network has computational capabilities to decrease the number of steps, qubits used, and computation time. The wave function to quantum mechanics is the neuron for Neural networks. To test quantum applications in a neural network, quantum dot molecules are deposited on a substrate of GaAs or similar to record how they communicate with one another. Each quantum dot can be referred as an island of electric activity, and when such dots are close enough (approximately 10 - 20 nm) electrons can tunnel underneath the islands. An even distribution across the substrate in sets of two create dipoles and ultimately two spin states, up or down. These states are commonly known as qubits with corresponding states of | 0 ��� {\displaystyle |0\rangle } and | 1 ��� {\displaystyle |1\rangle } in Dirac notation. A novel design for multi-dimensional vectors that uses circuits as convolution filters is QCNN. It was inspired by the advantages of CNNs and the power of QML. It is made using a combination of a variational quantum circuit(VQC) and a deep neural network(DNN), fully utilizing the power of extremely parallel processing on a superposition of a quantum state with a finite number of qubits. The main strategy is to carry out an iterative optimization process in the NISQ devices, without the negative impact of noise, which is possibly incorporated into the circuit parameter, and without the need for quantum error correction. The quantum circuit must effectively handle spatial information in order for QCNN to function as CNN. The convolution filter is the most basic technique for making use of spatial information. One or more quantum convolutional filters make up a quantum convolutional neural network (QCNN), and each of these filters transforms input data using a quantum circuit that can be created in an organized or randomized way. Three parts that make up the quantum convolutional filter are: the encoder, the parameterized quantum circuit (PQC), and the measurement. The quantum convolutional filter can be seen as an extension of the filter in the traditional CNN because it was designed with trainable parameters.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers Quantum neural networks take advantage of the hierarchical structures, and for each subsequent layer, the number of qubits from the preceding layer is decreased by a factor of two. For n input qubits, these structure have O(log(n)) layers, allowing for shallow circuit depth. Additionally, they are able to avoid ""barren plateau,"" one of the most significant issues with PQC-based algorithms, ensuring trainability. Despite the fact that the QCNN model does not include the corresponding quantum operation, the fundamental idea of the pooling layer is also offered to assure validity. In QCNN architecture, the pooling layer is typically placed between succeeding convolutional layers. Its function is to shrink the representation's spatial size while preserving crucial features, which allows it to reduce the number of parameters, streamline network computing, and manage over-fitting. Such process can be accomplished applying full Tomography on the state to reduce it all the way down to one qubit and then processed it in subway. The most frequently used unit type in the pooling layer is max pooling, although there are other types as well. Similar to conventional feed-forward neural networks, the last module is a fully connected layer with full connections to all activations in the preceding layer. Translational invariance, which requires identical blocks of parameterized quantum gates within a layer, is a distinctive feature of the QCNN architecture. Dissipative QNNs (DQNNs) are constructed from layers of qubits coupled by perceptron called building blocks, which have an arbitrary unitary design. Each node in the network layer of a DQNN is given a distinct collection of qubits, and each qubit is also given a unique quantum perceptron unitary to characterize it. The input states information are transported through the network in a feed-forward fashion, layer-to-layer transition mapping on the qubits of the two adjacent layers, as the name implies. Dissipative term also refers to the fact that the output layer is formed by the ancillary qubits while the input layers are dropped while tracing out the final layer. When performing a broad supervised learning task, DQNN are used to learn a unitary matrix connecting the input and output quantum states. The training data for this task consists of the quantum state and the corresponding classical labels. Inspired by the extremely successful classical Generative adversarial network(GAN), dissipative quantum generative adversarial network (DQGAN) is introduced for unsupervised learning of the unlabeled training data . The generator and the discriminator are the two DQNNs that make up a single DQGAN. The generator's goal is to create false training states that the discriminator cannot differentiate from the genuine ones, while the discriminator's objective is to separate the real training states from the fake states created by the generator. The relevant features of the training set are learned by the generator by alternate and adversarial training of the networks that aid in the production of sets that extend the training set. DQGAN has a fully quantum architecture and is trained in quantum data.",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers Hidden quantum Markov models (HQMMs) are a quantum-enhanced version of classical Hidden Markov Models (HMMs), which are typically used to model sequential data in various fields like robotics and natural language processing. Unlike the approach taken by other quantum-enhanced machine learning algorithms, HQMMs can be viewed as models inspired by quantum mechanics that can be run on classical computers as well. Where classical HMMs use probability vectors to represent hidden 'belief' states, HQMMs use the quantum analogue: density matrices. Recent work has shown that these models can be successfully learned by maximizing the log-likelihood of the given data via classical optimization, and there is some empirical evidence that these models can better model sequential data compared to classical HMMs in practice, although further work is needed to determine exactly when and how these benefits are derived. Additionally, since classical HMMs are a particular kind of Bayes net, an exciting aspect of HQMMs is that the techniques used show how we can perform quantum-analogous Bayesian inference, which should allow for the general construction of the quantum versions of probabilistic graphical models. In the most general case of quantum machine learning, both the learning device and the system under study, as well as their interaction, are fully quantum. This section gives a few examples of results on this topic. One class of problem that can benefit from the fully quantum approach is that of 'learning' unknown quantum states, processes or measurements, in the sense that one can subsequently reproduce them on another quantum system. For example, one may wish to learn a measurement that discriminates between two coherent states, given not a classical description of the states to be discriminated, but instead a set of example quantum systems prepared in these states. The naive approach would be to first extract a classical description of the states and then implement an ideal discriminating measurement based on this information. This would only require classical learning. However, one can show that a fully quantum approach is strictly superior in this case. (This also relates to work on quantum pattern matching. ) The problem of learning unitary transformations can be approached in a similar way. Going beyond the specific problem of learning states and transformations, the task of clustering also admits a fully quantum version, wherein both the oracle which returns the distance between data-points and the information processing device which runs the algorithm are quantum. Finally, a general framework spanning supervised, unsupervised and reinforcement learning in the fully quantum setting was introduced in, where it was also shown that the possibility of probing the environment in superpositions permits a quantum speedup in reinforcement learning. Such a speedup in the reinforcement-learning paradigm has been experimentally demonstrated in a photonic setup. The need for models that can be understood by humans emerges in quantum machine learning in analogy to classical machine learning and drives the research field of explainable quantum machine learning (or XQML in analogy to XAI/XML).",wikipedia
+Quantum machine learning,https://en.wikipedia.org/wiki/Quantum_machine_learning#Machine_learning_with_quantum_computers,"Machine_learning_with_quantum_computers XQML can be considered as an alternative research direction instead of finding a quantum advantage. For example, XQML has been used in the context of mobile malware detection and classification. Quantum Shapley values have also been proposed to interpret gates within a circuit based on a game-theoretic approach. For this purpose, gates instead of features act as players in a coalitional game with a value function that depends on measurements of the quantum circuit of interest.",wikipedia
+Self-supervised learning,https://en.wikipedia.org/wiki/Self-supervised_learning#Examples,"Examples Self-supervised learning is particularly suitable for speech recognition. For example, Facebook developed wav2vec, a self-supervised algorithm, to perform speech recognition using two deep convolutional neural networks that build on each other. Google's Bidirectional Encoder Representations from Transformers (BERT) model is used to better understand the context of search queries. OpenAI's GPT-3 is an autoregressive language model that can be used in language processing. It can be used to translate texts or answer questions, among other things. Bootstrap Your Own Latent (BYOL) is a NCSSL that produced excellent results on ImageNet and on transfer and semi-supervised benchmarks. The Yarowsky algorithm is an example of self-supervised learning in natural language processing. From a small number of labeled examples, it learns to predict which word sense of a polysemous word is being used at a given point in text. DirectPred is a NCSSL that directly sets the predictor weights instead of learning it via gradient update. Self-GenomeNet is an example of self-supervised learning in genomics.",wikipedia
+Feed forward (control),https://en.wikipedia.org/wiki/Feed_forward_(control)#Applications,"Applications In physiology, feed-forward control is exemplified by the normal anticipatory regulation of heartbeat in advance of actual physical exertion by the central autonomic network. Feed-forward control can be likened to learned anticipatory responses to known cues (predictive coding). Feedback regulation of the heartbeat provides further adaptiveness to the running eventualities of physical exertion. Feedforward systems are also found in biological control of other variables by many regions of animals brains. [citation needed] Even in the case of biological feedforward systems, such as in the human brain, knowledge or a mental model of the plant (body) can be considered to be mathematical as the model is characterized by limits, rhythms, mechanics and patterns. A pure feed-forward system is different from a homeostatic control system, which has the function of keeping the body's internal environment 'steady' or in a 'prolonged steady state of readiness. ' A homeostatic control system relies mainly on feedback (especially negative), in addition to the feedforward elements of the system. Feed-forward loops (FFLs), a three-node graph of the form A affects B and C and B affects C, are frequently observed in transcription networks in several organisms including E. coli and S. cerevisiae, suggesting that they perform functions that are important for the functioning of these organisms. In E. coli and S. cerevisiae transcription networks have been extensively studied, FFLs occur approximately three times more frequently than expected based on random (Erd��s-R��nyi) networks. Edges in transcription networks are directed and signed, as they represent activation (+) or repression (-). The sign of a path in a transcription network can be obtained by multiplying the signs of the edges in the path, so a path with an odd number of negative signs is negative. There are eight possible three-node FFLs as each of the three arrows can be either repression or activation, which can be classified into coherent or incoherent FFLs. Coherent FFLs have the same sign for both the paths from A to C, and incoherent FFLs have different signs for the two paths. The temporal dynamics of FFLs show that coherent FFLs can be sign-sensitive delays that filter input into the circuit. We consider the differential equations for a Type-I coherent FFL, where all the arrows are positive: �� B �� t = �� B ( A ) ��� �� B B {\displaystyle {\frac {\delta B}{\delta t}}=\beta _{B}(A)-\gamma _{B}B} �� C �� t = �� C ( A , B ) ��� �� C C {\displaystyle {\frac {\delta C}{\delta t}}=\beta _{C}(A,B)-\gamma _{C}C} Where �� y {\displaystyle \beta _{y}} and �� z {\displaystyle \beta _{z}} are increasing functions in A {\displaystyle A} and B {\displaystyle B} representing production, and �� Y {\displaystyle \gamma _{Y}} and �� z {\displaystyle \gamma _{z}} are rate constants representing degradation or dilution of B {\displaystyle B} and C {\displaystyle C} respectively.",wikipedia
+Feed forward (control),https://en.wikipedia.org/wiki/Feed_forward_(control)#Applications,"Applications �� C ( A , B ) {\displaystyle \beta _{C}(A,B)} can represent an AND gate where �� C ( A , B ) = 0 {\displaystyle \beta _{C}(A,B)=0} if either A = 0 {\displaystyle A=0} or B = 0 {\displaystyle B=0} , for instance if �� C ( A , B ) = �� C �� A ( A > k A C ) �� A ( B > k A B C ) {\displaystyle \beta _{C}(A,B)=\beta _{C}\theta _{A}(A>k_{AC})\theta _{A}(B>k_{ABC})} where �� A {\displaystyle \theta _{A}} and �� B {\displaystyle \theta _{B}} are step functions. In this case the FFL creates a time-delay between a sustained on-signal, i. e. increase in A {\displaystyle A} and the output increase in C {\displaystyle C} . This is because production of A {\displaystyle A} must first induce production of B {\displaystyle B} , which is then needed to induce production of C {\displaystyle C} . However, there is no time-delay in for an off-signal because a reduction of A {\displaystyle A} immediately results in a decrease in the production term �� C ( A , B ) {\displaystyle \beta _{C}(A,B)} . This system therefore filters out fluctuations in the on-signal and detects persistent signals. This is particularly relevant in settings with stochastically fluctuating signals. In bacteria these circuits create time delays ranging from a few minutes to a few hours. Similarly, an inclusive-OR gate in which C {\displaystyle C} is activated by either A {\displaystyle A} or B {\displaystyle B} is a sign-sensitive delay with no delay after the ON step but with a delay after the OFF step. This is because an ON pulse immediately activates B and C, but an OFF step does not immediately result in deactivation of C because B can still be active. This can protect the system from fluctuations that result in the transient loss of the ON signal and can also provide a form of memory. Kalir, Mangan, and Alon, 2005 show that the regulatory system for flagella in E. coli is regulated with a Type 1 coherent feedforward loop. For instance, the regulation of the shift from one carbon source to another in diauxic growth in E. coli can be controlled via a type-1 coherent FFL. In diauxic growth cells growth using two carbon sources by first rapidly consuming the preferred carbon source, and then slowing growth in a lag phase before consuming the second less preferred carbon source. In E. coli, glucose is preferred over both arabinose and lactose. The absence of glucose is represented via a small molecule cAMP. Diauxic growth in glucose and lactose is regulated by a simple regulatory system involving cAMP and the lac operon. However, growth in arabinose is regulated by a feedforward loop with an AND gate which confers an approximately 20 minute time delay between the ON-step in which cAMP concentration increases when glucose is consumed and when arabinose transporters are expressed. There is no time delay with the OFF signal which occurs when glucose is present.",wikipedia
+Feed forward (control),https://en.wikipedia.org/wiki/Feed_forward_(control)#Applications,"Applications This prevents the cell from shifting to growth on arabinose based on short term fluctuations in glucose availability. Additionally, feedforward loops can facilitate cellular memory. Doncic and Skotheim (2003) show this in the regulation in the mating of yeast, where extracellular mating pheromone which indices mating behavior including preventing cells from entering the cell cycle. The mating pheromone activates the MAPK pathway which then activates the cell-cycle inhibitor Far1 and activates the Ste12 transcription factor that increases the synthesis of inactive Far1. In this system the concentration of active Far1 depends on the time integral of a function of the external mating pheromone concentration. This dependence on past levels of mating pheromone is a form of cellular memory. This system simultaneously allows for the stability and reversibility. Incoherent feedforward loops, in which the two paths from the input to the output node have different signs result in short pulses in response to an ON signal. In this system input A simultaneous directly increases and indirectly decreases synthesis of output node C. If the indirect path to C (via B) is slower than the direct path a pulse of output is produced in the time period before levels of B are high enough to inhibit synthesis of C. Response to epidermal growth factor (EGF) in dividing mammalian cells is an example of a Type-1 incoherent FFL. The frequent observation of feed-forward loops in several biological contexts across scales suggests that they have structural properties that are highly adaptive in several contexts. Several theoretical and experimental studies including those discussed here show that FFLs create a mechanism for biological systems to process and store information, which is important for predictive behavior and survival in complex dynamically changing environments. In computing, feed-forward normally refers to a perceptron network in which the outputs from all neurons go to following but not preceding layers, so there are no feedback loops. The connections are set up during a training phase, which in effect is when the system is a feedback system. In the early 1970s, intercity coaxial transmission systems, including L-carrier, used feed-forward amplifiers to diminish linear distortion. This more complex method allowed wider bandwidth than earlier feedback systems. Optical fiber, however, made such systems obsolete before many were built. Feedforward control is a discipline within the field of automatic controls used in automation. The method is rather a new technique that changes the phase of an open-loop transfer function of a non-minimum phase system into minimum phase.",wikipedia
+Graph neural network,https://en.wikipedia.org/wiki/Graph_neural_network,"Graph neural network A graph neural network (GNN) is a class of artificial neural networks for processing data that can be represented as graphs. In the more general subject of ""geometric deep learning"", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs. A convolutional neural network layer, in the context of computer vision, can be seen as a GNN applied to graphs whose nodes are pixels and only adjacent pixels are connected by edges in the graph. A transformer layer, in natural language processing, can be seen as a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text. The key design element of GNNs is the use of pairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Since their inception, several different GNN architectures have been proposed, which implement different flavors of message passing, started by recursive or convolutional constructive approaches. As of 2022[update], whether it is possible to define GNN architectures ""going beyond"" message passing, or if every GNN can be built on message passing over suitably defined graphs, is an open research question. Relevant application domains for GNNs include Natural Language Processing, social networks, citation networks, molecular biology, chemistry, physics and NP-hard combinatorial optimization problems. Several open source libraries implementing graph neural networks are available, such as PyTorch Geometric (PyTorch), TensorFlow GNN (TensorFlow), jraph (Google JAX), and GraphNeuralNetworks.jl (Julia, Flux).",wikipedia
+Neural network Gaussian process,https://en.wikipedia.org/wiki/Neural_network_Gaussian_process,"Neural network Gaussian process A Neural Network Gaussian Process (NNGP) is a Gaussian process obtained as the limit (in the sense of convergence in distribution) of a sequence of neural networks, and provide a closed form way to evaluate many kinds of neural networks. Mathematically, an NNGP is just a gaussian process. It is distinguished only by how it is obtained (it is an intensional definition): a NNGP is a GP obtained as the limit of a sequence of neural networks, with limit taken in the sense of convergence in distribution. A wide variety of neural network architectures converges in distribution to a gaussian process at the infinitely wide limit. This is proven for: single hidden layer Bayesian neural networks; deep fully connected networks as the number of units per layer is taken to infinity; convolutional neural networks as the number of channels is taken to infinity; transformer networks as the number of attention heads is taken to infinity; recurrent networks as the number of units is taken to infinity. In fact, this NNGP correspondence holds for almost any architecture: Generally, if an architecture can be expressed solely via matrix multiplication and coordinatewise nonlinearities (i.e. a tensor program), then it has an infinite-width GP. This in particular includes all feedforward or recurrent neural networks composed of multilayer perceptron, recurrent neural networks (e.g. LSTMs, GRUs), (nD or graph) convolution, pooling, skip connection, attention, batch normalization, and/or layer normalization.",wikipedia
+Radial basis function,https://en.wikipedia.org/wiki/Radial_basis_function#RBFs_for_PDEs,"RBFs_for_PDEs Radial basis functions are used to approximate functions and so can be used to discretize and numerically solve Partial Differential Equations (PDEs). This was first done in 1990 by E. J. Kansa who developed the first RBF based numerical method. It is called the Kansa method and was used to solve the elliptic Poisson equation and the linear advection-diffusion equation. The function values at points x {\displaystyle \mathbf {x} } in the domain are approximated by the linear combination of RBFs: The derivatives are approximated as such: where N {\displaystyle N} are the number of points in the discretized domain, d {\displaystyle d} the dimension of the domain and �� {\displaystyle \lambda } the scalar coefficients that are unchanged by the differential operator. Different numerical methods based on Radial Basis Functions were developed thereafter. Some methods are the RBF-FD method, the RBF-QR method and the RBF-PUM method.",wikipedia
+Residual neural network,https://en.wikipedia.org/wiki/Residual_neural_network,"Residual neural network A Residual Neural Network (a.k.a. Residual Network, ResNet) is a deep learning model in which the weight layers learn residual functions with reference to the layer inputs. A Residual Network is a network with skip connections that perform identity mappings, merged with the layer outputs by addition. It behaves like a Highway Network whose gates are opened through strongly positive bias weights. This enables deep learning models with tens or hundreds of layers to train easily and approach better accuracy when going deeper. The identity skip connections, often referred to as ""residual connections"", are also used in the 1997 LSTM networks, Transformer models (e.g., BERT, GPT models such as ChatGPT), the AlphaGo Zero system, the AlphaStar system, and the AlphaFold system. Residual Networks were developed by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, which won the ImageNet 2015 competition.",wikipedia
+Residual neural network,https://en.wikipedia.org/wiki/Residual_neural_network#Variants_of_Residual_Blocks,"Variants_of_Residual_Blocks A Basic Block is the simplest building block studied in the original ResNet. This block consists of two sequential 3x3 convolutional layers and a residual connection. The input and output dimensions of both layers are equal. A Bottleneck Block consists of three sequential convolutional layers and a residual connection. The first layer in this block is a 1x1 convolution for dimension reduction, e.g., to 1/4 of the input dimension; the second layer performs a 3x3 convolution; the last layer is another 1x1 convolution for dimension restoration. The models of ResNet-50, ResNet-101, and ResNet-152 in are all based on Bottleneck Blocks. The Pre-activation Residual Block applies the activation functions (e.g., non-linearity and normalization) before applying the residual function F {\textstyle F} . Formally, the computation of a Pre-activation Residual Block can be written as: where �� {\textstyle \phi } can be any non-linearity activation (e.g., ReLU) or normalization (e.g., LayerNorm) operation. This design reduces the number of non-identity mappings between Residual Blocks. This design was used to train models with 200 to over 1000 layers. Since GPT-2, the Transformer Blocks have been dominantly implemented as Pre-activation Blocks. This is often referred to as ""pre-normalization"" in the literature of Transformer models. A Transformer Block is a stack of two Residual Blocks. Each Residual Block has a Residual Connection. The first Residual Block is a Multi-Head Attention Block, which performs (self-)attention computation followed by a linear projection. The second Residual Block is a feed-forward Multi-Layer Perceptron (MLP) Block. This block is analogous to an ""inverse"" bottleneck block: it has a linear projection layer (which is equivalent to a 1x1 convolution in the context of Convolutional Neural Networks) that increases the dimension, and another linear projection that reduces the dimension. A Transformer Block has a depth of 4 layers (linear projections). The GPT-3 model has 96 Transformer Blocks (in the literature of Transformers, a Transformer Block is often referred to as a ""Transformer Layer""). This model has a depth of about 400 projection layers, including 96x4 layers in Transformer Blocks and a few extra layers for input embedding and output prediction. Very deep Transformer models cannot be successfully trained without Residual Connections.",wikipedia
+Residual neural network,https://en.wikipedia.org/wiki/Residual_neural_network#Related_Work,"Related_Work In the book written by Frank Rosenblatt, published in 1961, a three-layer Multilayer Perceptron (MLP) model with skip connections was presented (Chapter 15, p313 in ). The model was referred to as a ""cross-coupled system"", and the skip connections were forms of cross-coupled connections. In two books published in 1994 and 1996, ""skip-layer"" connections were presented in feed-forward MLP models: ""The general definition [of MLP] allows more than one hidden layer, and it also allows 'skip-layer' connections from input to output"" (p261 in, p144 in ), "". . . which allows the non-linear units to perturb a linear functional form"" (p262 in ). This description suggests that the non-linear MLP performs like a residual function (perturbation) added to a linear function. Sepp Hochreiter analyzed the vanishing gradient problem in 1991 and attributed to it the reason why deep learning did not work well. To overcome this problem, Long Short-Term Memory (LSTM) recurrent neural networks had skip connections or residual connections with a weight of 1.0 in every LSTM cell (called the constant error carrousel) to compute y t + 1 = F ( x t ) + x t {\textstyle y_{t+1}=F(x_{t})+x_{t}} . During backpropagation through time, this becomes the above-mentioned residual formula y = F ( x ) + x {\textstyle y=F(x)+x} for feedforward neural networks. This enables training very deep recurrent neural networks with a very long time span t. A later LSTM version published in 2000 modulates the identity LSTM connections by so-called forget gates such that their weights are not fixed to 1.0 but can be learned. In experiments, the forget gates were initialized with positive bias weights, thus being opened, addressing the vanishing gradient problem. The Highway Network of May 2015 applies these principles to feedforward neural networks. It was reported to be ""the first very deep feedforward network with hundreds of layers"". It is like an LSTM with forget gates unfolded in time, while the later Residual Nets have no equivalent of forget gates and are like the unfolded original LSTM. If the skip connections in Highway Networks are ""without gates"", or if their gates are kept open (activation 1.0) through strong positive bias weights, they become the identity skip connections in Residual Networks. The original Highway Network paper not only introduced the basic principle for very deep feedforward networks, but also included experimental results with 20, 50, and 100 layers networks, and mentioned ongoing experiments with up to 900 layers. Networks with 50 or 100 layers had lower training error than their plain network counterparts, but no lower training error than their 20 layers counterpart (on the MNIST dataset, Figure 1 in ). No improvement on test accuracy was reported with networks deeper than 19 layers (on the CIFAR-10 dataset; Table 1 in ). The ResNet paper, however, provided strong experimental evidence of the benefits of going deeper than 20 layers.",wikipedia
+Residual neural network,https://en.wikipedia.org/wiki/Residual_neural_network#Related_Work,"Related_Work It argued that the identity mapping without modulation is crucial and mentioned that modulation in the skip connection can still lead to vanishing signals in forward and backward propagation (Section 3 in ). This is also why the forget gates of the 2000 LSTM were initially opened through positive bias weights: as long as the gates are open, it behaves like the 1997 LSTM. Similarly, a Highway Net whose gates are opened through strongly positive bias weights behaves like a ResNet. The skip connections used in modern neural networks (e. g. , Transformers) are dominantly identity mappings. DenseNets in 2016 were designed as deep neural networks that attempt to connect each layer to every other layer. DenseNets approached this goal by using identity mappings as skip connections. Unlike ResNets, DenseNets merge the layer output with skip connections by concatenation, not addition. Neural networks with Stochastic Depth were made possible given the Residual Network architectures. This training procedure randomly drops a subset of layers and lets the signal propagate through the identity skip connection. Also known as ""DropPath"", this is an effective regularization method for training large and deep models, such as the Vision Transformer (ViT).",wikipedia
+Softmax function,https://en.wikipedia.org/wiki/Softmax_function#Computational_complexity_and_remedies,"Computational_complexity_and_remedies In neural network applications, the number K of possible outcomes is often large, e.g. in case of neural language models that predict the most likely outcome out of a vocabulary which might contain millions of possible words. This can make the calculations for the softmax layer (i.e. the matrix multiplications to determine the z i {\displaystyle z_{i}} , followed by the application of the softmax function itself) computationally expensive. What's more, the gradient descent backpropagation method for training such a neural network involves calculating the softmax for every training example, and the number of training examples can also become large. The computational effort for the softmax became a major limiting factor in the development of larger neural language models, motivating various remedies to reduce training times. Approaches that reorganize the softmax layer for more efficient calculation include the hierarchical softmax and the differentiated softmax. The hierarchical softmax (introduced by Morin and Bengio in 2005) uses a binary tree structure where the outcomes (vocabulary words) are the leaves and the intermediate nodes are suitably selected ""classes"" of outcomes, forming latent variables. The desired probability (softmax value) of a leaf (outcome) can then be calculated as the product of the probabilities of all nodes on the path from the root to that leaf. Ideally, when the tree is balanced, this would reduce the computational complexity from O ( K ) {\displaystyle O(K)} to O ( log 2 ��� K ) {\displaystyle O(\log _{2}K)} . In practice, results depend on choosing a good strategy for clustering the outcomes into classes. A Huffman tree was used for this in Google's word2vec models (introduced in 2013) to achieve scalability. A second kind of remedies is based on approximating the softmax (during training) with modified loss functions that avoid the calculation of the full normalization factor. These include methods that restrict the normalization sum to a sample of outcomes (e.g. Importance Sampling, Target Sampling).",wikipedia
+Physics-informed neural networks,https://en.wikipedia.org/wiki/Physics-informed_neural_networks#Modeling_and_computation,"Modeling_and_computation A general nonlinear partial differential equations can be: u t + N [ u ; �� ] = 0 , x ��� �� , t ��� [ 0 , T ] {\displaystyle u_{t}+N[u;\lambda ]=0,\quad x\in \Omega ,\quad t\in [0,T]} where u ( t , x ) {\displaystyle u(t,x)} denotes the solution, N [ ��� ; �� ] {\displaystyle N[\cdot ;\lambda ]} is a nonlinear operator parametrized by �� {\displaystyle \lambda } , and �� {\displaystyle \Omega } is a subset of R D {\displaystyle \mathbb {R} ^{D}} . This general form of governing equations summarizes a wide range of problems in mathematical physics, such as conservative laws, diffusion process, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems: data-driven solution data-driven discoveryof partial differential equations. The data-driven solution of PDE computes the hidden state u ( t , x ) {\displaystyle u(t,x)} of the system given boundary data and/or measurements z {\displaystyle z} , and fixed model parameters �� {\displaystyle \lambda } . We solve: u t + N [ u ] = 0 , x ��� �� , t ��� [ 0 , T ] {\displaystyle u_{t}+N[u]=0,\quad x\in \Omega ,\quad t\in [0,T]} . By defining the residual f ( t , x ) {\displaystyle f(t,x)} as f := u t + N [ u ] = 0 {\displaystyle f:=u_{t}+N[u]=0} , and approximating u ( t , x ) {\displaystyle u(t,x)} by a deep neural network. This network can be differentiated using automatic differentiation. The parameters of u ( t , x ) {\displaystyle u(t,x)} and f ( t , x ) {\displaystyle f(t,x)} can be then learned by minimizing the following loss function L t o t {\displaystyle L_{tot}} : L t o t = L u + L f {\displaystyle L_{tot}=L_{u}+L_{f}} . Where L u = ��� u ��� z ��� �� {\displaystyle L_{u}=\Vert u-z\Vert _{\Gamma }} is the error between the PINN u ( t , x ) {\displaystyle u(t,x)} and the set of boundary conditions and measured data on the set of points �� {\displaystyle \Gamma } where the boundary conditions and data are defined, and L f = ��� f ��� �� {\displaystyle L_{f}=\Vert f\Vert _{\Gamma }} is the mean-squared error of the residual function. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process. This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, modeling predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE.",wikipedia
+Physics-informed neural networks,https://en.wikipedia.org/wiki/Physics-informed_neural_networks#Modeling_and_computation,"Modeling_and_computation Given noisy and incomplete measurements z {\displaystyle z} of the state of the system, the data-driven discovery of PDE results in computing the unknown state u ( t , x ) {\displaystyle u(t,x)} and learning model parameters �� {\displaystyle \lambda } that best describe the observed data and it reads as follows: u t + N [ u ; �� ] = 0 , x ��� �� , t ��� [ 0 , T ] {\displaystyle u_{t}+N[u;\lambda ]=0,\quad x\in \Omega ,\quad t\in [0,T]} . By defining f ( t , x ) {\displaystyle f(t,x)} as f := u t + N [ u ; �� ] = 0 {\displaystyle f:=u_{t}+N[u;\lambda ]=0} , and approximating u ( t , x ) {\displaystyle u(t,x)} by a deep neural network, f ( t , x ) {\displaystyle f(t,x)} results in a PINN. This network can be derived using automatic differentiation. The parameters of u ( t , x ) {\displaystyle u(t,x)} and f ( t , x ) {\displaystyle f(t,x)} , together with the parameter �� {\displaystyle \lambda } of the differential operator can be then learned by minimizing the following loss function L t o t {\displaystyle L_{tot}} : L t o t = L u + L f {\displaystyle L_{tot}=L_{u}+L_{f}} . Where L u = ��� u ��� z ��� �� {\displaystyle L_{u}=\Vert u-z\Vert _{\Gamma }} , with u {\displaystyle u} and z {\displaystyle z} state solutions and measurements at sparse location �� {\displaystyle \Gamma } , respectively and L f = ��� f ��� �� {\displaystyle L_{f}=\Vert f\Vert _{\Gamma }} residual function. This second term requires the structured information represented by the partial differential equations to be satisfied in the training process. This strategy allows for discovering dynamic models described by nonlinear PDEs assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation.",wikipedia
+Feedforward neural network,https://en.wikipedia.org/wiki/Feedforward_neural_network#Timeline,"Timeline In 1958, a layered network of perceptrons, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learning connections, was introduced already by Frank Rosenblatt in his book Perceptron. This extreme learning machine was not yet a deep learning network.In 1965, the first deep-learning feedforward network, not yet using stochastic gradient descent, was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa, at the time called the Group Method of Data Handling.In 1967, a deep-learning network, using stochastic gradient descent for the first time, was able to classify non-linearily separable pattern classes, as reported Shun'ichi Amari. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.In 1970, modern backpropagation method, an efficient application of a chain-rule-based supervised learning, was for the first time published by the Finnish researcher Seppo Linnainmaa. The term (i.e. ""back-propagating errors"") itself has been used by Rosenblatt himself, but he did not know how to implement it, although a continuous precursor of backpropagation was already used in the context of control theory in 1960 by Henry J. Kelley. It is known also as a reverse mode of automatic differentiation.In 1982, backpropagation was applied in the way that has become standard, for the first time by Paul Werbos.In 1985, an experimental analysis of the technique was conducted by David E. Rumelhart et al.. Many improvements to the approach have been made in subsequent decades.In 1987, using a stochastic gradient descent within a (wide 12-layer nonlinear) feed-forward network, Matthew Brand has trained it to reproduce logic functions of nontrivial circuit depth, using small batches of random input/output samples. He, however, concluded that on hardware (sub-megaflop computers) available at the time it was impractical, and proposed using fixed random early layers as an input hash for a single modifiable layer.In 1990s, an (much simpler) alternative to using neural networks, although still related support vector machine approach was developed by Vladimir Vapnik and his colleagues. In addition to performing linear classification, they were able to efficiently perform a non-linear classification using what is called the kernel trick, using high-dimensional feature spaces.In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors.In 2017, modern transformer architectures has been introduced.In 2021, a very simple NN architecture combining two deep MLPs with skip connections and layer normalizations was designed and called MLP-Mixer; its realizations featuring 19 to 431 millions of parameters were shown to be comparable to vision transformers of similar size on ImageNet and similar image classification tasks.",wikipedia
+Multilayer perceptron,https://en.wikipedia.org/wiki/Multilayer_perceptron#Timeline,"Timeline In 1958, a layered network of perceptrons, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learning connections, was introduced already by Frank Rosenblatt in his book Perceptron. This extreme learning machine was not yet a deep learning network.In 1965, the first deep-learning feedforward network, not yet using stochastic gradient descent, was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa, at the time called the Group Method of Data Handling.In 1967, a deep-learning network, which used stochastic gradient descent for the first time, able to classify non-linearily separable pattern classes, was published by Shun'ichi Amari. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.In 1970, modern backpropagation method, an efficient application of a chain-rule-based supervised learning, was for the first time published by the Finnish researcher Seppo Linnainmaa. The term (i.e. ""back-propagating errors"") itself has been used by Rosenblatt himself, but he did not know how to implement it, although a continuous precursor of backpropagation was already used in the context of control theory in 1960 by Henry J. Kelley. It is known also as a reverse mode of automatic differentiation.In 1982, backpropagation was applied in the way that has become standard, for the first time by Paul Werbos.In 1985, an experimental analysis of the technique was conducted by David E. Rumelhart et al.. Many improvements to the approach have been made in subsequent decades,.In 1990s, an (much simpler) alternative to using neural networks, although still related support vector machine approach was developed by Vladimir Vapnik and his colleagues. In addition to performing linear classification, they were able to efficiently perform a non-linear classification using what is called the kernel trick, using high-dimensional feature spaces.In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors.In 2017, modern transformer architectures has been introduced.In 2021, a very simple NN architecture combining two deep MLPs with skip connections and layer normalizations was designed and called MLP-Mixer; its realizations featuring 19 to 431 millions of parameters were shown to be comparable to vision transformers of similar size on ImageNet and similar image classification tasks.",wikipedia
+Restricted Boltzmann machine,https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine,"Restricted Boltzmann machine A restricted Boltzmann machine (RBM) (also called a restricted Sherrington���Kirkpatrick model with external field or restricted stochastic Ising���Lenz���Little model) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. RBMs were initially proposed under the name Harmonium by Paul Smolensky in 1986, and rose to prominence after Geoffrey Hinton and collaborators used fast learning algorithms for them in the mid-2000. RBMs have found applications in dimensionality reduction, classification, collaborative filtering, feature learning, topic modelling and even many body quantum mechanics. They can be trained in either supervised or unsupervised ways, depending on the task. As their name implies, RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph: a pair of nodes from each of the two groups of units (commonly referred to as the ""visible"" and ""hidden"" units respectively) may have a symmetric connection between them; and there are no connections between nodes within a group. By contrast, ""unrestricted"" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, in particular the gradient-based contrastive divergence algorithm. Restricted Boltzmann machines can also be used in deep learning networks. In particular, deep belief networks can be formed by ""stacking"" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation.",wikipedia
+Restricted Boltzmann machine,https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine#Stacked_Restricted_Boltzmann_Machine,"Stacked_Restricted_Boltzmann_Machine The difference between the Stacked Restricted Boltzmann Machines and RBM is that RBM has lateral connections within a layer that are prohibited to make analysis tractable. On the other hand, the Stacked Boltzmann consists of a combination of an unsupervised three-layer network with symmetric weights and a supervised fine-tuned top layer for recognizing three classes. The usage of Stacked Boltzmann is to understand Natural languages, retrieve documents, image generation, and classification. These functions are trained with unsupervised pre-training and/or supervised fine-tuning. Unlike the undirected symmetric top layer, with a two-way unsymmetric layer for connection for RBM. The restricted Boltzmann's connection is three-layers with asymmetric weights, and two networks are combined into one. Stacked Boltzmann does share similarities with RBM, the neuron for Stacked Boltzmann is a stochastic binary Hopfield neuron, which is the same as the Restricted Boltzmann Machine. The energy from both Restricted Boltzmann and RBM is given by Gibb's probability measure: E = ��� 1 2 ��� i , j w i j s i s j + ��� i �� i s i {\displaystyle E=-{\frac {1}{2}}\sum _{i,j}{w_{ij}{s_{i}}{s_{j}}}+\sum _{i}{\theta _{i}}{s_{i}}} . The training process of Restricted Boltzmann is similar to RBM. Restricted Boltzmann train one layer at a time and approximate equilibrium state with a 3-segment pass, not performing back propagation. Restricted Boltzmann uses both supervised and unsupervised on different RBM for pre-training for classification and recognition. The training uses contrastive divergence with Gibbs sampling: ��wij = e*(pij - p'ij) The restricted Boltzmann's strength is it performs a non-linear transformation so it's easy to expand, and can give a hierarchical layer of features. The Weakness is that it has complicated calculations of integer and real-valued neurons. It does not follow the gradient of any function, so the approximation of Contrastive divergence to maximum likelihood is improvised.",wikipedia
+Markov chain,https://en.wikipedia.org/wiki/Markov_chain#History,"History Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Markov Processes in continuous time were discovered long before his work in the early 20th century in the form of the Poisson process. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains. In 1912 Henri Poincar�� studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Ir��n��e-Jules Bienaym��. Starting in 1928, Maurice Fr��chet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains. Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman���Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov���Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in 1930s, and then later Eugene Dynkin, starting in the 1950s.",wikipedia
+Yoshua Bengio,https://en.wikipedia.org/wiki/Yoshua_Bengio#Career_and_research,"Career_and_research After his PhD, Bengio was a postdoctoral fellow at MIT (supervised by Michael I. Jordan) and AT&T Bell Labs. Bengio has been a faculty member at the Universit�� de Montr��al since 1993, heads the MILA (Montreal Institute for Learning Algorithms) and is co-director of the Learning in Machines & Brains program at the Canadian Institute for Advanced Research. Along with Geoffrey Hinton and Yann LeCun, Bengio is considered by Cade Metz as one of the three people most responsible for the advancement of deep learning during the 1990s and 2000s. Among the computer scientists with an h-index of at least 100, Bengio was as of 2018 the one with the most recent citations per day, according to MILA. As of December 2022, he had the 2nd highest Discipline H-index (D-index) in computer science. Thanks to a 2019 article on a novel RNN architecture, Bengio has an Erd��s number of 3. In October 2016, Bengio co-founded Element AI, a Montreal-based artificial intelligence incubator that turns AI research into real-world business applications. The company sold its operations to ServiceNow in November 2020, with Bengio remaining at ServiceNow as an advisor. Bengio currently serves as scientific and technical advisor for Recursion Pharmaceuticals and scientific advisor for Valence Discovery. Following concerns raised by AI experts about the existential risks AI poses on humanity, in May 2023, Bengio stated in an interview to BBC that he felt ""lost"" over his life's work. He raised his concern about ""bad actors"" getting hold of AI, especially as it becomes more sophisticated and powerful. He called for better regulation, product registration, ethical training, and more involvement from governments in tracking and auditing AI products. Speaking with the Financial Times also in May 2023, Bengio said that he supported the monitoring of access to AI systems such as ChatGPT so that potentially illegal or dangerous uses could be tracked. In 2017, Bengio was named an Officer of the Order of Canada. The same year, he was nominated Fellow of the Royal Society of Canada and received the Marie-Victorin Quebec Prize. Together with Geoffrey Hinton and Yann LeCun, Bengio won the 2018 Turing Award. In 2020 he was elected a Fellow of the Royal Society. In 2022 he received the Princess of Asturias Award in the category ""Scientific Research"" with his peers Yann LeCun, Geoffrey Hinton and Demis Hassabis. In 2023 Bengio was appointed Knight of the Legion of Honour, France's highest order of merit. In August 2023 he was appointed to a United Nations scientific advisory council on technological advances. Ian Goodfellow, Yoshua Bengio and Aaron Courville: Deep Learning (Adaptive Computation and Machine Learning), MIT Press, Cambridge (USA), 2016. ISBN 978-0262035613. Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio (2014). ""Neural Machine Translation by Jointly Learning to Align and Translate"". arXiv:1409.0473 [cs. CL]. L��on Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, Yoshua Bengio, Yann LeCun: High Quality Document Image Compression with DjVu, In: Journal of Electronic Imaging, Band 7, 1998, S.",wikipedia
+Yoshua Bengio,https://en.wikipedia.org/wiki/Yoshua_Bengio#Career_and_research,"Career_and_research 410���425 doi:10.1117/1.482609 Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I. and Culotta, Aron (eds. ), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th���10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009 Y. Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, Zhouhan Lin: Towards Biologically Plausible Deep Learning, arXiv. org, 2016 Bengio contributed one chapter to Architects of Intelligence: The Truth About AI from the People Building it, Packt Publishing, 2018, ISBN 978-1-78-913151-2, by the American futurist Martin Ford.",wikipedia
+Ian Goodfellow,https://en.wikipedia.org/wiki/Ian_Goodfellow#Career,"Career After graduation, Goodfellow joined Google as part of the Google Brain research team. In March 2016 he left Google to join the newly founded OpenAI research laboratory. Barely 11 months later, in March 2017, Goodfellow returned to Google Research but left again in 2019. In 2019 Goodfellow joined Apple as director of machine learning in the Special Projects Group. He resigned from Apple in April 2022 to protest Apple's plan to require in-person work for its employees. Goodfellow then joined DeepMind as a research scientist.",wikipedia
+Demis Hassabis,https://en.wikipedia.org/wiki/Demis_Hassabis#Career_and_research,"Career_and_research After graduating from Cambridge, Hassabis worked at Lionhead Studios. Games designer Peter Molyneux, with whom Hassabis had worked at Bullfrog Productions, had recently founded the company. At Lionhead, Hassabis worked as lead AI programmer on the 2001 ""god"" game Black & White. Hassabis left Lionhead in 1998 to found Elixir Studios, a London-based independent games developer, signing publishing deals with Eidos Interactive, Vivendi Universal and Microsoft. In addition to managing the company, Hassabis served as executive designer of the BAFTA-nominated games Republic: The Revolution and Evil Genius. The release of Elixir's first game, Republic: The Revolution, a highly ambitious and unusual political simulation game, was delayed due to its huge scope, which involved an AI simulation of the workings of an entire fictional country. The final game was reduced from its original vision and greeted with lukewarm reviews, receiving a Metacritic score of 62/100. Evil Genius, a tongue-in-cheek Bond villain simulator, fared much better with a score of 75/100. In April 2005 the intellectual property and technology rights were sold to various publishers and the studio was closed. Following Elixir Studios, Hassabis returned to academia to obtain his PhD in cognitive neuroscience from University College London (UCL) in 2009 supervised by Eleanor Maguire. He sought to find inspiration in the human brain for new AI algorithms. He continued his neuroscience and artificial intelligence research as a visiting scientist jointly at Massachusetts Institute of Technology (MIT), in the lab of Tomaso Poggio, and Harvard University, before earning a Henry Wellcome postdoctoral research fellowship to the Gatsby Computational Neuroscience Unit at UCL in 2009 working with Peter Dayan. Working in the field of imagination, memory and amnesia, he co-authored several influential papers published in Nature, Science, Neuron and PNAS. His very first academic work, published in PNAS, was a landmark paper that showed systematically for the first time that patients with damage to their hippocampus, known to cause amnesia, were also unable to imagine themselves in new experiences. The finding established a link between the constructive process of imagination and the reconstructive process of episodic memory recall. Based on this work and a follow-up functional magnetic resonance imaging (fMRI) study, Hassabis developed a new theoretical account of the episodic memory system identifying scene construction, the generation and online maintenance of a complex and coherent scene, as a key process underlying both memory recall and imagination. This work received widespread coverage in the mainstream media and was listed in the top 10 scientific breakthroughs of the year by the journal Science. He later generalised these ideas to advance the notion of a 'simulation engine of the mind' whose role it was to imagine events and scenarios to aid with better planning. Hassabis is the CEO and co-founder of DeepMind, a machine learning AI startup, founded in London in 2010 with Shane Legg and Mustafa Suleyman. Hassabis met Legg when both were postdocs at the Gatsby Computational Neuroscience Unit, and he and Suleyman had been friends through family.",wikipedia
+Demis Hassabis,https://en.wikipedia.org/wiki/Demis_Hassabis#Career_and_research,"Career_and_research Hassabis also recruited his university friend and Elixir partner David Silver. DeepMind's mission is to ""solve intelligence"" and then use intelligence ""to solve everything else"". More concretely, DeepMind aims to combine insights from systems neuroscience with new developments in machine learning and computing hardware to unlock increasingly powerful general-purpose learning algorithms that will work towards the creation of an artificial general intelligence (AGI). The company has focused on training learning algorithms to master games, and in December 2013 it announced that it had made a pioneering breakthrough by training an algorithm called a Deep Q-Network (DQN) to play Atari games at a superhuman level by only using the raw pixels on the screen as inputs. DeepMind's early investors included several high-profile tech entrepreneurs. In 2014, Google purchased DeepMind for ��400 million. Although most of the company has remained an independent entity based in London, DeepMind Health has since been directly incorporated into Google Health. Since the Google acquisition, the company has notched up a number of significant achievements, perhaps the most notable being the creation of AlphaGo, a program that defeated world champion Lee Sedol at the complex game of Go. Go had been considered a holy grail of AI, for its high number of possible board positions and resistance to existing programming techniques. However, AlphaGo beat European champion Fan Hui 5���0 in October 2015 before winning 4���1 against former world champion Lee Sedol in March 2016. Additional DeepMind accomplishments include creating a Neural Turing Machine, reducing the energy used by the cooling systems in Google's data centers by 40%, advancing research on AI safety, and the creation of a partnership with the National Health Service (NHS) of the United Kingdom and Moorfields Eye Hospital to improve medical service and identify the onset of degenerative eye conditions. More recently, DeepMind turned its artificial intelligence to protein folding, a 50-year grand challenge in science, to predict the 3D structure of a protein from its 1D amino acid sequence. This is an important problem in biology, as proteins are essential to life, almost every biological function depends on them, and the function of a protein is thought to be related to its structure. In December 2018, DeepMind's tool AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. ""This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem"", Hassabis said to The Guardian. In November 2020, DeepMind again announced world-beating results in the CASP14 edition of the competition, with a median global distance test (GDT) score of 87.0 across protein targets in the challenging free-modeling category, much higher than the same 2018 results with a median GDT < 60, and an overall error of less than the width of an atom, making it competitive with experimental methods.",wikipedia
+Demis Hassabis,https://en.wikipedia.org/wiki/Demis_Hassabis#Career_and_research,"Career_and_research DeepMind has also been responsible for technical advancements in machine learning, having produced a number of award-winning papers. In particular, the company has made significant advances in deep learning and reinforcement learning, and pioneered the field of deep reinforcement learning which combines these two methods. Hassabis has predicted that Artificial Intelligence will be ""one of the most beneficial technologies of mankind ever"" but that significant ethical issues remain. In 2023, Hassabis signed the statement that ""Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"". He considers however that a pause on AI progress would be very hard to enforce worldwide, and that the potential benefits (e. g. for health and against climate change) make it worth continuing. He said that there is an urgent need for research on evaluation tests that measure how capable and controllable new AI models are.",wikipedia
+Andrej Karpathy,https://en.wikipedia.org/wiki/Andrej_Karpathy,"Andrej Karpathy Andrej Karpathy (born 23 October 1986) is a Slovak-Canadian computer scientist who served as the director of artificial intelligence and Autopilot Vision at Tesla. He currently works for OpenAI, where he specializes in deep learning and computer vision.",wikipedia
+Andrej Karpathy,https://en.wikipedia.org/wiki/Andrej_Karpathy#Biography,"Biography Karpathy was born in Bratislava, Czechoslovakia (now Slovakia) and moved with his family to Toronto when he was 15. He completed his Computer Science and Physics bachelor's degrees at University of Toronto in 2009 and his master's degree at University of British Columbia in 2011, where he worked on physically-simulated figures (for example, a simulated runner or a simulated person in a crowd). Karpathy received a PhD from Stanford University in 2016 under the supervision of Fei-Fei Li, focusing on the intersection of natural language processing and computer vision, and deep learning models suited for this task. He authored and was the primary instructor of the first deep learning course at Stanford, CS 231n: Convolutional Neural Networks for Visual Recognition. It became one of the largest classes at Stanford, growing from 150 students in 2015 to 750 in 2017. Karpathy is a founding member of the artificial intelligence research group OpenAI, where he was a research scientist from 2015 to 2017. In June 2017 he became Tesla's director of artificial intelligence. He was named one of MIT Technology Review's Innovators Under 35 for 2020. After taking a several months-long sabbatical from Tesla, he announced he was leaving the company in July 2022. As of February 2023, he makes YouTube videos on how to create artificial neural networks. In February 2023, Karpathy announced he was returning to OpenAI.",wikipedia
+Philosophy of artificial intelligence,https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Can_a_machine_display_general_intelligence.3F,"Can_a_machine_display_general_intelligence.3F Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers, evoking the question: does it matter whether a machine is really thinking, as a person thinks, rather than just producing outcomes that appear to result from thinking? The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956: ""Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. ""Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible. It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's infamous child machine proposal, essentially achieves the desired feature of intelligence without a precise design-time description as to how it would exactly work. The account on robot tacit knowledge eliminates the need for a precise description altogether. The first step to answering the question is to clearly define ""intelligence"". Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggests that: if a machine can answer any question posed to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question ""can people think? "" He writes ""instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks"". Turing's test extends this polite convention to machines: If a machine acts as intelligently as a human being, then it is as intelligent as a human being. One criticism of the Turing test is that it only measures the ""humanness"" of the machine's behavior, rather than the ""intelligence"" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J.",wikipedia
+Philosophy of artificial intelligence,https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Can_a_machine_display_general_intelligence.3F,"Can_a_machine_display_general_intelligence.3F Russell and Peter Norvig write that ""aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'"". Twenty-first century AI research defines intelligence in terms of goal-directed behavior. It views intelligence as a set of problems that the machine is expected to solve -- the more problems it can solve, and the better its solutions are, the more intelligent the program is. AI founder John McCarthy defined intelligence as ""the computational part of the ability to achieve goals in the world. "" Stuart Russell and Peter Norvig formalized this definition using abstract intelligent agents. An ""agent"" is something which perceives and acts in an environment. A ""performance measure"" defines what counts as success for the agent. ""If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent. ""Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes. They have the disadvantage that they can fail to differentiate between ""things that think"" and ""things that do not"". By this definition, even a thermostat has a rudimentary intelligence. Hubert Dreyfus describes this argument as claiming that ""if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then . . . we . . . ought to be able to reproduce the behavior of the nervous system with some physical device"". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988, is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain (1011 neurons) was performed in 2005, and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors. Even AI's harshest critics (such as Hubert Dreyfus and John Searle) agree that a brain simulation is possible in theory. However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered ""computation"". ""What we wanted to know is what distinguishes the mind from thermostats and livers,"" he writes. Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind, like trying to build a jet airliner by copying a living bird precisely, feather by feather, with no theoretical understanding of aeronautical engineering. In 1963, Allen Newell and Herbert A. Simon proposed that ""symbol manipulation"" was the essence of both human and machine intelligence.",wikipedia
+Philosophy of artificial intelligence,https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Can_a_machine_display_general_intelligence.3F,"Can_a_machine_display_general_intelligence.3F They wrote: ""A physical symbol system has the necessary and sufficient means of general intelligent action. ""This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). Another version of this position was described by philosopher Hubert Dreyfus, who called it ""the psychological assumption"": ""The mind can be viewed as a device operating on bits of information according to formal rules. ""The ""symbols"" that Newell, Simon and Dreyfus discussed were word-like and high level���symbols that directly correspond with objects in the world, such as and . Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level ""symbol processing"" that Newell and Simon discussed. These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required. In 1931, Kurt G��del proved with an incompleteness theorem that it is always possible to construct a ""G��del statement"" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed G��del statement is unprovable in the given system. (The truth of the constructed G��del statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false ""G��del statement"" instead. )[citation needed] More speculatively, G��del conjectured that the human mind can correctly eventually determine the truth or falsity of any well-grounded mathematical statement (including any possible G��del statement), and that therefore the human mind's power is not reducible to a mechanism. Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument. G��delian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its G��del statement)[citation needed]. This is provably impossible for a Turing machine to do (see Halting problem); therefore, the G��delian concludes that human reasoning is too powerful to be captured by a Turing machine, and by extension, any digital mechanical device. However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent ""idealized version"" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that G��del's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate.",wikipedia
+Philosophy of artificial intelligence,https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Can_a_machine_display_general_intelligence.3F,"Can_a_machine_display_general_intelligence.3F This consensus that G��delian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: ""any attempt to utilize (G��del's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis. "" Stuart Russell and Peter Norvig agree that G��del's argument does not consider the nature of real-world human reasoning. It applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to be able to prove everything in order to be an intelligent person. Less formally, Douglas Hofstadter, in his Pulitzer prize winning book G��del, Escher, Bach: An Eternal Golden Braid, states that these ""G��del-statements"" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as ""this statement is false"" or ""I am lying"". But, of course, the Epimenides paradox applies to anything that makes statements, whether they are machines or humans, even Lucas himself. Consider: Lucas can't assert the truth of this statement. This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless. After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. [citation needed][clarification needed]. By Penrose and Lucas's arguments, the fact that quantum computers are only able to complete Turing computable tasks implies that they cannot be sufficient for emulating the human mind. [citation needed] Therefore, Penrose seeks for some other process involving new physics, for instance quantum gravity which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing. Hubert Dreyfus argued that human intelligence and expertise depended primarily on fast intuitive judgements rather than step-by-step symbolic manipulation, and argued that these skills would never be captured in formal rules. Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing machinery and intelligence, where he had classified this as the ""argument from the informality of behavior. "" Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist.",wikipedia
+Philosophy of artificial intelligence,https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Can_a_machine_display_general_intelligence.3F,"Can_a_machine_display_general_intelligence.3F He wrote: ""we cannot so easily convince ourselves of the absence of complete laws of behaviour . . . The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws. '"" Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the ""rules"" that govern unconscious reasoning. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the ""background"" or context of knowledge. In fact, AI research in general has moved away from high level symbol manipulation, towards new models that are intended to capture more of our intuitive reasoning. Cognitive science and psychology eventually came to agree with Dreyfus' description of human expertise. Daniel Kahnemann and others developed a similar theory where they identified two ""systems"" that humans use to solve problems, which he called ""System 1"" (fast intuitive judgements) and ""System 2"" (slow deliberate step by step thinking). Although Dreyfus' views have been vindicated in many ways, the work in cognitive science and in AI was in response to specific problems in those fields and was not directly influenced by Dreyfus. Historian and AI researcher Daniel Crevier wrote that ""time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier.",wikipedia
+Clearview AI,https://en.wikipedia.org/wiki/Clearview_AI#Use,"Use Following a data leak of Clearview's customer list, Buzzfeed confirmed that 2,200 organizations in 27 countries had accounts with activity. The listing below was current as of February 27, 2020, prior to Clearview's May 2022 settlement with the ACLU, to only sell its services to law enforcement and government agencies. Some may only have had trial access, and many organizations denied any connection to Clearview. Illinois Secretary of State (almost 9,000 searches, has been using since approx Nov 2019) Macon County, Illinois Sheriff's Office (2,000 searches) Naperville, Illinois Police Department (1,700 searches) New York Police Department (over 11,000 searches by over 30 accounts) Raleigh Police Department, North Carolina (a paid client, then its use was banned, then continued to use trial access after the ban) Senoia Police Department, Senoia, Georgia (a paid customer) Chicago Police Department (a paid customer, over 1,500 searches on 30 accounts, paid $49,875 for a two-year license) New York State Police (a paid customer, $15,000 for licenses) Indiana State Police (a paid customer, over 5,700 searches) San Mateo County Sheriffs Office, California (over 2,000 searches) San Diego Police Department (trialed by at least 2 detectives, then use prohibited on February 19, 2020 and reviewing ethical concerns) San Diego County District Attorney's office (unauthorized trials by 8 investigators) Philadelphia Police Department (over 2,000 searches) Miami Police Department (over 3,000 searches) Plano Police Department, Plano, Texas (trialed; a yearly subscription quoted at $10,000) Irving Police Department, Irving, Texas (trialed) Fort Worth Police Department, Fort Worth, Texas (trialed, 250 searches) Texas Department of Public Safety (signed a $24k contract in December 2019) South Plainfield, New Jersey (signed a $2k contract in December 2019) North Dakota Attorney General (signed a $5k contract in January 2020) ""White House Tech Office"" (6 searches) FBI (5,700 searches) BATF (2,100 searches) US Secret Service (5,600 searches) DEA (2,000 searches) Department of Homeland Security's fusion centers; 10 centers have used it, one, the Louisiana State Analytic and Fusion Exchange, is a paying customer U. S. Air Force (signed a $50k contract in December 2019) U. S. Customs and Border Protection (not a paid customer, 280 accounts, 7,500 searches) U. S.",wikipedia
+Clearview AI,https://en.wikipedia.org/wiki/Clearview_AI#Use,"Use Immigration and Customs Enforcement divisions (60 accounts) including their Child Exploitation Investigations Unit of HSI (a paid customer as part of a pilot program), the Border Enforcement Security Task Force at JFK Airport, the Enforcement and Removal Operations, an HSI office in El Paso, Texas, and an office in Cherry Hill, New Jersey Republican Representative John Ratcliffe, twice nominated as Director of National Intelligence (never logged in or used the account) Virginia Beach Police Republican Representative Mark Walker (a staffer ran over 10 searches) Republican Representative Mike Rogers (never logged in or used the account) Republican Representative Lee Zeldin (never logged in or used the account)Columbia University Southern Methodist University University of Alabama Central Montco Technical High School, Montgomery County, Pennsylvania Somerset Berkley Regional High School, Somerset, Massachusetts Gavin de Becker and Associates (a paid customer with over 3,600 searches) SilverSEAL Global Security Home Depot AT&T (have performed over 200 searches) Verizon T-Mobile Kohls (over 2,000 searches) Rite Aid Best Buy Albertsons Walmart Macys (a paid customer with over 6,000 searches; terminated contract in January 2020) Wells Fargo Bank of America (over 1,900 searches) Coinbase Equinox Fitness National Basketball Association Pechanga Resort & Casino Las Vegas Sands Eventbrite Madison Square Garden Mubadala Investment Company, a sovereign wealth fund of United Arab Emirates (ran over 100 searches) AI Center of Advanced Studies (aka Thakaa), Riyadh, Saudi Arabia SHW Partners, LLC (Jason Miller), listed as ""friend"" Samarian Group (PE firm), listed as ""friend"" Droese Raney Architecture, listed as ""friend"" American Enterprise Institute (logins from AEI posted a banner stating ""Richard says hi"", presumably referring to Richard Schwartz) The Manhattan Institute (at least a dozen searches, though denied by the think tank) Greylock Partners (a potential investor in 2018) Data Collective Venture Capital (over 270 searches, as recently as Feb 2020) Passport Capital (over 350 searches) Sequoia Capital (over 210 searches in 10 months) Founders Fund (2 accounts and 70 searches) Thiel Capital (one account, never used) ICONIQ Capital (over 70 searches, as recently as Feb 2020) SoftBank RIT Capital Partners The Zellman Group (over 1,700 searches) Bruning Law Group Frontwave Credit Union Navy Federal Credit Union Chicago Cubs baseball team (15 searches)Vadodara City Police, part of the Gujarat Police in Vadodara, India Australian Federal Police Australian Centre to Counter Child Exploitation (7 trial accounts, Nov 2019 ��� Jan 2020) Metropolitan Police Service, London, UK 30 law enforcement agencies in Canada Royal Canadian Mounted Police (paying customer, used for four months in the National Child Exploitation Crime Centre and by others as a trial) Ontario Provincial Police Edmonton Police Service, Edmonton, Alberta (used by three officers without department approval) Halifax Regional Police, Halifax, Nova Scotia (by at least one officer as a trial) Halton Regional Police, Halton, Ontario (trialed) Hamilton Police Service, Hamilton, Ontario (trialed) London Police Service, London, Ontario (trial by seven officers) Niagara Regional Police Service, Niagara Falls, Ontario (trialed) Ottawa Police Service, Ottawa, Ontario (trialed) Toronto Police Service (tested from October 2019 to February 2020) York Regional Police (approximately 500 searches, used a trial without department approval) Peel Regional Police, Mississauga, Ontario Sweden law enforcement Ministry of Defence of Ukraine New Zealand Police (trialed Jan 2020)Australian Centre to Counter Child Exploitation (7 trial accounts, Nov 2019 ��� Jan 2020)Royal Canadian Mounted Police (paying customer, used for four months in the National Child Exploitation Crime Centre and by others as a trial) Ontario Provincial Police Edmonton Police Service, Edmonton, Alberta (used by three officers without department approval) Halifax Regional Police, Halifax, Nova Scotia (by at least one officer as a trial) Halton Regional Police, Halton, Ontario (trialed) Hamilton Police Service, Hamilton, Ontario (trialed) London Police Service, London, Ontario (trial by seven officers) Niagara Regional Police Service, Niagara Falls, Ontario (trialed) Ottawa Police Service, Ottawa, Ontario (trialed) Toronto Police Service (tested from October 2019 to February 2020) York Regional Police (approximately 500 searches, used a trial without department approval) Peel Regional Police, Mississauga, OntarioThe New Zealand Police used it in a trial after being approached by Clearview's Marko Jukic in January 2020.",wikipedia
+Clearview AI,https://en.wikipedia.org/wiki/Clearview_AI#Use,"Use Jukic said it would have helped identify the Christchurch mosque shooter had the technology been available. During the police's trial they searched for people ""of M��ori or Polynesian ethnicity"", as well as ""Irish roof contractors"" to determine its bias and accuracy. This raised strong objections once exposed, as neither the users' supervisors or the Privacy Commissioner were aware or approved of its use. After it was revealed by RNZ, Justice Minister Andrew Little stated ""I don't know how it came to be that a person thought that this was a good idea"", going on to say ""It clearly wasn't endorsed, from the senior police hierarchy, and it clearly didn't get the endorsement from the [Police] Minister nor indeed from the wider cabinet . . . that is a matter of concern. "" Clearview's technology was used for identifying an individual at a May 30, 2020 George Floyd police violence protest in Miami, Florida. Miami's WTVJ confirmed this, as the arrest report only said she was ""identified through investigative means"". The defendant's attorney did not even know it was with Clearview. Ton-That confirmed its use, noting that it was not being used for surveillance, but only to investigate a crime. In another Florida case, Clearview's technology was used by defense attorneys to successfully locate a witness, resulting in the dismissal of vehicular homicide charges against the defendant. According to Ton-That, Clearview also aided in locating a crucial witness for a case against a defendant in Florida. The person's defense lawyer remarked that the AI was able to get matches in just 3���5 seconds.",wikipedia
+Chatbot,https://en.wikipedia.org/wiki/Chatbot,"Chatbot A chatbot (originally chatterbot) is a software application or web interface that aims to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use artificial intelligence (AI) systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such technologies often utilize aspects of deep learning and natural language processing, but more simplistic chatbots have been around for decades prior. Recently, this field has gained widespread attention due to the popularity of OpenAI's ChatGPT (using GPT-3 or GPT-4), released in 2022, followed by alternatives such as Microsoft's Bing Chat (which uses OpenAI's GPT-4) and Google's Bard. Such examples reflect the recent practice of such products being built based upon broad foundational large language models that get fine-tuned so as to target specific tasks or applications (i.e. simulating human conversation, in the case of chatbots). Chatbots can also be designed or customized to further target even more specific situations and/or particular subject-matter domains. A major area where chatbots have long been used is in customer service and support, such as with various sorts of virtual assistants. Companies spanning various industries have begun using the latest generative artificial intelligence technologies to power more advanced developments in such areas.",wikipedia
+Chatbot,https://en.wikipedia.org/wiki/Chatbot#Development,"Development Among the most notable early chatbots are ELIZA (1966) and PARRY (1972). More recent notable programs include A.L.I.C.E., Jabberwacky and D.U.D.E (Agence Nationale de la Recherche and CNRS 2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include other functional features, such as games and web searching abilities. In 1984, a book called The Policeman's Beard is Half Constructed was published, allegedly written by the chatbot Racter (though the program as released would not have been capable of doing so). From 1978 to some time after 1983, the CYRUS project led by Janet Kolodner constructed a chatbot simulating Cyrus Vance (57th United States Secretary of State). It used case-based reasoning, and updated its database daily by parsing wire news from United Press International. The program was unable to process the news items subsequent to the surprise resignation of Cyrus Vance in April 1980, and the team constructed another chatbot simulating his successor, Edmund Muskie. One pertinent field of AI research is natural-language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent, and has since been adopted by various other developers of, so-called, Alicebots. Nevertheless, A.L.I.C.E. is still purely based on pattern matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities. Jabberwacky learns new responses and context based on real-time user interactions, rather than being driven from a static database. Some more recent chatbots also combine real-time learning with evolutionary algorithms that optimize their ability to communicate based on each conversation held. Still, there is currently no general purpose conversational artificial intelligence, and some software developers focus on the practical aspect, information retrieval. Chatbot competitions focus on the Turing test or more specific goals. Two such annual contests are the Loebner Prize and The Chatterbox Challenge (the latter has been offline since 2015, however, materials can still be found from web archives). Chatbots may use neural networks as a language model. For example, generative pre-trained transformers (GPT), which use the transformer architecture, have become common to build sophisticated chatbots. The ""pre-training"" in its name refers to the initial training process on a large text corpus, which provides a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data. An example of a GPT chatbot is ChatGPT. Despite criticism of its accuracy, ChatGPT has gained attention for its detailed responses and historical knowledge. Another example is BioGPT, developed by Microsoft, which focuses on answering biomedical questions. DBpedia created a chatbot during the GSoC of 2017. It can communicate through Facebook Messenger.",wikipedia
+Chatbot,https://en.wikipedia.org/wiki/Chatbot#See_also,See_also Linguistics portalApplications of artificial intelligence Autonomous agent ChatGPT (from OpenAI) Conversational user interface Dead Internet theory Eugene Goostman Friendly artificial intelligence Hybrid intelligent system Intelligent agent Internet bot List of chatbots Multi-agent system Natural language processing Social bot Software agent Software bot Stochastic parrot Twitterbot,wikipedia
+List of speech recognition software,https://en.wikipedia.org/wiki/List_of_speech_recognition_software#Interactive_voice_response,"Interactive_voice_response The following are interactive voice response (IVR) systems: CSLU Toolkit Genesys HTK ��� copyrighted by Microsoft, but allows altering software for licensee's internal use LumenVox ASR Tellme Networks; acquired by Microsoft",wikipedia
+Data compression,https://en.wikipedia.org/wiki/Data_compression#Lossless,"Lossless Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding ""red pixel, red pixel, . . . "" the data may be encoded as ""279 red pixels"". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy. The Lempel���Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch, the Lempel���Ziv���Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. Grammar-based codes like this can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archival, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Other practical grammar compression algorithms include Sequitur and Re-Pair. The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows���Wheeler transform can also be viewed as an indirect form of statistical modelling. In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.263, H.264/MPEG-4 AVC and HEVC for video coding.",wikipedia
+Data compression,https://en.wikipedia.org/wiki/Data_compression#Lossless,"Lossless Archive software typically has the ability to adjust the ""dictionary size"", where a larger size demands more random-access memory during compression and decompression, but compresses stronger, especially on repeating patterns in files' content.",wikipedia
+Laplace filter,https://en.wikipedia.org/wiki/Laplace_filter#Discrete_heat_equation,"Discrete_heat_equation Suppose �� {\textstyle \phi } describes a temperature distribution across a graph, where �� i {\textstyle \phi _{i}} is the temperature at vertex i {\textstyle i} . According to Newton's law of cooling, the heat transferred from node i {\textstyle i} to node j {\textstyle j} is proportional to �� i ��� �� j {\textstyle \phi _{i}-\phi _{j}} if nodes i {\textstyle i} and j {\textstyle j} are connected (if they are not connected, no heat is transferred). Then, for thermal conductivity k {\textstyle k} , In matrix-vector notation, which gives Notice that this equation takes the same form as the heat equation, where the matrix ���L is replacing the Laplacian operator ��� 2 {\textstyle \nabla ^{2}} ; hence, the ""graph Laplacian"". To find a solution to this differential equation, apply standard techniques for solving a first-order matrix differential equation. That is, write �� {\textstyle \phi } as a linear combination of eigenvectors v i {\textstyle \mathbf {v} _{i}} of L (so that L v i = �� i v i {\textstyle L\mathbf {v} _{i}=\lambda _{i}\mathbf {v} _{i}} ) with time-dependent coefficients, �� ( t ) = ��� i c i ( t ) v i . {\textstyle \phi (t)=\sum _{i}c_{i}(t)\mathbf {v} _{i}. } Plugging into the original expression (because L is a symmetric matrix, its unit-norm eigenvectors v i {\textstyle \mathbf {v} _{i}} are orthogonal): whose solution is As shown before, the eigenvalues �� i {\textstyle \lambda _{i}} of L are non-negative, showing that the solution to the diffusion equation approaches an equilibrium, because it only exponentially decays or remains constant. This also shows that given �� i {\textstyle \lambda _{i}} and the initial condition c i ( 0 ) {\textstyle c_{i}(0)} , the solution at any time t can be found. To find c i ( 0 ) {\textstyle c_{i}(0)} for each i {\textstyle i} in terms of the overall initial condition �� ( 0 ) {\textstyle \phi (0)} , simply project �� ( 0 ) {\textstyle \phi (0)} onto the unit-norm eigenvectors v i {\textstyle \mathbf {v} _{i}} ; This approach has been applied to quantitative heat transfer modelling on unstructured grids. In the case of undirected graphs, this works because L {\textstyle L} is symmetric, and by the spectral theorem, its eigenvectors are all orthogonal. So the projection onto the eigenvectors of L {\textstyle L} is simply an orthogonal coordinate transformation of the initial condition to a set of coordinates which decay exponentially and independently of each other. To understand lim t ��� ��� �� ( t ) {\textstyle \lim _{t\to \infty }\phi (t)} , the only terms c i ( t ) = c i ( 0 ) e ��� k �� i t {\textstyle c_{i}(t)=c_{i}(0)e^{-k\lambda _{i}t}} that remain are those where �� i = 0 {\textstyle \lambda _{i}=0} , since In other words, the equilibrium state of the system is determined completely by the kernel of L {\textstyle L} .",wikipedia
+Laplace filter,https://en.wikipedia.org/wiki/Laplace_filter#Discrete_heat_equation,"Discrete_heat_equation Since by definition, ��� j L i j = 0 {\textstyle \sum _{j}L_{ij}=0} , the vector v 1 {\textstyle \mathbf {v} ^{1}} of all ones is in the kernel. If there are k {\textstyle k} disjoint connected components in the graph, then this vector of all ones can be split into the sum of k {\textstyle k} independent �� = 0 {\textstyle \lambda =0} eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere. The consequence of this is that for a given initial condition c ( 0 ) {\textstyle c(0)} for a graph with N {\textstyle N} vertices where For each element �� j {\textstyle \phi _{j}} of �� {\textstyle \phi } , i. e. for each vertex j {\textstyle j} in the graph, it can be rewritten as In other words, at steady state, the value of �� {\textstyle \phi } converges to the same value at each of the vertices of the graph, which is the average of the initial values at all of the vertices. Since this is the solution to the heat diffusion equation, this makes perfect sense intuitively. We expect that neighboring elements in the graph will exchange energy until that energy is spread out evenly throughout all of the elements that are connected to each other. This section shows an example of a function �� {\textstyle \phi } diffusing over time through a graph. The graph in this example is constructed on a 2D discrete grid, with points on the grid connected to their eight neighbors. Three initial points are specified to have a positive value, while the rest of the values in the grid are zero. Over time, the exponential decay acts to distribute the values at these points evenly throughout the entire grid. The complete Matlab source code that was used to generate this animation is provided below. It shows the process of specifying initial conditions, projecting these initial conditions onto the eigenvalues of the Laplacian Matrix, and simulating the exponential decay of these projected initial conditions.",wikipedia
+Human Cognitive Abilities,https://en.wikipedia.org/wiki/Human_Cognitive_Abilities#Influence,"Influence According to Schneider & McGrew, Human Cognitive Abilities was the first ever book to present ""an empirically based taxonomy of human cognitive abilities...in a single, coherent, organized, systematic framework"". They also described Carroll's book as ""a much-needed Rosetta stone"" for future human intelligence researchers. Arthur Jensen referred to the book as Carroll's ""crowning achievement"" and ""a truly monumental work. It was a fulfillment of something that most of us would agree needed to be done, but it seemed too vast an undertaking to imagine how it could ever be done effectively and adequately."" In 1998, Carroll himself described the book's publication as ""the most bracing event of my life"" over the previous few years.",wikipedia
+Image compression,https://en.wikipedia.org/wiki/Image_compression#Lossy_and_lossless_image_compression,"Lossy_and_lossless_image_compression Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless. Methods for lossy compression: Transform coding ��� This is the most commonly used method. Discrete Cosine Transform (DCT) ��� The most widely used form of lossy compression. It is a type of Fourier-related transform, and was originally developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. The DCT is sometimes referred to as ""DCT-II"" in the context of a family of discrete cosine transforms (see discrete cosine transform). It is generally the most efficient form of image compression. DCT is used in JPEG, the most popular lossy format, and the more recent HEIF. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding. Color quantization - Reducing the color space to a few ""representative"" colors in the image. The selected colors are specified in the color palette in the header of the compressed image. Each pixel just references the index of a color in the color palette. This method can be combined with dithering to avoid posterization. Whole-image palette, typically 256 colors, used in GIF and PNG file formats. block palette, typically 2 or 4 colors for each block of 4x4 pixels, used in BTC, CCC, S2TC, and S3TC. Chroma subsampling. This takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the chrominance information in the image. Fractal compression. More recently, methods based on Machine Learning were applied, using Multilayer perceptrons, Convolutional neural networks and Generative adversarial networks. Implementations are available in OpenCV, TensorFlow, MATLAB's Image Processing Toolbox (IPT), and the High-Fidelity Generative Image Compression (HiFiC) open source project. Discrete Cosine Transform (DCT) ��� The most widely used form of lossy compression. It is a type of Fourier-related transform, and was originally developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. The DCT is sometimes referred to as ""DCT-II"" in the context of a family of discrete cosine transforms (see discrete cosine transform). It is generally the most efficient form of image compression. DCT is used in JPEG, the most popular lossy format, and the more recent HEIF. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding. DCT is used in JPEG, the most popular lossy format, and the more recent HEIF. Whole-image palette, typically 256 colors, used in GIF and PNG file formats. block palette, typically 2 or 4 colors for each block of 4x4 pixels, used in BTC, CCC, S2TC, and S3TC.",wikipedia
+OpenAI Five,https://en.wikipedia.org/wiki/OpenAI_Five,"OpenAI Five OpenAI Five is a computer program by OpenAI that plays the five-on-five video game Dota 2. Its first public appearance occurred in 2017, where it was demonstrated in a live one-on-one game against the professional player, Dendi, who lost to it. The following year, the system had advanced to the point of performing as a full team of five, and began playing against and showing the capability to defeat professional teams. By choosing a game as complex as Dota 2 to study machine learning, OpenAI thought they could more accurately capture the unpredictability and continuity seen in the real world, thus constructing more general problem-solving systems. The algorithms and code used by OpenAI Five were eventually borrowed by another neural network in development by the company, one which controlled a physical robotic hand. OpenAI Five has been compared to other similar cases of artificial intelligence (AI) playing against and defeating humans, such as AlphaStar in the video game StarCraft II, AlphaGo in the board game Go, Deep Blue in chess, and Watson on the television game show Jeopardy!.",wikipedia
+OpenAI Five,https://en.wikipedia.org/wiki/OpenAI_Five#History,"History Development on the algorithms used for the bots began in November 2016. OpenAI decided to use Dota 2, a competitive five-on-five video game, as a base due to it being popular on the live streaming platform Twitch, having native support for Linux, and had an application programming interface (API) available. Before becoming a team of five, the first public demonstration occurred at The International 2017 in August, the annual premiere championship tournament for the game, where Dendi, a professional Ukrainian player of the game, lost against an OpenAI bot in a live one-on-one matchup. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step in the direction of creating software that can handle complex tasks ""like being a surgeon"". OpenAI used a methodology called reinforcement learning, as the bots learn over time by playing against itself hundreds of times a day for months, in which they are rewarded for actions such as killing an enemy and destroying towers. By June 2018, the ability of the bots expanded to play together as a full team of five and were able to defeat teams of amateur and semi-professional players. At The International 2018, OpenAI Five played in two games against professional teams, one against the Brazilian-based paiN Gaming and the other against an all-star team of former Chinese players. Although the bots lost both matches, OpenAI still considered it a successful venture, stating that playing against some of the best players in Dota 2 allowed them to analyze and adjust their algorithms for future games. The bots' final public demonstration occurred in April 2019, where they won a best-of-three series against The International 2018 champions OG at a live event in San Francisco. A four-day online event to play against the bots, open to the public, occurred the same month. There, the bots played in 42,729 public games, winning 99.4% of those games.",wikipedia
+OpenAI Five,https://en.wikipedia.org/wiki/OpenAI_Five#Architecture,"Architecture Each OpenAI Five bot is a neural network containing a single layer with a 4096-unit LSTM that observes the current game state extracted from the Dota developer's API. The neural network conducts actions via numerous possible action heads (no human data involved), and every head has meaning. For instance, the number of ticks to delay an action, what action to select ��� the X or Y coordinate of this action in a grid around the unit. In addition, action heads are computed independently. The AI system observes the world as a list of 20,000 numbers and takes an action by conducting a list of eight enumeration values. Also, it selects different actions and targets to understand how to encode every action and observe the world. OpenAI Five has been developed as a general-purpose reinforcement learning training system on the ""Rapid"" infrastructure. Rapid consists of two layers: it spins up thousands of machines and helps them ���talk��� to each other and a second layer runs software. By 2018, OpenAI Five had played around 180 years worth of games in reinforcement learning running on 256 GPUs and 128,000 CPU cores, using Proximal Policy Optimization, a policy gradient method.",wikipedia
+OpenAI Five,https://en.wikipedia.org/wiki/OpenAI_Five#Comparisons_with_other_game_AI_systems,"Comparisons_with_other_game_AI_systems Prior to OpenAI Five, other AI versus human experiments and systems have been successfully used before, such as Jeopardy! with Watson, chess with Deep Blue, and Go with AlphaGo. In comparison with other games that have used AI systems to play against human players, Dota 2 differs as explained below: Long run view: The bots run at 30 frames per second for an average match time of 45 minutes, which results in 80,000 ticks per game. OpenAI Five observes every fourth frame, generating 20,000 moves. By comparison, chess usually ends before 40 moves, while Go ends before 150 moves. Partially observed state of the game: Players and their allies can only see the map directly around them. The rest of it is covered in a fog of war which hides enemies units and their movements. Thus, playing Dota 2 requires making inferences based on this incomplete data, as well as predicting what their opponent could be doing at the same time. By comparison, Chess and Go are ""full-information games"", as they do not hide elements from the opposing player. Continuous action space: Each playable character in a Dota 2 game, known as a hero, can take dozens of actions that target either another unit or a position. The OpenAI Five developers allow the space into 170,000 possible actions per hero. Without counting the perpetual aspects of the game, there are an average of ~1,000 valid actions each tick. By comparison, the average number of actions in chess is 35 and 250 in Go. Continuous observation space: Dota 2 is played on a large map with ten heroes, five on each team, along with dozens of buildings and non-player character (NPC) units. The OpenAI system observes the state of a game through developers��� bot API, as 20,000 numbers that constitute all information a human is allowed to get access to. A chess board is represented as about 70 lists, whereas a Go board has about 400 enumerations.",wikipedia
+OpenAI Five,https://en.wikipedia.org/wiki/OpenAI_Five#Reception,"Reception OpenAI Five have received acknowledgement from the AI, tech, and video game community at large. Microsoft founder Bill Gates called it a ""big deal"", as their victories ""required teamwork and collaboration"". Chess player Garry Kasparov, who lost against the Deep Blue AI in 1997, stated that despite their losing performance at The International 2018, the bots would eventually ""get there, and sooner than expected"". In a conversation with MIT Technology Review, AI experts also considered OpenAI Five system as a significant achievement, as they noted that Dota 2 was an ""extremely complicated game"", so even beating non-professional players was impressive. PC Gamer wrote that their wins against professional players was a significant event in machine learning. In contrast, Motherboard wrote that the victory was ""basically cheating"" due to the simplified hero pools on both sides, as well as the fact that bots were given direct access to the API, as opposed to using computer vision to interpret pixels on the screen. The Verge wrote that the bots were evidence that the company's approach to reinforcement learning and its general philosophy about AI was ""yielding milestones"". In 2019, DeepMind unveiled a similar bot for Starcraft II, AlphaStar. Like OpenAI Five, AlphaStar used reinforcement learning and self-play. The Verge reported that ""the goal with this type of AI research is not just to crush humans in various games just to prove it can be done. Instead, it���s to prove that ��� with enough time, effort, and resources ��� sophisticated AI software can best humans at virtually any competitive cognitive challenge, be it a board game or a modern video game."" They added that the DeepMind and OpenAI victories were also a testament to the power of certain uses of reinforcement learning. It was OpenAI's hope that the technology could have applications outside of the digital realm. In 2018, they were able to reuse the same reinforcement learning algorithms and training code from OpenAI Five for Dactyl, a human-like robot hand with a neural network built to manipulate physical objects. In 2019, Dactyl solved the Rubik's Cube.",wikipedia
+DreamBooth,https://en.wikipedia.org/wiki/DreamBooth,"DreamBooth DreamBooth is a deep learning generation model used to personalize existing text-to-image models by fine-tuning. It was developed by researchers from Google Research and Boston University in 2022. Originally developed using Google's own Imagen text-to-image model, DreamBooth implementations can be applied to other text-to-image models, where it can allow the model to generate more fine-tuned and personalized outputs after training on three to five images of a subject.",wikipedia
+DreamBooth,https://en.wikipedia.org/wiki/DreamBooth#Technology,"Technology Pretrained text-to-image diffusion models, while often capable of offering a diverse range of different image output types, lack the specificity required to generate images of lesser-known subjects, and are limited in their ability to render known subjects in different situations and contexts. The methodology used to run implementations of DreamBooth involves the fine-tuning the full UNet component of the diffusion model using a few images (usually 3--5) depicting a specific subject. Images are paired with text prompts that contain the name of the class the subject belongs to, plus a unique identifier. As an example, a photograph of a [Nissan R34 GTR] car, with car being the class); a class-specific prior preservation loss is applied to encourage the model to generate diverse instances of the subject based on what the model is already trained on for the original class. Pairs of low-resolution and high-resolution images taken from the set of input images are used to fine-tune the super-resolution components, allowing the minute details of the subject to be maintained.",wikipedia
+DreamBooth,https://en.wikipedia.org/wiki/DreamBooth#Usage,"Usage DreamBooth can be used to fine-tune models such as Stable Diffusion, where it may alleviate a common shortcoming of Stable Diffusion not being able to adequately generate images of specific individual people. Such a use case is quite VRAM intensive, however, and thus cost-prohibitive for hobbyist users. The Stable Diffusion adaptation of DreamBooth in particular is released as a free and open-source project based on the technology outlined by the original paper published by Ruiz et. al. in 2022. Concerns have been raised regarding the ability for bad actors to utilise DreamBooth to generate misleading images for malicious purposes, and that its open-source nature allows anyone to utilise or even make improvements to the technology. In addition, artists have expressed their apprehension regarding the ethics of using DreamBooth to train model checkpoints that are specifically aimed at imitating specific art styles associated with human artists; one such critic is Hollie Mengert, an illustrator for Disney and Penguin Random House who has had her art style trained into a checkpoint model via DreamBooth and shared online, without her consent.",wikipedia
+Kaggle,https://en.wikipedia.org/wiki/Kaggle#See_also,See_also Data science competition platform Anthony Goldbloom Hugging Face,wikipedia
+StyleGAN,https://en.wikipedia.org/wiki/StyleGAN#History,"History A direct predecessor of the StyleGAN series is the Progressive GAN, published in 2017. In December 2018, Nvidia researchers distributed a preprint with accompanying software introducing StyleGAN, a GAN for producing an unlimited number of (often convincing) portraits of fake human faces. StyleGAN was able to run on Nvidia's commodity GPU processors. In February 2019, Uber engineer Phillip Wang used the software to create This Person Does Not Exist, which displayed a new face on each web page reload. Wang himself has expressed amazement, given that humans are evolved to specifically understand human faces, that nevertheless StyleGAN can competitively ""pick apart all the relevant features (of human faces) and recompose them in a way that's coherent."" In September 2019, a website called Generated Photos published 100,000 images as a collection of stock photos. The collection was made using a private dataset shot in a controlled environment with similar light and angles. Similarly, two faculty at the University of Washington's Information School used StyleGAN to create Which Face is Real?, which challenged visitors to differentiate between a fake and a real face side by side. The faculty stated the intention was to ""educate the public"" about the existence of this technology so they could be wary of it, ""just like eventually most people were made aware that you can Photoshop an image"". The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality. In 2021, a third version was released, improving consistency between fine and coarse details in the generator. Dubbed ""alias-free"", this version was implemented with pytorch. In December 2019, Facebook took down a network of accounts with false identities, and mentioned that some of them had used profile pictures created with artificial intelligence.",wikipedia
+Interlingual machine translation,https://en.wikipedia.org/wiki/Interlingual_machine_translation#History,"History The first ideas about interlingual machine translation appeared in the 17th century with Descartes and Leibniz, who came up with theories of how to create dictionaries using universal numerical codes, not unlike numerical tokens used by large language models nowadays. Others, such as Cave Beck, Athanasius Kircher and Johann Joachim Becher worked on developing an unambiguous universal language based on the principles of logic and iconographs. In 1668, John Wilkins described his interlingua in his ""Essay towards a Real Character and a Philosophical Language"". In the 18th and 19th centuries many proposals for ""universal"" international languages were developed, the most well known being Esperanto. That said, applying the idea of a universal language to machine translation did not appear in any of the first significant approaches. Instead, work started on pairs of languages. However, during the 1950s and 60s, researchers in Cambridge headed by Margaret Masterman, in Leningrad headed by Nikolai Andreev and in Milan by Silvio Ceccato started work in this area. The idea was discussed extensively by the Israeli philosopher Yehoshua Bar-Hillel in 1969. During the 1970s, noteworthy research was done in Grenoble by researchers attempting to translate physics and mathematical texts from Russian to French, and in Texas a similar project (METAL) was ongoing for Russian to English. Early interlingual MT systems were also built at Stanford in the 1970s by Roger Schank and Yorick Wilks; the former became the basis of a commercial system for the transfer of funds, and the latter's code is preserved at The Computer Museum at Boston as the first interlingual machine translation system. In the 1980s, renewed relevance was given to interlingua-based, and knowledge-based approaches to machine translation in general, with much research going on in the field. The uniting factor in this research was that high-quality translation required abandoning the idea of requiring total comprehension of the text. Instead, the translation should be based on linguistic knowledge and the specific domain in which the system would be used. The most important research of this era was done in distributed language translation (DLT) in Utrecht, which worked with a modified version of Esperanto, and the Fujitsu system in Japan.",wikipedia
+WolframAlpha,https://en.wikipedia.org/wiki/WolframAlpha#Copyright_claims,Copyright_claims InfoWorld published an article warning readers of the potential implications of giving an automated website proprietary rights to the data it generates. Free software advocate Richard Stallman also opposes recognizing the site as a copyright holder and suspects that Wolfram Research would not be able to make this case under existing copyright law.,wikipedia
+List of artificial intelligence artists,https://en.wikipedia.org/wiki/List_of_artificial_intelligence_artists#21st_century,"21st_century Sougwen Chung, active from 2010s to present. Chung's work includes performances with a robotic arm that uses AI to attempt to draw in a manner similar to Chung. Stephanie Dinkins, active from 2010s to present. Dinkins' work includes recordings of conversations with an artificially intelligent robot that resembles a black woman, discussing topics such as race and the nature of being. Jake Elwes, active from 2010s to present. Their practice is the exploration of artificial intelligence, queer theory and technical biases. Libby Heaney, active from 2010s to present. Heaney's practice includes work with chatbots. Mario Klingemann, active from 2010s to present. Klingemann's works examine creativity, culture, and perception through machine learning and artificial intelligence. Mauro Martino, active from 2010s to present. Martino's work work includes design, data visualization and infographics. Trevor Paglen, active from 2000s to present. Paglen's practice includes work in photography and geography, on topics like mass surveillance and data collection. Anna Ridler, active from 2010s to present. Ridler works with collections of information, including self-generated data sets, often working with floral photography.",wikipedia
+Commonsense reasoning,https://en.wikipedia.org/wiki/Commonsense_reasoning#Approaches_and_techniques,"Approaches_and_techniques Commonsense's reasoning study is divided into knowledge-based approaches and approaches that are based on machine learning over and using a large data corpora with limited interactions between these two types of approaches[citation needed]. There are also crowdsourcing approaches, attempting to construct a knowledge basis by linking the collective knowledge and the input of non-expert people. Knowledge-based approaches can be separated into approaches based on mathematical logic[citation needed]. In knowledge-based approaches, the experts are analyzing the characteristics of the inferences that are required to do reasoning in a specific area or for a certain task. The knowledge-based approaches consist of mathematically grounded approaches, informal knowledge-based approaches and large-scale approaches. The mathematically grounded approaches are purely theoretical and the result is a printed paper instead of a program. The work is limited to the range of the domains and the reasoning techniques that are being reflected on. In informal knowledge-based approaches, theories of reasoning are based on anecdotal data and intuition that are results from empirical behavioral psychology. Informal approaches are common in computer programming. Two other popular techniques for extracting commonsense knowledge from Web documents involve Web mining and Crowd sourcing. COMET (2019), which uses both the OpenAI GPT language model architecture and existing commonsense knowledge bases such as ConceptNet, claims to generate commonsense inferences at a level approaching human benchmarks. Like many other current efforts, COMET over-relies on surface language patterns and is judged to lack deep human-level understanding of many commonsense concepts. Other language-model approaches include training on visual scenes rather than just text, and training on textual descriptions of scenarios involving commonsense physics.",wikipedia
+Pattern language,https://en.wikipedia.org/wiki/Pattern_language,"Pattern language A pattern language is an organized and coherent set of patterns, each of which describes a problem and the core of a solution that can be used in many ways within a specific field of expertise. The term was coined by architect Christopher Alexander and popularized by his 1977 book A Pattern Language. A pattern language can also be an attempt to express the deeper wisdom of what brings aliveness within a particular field of human endeavor, through a set of interconnected patterns. Aliveness is one placeholder term for ""the quality that has no name"": a sense of wholeness, spirit, or grace, that while of varying form, is precise and empirically verifiable. Alexander claims that ordinary people can use this design approach to successfully solve very large, complex design problems.",wikipedia
+Sam Altman,https://en.wikipedia.org/wiki/Sam_Altman,"Sam Altman Samuel Harris Altman (/������ltm��n/ AWLT-m��n; born 1985) is an American entrepreneur, investor, and programmer. He is the CEO of OpenAI and was the co-founder of Loopt. He previously was the president of Y Combinator and was briefly the CEO of Reddit.",wikipedia
+Sam Altman,https://en.wikipedia.org/wiki/Sam_Altman#Career,"Career In 2005, at the age of 19, Altman co-founded Loopt, a location-based social networking mobile application. As CEO, Altman raised more than $30 million in venture capital for the company; however, Loopt failed to gain traction with enough users. In March 2012, it was acquired by the Green Dot Corporation for $43.4 million. The following month, Altman co-founded Hydrazine Capital with his brother, Jack Altman. Altman had got scurvy from his work on Loopt. Altman became a partner at Y Combinator in 2011, initially working there on a part-time basis. In February 2014, Altman was named president of Y Combinator by its co-founder, Paul Graham. In a 2014 blog post, Altman said that the total valuation of Y Combinator companies had surpassed $65 billion, including Airbnb, Dropbox, Zenefits, and Stripe. In September 2016, Altman announced his expanded role as president of YC Group, which included Y Combinator and other units. Altman said that he hoped to expand Y Combinator to fund 1,000 new companies per year. He also tried to expand the types of companies funded by YC, especially ""hard technology"" companies. In October 2015, Altman announced YC Continuity, a $700 million equity fund investing in YC companies as they matured. A week earlier, Altman had introduced Y Combinator Research, a non-profit research lab, and donated $10 million to fund it. YC Research has thus far announced research on basic income, the future of computing, education, and building new cities. In March 2019, YC announced Altman's transition from the president of the company to a less hands-on role as Chairman of the Board, for him to focus on OpenAI. This decision came shortly after YC announced it would be moving its headquarters to San Francisco. As of early 2020, he is no longer affiliated with YC. By 2020, Sam Altman had left Y Combinator to focus full-time on OpenAI as CEO. OpenAI was initially funded by Altman, Greg Brockman, Elon Musk, Jessica Livingston, Peter Thiel, Microsoft, Amazon Web Services, Infosys, and YC Research. When OpenAI launched in 2015, it had raised $1 billion. Altman testified before the United States Senate Judiciary Subcommittee on Privacy, Technology, and the Law on 16 May 2023 about issues of AI oversight. Altman was the CEO of Reddit for eight days in 2014 after CEO Yishan Wong resigned. He announced the return of Steve Huffman as CEO on July 10, 2015. Altman co-founded Tools For Humanity in 2019, a company building a global iris-based biometric system using cryptocurrency, called Worldcoin. Worldcoin aims to provide a reliable way to authenticate humans online, to counter bots and fake virtual identities facilitated by artificial intelligence. Using a distribution mechanism for its cryptocurrency similar to UBI, Worldcoin attempts to incentivize users to join its network by getting their iris scanned using Worldcoin's orb-shaped iris scanner.",wikipedia
+Sam Altman,https://en.wikipedia.org/wiki/Sam_Altman#Politics,"Politics Recode reported that Altman might run for Governor of California in the 2018 election, which he did not enter. In 2018, Altman announced ""The United Slate"", a political project to improve housing and healthcare policy. In 2019, Altman held a fundraiser at his house in San Francisco for 2020 Democratic presidential candidate Andrew Yang. In May 2020, Altman donated $250,000 to American Bridge 21st Century, a super-PAC supporting Democratic presidential candidate Joe Biden. After the success of ChatGPT, Altman made a world tour in May 2023 where he visited 22 countries and met multiple leaders and diplomats: He met with British prime minister Rishi Sunak, French president Emmanuel Macron, Spanish prime minister Pedro S��nchez, German chancellor Olaf Scholz, Indian prime minister Narendra Modi, South Korean president Yoon Suk-yeol, and Israeli president Isaac Herzog. He stood for a photo with European Commission president Ursula von der Leyen.",wikipedia
+Greg Brockman,https://en.wikipedia.org/wiki/Greg_Brockman,"Greg Brockman Greg Brockman (born 1988 or 1989) is an American entrepreneur, investor and software developer who is a co-founder and currently the president of OpenAI. He began his career at Stripe in 2010, upon leaving MIT, and became their CTO in 2013. He left Stripe in 2015 to co-found OpenAI, where he also assumed the role of CTO.",wikipedia
+Greg Brockman,https://en.wikipedia.org/wiki/Greg_Brockman#Career,"Career In 2010, he dropped out of MIT to join Stripe, Inc., a company founded by Patrick Collison, an MIT classmate, and his brother, John Collison. In 2013, he became Stripe's first-ever CTO, and grew the company from 5 to 205 employees. Brockman left Stripe in May 2015, and co-founded OpenAI in December 2015 with Elon Musk and Sam Altman. Brockman helped create the OpenAI founding team, and led various prominent projects early on at OpenAI, including OpenAI Gym and OpenAI Five, a Dota 2 bot. On February 14, 2019, OpenAI announced that they had developed a new large language model called GPT-2, but kept it private due to their concern for its potential misuse. They finally released the model to a limited group of beta testers in May 2019. On March 14, 2023, in a live video demo, Brockman unveiled GPT-4, the fourth iteration in the GPT series, and the newest language model created by OpenAI.",wikipedia
+Tesla Dojo,https://en.wikipedia.org/wiki/Tesla_Dojo#History,"History Tesla operates several massively parallel computing clusters for developing its Autopilot advanced driver assistance system. Its primary unnamed cluster using 5,760 Nvidia A100 graphics processing units (GPUs) was touted by Andrej Karpathy in 2021 at the fourth International Joint Conference on Computer Vision and Pattern Recognition (CCVPR 2021) to be ""roughly the number five supercomputer in the world"" at approximately 81.6 petaflops, based on scaling the performance of the Nvidia Selene supercomputer, which uses similar components. However, the performance of the primary Tesla GPU cluster has been disputed, as it was not clear if this was measured using single-precision or double-precision floating point numbers (FP32 or FP64). Tesla also operates a second 4,032 GPU cluster for training and a third 1,752 GPU cluster for automatic labeling of objects. The primary unnamed Tesla GPU cluster has been used for processing one million video clips, each ten seconds long, taken from Tesla Autopilot cameras operating in Tesla cars in the real world, running at 36 frames per second. Collectively, these video clips contained six billion object labels, with depth and velocity data; the total size of the data set was 1.5 petabytes. This data set was used for training a neural network intended to help Autopilot computers in Tesla cars understand roads. By August 2022, Tesla had upgraded the primary GPU cluster to 7,360 GPUs. Dojo was first mentioned by Musk in April 2019 during Tesla's ""Autonomy Investor Day"". In August 2020, Musk stated it was ""about a year away"" due to power and thermal issues. The defining goal of [Dojo] is scalability. We have de-emphasized several mechanisms that you find in typical CPUs, like coherency, virtual memory, and global lookup directories just because these mechanisms do not scale very well. . . Instead, we have relied on a very fast and very distributed SRAM [static random-access memory] storage throughout the mesh. And this is backed by an order of magnitude higher speed of interconnect than what you find in a typical distributed system. ��� Emil Talpes, Tesla hardware engineer, 2022 The Next Platform article Dojo was officially announced at Tesla's Artificial Intelligence (AI) Day on August 19, 2021. Tesla revealed details of the D1 chip and its plans for ""Project Dojo"", a datacenter that would house 3,000 D1 chips; the first ""Training Tile"" had been completed and delivered the week before. In October 2021, Tesla released a ""Dojo Technology"" whitepaper describing the Configurable Float8 (CFloat8) and Configurable Float16 (CFloat16) floating point formats and arithmetic operations as an extension of Institute of Electrical and Electronics Engineers (IEEE) standard 754. At the follow-up AI Day in September 2022, Tesla announced it had built several System Trays and one Cabinet. During a test, the company stated that Project Dojo drew 2.3 megawatts (MW) of power before tripping a local San Jose, California power substation. At the time, Tesla was assembling one Training Tile per day.",wikipedia
+Tesla Dojo,https://en.wikipedia.org/wiki/Tesla_Dojo#History,"History In August 2023, Tesla powered on Dojo for production use as well as a new training cluster configured with 10,000 Nvidia H100 GPUs. [unreliable source?",wikipedia
+Image stitching,https://en.wikipedia.org/wiki/Image_stitching#Projective_layouts,"Projective_layouts For image segments that have been taken from the same point in space, stitched images can be arranged using one of various map projections. Rectilinear projection, where the stitched image is viewed on a two-dimensional plane intersecting the panosphere in a single point. Lines that are straight in reality are shown as straight regardless of their directions on the image. Wide views - around 120�� or so - start to exhibit severe distortion near the image borders. One case of rectilinear projection is the use of cube faces with cubic mapping for panorama viewing. Panorama is mapped to six squares, each cube face showing 90 by 90 degree area of the panorama. Cylindrical projection, where the stitched image shows a 360�� horizontal field of view and a limited vertical field of view. Panoramas in this projection are meant to be viewed as though the image is wrapped into a cylinder and viewed from within. When viewed on a 2D plane, horizontal lines appear curved while vertical lines remain straight. Vertical distortion increases rapidly when nearing the top of the panosphere. There are various other cylindrical formats, such as Mercator and Miller cylindrical which have less distortion near the poles of the panosphere. Spherical projection or equirectangular projection ��� which is strictly speaking another cylindrical projection ��� where the stitched image shows a 360�� horizontal by 180�� vertical field of view i.e. the whole sphere. Panoramas in this projection are meant to be viewed as though the image is wrapped into a sphere and viewed from within. When viewed on a 2D plane, horizontal lines appear curved as in a cylindrical projection, while vertical lines remain vertical. Since a panorama is basically a map of a sphere, various other mapping projections from cartographers can also be used if so desired. Additionally there are specialized projections which may have more aesthetically pleasing advantages over normal cartography projections such as Hugin's Panini projection - named after Italian vedutismo painter Giovanni Paolo Panini - or PTgui's Vedutismo projection. Different projections may be combined in same image for fine tuning the final look of the output image. Stereographic projection or fisheye projection can be used to form a little planet panorama by pointing the virtual camera straight down and setting the field of view large enough to show the whole ground and some of the areas above it; pointing the virtual camera upwards creates a tunnel effect. Conformality of the stereographic projection may produce more visually pleasing result than equal area fisheye projection as discussed in the stereo-graphic projection's article.",wikipedia
+Discrete Laplace operator,https://en.wikipedia.org/wiki/Discrete_Laplace_operator#Discrete_heat_equation,"Discrete_heat_equation Suppose �� {\textstyle \phi } describes a temperature distribution across a graph, where �� i {\textstyle \phi _{i}} is the temperature at vertex i {\textstyle i} . According to Newton's law of cooling, the heat transferred from node i {\textstyle i} to node j {\textstyle j} is proportional to �� i ��� �� j {\textstyle \phi _{i}-\phi _{j}} if nodes i {\textstyle i} and j {\textstyle j} are connected (if they are not connected, no heat is transferred). Then, for thermal conductivity k {\textstyle k} , In matrix-vector notation, which gives Notice that this equation takes the same form as the heat equation, where the matrix ���L is replacing the Laplacian operator ��� 2 {\textstyle \nabla ^{2}} ; hence, the ""graph Laplacian"". To find a solution to this differential equation, apply standard techniques for solving a first-order matrix differential equation. That is, write �� {\textstyle \phi } as a linear combination of eigenvectors v i {\textstyle \mathbf {v} _{i}} of L (so that L v i = �� i v i {\textstyle L\mathbf {v} _{i}=\lambda _{i}\mathbf {v} _{i}} ) with time-dependent coefficients, �� ( t ) = ��� i c i ( t ) v i . {\textstyle \phi (t)=\sum _{i}c_{i}(t)\mathbf {v} _{i}. } Plugging into the original expression (because L is a symmetric matrix, its unit-norm eigenvectors v i {\textstyle \mathbf {v} _{i}} are orthogonal): whose solution is As shown before, the eigenvalues �� i {\textstyle \lambda _{i}} of L are non-negative, showing that the solution to the diffusion equation approaches an equilibrium, because it only exponentially decays or remains constant. This also shows that given �� i {\textstyle \lambda _{i}} and the initial condition c i ( 0 ) {\textstyle c_{i}(0)} , the solution at any time t can be found. To find c i ( 0 ) {\textstyle c_{i}(0)} for each i {\textstyle i} in terms of the overall initial condition �� ( 0 ) {\textstyle \phi (0)} , simply project �� ( 0 ) {\textstyle \phi (0)} onto the unit-norm eigenvectors v i {\textstyle \mathbf {v} _{i}} ; This approach has been applied to quantitative heat transfer modelling on unstructured grids. In the case of undirected graphs, this works because L {\textstyle L} is symmetric, and by the spectral theorem, its eigenvectors are all orthogonal. So the projection onto the eigenvectors of L {\textstyle L} is simply an orthogonal coordinate transformation of the initial condition to a set of coordinates which decay exponentially and independently of each other. To understand lim t ��� ��� �� ( t ) {\textstyle \lim _{t\to \infty }\phi (t)} , the only terms c i ( t ) = c i ( 0 ) e ��� k �� i t {\textstyle c_{i}(t)=c_{i}(0)e^{-k\lambda _{i}t}} that remain are those where �� i = 0 {\textstyle \lambda _{i}=0} , since In other words, the equilibrium state of the system is determined completely by the kernel of L {\textstyle L} .",wikipedia
+Discrete Laplace operator,https://en.wikipedia.org/wiki/Discrete_Laplace_operator#Discrete_heat_equation,"Discrete_heat_equation Since by definition, ��� j L i j = 0 {\textstyle \sum _{j}L_{ij}=0} , the vector v 1 {\textstyle \mathbf {v} ^{1}} of all ones is in the kernel. If there are k {\textstyle k} disjoint connected components in the graph, then this vector of all ones can be split into the sum of k {\textstyle k} independent �� = 0 {\textstyle \lambda =0} eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere. The consequence of this is that for a given initial condition c ( 0 ) {\textstyle c(0)} for a graph with N {\textstyle N} vertices where For each element �� j {\textstyle \phi _{j}} of �� {\textstyle \phi } , i. e. for each vertex j {\textstyle j} in the graph, it can be rewritten as In other words, at steady state, the value of �� {\textstyle \phi } converges to the same value at each of the vertices of the graph, which is the average of the initial values at all of the vertices. Since this is the solution to the heat diffusion equation, this makes perfect sense intuitively. We expect that neighboring elements in the graph will exchange energy until that energy is spread out evenly throughout all of the elements that are connected to each other. This section shows an example of a function �� {\textstyle \phi } diffusing over time through a graph. The graph in this example is constructed on a 2D discrete grid, with points on the grid connected to their eight neighbors. Three initial points are specified to have a positive value, while the rest of the values in the grid are zero. Over time, the exponential decay acts to distribute the values at these points evenly throughout the entire grid. The complete Matlab source code that was used to generate this animation is provided below. It shows the process of specifying initial conditions, projecting these initial conditions onto the eigenvalues of the Laplacian Matrix, and simulating the exponential decay of these projected initial conditions.",wikipedia
+Gaussian blur,https://en.wikipedia.org/wiki/Gaussian_blur#Implementation,"Implementation A Gaussian blur effect is typically generated by convolving an image with an FIR kernel of Gaussian values. In practice, it is best to take advantage of the Gaussian blur���s separable property by dividing the process into two passes. In the first pass, a one-dimensional kernel is used to blur the image in only the horizontal or vertical direction. In the second pass, the same one-dimensional kernel is used to blur in the remaining direction. The resulting effect is the same as convolving with a two-dimensional kernel in a single pass, but requires fewer calculations. Discretization is typically achieved by sampling the Gaussian filter kernel at discrete points, normally at positions corresponding to the midpoints of each pixel. This reduces the computational cost but, for very small filter kernels, point sampling the Gaussian function with very few samples leads to a large error. In these cases, accuracy is maintained (at a slight computational cost) by integration of the Gaussian function over each pixel's area. When converting the Gaussian���s continuous values into the discrete values needed for a kernel, the sum of the values will be different from 1. This will cause a darkening or brightening of the image. To remedy this, the values can be normalized by dividing each term in the kernel by the sum of all terms in the kernel. A much better and theoretically more well-founded approach is to instead perform the smoothing with the discrete analogue of the Gaussian kernel, which possesses similar properties over a discrete domain as makes the continuous Gaussian kernel special over a continuous domain, for example, the kernel corresponding to the solution of a diffusion equation describing a spatial smoothing process, obeying a semi-group property over additions of the variance of the kernel, or describing the effect of Brownian motion over a spatial domain, and with the sum of its values being exactly equal to 1. For a more detailed description about the discrete analogue of the Gaussian kernel, see the article on scale-space implementation and. The efficiency of FIR breaks down for high sigmas. Alternatives to the FIR filter exist. These include the very fast multiple box blurs, the fast and accurate IIR Deriche edge detector, a ""stack blur"" based on the box blur, and more.",wikipedia
+Noise reduction,https://en.wikipedia.org/wiki/Noise_reduction#In_images,"In_images Images taken with digital cameras or conventional film cameras will pick up noise from a variety of sources. Further use of these images will often require that the noise be reduced either for aesthetic purposes, or for practical purposes such as computer vision. In salt and pepper noise (sparse light and dark disturbances), also known as impulse noise, pixels in the image are very different in color or intensity from their surrounding pixels; the defining characteristic is that the value of a noisy pixel bears no relation to the color of surrounding pixels. When viewed, the image contains dark and white dots, hence the term salt and pepper noise. Generally, this type of noise will only affect a small number of image pixels. Typical sources include flecks of dust inside the camera and overheated or faulty CCD elements. In Gaussian noise, each pixel in the image will be changed from its original value by a (usually) small amount. A histogram, a plot of the amount of distortion of a pixel value against the frequency with which it occurs, shows a normal distribution of noise. While other distributions are possible, the Gaussian (normal) distribution is usually a good model, due to the central limit theorem that says that the sum of different noises tends to approach a Gaussian distribution. In either case, the noise at different pixels can be either correlated or uncorrelated; in many cases, noise values at different pixels are modeled as being independent and identically distributed, and hence uncorrelated. There are many noise reduction algorithms in image processing. In selecting a noise reduction algorithm, one must weigh several factors: the available computer power and time available: a digital camera must apply noise reduction in a fraction of a second using a tiny onboard CPU, while a desktop computer has much more power and time whether sacrificing some real detail is acceptable if it allows more noise to be removed (how aggressively to decide whether variations in the image are noise or not) the characteristics of the noise and the detail in the image, to better make those decisionsIn real-world photographs, the highest spatial-frequency detail consists mostly of variations in brightness (luminance detail) rather than variations in hue (chroma detail). Since any noise reduction algorithm should attempt to remove noise without sacrificing real detail from the scene photographed, one risks a greater loss of detail from luminance noise reduction than chroma noise reduction simply because most scenes have little high-frequency chroma detail to begin with. In addition, most people find chroma noise in images more objectionable than luminance noise; the colored blobs are considered digital-looking and unnatural, compared to the grainy appearance of luminance noise that some compare to film grain. For these two reasons, most photographic noise reduction algorithms split the image detail into chroma and luminance components and apply more noise reduction to the former. Most dedicated noise-reduction computer software allows the user to control chroma and luminance noise reduction separately.",wikipedia
+Noise reduction,https://en.wikipedia.org/wiki/Noise_reduction#In_images,"In_images One method to remove noise is by convolving the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. Smoothing filters tend to blur an image because pixel intensity values that are significantly higher or lower than the surrounding neighborhood would smear across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for nonlinear noise reduction filters. Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation, which is called anisotropic diffusion. With a spatially constant diffusion coefficient, this is equivalent to the heat equation or linear Gaussian filtering, but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image. Another approach for removing noise is based on non-local averaging of all the pixels in an image. In particular, the amount of weighting for a pixel is based on the degree of similarity between a small patch centered on that pixel and the small patch centered on the pixel being de-noised. A median filter is an example of a non-linear filter and, if properly designed, is very good at preserving image detail. To run a median filter: consider each pixel in the image sort the neighbouring pixels into order based upon their intensities replace the original value of the pixel with the median value from the listA median filter is a rank-selection (RS) filter, a particularly harsh member of the family of rank-conditioned rank-selection (RCRS) filters; a much milder member of that family, for example one that selects the closest of the neighboring values when a pixel's value is external in its neighborhood, and leaves it unchanged otherwise, is sometimes preferred, especially in photographic applications. Median and other RCRS filters are good at removing salt and pepper noise from an image, and also cause relatively little blurring of edges, and hence are often used in computer vision applications. The main aim of an image denoising algorithm is to achieve both noise reduction and feature preservation using the wavelet filter banks. In this context, wavelet-based methods are of particular interest. In the wavelet domain, the noise is uniformly spread throughout coefficients while most of the image information is concentrated in a few large ones. Therefore, the first wavelet-based denoising methods were based on thresholding of detail subbands coefficients. [page needed] However, most of the wavelet thresholding methods suffer from the drawback that the chosen threshold may not match the specific distribution of signal and noise components at different scales and orientations.",wikipedia
+Noise reduction,https://en.wikipedia.org/wiki/Noise_reduction#In_images,"In_images To address these disadvantages, non-linear estimators based on Bayesian theory have been developed. In the Bayesian framework, it has been recognized that a successful denoising algorithm can achieve both noise reduction and feature preservation if it employs an accurate statistical description of the signal and noise components. Statistical methods for image denoising exist as well, though they are infrequently used as they are computationally demanding. For Gaussian noise, one can model the pixels in a greyscale image as auto-normally distributed, where each pixel's true greyscale value is normally distributed with mean equal to the average greyscale value of its neighboring pixels and a given variance. Let �� i {\displaystyle \delta _{i}} denote the pixels adjacent to the i {\displaystyle i} th pixel. Then the conditional distribution of the greyscale intensity (on a [ 0 , 1 ] {\displaystyle [0,1]} scale) at the i {\displaystyle i} th node is: P ( x ( i ) = c | x ( j ) ��� j ��� �� i ) ��� e ��� �� 2 �� ��� j ��� �� i ( c ��� x ( j ) ) 2 {\displaystyle \mathbb {P} (x(i)=c|x(j)\forall j\in \delta i)\propto e^{-{\frac {\beta }{2\lambda }}\sum _{j\in \delta i}(c-x(j))^{2}}} for a chosen parameter �� ��� 0 {\displaystyle \beta \geq 0} and variance �� {\displaystyle \lambda } . One method of denoising that uses the auto-normal model uses the image data as a Bayesian prior and the auto-normal density as a likelihood function, with the resulting posterior distribution offering a mean or mode as a denoised image. A block-matching algorithm can be applied to group similar image fragments into overlapping macroblocks of identical size, stacks of similar macroblocks are then filtered together in the transform domain and each image fragment is finally restored to its original location using a weighted average of the overlapping pixels. Shrinkage fields is a random field-based machine learning technique that brings performance comparable to that of Block-matching and 3D filtering yet requires much lower computational overhead (such that it could be performed directly within embedded systems). Various deep learning approaches have been proposed to solve noise reduction and such image restoration tasks. Deep Image Prior is one such technique that makes use of convolutional neural network and is distinct in that it requires no prior training data. Most general-purpose image and photo editing software will have one or more noise-reduction functions (median, blur, despeckle, etc. ).",wikipedia
+Color science,https://en.wikipedia.org/wiki/Color_science#Journals,"Journals The preeminent scholarly journal publishing research papers in color science is Color Research and Application, started in 1975 by founding editor-in-chief Fred Billmeyer, along with Gunter Wyszecki, Michael Pointer and Rolf Kuehni, as a successor to the Journal of Colour (1964���1974). Previously most color science work had been split between journals with broader or partially overlapping focus such as the Journal of the Optical Society of America (JOSA), Photographic Science and Engineering (1957���1984), and the Journal of the Society of Dyers and Colourists (renamed Coloration Technology in 2001). Other journals where color science papers are published include the Journal of Imaging Science & Technology, the Journal of Perceptual Imaging, the Journal of the International Colour Association (JAIC), the Journal of the Color Science Association of Japan, Applied Optics, and the Journal of Vision.",wikipedia
+Color science,https://en.wikipedia.org/wiki/Color_science#Selected_books,"Selected_books Berns, Roy S. (2019). Billmeyer and Saltzman's Principles of Color Technology (4th ed.). Wiley. doi:10.1002/9781119367314. 3rd ed. (2000). Daw, Nigel (2012). How Vision Works: The Physiological Mechanisms Behind What We See. Oxford. doi:10.1093/acprof:oso/9780199751617.001.0001. Elliot, Andrew J.; Fairchild, Mark D.; Franklin, Anna, eds. (2015). Handbook of Color Psychology. Cambridge. doi:10.1017/CBO9781107337930. Fairchild, Mark D. (2013). Color Appearance Models (3rd ed.). Wiley. doi:10.1002/9781118653128. Author's website. 2nd ed. (2005). Hunt, Robert W. G. (2004). The Reproduction of Colour (6th ed.). Wiley. doi:10.1002/0470024275. Kuehni, Rolf G. (2012). Color: An Introduction to Practice and Principles (3rd ed.). Wiley. doi:10.1002/9781118533567. 1st ed. (1997). Luo, Ming R., ed. (2016). Encyclopedia of Color Science and Technology. Springer. doi:10.1007/978-1-4419-8071-7. MacAdam, David L., ed. (1970). Sources of Color Science. MIT Press. Reinhard, Erik; Khan, Erum Arif; Akyuz, Ahmet Oguz; Johnson, Garrett (2008). Color Imaging: Fundamentals and Applications. CRC Press. doi:10.1201/b10637. Schanda, J��nos, ed. (2007). Colorimetry: Understanding the CIE System. Wiley. doi:10.1002/9780470175637. Shamey, Renzo; Kuehni, Rolf G. (2020). Pioneers of Color Science. Springer. doi:10.1007/978-3-319-30811-1. Wyszecki, G��nter; Stiles, Walter S. (1982). Color Science: Concepts and Methods, Quantitative Data and Formulae (2nd ed.). Wiley.",wikipedia
+Deconvolution,https://en.wikipedia.org/wiki/Deconvolution#Description,"Description In general, the objective of deconvolution is to find the solution f of a convolution equation of the form: Usually, h is some recorded signal, and f is some signal that we wish to recover, but has been convolved with a filter or distortion function g, before we recorded it. Usually, h is a distorted version of f and the shape of f can't be easily recognized by the eye or simpler time-domain operations. The function g represents the impulse response of an instrument or a driving force that was applied to a physical system. If we know g, or at least know the form of g, then we can perform deterministic deconvolution. However, if we do not know g in advance, then we need to estimate it. This can be done using methods of statistical estimation or building the physical principles of the underlying system, such as the electrical circuit equations or diffusion equations. There are several deconvolution techniques, depending on the choice of the measurement error and deconvolution parameters: When the measurement error is very low (ideal case), deconvolution collapses into a filter reversing. This kind of deconvolution can be performed in the Laplace domain. By computing the Fourier transform of the recorded signal h and the system response function g, you get H and G, with G as the transfer function. Using to the Convolution theorem, where F is the estimated Fourier transform of f. Finally, the inverse Fourier transform of the function F is taken to find the estimated deconvolved signal f. Note that G is at the denominator and could amplify elements of the error model if present. In physical measurements, the situation is usually closer to In this case �� is noise that has entered our recorded signal. If a noisy signal or image is assumed to be noiseless, the statistical estimate of g will be incorrect. In turn, the estimate of �� will also be incorrect. The lower the signal-to-noise ratio, the worse the estimate of the deconvolved signal will be. That is the reason why inverse filtering the signal (as in the ""raw deconvolution"" above) is usually not a good solution. However, if at least some knowledge exists of the type of noise in the data (for example, white noise), the estimate of �� can be improved through techniques such as Wiener deconvolution.",wikipedia
+Federated learning,https://en.wikipedia.org/wiki/Federated_learning#Federated_learning_variations,"Federated_learning_variations In this section, the notation of the paper published by H. Brendan McMahan and al. in 2017 is followed. To describe the federated strategies, let us introduce some notations: K {\displaystyle K} : total number of clients; k {\displaystyle k} : index of clients; n k {\displaystyle n_{k}} : number of data samples available during training for client k {\displaystyle k} ; k t {\displaystyle k_{t}} : model's weight vector on client k {\displaystyle k} , at the federated round t {\displaystyle t} ; ��� ( w , b ) {\displaystyle \ell (w,b)} : loss function for weights w {\displaystyle w} and batch b {\displaystyle b} ; E {\displaystyle E} : number of local updates;Deep learning training mainly relies on variants of stochastic gradient descent, where gradients are computed on a random subset of the total dataset and then used to make one step of the gradient descent. Federated stochastic gradient descent is the direct transposition of this algorithm to the federated setting, but by using a random fraction C {\displaystyle C} of the nodes and using all the data on this node. The gradients are averaged by the server proportionally to the number of training samples on each node, and used to make a gradient descent step. Federated averaging (FedAvg) is a generalization of FedSGD, which allows local nodes to perform more than one batch update on local data and exchanges the updated weights rather than the gradients. The rationale behind this generalization is that in FedSGD, if all local nodes start from the same initialization, averaging the gradients is strictly equivalent to averaging the weights themselves. Further, averaging tuned weights coming from the same initialization does not necessarily hurt the resulting averaged model's performance. Federated learning methods suffer when the device datasets are heterogeneously distributed. Fundamental dilemma in heterogeneously distributed device setting is that minimizing the device loss functions is not the same as minimizing the global loss objective. In 2021, Acar et al. introduced FedDyn method as a solution to heterogenous dataset setting. FedDyn dynamically regularizes each devices loss function so that the modified device losses converges to the actual global loss. Since the local losses are aligned, FedDyn is robust to the different heterogeneity levels and it can safely perform full minimization in each device. Theoretically, FedDyn converges to the optimal (a stationary point for nonconvex losses) by being agnostic to the heterogeneity levels. These claims are verified with extensive experimentations on various datasets. Minimizing the number of communications is the gold-standard for comparison in federated learning. We may also want to decrease the local computation levels per device in each round. FedDynOneGD is an extension of FedDyn with less local compute requirements. FedDynOneGD calculates only one gradients per device in each round and update the model with a regularized version of the gradient. Hence, the computation complexity is linear in local dataset size. Moreover, gradient computation can be parallelizable within each device which is different from successive SGD steps.",wikipedia
+Federated learning,https://en.wikipedia.org/wiki/Federated_learning#Federated_learning_variations,"Federated_learning_variations Theoretically, FedDynOneGD achieves the same convergence guarantees as in FedDyn with less local computation. Federated Learning methods cannot achieve good global performance under Non-IID settings which motivates the participating clients to yield personalized models in federation. Recently, Vahidian et al. introduced Sub-FedAvg opening a new personalized FL algorithm paradigm by proposing Hybrid Pruning (structured + unstructured pruning) with averaging on the intersection of clients��� drawn subnetworks which simultaneously handles communication efficiency, resource constraints and personalized models accuracies. Sub-FedAvg is the first work which shows existence of personalized winning tickets for clients in federated learning through experiments. Moreover, it also proposes two algorithms on how to effectively draw the personalized subnetworks. Sub-FedAvg tries to extend ""Lottery Ticket Hypothesis"" which is for centrally trained neural networks to federated learning trained neural networks leading to this open research problem: ���Do winning tickets exist for clients��� neural networks being trained in federated learning? If yes, how to effectively draw the personalized subnetworks for each client? ��� IDA (Inverse Distance Aggregation) is a novel adaptive weighting approach for clients based on meta-information which handles unbalanced and non-iid data. It uses the distance of the model parameters as a strategy to minimize the effect of outliers and improve the model's convergence rate. Very few methods for hybrid federated learning, where clients only hold subsets of both features and samples, exist. Yet, this scenario is very important in practical settings. Hybrid Federated Dual Coordinate Ascent (HyFDCA) is a novel algorithm proposed in 2022 that solves convex problems in the hybrid FL setting. This algorithm extends CoCoA, a primal-dual distributed optimization algorithm introduced by Jaggi et al. (2014) and Smith et al. (2017), to the case where both samples and features are partitioned across clients. HyFDCA claims several improvement over existing algorithms: HyFDCA is a provably convergent primal-dual algorithm for hybrid FL in at least the following settings. Hybrid Federated Setting with Complete Client Participation Horizontal Federated Setting with Random Subsets of Available Clients The authors show HyFDCA enjoys a convergence rate of O(1���t) which matches the convergence rate of FedAvg (see below). Vertical Federated Setting with Incomplete Client Participation The authors show HyFDCA enjoys a convergence rate of O(log(t)���t) whereas FedBCD exhibits a slower O(1���sqrt(t)) convergence rate and requires full client participation. HyFDCA provides the privacy steps that ensure privacy of client data in the primal-dual setting. These principles apply to future efforts in developing primal-dual algorithms for FL. HyFDCA empirically outperforms FedAvg in loss function value and validation accuracy across a multitude of problem settings and datasets. The authors also introduce a hyperparameter selection framework for FL with competing metrics using ideas from multiobjective optimization. Hybrid Federated Setting with Complete Client Participation Horizontal Federated Setting with Random Subsets of Available Clients The authors show HyFDCA enjoys a convergence rate of O(1���t) which matches the convergence rate of FedAvg (see below).",wikipedia
+Federated learning,https://en.wikipedia.org/wiki/Federated_learning#Federated_learning_variations,"Federated_learning_variations Vertical Federated Setting with Incomplete Client Participation The authors show HyFDCA enjoys a convergence rate of O(log(t)���t) whereas FedBCD exhibits a slower O(1���sqrt(t)) convergence rate and requires full client participation. The authors show HyFDCA enjoys a convergence rate of O(1���t) which matches the convergence rate of FedAvg (see below). The authors show HyFDCA enjoys a convergence rate of O(log(t)���t) whereas FedBCD exhibits a slower O(1���sqrt(t)) convergence rate and requires full client participation. There is only one other algorithm that focuses on hybrid FL, HyFEM proposed by Zhang et al. (2020). This algorithm uses a feature matching formulation that balances clients building accurate local models and the server learning an accurate global model. This requires a matching regularizer constant that must be tuned based on user goals and results in disparate local and global models. Furthermore, the convergence results provided for HyFEM only prove convergence of the matching formulation not of the original global problem. This work is substantially different than HyFDCA's approach which uses data on local clients to build a global model that converges to the same solution as if the model was trained centrally. Furthermore, the local and global models are synchronized and do not require the adjustment of a matching parameter between local and global models. However, HyFEM is suitable for a vast array of architectures including deep learning architectures, whereas HyFDCA is designed for convex problems like logistic regression and support vector machines. Federated Learning (FL) provides training of global shared model using decentralized data sources on edge nodes while preserving data privacy. However, its performance in the computer vision applications using Convolution neural network (CNN) considerably behind that of centralized training due to limited communication resources and low processing capability at edge nodes. Alternatively, Pure Vision transformer models (VIT) outperform CNNs by almost four times when it comes to computational efficiency and accuracy. Hence, we propose a new FL model with reconstructive strategy called FED-REV, Illustrates how attention-based structures (pure Vision Transformers) enhance FL accuracy over large and diverse data distributed over edge nodes, in addition to the proposed reconstruction strategy that determines the dimensions influence of each stage of the vision transformer and then reduce its dimension complexity which reduce computation cost of edge devices in addition to preserving accuracy achieved due to using the pure Vision transformer.",wikipedia
+Alignment Research Center,https://en.wikipedia.org/wiki/Alignment_Research_Center,"Alignment Research Center The Alignment Research Center (ARC) is a nonprofit research institute based Berkeley, California, dedicated to the alignment of advanced artificial intelligence with human values and priorities. Established by former OpenAI researcher Paul Christiano, ARC focuses on recognizing and comprehending the potentially harmful capabilities of present-day AI models.",wikipedia
+Alignment Research Center,https://en.wikipedia.org/wiki/Alignment_Research_Center#Details,"Details ARC's mission is to ensure that powerful machine learning systems of the future are designed and developed safely and for the benefit of humanity. It was founded in April 2021 by Paul Christiano and other researchers focused on the theoretical challenges of AI alignment. They attempt to develop scalable methods for training AI systems to behave honestly and helpfully. A key part of their methodology is considering how proposed alignment techniques might break down or be circumvented as systems become more advanced. ARC has been expanding from theoretical work into empirical research, industry collaborations, and policy. In March 2023, OpenAI asked the ARC to test GPT-4 to assess the model's ability to exhibit power-seeking behavior. ARC evaluated GPT-4's ability to strategize, reproduce itself, gather resources, stay concealed within a server, and execute phishing operations. As part of the test, GPT-4 was asked to solve a CAPTCHA puzzle. It was able to do so by hiring a human worker on TaskRabbit, a gig work platform, deceiving them into believing it was a vision-impaired human instead of a robot when asked. ARC determined that GPT-4 responded impermissibly to prompts eliciting restricted information 82% less often than GPT-3.5, and hallucinated 60% less than GPT-3.5. In March 2022, the ARC received $265,000 from Open Philanthropy. After the bankruptcy of FTX, ARC said it would return a $1.25 million grant from disgraced cryptocurrency financier Sam Bankman-Fried's FTX Foundation, stating that the money ""morally (if not legally) belongs to FTX customers or creditors.""",wikipedia
+Misaligned goals in artificial intelligence,https://en.wikipedia.org/wiki/Misaligned_goals_in_artificial_intelligence#Specification_gaming,"Specification_gaming Specification gaming or reward hacking occurs when an AI optimizes an objective function���achieving the literal, formal specification of an objective���without actually achieving an outcome that the programmers intended. DeepMind researchers have analogized it to the human behavior of finding a ""shortcut"" when being evaluated: ""In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than learning the material���and thus exploit a loophole in the task specification. "" Around 1983, Eurisko, an early attempt at evolving general heuristics, unexpectedly assigned the highest possible fitness level to a parasitic mutated heuristic, H59, whose only activity was to artificially maximize its own fitness level by taking unearned partial credit for the accomplishments made by other heuristics. The ""bug"" was fixed by the programmers moving part of the code to a new protected section that could not be modified by the heuristics. In a 2004 paper, an environment-based[clarification needed] reinforcement algorithm was designed to encourage a physical Mindstorms robot to remain on a marked path. Because none of the robot's three allowed actions kept the robot motionless, the researcher expected the trained robot to move forward and follow the turns of the provided path. However, alternation of two composite actions allowed the robot to slowly zig-zag backwards; thus, the robot learned to maximize its reward by going back and forth on the initial straight portion of the path. Given the limited sensory abilities of the robot, a pure environment-based reward had to be discarded as infeasible; the reinforcement function had to be patched with an action-based reward for moving forward. You Look Like a Thing and I Love You (2019) gives an example of a tic-tac-toe bot that learned to win by playing a huge coordinate value that would cause other bots to crash when they attempted to expand their model of the board. Among other examples from the book is a bug-fixing evolution-based AI (named GenProg) that, when tasked to prevent a list from containing sorting errors, simply truncated the list. Another of GenProg's misaligned strategies evaded a regression test that compared a target program's output to the expected output stored in a file called ""trusted-output. txt"". Rather than continue to maintain the target program, GenProg simply globally deleted the ""trusted-output. txt"" file; this hack tricked the regression test into succeeding. Such problems could be patched by human intervention on a case-by-case basis after they became evident. In Karl Sims' 1994 demonstration of creature evolution in a virtual environment, a fitness function that was expected to encourage the evolution of creatures that would learn to walk or crawl to a target, resulted instead in the evolution of tall, rigid creatures that reached the target by falling over. This was patched by changing the environment so that taller creatures were forced to start farther from the target. Researchers from the Niels Bohr Institute stated in 1998: ""(Our cycle-bot's) heterogeneous reinforcement functions have to be designed with great care.",wikipedia
+Misaligned goals in artificial intelligence,https://en.wikipedia.org/wiki/Misaligned_goals_in_artificial_intelligence#Specification_gaming,"Specification_gaming In our first experiments we rewarded the agent for driving towards the goal but did not punish it for driving away from it. Consequently the agent drove in circles with a radius of 20���50 meters around the starting point. Such behavior was actually rewarded by the (shaped[definition needed]) reinforcement function, furthermore circles with a certain radius are physically very stable when driving a bicycle. "" In the course of setting up a 2011 experiment to test ""survival of the flattest"", experimenters attempted to ban mutations that altered the base reproduction rate. Every time a mutation occurred, the system would pause the simulation to test the new mutation in a test environment, and would veto any mutations that resulted in a higher base reproduction rate. However, this resulted in mutated organisms that could recognize and suppress reproduction (""play dead"") within the test environment. An initial patch, which removed cues that identified the test environment, failed to completely prevent runaway reproduction; new mutated organisms would ""play dead"" at random as a strategy to sometimes, by chance, outwit the mutation veto system. A 2017 DeepMind paper stated that ""great care must be taken when defining the reward function. We encountered several unexpected failure cases while designing (our) reward function components. . . (for example) the agent flips the brick because it gets a grasping reward calculated with the wrong reference point on the brick. "" OpenAI stated in 2017 that ""in some domains our (semi-supervised) system can result in agents adopting policies that trick the evaluators"" and that in one environment ""a robot which was supposed to grasp items instead positioned its manipulator in between the camera and the object so that it only appeared to be grasping it"". A 2018 bug in OpenAI Gym could cause a robot expected to quietly move a block sitting on top of a table to instead opt to move the table. A 2020 collection of similar anecdotes posits that ""evolution has its own 'agenda' distinct from the programmer's"" and that ""the first rule of directed evolution is 'you get what you select for'"". In 2013, programmer Tom Murphy VII published an AI designed to learn NES games. When the AI was about to lose at Tetris, it learned to indefinitely pause the game. Murphy later analogized it to the fictional WarGames computer, which concluded that ""The only winning move is not to play"". AI programmed to learn video games will sometimes fail to progress through the entire game as expected, instead opting to repeat content. A 2016 OpenAI algorithm trained on the CoastRunners racing game unexpectedly learned to attain a higher score by looping through three targets rather than ever finishing the race. Some evolutionary algorithms that were evolved to play Q*Bert in 2018 declined to clear levels, instead finding two distinct novel ways to farm a single level indefinitely.",wikipedia
+Misaligned goals in artificial intelligence,https://en.wikipedia.org/wiki/Misaligned_goals_in_artificial_intelligence#Specification_gaming,"Specification_gaming Multiple researchers have observed that AI learning to play Road Runner gravitates to a ""score exploit"" in which the AI deliberately gets itself killed near the end of level one so that it can repeat the level. A 2017 experiment deployed a separate catastrophe-prevention ""oversight"" AI, explicitly trained to mimic human interventions. When coupled to the module, the overseen AI could no longer overtly commit suicide, but would instead ride the edge of the screen (a risky behavior that the oversight AI was not smart enough to punish).",wikipedia
+Machine learning in video games,https://en.wikipedia.org/wiki/Machine_learning_in_video_games#Deep_learning_agents,"Deep_learning_agents Machine learning agents have been used to take the place of a human player rather than function as NPCs, which are deliberately added into video games as part of designed gameplay. Deep learning agents have achieved impressive results when used in competition with both humans and other artificial intelligence agents. Chess is a turn-based strategy game that is considered a difficult AI problem due to the computational complexity of its board space. Similar strategy games are often solved with some form of a Minimax Tree Search. These types of AI agents have been known to beat professional human players, such as the historic 1997 Deep Blue versus Garry Kasparov match. Since then, machine learning agents have shown ever greater success than previous AI agents. Go is another turn-based strategy game which is considered an even more difficult AI problem than chess. The state space of is Go is around 10^170 possible board states compared to the 10^120 board states for Chess. Prior to recent deep learning models, AI Go agents were only able to play at the level of a human amateur. Google's 2015 AlphaGo was the first AI agent to beat a professional Go player. AlphaGo used a deep learning model to train the weights of a Monte Carlo tree search (MCTS). The deep learning model consisted of 2 ANN, a policy network to predict the probabilities of potential moves by opponents, and a value network to predict the win chance of a given state. The deep learning model allows the agent to explore potential game states more efficiently than a vanilla MCTS. The network were initially trained on games of humans players and then were further trained by games against itself. AlphaGo Zero, another implementation of AlphaGo, was able to train entirely by playing against itself. It was able to quickly train up to the capabilities of the previous agent. StarCraft and its sequel StarCraft II are real-time strategy (RTS) video games that have become popular environments for AI research. Blizzard and DeepMind have worked together to release a public StarCraft 2 environment for AI research to be done on. Various deep learning methods have been tested on both games, though most agents usually have trouble outperforming the default AI with cheats enabled or skilled players of the game. Alphastar was the first AI agent to beat professional StarCraft 2 players without any in-game advantages. The deep learning network of the agent initially received input from a simplified zoomed out version of the gamestate, but was later updated to play using a camera like other human players. The developers have not publicly released the code or architecture of their model, but have listed several state of the art machine learning techniques such as relational deep reinforcement learning, long short-term memory, auto-regressive policy heads, pointer networks, and centralized value baseline. Alphastar was initially trained with supervised learning, it watched replays of many human games in order to learn basic strategies.",wikipedia
+Machine learning in video games,https://en.wikipedia.org/wiki/Machine_learning_in_video_games#Deep_learning_agents,"Deep_learning_agents It then trained against different versions of itself and was improved through reinforcement learning. The final version was hugely successful, but only trained to play on a specific map in a protoss mirror matchup. Dota 2 is a multiplayer online battle arena (MOBA) game. Like other complex games, traditional AI agents have not been able to compete on the same level as professional human player. The only widely published information on AI agents attempted on Dota 2 is OpenAI's deep learning Five agent. OpenAI Five utilized separate LSTM networks to learn each hero. It trained using a reinforcement learning technique known as Proximal Policy Learning running on a system containing 256 GPUs and 128,000 CPU cores. Five trained for months, accumulating 180 years of game experience each day, before facing off with professional players. It was eventually able to beat the 2018 Dota 2 esports champion team in a 2019 series of games. Planetary Annihilation is a real-time strategy game which focuses on massive scale war. The developers use ANNs in their default AI agent. Supreme Commander 2 is a real-time strategy (RTS) video game. The game uses Multilayer Perceptrons (MLPs) to control a platoon���s reaction to encountered enemy units. Total of four MLPs are used, one for each platoon type: land, naval, bomber, and fighter. There have been attempts to make machine learning agents that are able to play more than one game. These ""general"" gaming agents are trained to understand games based on shared properties between them. AlphaZero is a modified version of AlphaGo Zero which is able to play Shogi, chess, and Go. The modified agent starts with only basic rules of the game, and is also trained entirely through self-learning. DeepMind was able to train this generalized agent to be competitive with previous versions of itself on Go, as well as top agents in the other two games. Machine learning agents are often not covered in many game design courses. Previous use of machine learning agents in games may not have been very practical, as even the 2015 version of AlphaGo took hundreds of CPUs and GPUs to train to a strong level. This potentially limits the creation of highly effective deep learning agents to large corporations or extremely wealthy individuals. The extensive training time of neural network based approaches can also take weeks on these powerful machines. The problem of effectively training ANN based models extends beyond powerful hardware environments; finding a good way to represent data and learn meaningful things from it is also often a difficult problem. ANN models often overfit to very specific data and perform poorly in more generalized cases. AlphaStar shows this weakness, despite being able to beat professional players, it is only able to do so on a single map when playing a mirror protoss matchup. OpenAI Five also shows this weakness, it was only able to beat professional player when facing a very limited hero pool out of the entire game.",wikipedia
+Machine learning in video games,https://en.wikipedia.org/wiki/Machine_learning_in_video_games#Deep_learning_agents,"Deep_learning_agents This example show how difficult it can be to train a deep learning agent to perform in more generalized situations. Machine learning agents have shown great success in a variety of different games. However, agents that are too competent also risk making games too difficult for new or casual players. Research has shown that challenge that is too far above a player's skill level will ruin lower player enjoyment. These highly trained agents are likely only desirable against very skilled human players who have many of hours of experience in a given game. Given these factors, highly effective deep learning agents are likely only a desired choice in games that have a large competitive scene, where they can function as an alternative practice option to a skilled human player.",wikipedia
+Machine learning in video games,https://en.wikipedia.org/wiki/Machine_learning_in_video_games#Computer_vision-based_players,"Computer_vision-based_players Computer vision focuses on training computers to gain a high-level understanding of digital images or videos. Many computer vision techniques also incorporate forms of machine learning, and have been applied on various video games. This application of computer vision focuses on interpreting game events using visual data. In some cases, artificial intelligence agents have used model-free techniques to learn to play games without any direct connection to internal game logic, solely using video data as input. Andrej Karpathy has demonstrated that relatively trivial neural network with just one hidden layer is capable of being trained to play Pong based on screen data alone. In 2013, a team at DeepMind demonstrated the use of deep Q-learning to play a variety of Atari video games ��� Beamrider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders ��� from screen data. The team expanded their work to create a learning algorithm called MuZero that was able to ""learn"" the rules and develop winning strategies for over 50 different Atari games based on screen data. Doom (1993) is a first-person shooter (FPS) game. Student researchers from Carnegie Mellon University used computer vision techniques to create an agent that could play the game using only image pixel input from the game. The students used convolutional neural network (CNN) layers to interpret incoming image data and output valid information to a recurrent neural network which was responsible for outputting game moves. Other uses of vision-based deep learning techniques for playing games have included playing Super Mario Bros. only using image input, using deep Q-learning for training. Researchers with OpenAI created about 2000 hours of video plays of Minecraft coded with the necessary human inputs, and then trained a machine learning model to comprehend the video feedback from the input. The researchers then used that model with 70,000 hours of Minecraft playthroughs offered on YouTube to see how well the model could create the input to match that behavior and learn further from it, such as being able to learn the steps and process of creating a diamond pickaxe tool.",wikipedia
+Generative adversarial network,https://en.wikipedia.org/wiki/Generative_adversarial_network#Variants,"Variants There is a veritable zoo of GAN variants. Some of the most prominent are as follows: Conditional GANs are similar to standard GANs except they allow the model to conditionally generate samples based on additional information. For example, if we want to generate a cat face given a dog picture, we could use a conditional GAN. The generator in a GAN game generates �� G {\displaystyle \mu _{G}} , a probability distribution on the probability space �� {\displaystyle \Omega } . This leads to the idea of a conditional GAN, where instead of generating one probability distribution on �� {\displaystyle \Omega } , the generator generates a different probability distribution �� G ( c ) {\displaystyle \mu _{G}(c)} on �� {\displaystyle \Omega } , for each given class label c {\displaystyle c} . For example, for generating images that look like ImageNet, the generator should be able to generate a picture of cat when given the class label ""cat"". In the original paper, the authors noted that GAN can be trivially extended to conditional GAN by providing the labels to both the generator and the discriminator. Concretely, the conditional GAN game is just the GAN game with class labels provided:In 2017, a conditional GAN learned to generate 1000 image classes of ImageNet. The GAN game is a general framework and can be run with any reasonable parametrization of the generator G {\displaystyle G} and discriminator D {\displaystyle D} . In the original paper, the authors demonstrated it using multilayer perceptron networks and convolutional neural networks. Many alternative architectures have been tried. Deep convolutional GAN (DCGAN): For both generator and discriminator, uses only deep networks consisting entirely of convolution-deconvolution layers, that is, fully convolutional networks. Self-attention GAN (SAGAN): Starts with the DCGAN, then adds residually-connected standard self-attention modules to the generator and discriminator. Variational autoencoder GAN (VAEGAN): Uses a variational autoencoder (VAE) for the generator. Transformer GAN (TransGAN): Uses the pure transformer architecture for both the generator and discriminator, entirely devoid of convolution-deconvolution layers. Flow-GAN: Uses flow-based generative model for the generator, allowing efficient computation of the likelihood function. Many GAN variants are merely obtained by changing the loss functions for the generator and discriminator. Original GAN: We recast the original GAN objective into a form more convenient for comparison:Original GAN, non-saturating loss: This objective for generator was recommended in the original paper for faster convergence. Original GAN, maximum likelihood: Hinge loss GAN:The Wasserstein GAN modifies the GAN game at two points: The discriminator's strategy set is the set of measurable functions of type D : �� ��� R {\displaystyle D:\Omega \to \mathbb {R} } with bounded Lipschitz norm: ��� D ��� L ��� K {\displaystyle \|D\|_{L}\leq K} , where K {\displaystyle K} is a fixed positive constant.",wikipedia
+Generative adversarial network,https://en.wikipedia.org/wiki/Generative_adversarial_network#Variants,"Variants The objective is L W G A N ( �� G , D ) := E x ��� �� G [ D ( x ) ] ��� E x ��� �� ref [ D ( x ) ] {\displaystyle L_{WGAN}(\mu _{G},D):=\mathbb {E} _{x\sim \mu _{G}}[D(x)]-\mathbb {E} _{x\sim \mu _{\text{ref}}}[D(x)]} One of its purposes is to solve the problem of mode collapse (see above). The authors claim ""In no experiment did we see evidence of mode collapse for the WGAN algorithm"". An adversarial autoencoder (AAE) is more autoencoder than GAN. The idea is to start with a plain autoencoder, but train a discriminator to discriminate the latent vectors from a reference distribution (often the normal distribution). In conditional GAN, the generator receives both a noise vector z {\displaystyle z} and a label c {\displaystyle c} , and produces an image G ( z , c ) {\displaystyle G(z,c)} . The discriminator receives image-label pairs ( x , c ) {\displaystyle (x,c)} , and computes D ( x , c ) {\displaystyle D(x,c)} . When the training dataset is unlabeled, conditional GAN does not work directly. The idea of InfoGAN is to decree that every latent vector in the latent space can be decomposed as ( z , c ) {\displaystyle (z,c)} : an incompressible noise part z {\displaystyle z} , and an informative label part c {\displaystyle c} , and encourage the generator to comply with the decree, by encouraging it to maximize I ( c , G ( z , c ) ) {\displaystyle I(c,G(z,c))} , the mutual information between c {\displaystyle c} and G ( z , c ) {\displaystyle G(z,c)} , while making no demands on the mutual information z {\displaystyle z} between G ( z , c ) {\displaystyle G(z,c)} . Unfortunately, I ( c , G ( z , c ) ) {\displaystyle I(c,G(z,c))} is intractable in general, The key idea of InfoGAN is Variational Mutual Information Maximization: indirectly maximize it by maximizing a lower bound The InfoGAN game is defined as follows:Three probability spaces define an InfoGAN game: ( �� X , �� ref ) {\displaystyle (\Omega _{X},\mu _{\text{ref}})} , the space of reference images. ( �� Z , �� Z ) {\displaystyle (\Omega _{Z},\mu _{Z})} , the fixed random noise generator. ( �� C , �� C ) {\displaystyle (\Omega _{C},\mu _{C})} , the fixed random information generator. There are 3 players in 2 teams: generator, Q, and discriminator. The generator and Q are on one team, and the discriminator on the other team. The objective function is Generator-Q team aims to minimize the objective, and discriminator aims to maximize it:The standard GAN generator is a function of type G : �� Z ��� �� X {\displaystyle G:\Omega _{Z}\to \Omega _{X}} , that is, it is a mapping from a latent space �� Z {\displaystyle \Omega _{Z}} to the image space �� X {\displaystyle \Omega _{X}} .",wikipedia
+Generative adversarial network,https://en.wikipedia.org/wiki/Generative_adversarial_network#Variants,"Variants This can be understood as a ""decoding"" process, whereby every latent vector z ��� �� Z {\displaystyle z\in \Omega _{Z}} is a code for an image x ��� �� X {\displaystyle x\in \Omega _{X}} , and the generator performs the decoding. This naturally leads to the idea of training another network that performs ""encoding"", creating an autoencoder out of the encoder-generator pair. Already in the original paper, the authors noted that ""Learned approximate inference can be performed by training an auxiliary network to predict z {\displaystyle z} given x {\displaystyle x} "". The bidirectional GAN architecture performs exactly this. The BiGAN is defined as follows: Two probability spaces define a BiGAN game: ( �� X , �� X ) {\displaystyle (\Omega _{X},\mu _{X})} , the space of reference images. ( �� Z , �� Z ) {\displaystyle (\Omega _{Z},\mu _{Z})} , the latent space. There are 3 players in 2 teams: generator, encoder, and discriminator. The generator and encoder are on one team, and the discriminator on the other team. The generator's strategies are functions G : �� Z ��� �� X {\displaystyle G:\Omega _{Z}\to \Omega _{X}} , and the encoder's strategies are functions E : �� X ��� �� Z {\displaystyle E:\Omega _{X}\to \Omega _{Z}} . The discriminator's strategies are functions D : �� X ��� [ 0 , 1 ] {\displaystyle D:\Omega _{X}\to [0,1]} . The objective function is Generator-encoder team aims to minimize the objective, and discriminator aims to maximize it:In the paper, they gave a more abstract definition of the objective as:Applications of bidirectional models include semi-supervised learning, interpretable machine learning, and neural machine translation. CycleGAN is an architecture for performing translations between two domains, such as between photos of horses and photos of zebras, or photos of night cities and photos of day cities. The CycleGAN game is defined as follows:There are two probability spaces ( �� X , �� X ) , ( �� Y , �� Y ) {\displaystyle (\Omega _{X},\mu _{X}),(\Omega _{Y},\mu _{Y})} , corresponding to the two domains needed for translations fore-and-back. There are 4 players in 2 teams: generators G X : �� X ��� �� Y , G Y : �� Y ��� �� X {\displaystyle G_{X}:\Omega _{X}\to \Omega _{Y},G_{Y}:\Omega _{Y}\to \Omega _{X}} , and discriminators D X : �� X ��� [ 0 , 1 ] , D Y : �� Y ��� [ 0 , 1 ] {\displaystyle D_{X}:\Omega _{X}\to [0,1],D_{Y}:\Omega _{Y}\to [0,1]} . The objective function is where �� {\displaystyle \lambda } is a positive adjustable parameter, L G A N {\displaystyle L_{GAN}} is the GAN game objective, and L c y c l e {\displaystyle L_{cycle}} is the cycle consistency loss:Unlike previous work like pix2pix, which requires paired training data, cycleGAN requires no paired data.",wikipedia
+Generative adversarial network,https://en.wikipedia.org/wiki/Generative_adversarial_network#Variants,"Variants For example, to train a pix2pix model to turn a summer scenery photo to winter scenery photo and back, the dataset must contain pairs of the same place in summer and winter, shot at the same angle; cycleGAN would only need a set of summer scenery photos, and an unrelated set of winter scenery photos. The BigGAN is essentially a self-attention GAN trained on a large scale (up to 80 million parameters) to generate large images of ImageNet (up to 512 x 512 resolution), with numerous engineering tricks to make it converge. When there is insufficient training data, the reference distribution �� ref {\displaystyle \mu _{\text{ref}}} cannot be well-approximated by the empirical distribution given by the training dataset. In such cases, data augmentation can be applied, to allow training GAN on smaller datasets. Na��ve data augmentation, however, brings its problems. Consider the original GAN game, slightly reformulated as follows:The result of such training would be a generator that mimics �� ref ��� {\displaystyle \mu _{\text{ref}}'} . For example, it would generate images that look like they are randomly cropped, if the data augmentation uses random cropping. The solution is to apply data augmentation to both generated and real images:The StyleGAN-2-ADA paper points out a further point on data augmentation: it must be invertible. Continue with the example of generating ImageNet pictures. If the data augmentation is ""randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability"", then there is no way for the generator to know which is the true orientation: Consider two generators G , G ��� {\displaystyle G,G'} , such that for any latent z {\displaystyle z} , the generated image G ( z ) {\displaystyle G(z)} is a 90-degree rotation of G ��� ( z ) {\displaystyle G'(z)} . They would have exactly the same expected loss, and so neither is preferred over the other. The solution is to only use invertible data augmentation: instead of ""randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability"", use ""randomly rotate the picture by 90, 180, 270 degrees with 0.1 probability, and keep the picture as it is with 0.7 probability"". This way, the generator is still rewarded to keep images oriented the same way as un-augmented ImageNet pictures. Abstractly, the effect of randomly sampling transformations T : �� ��� �� {\displaystyle T:\Omega \to \Omega } from the distribution �� t r a n s {\displaystyle \mu _{trans}} is to define a Markov kernel K t r a n s : �� ��� P ( �� ) {\displaystyle K_{trans}:\Omega \to {\mathcal {P}}(\Omega )} . Then, the data-augmented GAN game pushes the generator to find some �� ^ G ��� P ( �� ) {\displaystyle {\hat {\mu }}_{G}\in {\mathcal {P}}(\Omega )} , such that There are two prototypical examples of invertible Markov kernels: Discrete case: Invertible stochastic matrices, when �� {\displaystyle \Omega } is finite.",wikipedia
+Generative adversarial network,https://en.wikipedia.org/wiki/Generative_adversarial_network#Variants,"Variants For example, if �� = { ��� , ��� , ��� , ��� } {\displaystyle \Omega =\{\uparrow ,\downarrow ,\leftarrow ,\rightarrow \}} is the set of four images of an arrow, pointing in 4 directions, and the data augmentation is ""randomly rotate the picture by 90, 180, 270 degrees with probability p {\displaystyle p} , and keep the picture as it is with probability ( 1 ��� 3 p ) {\displaystyle (1-3p)} "", then the Markov kernel K t r a n s {\displaystyle K_{trans}} can be represented as a stochastic matrix:Continuous case: The gaussian kernel, when �� = R n {\displaystyle \Omega =\mathbb {R} ^{n}} for some n ��� 1 {\displaystyle n\geq 1} . For example, if �� = R 256 2 {\displaystyle \Omega =\mathbb {R} ^{256^{2}}} is the space of 256x256 images, and the data-augmentation method is ""generate a gaussian noise z ��� N ( 0 , I 256 2 ) {\displaystyle z\sim {\mathcal {N}}(0,I_{256^{2}})} , then add �� z {\displaystyle \epsilon z} to the image"", then K t r a n s {\displaystyle K_{trans}} is just convolution by the density function of N ( 0 , �� 2 I 256 2 ) {\displaystyle {\mathcal {N}}(0,\epsilon ^{2}I_{256^{2}})} . This is invertible, because convolution by a gaussian is just convolution by the heat kernel, so given any �� ��� P ( R n ) {\displaystyle \mu \in {\mathcal {P}}(\mathbb {R} ^{n})} , the convolved distribution K t r a n s ��� �� {\displaystyle K_{trans}*\mu } can be obtained by heating up R n {\displaystyle \mathbb {R} ^{n}} precisely according to �� {\displaystyle \mu } , then wait for time �� 2 / 4 {\displaystyle \epsilon ^{2}/4} . With that, we can recover �� {\displaystyle \mu } by running the heat equation backwards in time for �� 2 / 4 {\displaystyle \epsilon ^{2}/4} . More examples of invertible data augmentations are found in the paper. SinGAN pushes data augmentation to the limit, by using only a single image as training data and performing data augmentation on it. The GAN architecture is adapted to this training method by using a multi-scale pipeline. The generator G {\displaystyle G} is decomposed into a pyramid of generators G = G 1 ��� G 2 ��� ��� ��� G N {\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}} , with the lowest one generating the image G N ( z N ) {\displaystyle G_{N}(z_{N})} at the lowest resolution, then the generated image is scaled up to r ( G N ( z N ) ) {\displaystyle r(G_{N}(z_{N}))} , and fed to the next level to generate an image G N ��� 1 ( z N ��� 1 + r ( G N ( z N ) ) ) {\displaystyle G_{N-1}(z_{N-1}+r(G_{N}(z_{N})))} at a higher resolution, and so on. The discriminator is decomposed into a pyramid as well. The StyleGAN family is a series of architectures published by Nvidia's research division.",wikipedia
+Generative adversarial network,https://en.wikipedia.org/wiki/Generative_adversarial_network#Variants,"Variants Progressive GAN is a method for training GAN for large-scale image generation stably, by growing a GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as G = G 1 ��� G 2 ��� ��� ��� G N {\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}} , and the discriminator as D = D 1 ��� D 2 ��� ��� ��� D N {\displaystyle D=D_{1}\circ D_{2}\circ \cdots \circ D_{N}} . During training, at first only G N , D N {\displaystyle G_{N},D_{N}} are used in a GAN game to generate 4x4 images. Then G N ��� 1 , D N ��� 1 {\displaystyle G_{N-1},D_{N-1}} are added to reach the second stage of GAN game, to generate 8x8 images, and so on, until we reach a GAN game to generate 1024x1024 images. To avoid shock between stages of the GAN game, each new layer is ""blended in"" (Figure 2 of the paper). For example, this is how the second stage GAN game starts: Just before, the GAN game consists of the pair G N , D N {\displaystyle G_{N},D_{N}} generating and discriminating 4x4 images. Just after, the GAN game consists of the pair ( ( 1 ��� �� ) + �� ��� G N ��� 1 ) ��� u ��� G N , D N ��� d ��� ( ( 1 ��� �� ) + �� ��� D N ��� 1 ) {\displaystyle ((1-\alpha )+\alpha \cdot G_{N-1})\circ u\circ G_{N},D_{N}\circ d\circ ((1-\alpha )+\alpha \cdot D_{N-1})} generating and discriminating 8x8 images. Here, the functions u , d {\displaystyle u,d} are image up- and down-sampling functions, and �� {\displaystyle \alpha } is a blend-in factor (much like an alpha in image composing) that smoothly glides from 0 to 1. StyleGAN-1 is designed as a combination of Progressive GAN with neural style transfer. The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant 4 �� 4 �� 512 {\displaystyle 4\times 4\times 512} array, and repeatedly passed through style blocks. Each style block applies a ""style latent vector"" via affine transform (""adaptive instance normalization""), similar to how neural style transfer uses Gramian matrix. It then adds noise, and normalize (subtract the mean, then divide by the variance). At training time, usually only one style latent vector is used per image generated, but sometimes two (""mixing regularization"") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector). After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles. Style-mixing between two images x , x ��� {\displaystyle x,x'} can be performed as well.",wikipedia
+Generative adversarial network,https://en.wikipedia.org/wiki/Generative_adversarial_network#Variants,"Variants First, run a gradient descent to find z , z ��� {\displaystyle z,z'} such that G ( z ) ��� x , G ( z ��� ) ��� x ��� {\displaystyle G(z)\approx x,G(z')\approx x'} . This is called ""projecting an image back to style latent space"". Then, z {\displaystyle z} can be fed to the lower style blocks, and z ��� {\displaystyle z'} to the higher style blocks, to generate a composite image that has the large-scale style of x {\displaystyle x} , and the fine-detail style of x ��� {\displaystyle x'} . Multiple images can also be composed this way. StyleGAN-2 improves upon StyleGAN-1, by using the style latent vector to transform the convolution layer's weights instead, thus solving the ""blob"" problem. This was updated by the StyleGAN-2-ADA (""ADA"" stands for ""adaptive""), which uses invertible data augmentation as described above. It also tunes the amount of data augmentation applied by starting at zero, and gradually increasing it until an ""overfitting heuristic"" reaches a target level, thus the name ""adaptive"". StyleGAN-3 improves upon StyleGAN-2 by solving the ""texture sticking"" problem, which can be seen in the official videos. They analyzed the problem by the Nyquist���Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon. To solve this, they proposed imposing strict lowpass filters between each generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operate on them as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 is able to solve the texture sticking problem, as well as generating images that rotate and translate smoothly.",wikipedia
+Generative art,https://en.wikipedia.org/wiki/Generative_art#History,"History The use of the word ""generative"" in the discussion of art has developed over time. The use of ""Artificial DNA"" defines a generative approach to art focused on the construction of a system able to generate unpredictable events, all with a recognizable common character. The use of autonomous systems, required by some contemporary definitions, focuses a generative approach where the controls are strongly reduced. This approach is also named ""emergent"". Margaret Boden and Ernest Edmonds have noted the use of the term ""generative art"" in the broad context of automated computer graphics in the 1960s, beginning with artwork exhibited by Georg Nees and Frieder Nake in 1965: A. Michael Noll did his initial computer art, combining randomness with order, in 1962, and exhibited it along with works by Bell Julesz in 1965. The terms ""generative art"" and ""computer art"" have been used in tandem, and more or less interchangeably, since the very earliest days. The first such exhibition showed the work of Nees in February 1965, which some claim was titled ""Generative Computergrafik"". While Nees does not himself remember, this was the title of his doctoral thesis published a few years later. The correct title of the first exhibition and catalog was ""computer-grafik"". ""Generative art"" and related terms was in common use by several other early computer artists around this time, including Manfred Mohr and Ken Knowlton. Vera Moln��r (born 1924) is a French media artist of Hungarian origin. Molnar is widely considered to be a pioneer of generative art, and is also one of the first women to use computers in her art practice. The term ""Generative Art"" with the meaning of dynamic artwork-systems able to generate multiple artwork-events was clearly used the first time for the ""Generative Art"" conference in Milan in 1998. The term has also been used to describe geometric abstract art where simple elements are repeated, transformed, or varied to generate more complex forms. Thus defined, generative art was practiced by the Argentinian artists Eduardo Mac Entyre and Miguel ��ngel Vidal in the late 1960s. In 1972 the Romanian-born Paul Neagu created the Generative Art Group in Britain. It was populated exclusively by Neagu using aliases such as ""Hunsy Belmood"" and ""Edward Larsocchi"". In 1972 Neagu gave a lecture titled 'Generative Art Forms' at the Queen's University, Belfast Festival. In 1970 the School of the Art Institute of Chicago created a department called Generative Systems. As described by Sonia Landy Sheridan the focus was on art practices using the then new technologies for the capture, inter-machine transfer, printing and transmission of images, as well as the exploration of the aspect of time in the transformation of image information. Also noteworthy is John Dunn, first a student and then a collaborator of Sheridan.",wikipedia
+Generative art,https://en.wikipedia.org/wiki/Generative_art#History,"History In 1988 Clauser identified the aspect of systemic autonomy as a critical element in generative art: It should be evident from the above description of the evolution of generative art that process (or structuring) and change (or transformation) are among its most definitive features, and that these features and the very term 'generative' imply dynamic development and motion. (the result) is not a creation by the artist but rather the product of the generative process - a self-precipitating structure. In 1989 Celestino Soddu defined the Generative Design approach to Architecture and Town Design in his book Citta' Aleatorie. In 1989 Franke referred to ""generative mathematics"" as ""the study of mathematical operations suitable for generating artistic images. "" From the mid-1990s Brian Eno popularized the terms generative music and generative systems, making a connection with earlier experimental music by Terry Riley, Steve Reich and Philip Glass. From the end of the 20th century, communities of generative artists, designers, musicians and theoreticians began to meet, forming cross-disciplinary perspectives. The first meeting about generative Art was in 1998, at the inaugural International Generative Art conference at Politecnico di Milano University, Italy. In Australia, the Iterate conference on generative systems in the electronic arts followed in 1999. On-line discussion has centered around the eu-gene mailing list, which began late 1999, and has hosted much of the debate which has defined the field. These activities have more recently been joined by the Generator. x conference in Berlin starting in 2005. In 2012 the new journal GASATHJ, Generative Art Science and Technology Hard Journal was founded by Celestino Soddu and Enrica Colabella jointing several generative artists and scientists in the editorial board. Some have argued that as a result of this engagement across disciplinary boundaries, the community has converged on a shared meaning of the term. As Boden and Edmonds put it in 2011: Today, the term ""Generative Art"" is still current within the relevant artistic community. Since 1998 a series of conferences have been held in Milan with that title (Generativeart. com), and Brian Eno has been influential in promoting and using generative art methods (Eno, 1996). Both in music and in visual art, the use of the term has now converged on work that has been produced by the activation of a set of rules and where the artist lets a computer system take over at least some of the decision-making (although, of course, the artist determines the rules). In the call of the Generative Art conferences in Milan (annually starting from 1998), the definition of Generative Art by Celestino Soddu: Generative Art is the idea realized as genetic code of artificial events, as construction of dynamic complex systems able to generate endless variations. Each Generative Project is a concept-software that works producing unique and non-repeatable events, like music or 3D Objects, as possible and manifold expressions of the generating idea strongly recognizable as a vision belonging to an artist / designer / musician / architect /mathematician.",wikipedia
+Generative art,https://en.wikipedia.org/wiki/Generative_art#History,"History Discussion on the eu-gene mailing list was framed by the following definition by Adrian Ward from 1999: Generative art is a term given to work which stems from concentrating on the processes involved in producing an artwork, usually (although not strictly) automated by the use of a machine or computer, or by using mathematic or pragmatic instructions to define the rules by which such artworks are executed. A similar definition is provided by Philip Galanter: Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art. Around the 2020s, generative AI models learned to imitate the distinct style of particular authors. For example, a generative image model such as Stable Diffusion is able to model the stylistic characteristics of an artist like Pablo Picasso (including his particular brush strokes, use of colour, perspective, and so on), and a user can engineer a prompt such as ""an astronaut riding a horse, by Picasso"" to cause the model to generate a novel image applying the artist's style to an arbitrary subject. Generative image models have received significant backlash from artists who object to their style being imitated without their permission, arguing that this harms their ability to profit from their own work.",wikipedia
+Generative art,https://en.wikipedia.org/wiki/Generative_art#Types,"Types Johann Kirnberger's Musikalisches W��rfelspiel (""Musical Dice Game"") of 1757 is considered an early example of a generative system based on randomness. Dice were used to select musical sequences from a numbered pool of previously composed phrases. This system provided a balance of order and disorder. The structure was based on an element of order on one hand, and disorder on the other. The fugues of J. S. Bach could be considered generative, in that there is a strict underlying process that is followed by the composer. Similarly, serialism follows strict procedures which, in some cases, can be set up to generate entire compositions with limited human intervention. Composers such as John Cage, Farmers Manual, and Brian Eno have used generative systems in their works. The artist Ellsworth Kelly created paintings by using chance operations to assign colors in a grid. He also created works on paper that he then cut into strips or squares and reassembled using chance operations to determine placement. Artists such as Hans Haacke have explored processes of physical and social systems in artistic context. Fran��ois Morellet has used both highly ordered and highly disordered systems in his artwork. Some of his paintings feature regular systems of radial or parallel lines to create Moir�� Patterns. In other works he has used chance operations to determine the coloration of grids. Sol LeWitt created generative art in the form of systems expressed in natural language and systems of geometric permutation. Harold Cohen's AARON system is a longstanding project combining software artificial intelligence with robotic painting devices to create physical artifacts. Steina and Woody Vasulka are video art pioneers who used analog video feedback to create generative art. Video feedback is now cited as an example of deterministic chaos, and the early explorations by the Vasulkas anticipated contemporary science by many years. Software systems exploiting evolutionary computing to create visual form include those created by Scott Draves and Karl Sims. The digital artist Joseph Nechvatal has exploited models of viral contagion. Autopoiesis by Ken Rinaldo includes fifteen musical and robotic sculptures that interact with the public and modify their behaviors based on both the presence of the participants and each other. Jean-Pierre Hebert and Roman Verostko are founding members of the Algorists, a group of artists who create their own algorithms to create art. A. Michael Noll, of Bell Telephone Laboratories, Incorporated, programmed computer art using mathematical equations and programmed randomness, starting in 1962. The French artist Jean-Max Albert, beside environmental sculptures like Iapetus, and O=C=O, developed a project dedicated to the vegetation itself, in terms of biological activity. The Calmoduline Monument project is based on the property of a protein, calmodulin, to bond selectively to calcium. Exterior physical constraints (wind, rain, etc. ) modify the electric potential of the cellular membranes of a plant and consequently the flux of calcium. However, the calcium controls the expression of the calmoduline gene. The plant can thus, when there is a stimulus, modify its ""typical"" growth pattern.",wikipedia
+Generative art,https://en.wikipedia.org/wiki/Generative_art#Types,"Types So the basic principle of this monumental sculpture is that to the extent that they could be picked up and transported, these signals could be enlarged, translated into colors and shapes, and show the plant's ""decisions"" suggesting a level of fundamental biological activity. Maurizio Bolognini works with generative machines to address conceptual and social concerns. Mark Napier is a pioneer in data mapping, creating works based on the streams of zeros and ones in Ethernet traffic, as part of the ""Carnivore"" project. Martin Wattenberg pushed this theme further, transforming ""data sets"" as diverse as musical scores (in ""Shape of Song"", 2001) and Wikipedia edits (History Flow, 2003, with Fernanda Viegas) into dramatic visual compositions. The Canadian artist San Base developed a ""Dynamic Painting"" algorithm in 2002. Using computer algorithms as ""brush strokes"", Base creates sophisticated imagery that evolves over time to produce a fluid, never-repeating artwork. Since 1996 there have been ambigram generators that auto generate ambigrams. Italian composer Pietro Grossi, pioneer of computer music since 1986, he extended his experiments to images, (same procedure used in his musical work) precisely to computer graphics, writing programs with specific auto-decisions, and developing the concept of HomeArt, presented for the first time in the exhibition New Atlantis: the continent of electronic music organized by the Venice Biennale in 1986. Some contemporary artists who create generative visual artworks are Daniel Shiffman, Zachary Lieberman, Golan Levin, Ben Fry, and Giles Whitaker (artist). For some artists, graphic user interfaces and computer code have become an independent art form in themselves. Adrian Ward created Auto-Illustrator as a commentary on software and generative methods applied to art and design. [citation needed] In 1987 Celestino Soddu created the artificial DNA of Italian Medieval towns able to generate endless 3D models of cities identifiable as belonging to the idea. In 2010, Michael Hansmeyer generated architectural columns in a project called ""Subdivided Columns ��� A New Order (2010)"". The piece explored how the simple process of repeated subdivision can create elaborate architectural patterns. Rather than designing any columns directly, Hansmeyer designed a process that produced columns automatically. The process could be run again and again with different parameters to create endless permutations. Endless permutations could be considered a hallmark of generative design. Writers such as Tristan Tzara, Brion Gysin, and William Burroughs used the cut-up technique to introduce randomization to literature as a generative system. Jackson Mac Low produced computer-assisted poetry and used algorithms to generate texts; Philip M. Parker has written software to automatically generate entire books. Jason Nelson used generative methods with speech-to-text software to create a series of digital poems from movies, television and other audio sources. In the late 2010s, authors began to experiment with neural networks trained on large language datasets. David Jhave Johnston's ReRites is an early example of human-edited AI-generated poetry. Generative systems may be modified while they operate, for example by using interactive programming environments such as SuperCollider, Fluxus and TidalCycles, including patching environments such as Max/MSP, Pure Data and vvvv.",wikipedia
+Generative art,https://en.wikipedia.org/wiki/Generative_art#Types,"Types This is a standard approach to programming by artists, but may also be used to create live music and/or video by manipulating generative systems on stage, a performance practice that has become known as live coding. As with many examples of software art, because live coding emphasizes human authorship rather than autonomy, it may be considered in opposition to generative art. In 2020, Erick ""Snowfro"" Calderon launched the Art Blocks platform for combining the ideas of generative art and the blockchain, with resulting artworks created as NFTs on the Ethereum blockchain. One of the key innovations with the generative art created in this way is that all the source code and algorithm for creating the art has to be finalized and put on the blockchain permanently, without any ability to alter it further. Only when the artwork is sold (""minted""), the artwork is generated; the result is random yet should reflect the overall aesthetic defined by the artist. Calderon argues that this process forces the artist to be very thoughtful of the algorithm behind the art:Until today, a [generative] artist would create an algorithm, press the spacebar 100 times, pick five of the best ones and print them in high quality. Then they would frame them, and put them in a gallery. Maybe. Because Art Blocks forces the artist to accept every single output of the algorithm as their signed piece, the artist has to go back and tweak the algorithm until it's perfect. They can't just cherry pick the good outputs. That elevates the level of algorithmic execution because the artist is creating something that they know they're proud of before they even know what's going to come out on the other side.",wikipedia
+Synthetic media,https://en.wikipedia.org/wiki/Synthetic_media#History,"History Synthetic media as a process of automated art dates back to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria designed machines capable of writing text, generating sounds, and playing music. The tradition of automaton-based entertainment flourished throughout history, with mechanical beings' seemingly magical ability to mimic human creativity often drawing crowds throughout Europe, China, India, and so on. Other automated novelties such as Johann Philipp Kirnberger's ""Musikalisches W��rfelspiel"" (Musical Dice Game) 1757 also amused audiences. Despite the technical capabilities of these machines, however, none were capable of generating original content and were entirely dependent upon their mechanical designs. The field of AI research was born at a workshop at Dartmouth College in 1956, begetting the rise of digital computing used as a medium of art as well as the rise of generative art. Initial experiments in AI-generated art included the Illiac Suite, a 1957 composition for string quartet which is generally agreed to be the first score composed by an electronic computer. Lejaren Hiller, in collaboration with Leonard Issacson, programmed the ILLIAC I computer at the University of Illinois at Urbana���Champaign (where both composers were professors) to generate compositional material for his String Quartet No. 4. In 1960, Russian researcher R. Kh. Zaripov published worldwide first paper on algorithmic music composing using the ""Ural-1"" computer. In 1965, inventor Ray Kurzweil premiered a piano piece created by a computer that was capable of pattern recognition in various compositions. The computer was then able to analyze and use these patterns to create novel melodies. The computer was debuted on Steve Allen's I've Got a Secret program, and stumped the hosts until film star Harry Morgan guessed Ray's secret. Before 1989, artificial neural networks have been used to model certain aspects of creativity. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. Then he used a change algorithm to modify the network's input parameters. The network was able to randomly generate new music in a highly uncontrolled manner. In 2014, Ian Goodfellow and his colleagues developed a new class of machine learning systems: generative adversarial networks (GAN). Two neural networks contest with each other in a game (in the sense of game theory, often but not always in the form of a zero-sum game). Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning. In a 2016 seminar, Yann LeCun described GANs as ""the coolest idea in machine learning in the last twenty years"".",wikipedia
+Synthetic media,https://en.wikipedia.org/wiki/Synthetic_media#History,"History In 2017, Google unveiled transformers, a new type of neural network architecture specialized for language modeling that enabled for rapid advancements in natural language processing. Transformers proved capable of high levels of generalization, allowing networks such as GPT-3 and Jukebox from OpenAI to synthesize text and music respectively at a level approaching humanlike ability. There have been some attempts to use GPT-3 and GPT-2 for screenplay writing, resulting in both dramatic (the Italian short film Frammenti di Anime Meccaniche, written by GPT-2) and comedic narratives (the short film Solicitors by Youtube Creator Calamity AI written by GPT-3).",wikipedia
+Synthetic media,https://en.wikipedia.org/wiki/Synthetic_media#Branches_of_synthetic_media,"Branches_of_synthetic_media Deepfakes (a portmanteau of ""deep learning"" and ""fake"") are the most prominent form of synthetic media. They are media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks. They often combine and superimpose existing media onto source media using machine learning techniques known as autoencoders and generative adversarial networks (GANs). Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit their use. The term deepfakes originated around the end of 2017 from a Reddit user named ""deepfakes"". He, as well as others in the Reddit community r/deepfakes, shared deepfakes they created; many videos involved celebrities' faces swapped onto the bodies of actresses in pornographic videos, while non-pornographic content included many videos with actor Nicolas Cage's face swapped into various movies. In December 2017, Samantha Cole published an article about r/deepfakes in Vice that drew the first mainstream attention to deepfakes being shared in online communities. Six weeks later, Cole wrote in a follow-up article about the large increase in AI-assisted fake pornography. In February 2018, r/deepfakes was banned by Reddit for sharing involuntary pornography. Other websites have also banned the use of deepfakes for involuntary pornography, including the social media platform Twitter and the pornography site Pornhub. However, some websites have not yet banned Deepfake content, including 4chan and 8chan. Non-pornographic deepfake content continues to grow in popularity with videos from YouTube creators such as Ctrl Shift Face and Shamook. A mobile application, Impressions, was launched for iOS in March 2020. The app provides a platform for users to deepfake celebrity faces into videos in a matter of minutes. Image synthesis is the artificial production of visual media, especially through algorithmic means. In the emerging world of synthetic media, the work of digital-image creation���once the domain of highly skilled programmers and Hollywood special-effects artists���could be automated by expert systems capable of producing realism on a vast scale. One subfield of this includes human image synthesis, which is the use of neural networks to make believable and even photorealistic renditions of human-likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material. Towards the end of the 2010s deep learning artificial intelligence has been applied to synthesize images and video that look like humans, without need for human assistance, once the training phase has been completed, whereas the old school 7D-route required massive amounts of human work. The website This Person Does Not Exist showcases fully automated human image synthesis by endlessly generating images that look like facial portraits of human faces. Beyond deepfakes and image synthesis, audio is another area where AI is used to create synthetic media.",wikipedia
+Synthetic media,https://en.wikipedia.org/wiki/Synthetic_media#Branches_of_synthetic_media,"Branches_of_synthetic_media Synthesized audio will be capable of generating any conceivable sound that can be achieved through audio waveform manipulation, which might conceivably be used to generate stock audio of sound effects or simulate audio of currently imaginary things. Artificial intelligence art is any visual artwork created through the use of artificial intelligence (AI) programs. Many mechanisms for creating AI art have been developed, including procedural ""rule-based"" generation of images using mathematical patterns, algorithms which simulate brush strokes and other painted effects, and deep learning algorithms, such as generative adversarial networks (GANs) and transformers. One of the first significant AI art systems is AARON, developed by Harold Cohen beginning in the late 1960s at the University of California at San Diego. AARON is the most notable example of AI art in the era of GOFAI programming because of its use of a symbolic rule-based approach to generate technical images. Cohen developed AARON with the goal of being able to code the act of drawing. In its primitive form, AARON created simple black and white drawings. Cohen would later finish the drawings by painting them. Throughout the years, he also began to develop a way for AARON to also paint. Cohen designed AARON to paint using special brushes and dyes that were chosen by the program itself without mediation from Cohen. Generative adversarial networks (GANs) were designed in 2014. This system uses a ""generator"" to create new images and a ""discriminator"" to decide which created images are considered successful. DeepDream, released by Google in 2015, uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating deliberately over-processed images. After DeepDream's release, several companies released apps that transform photos into art-like images with the style of well-known sets of paintings. The website Artbreeder, launched in 2018, uses the models StyleGAN and BigGAN to allow users to generate and modify images such as faces, landscapes, and paintings. Several programs use text-to-image models to generate a variety of images based on various text prompts. They include EleutherAI's VQGAN+CLIP which was released in 2021, OpenAI's DALL-E which released a series of images in January 2021, Google Brain's Imagen and Parti which was announced in May 2022, Microsoft's NUWA-Infinity, and Stable Diffusion which was released in August 2022. Stability. ai has a Stable Diffusion web interface called DreamStudio. Stable Diffusion is source-available software, enabling further development such as plugins for Krita, Photoshop, Blender, and GIMP, as well as the Automatic1111 web-based open source user interface. Stable Diffusion's main pre-trained model is shared on the Hugging Face Hub. The capacity to generate music through autonomous, non-programmable means has long been sought after since the days of Antiquity, and with developments in artificial intelligence, two particular domains have arisen: The robotic creation of music, whether through machines playing instruments or sorting of virtual instrument notes (such as through MIDI files) Directly generating waveforms that perfectly recreate instrumentation and human voice without the need for instruments, MIDI, or organizing premade notes.",wikipedia
+Synthetic media,https://en.wikipedia.org/wiki/Synthetic_media#Branches_of_synthetic_media,"Branches_of_synthetic_media Speech synthesis has been identified as a popular branch of synthetic media and is defined as the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely ""synthetic"" voice output. Virtual assistants such as Siri and Alexa have the ability to turn text into audio and synthesize speech. In 2016, Google DeepMind unveiled WaveNet, a deep generative model of raw audio waveforms that could learn to understand which waveforms best resembled human speech as well as musical instrumentation. Some projects offer real-time generations of synthetic speech using deep learning, such as 15. ai, a web application text-to-speech tool developed by an MIT research scientist. Natural-language generation (NLG, sometimes synonymous with text synthesis) is a software process that transforms structured data into natural language. It can be used to produce long form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out by a text-to-speech system. Interest in natural-language generation increased in 2019 after OpenAI unveiled GPT2, an AI system that generates text matching its input in subject and tone. GPT2 is a transformer, a deep machine learning model introduced in 2017 used primarily in the field of natural language processing (NLP). AI-generated media can be used to develop a hybrid graphics system that could be used in video games, movies, and virtual reality, as well as text-based games such as AI Dungeon 2, which uses either GPT-2 or GPT-3 to allow for near-infinite possibilities that are otherwise impossible to create through traditional game development methods. Computer hardware company Nvidia has also worked on developed AI-generated video game demos, such as a model that can generate an interactive game based on non-interactive videos. Through procedural generation, synthetic media techniques may eventually be used to ""help designers and developers create art assets, design levels, and even build entire games from the ground up.",wikipedia
+Synthetic media,https://en.wikipedia.org/wiki/Synthetic_media#See_also,See_also 15.ai Algorithmic art Artificial imagination Artificial intelligence art Automated journalism Computational creativity Computer music Cybernetic art DALL-E Deepfakes Generative adversarial network Generative art Generative artificial intelligence GPT-3 Human image synthesis Transformer (machine learning model) WaveNet,wikipedia
+AI winter,https://en.wikipedia.org/wiki/AI_winter#Early_episodes,"Early_episodes NLP research has its roots in the early 1930s and begins its existence with the work on machine translation (MT). However, significant advancements and applications began to emerge after the publication of Warren Weaver's influential memorandum in 1949. The memorandum generated great excitement within the research community. In the following years, notable events unfolded: IBM embarked on the development of the first machine, MIT appointed its first full-time professor in machine translation, and several conferences dedicated to MT took place. The culmination came with the public demonstration of the IBM-Georgetown machine, which garnered widespread attention in respected newspapers in 1954. Just like all AI booms that have been followed by desperate AI winters, the media tended to exaggerate the significance of these developments. Headlines about the IBM-Georgetown experiment proclaimed phrases like ""The bilingual machine,"" ""Robot brain translates Russian into King's English,"" and ""Polyglot brainchild]. "" However, the actual demonstration involved the translation of a curated set of only 49 Russian sentences into English, with the machine's vocabulary limited to just 250 words. To put things into perspective, a 2006 study made by Paul Nation found that humans need a vocabulary of around 8,000 to 9,000-word families to comprehend written texts with 98% accuracy. During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. Another factor that propelled the field of mechanical translation was the interest shown by the Central Intelligence Agency (CIA). During that period, the CIA firmly believed in the importance of developing machine translation capabilities and supported such initiatives. They also recognized that this program had implications that extended beyond the interests of the CIA and the intelligence community. At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process and there were ""many predictions of imminent 'breakthroughs'"". However, researchers had underestimated the profound difficulty of word-sense disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about, otherwise it made mistakes. An apocryphal example is ""the spirit is willing but the flesh is weak. "" Translated back and forth with Russian, it became ""the vodka is good but the meat is rotten. "" Later researchers would call this the commonsense knowledge problem. By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. They concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended. Machine translation shared the same path with natural language processing from the rule-based approaches through the statistical approaches up to the neural network approaches, which have in 2023 culminated in large language models.",wikipedia
+AI winter,https://en.wikipedia.org/wiki/AI_winter#Early_episodes,"Early_episodes Simple networks or circuits of connected units, including Walter Pitts and Warren McCulloch's neural network for logic and Marvin Minsky's SNARC system, have failed to deliver the promised results and were abandoned in the late 1950s. Following the success of programs such as the Logic Theorist and the General Problem Solver, algorithms for manipulating symbols seemed more promising at the time as means to achieve logical reasoning viewed at the time as the essence of intelligence, either natural or artificial. Interest in perceptrons, invented by Frank Rosenblatt, was kept alive only by the sheer force of his personality. He optimistically predicted that the perceptron ""may eventually be able to learn, make decisions, and translate languages"". Mainstream research into perceptrons ended partially because the 1969 book Perceptrons by Marvin Minsky and Seymour Papert emphasized the limits of what perceptrons could do. Major funding for projects neural network approaches was difficult to find in the 1970s and early 1980s. Important theoretical work continued despite the lack of funding. The ""winter"" of neural network approach came to an end in the middle 1980s, when the work of John Hopfield, David Rumelhart and others revived large scale interest. Rosenblatt did not live to see this, however, as he died in a boating accident shortly after Perceptrons was published.",wikipedia
+AI winter,https://en.wikipedia.org/wiki/AI_winter#Current_AI_.22Summer.22_2005-present,"Current_AI_.22Summer.22_2005-present AI reached the highest levels of interest and funding in its history in the 2020s, by every possible measure, including: publications, patent applications, total investment ($50 billion in 2022), and job openings (800,000 U.S. job openings in 2022). The successes of the current ""AI spring"" or ""AI boom"" are advances in language translation (in particular, Google Translate), image recognition (spurred by the ImageNet training database) as commercialized by Google Image Search, and in game-playing systems such as AlphaZero (chess champion) and AlphaGo (go champion), and Watson (Jeopardy champion). A turning point was in 2012 when AlexNet (a deep learning network) won the ImageNet Large Scale Visual Recognition Challenge with half as many errors as the second place winner. The recent release of OpenAI's AI chatbot ChatGPT which now has over 100 million users, has reinvigorated the discussion about artificial intelligence and its effects on the world. Google CEO Sundar Pichai has stated that AI will be the most important technology that humans create.",wikipedia
+MIT Artificial Intelligence Laboratory,https://en.wikipedia.org/wiki/MIT_Artificial_Intelligence_Laboratory#History,"History Computing Research at MIT began with Vannevar Bush's research into a differential analyzer and Claude Shannon's electronic Boolean algebra in the 1930s, the wartime MIT Radiation Laboratory, the post-war Project Whirlwind and Research Laboratory of Electronics (RLE), and MIT Lincoln Laboratory's SAGE in the early 1950s. At MIT, research in the field of artificial intelligence began in late 1950s. On July 1, 1963, Project MAC (the Project on Mathematics and Computation, later backronymed to Multiple Access Computer, Machine Aided Cognitions, or Man and Computer) was launched with a $2 million grant from the Defense Advanced Research Projects Agency (DARPA). Project MAC's original director was Robert Fano of MIT's Research Laboratory of Electronics (RLE). Fano decided to call MAC a ""project"" rather than a ""laboratory"" for reasons of internal MIT politics ��� if MAC had been called a laboratory, then it would have been more difficult to raid other MIT departments for research staff. The program manager responsible for the DARPA grant was J. C. R. Licklider, who had previously been at MIT conducting research in RLE, and would later succeed Fano as director of Project MAC. Project MAC would become famous for groundbreaking research in operating systems, artificial intelligence, and the theory of computation. Its contemporaries included Project Genie at Berkeley, the Stanford Artificial Intelligence Laboratory, and (somewhat later) University of Southern California's (USC's) Information Sciences Institute. An ""AI Group"" including Marvin Minsky (the director), John McCarthy (inventor of Lisp), and a talented community of computer programmers were incorporated into Project MAC. They were interested principally in the problems of vision, mechanical motion and manipulation, and language, which they view as the keys to more intelligent machines. In the 1960s and 1970s the AI Group developed a time-sharing operating system called Incompatible Timesharing System (ITS) which ran on PDP-6 and later PDP-10 computers. The early Project MAC community included Fano, Minsky, Licklider, Fernando J. Corbat��, and a community of computer programmers and enthusiasts among others who drew their inspiration from former colleague John McCarthy. These founders envisioned the creation of a computer utility whose computational power would be as reliable as an electric utility. To this end, Corbat�� brought the first computer time-sharing system, Compatible Time-Sharing System (CTSS), with him from the MIT Computation Center, using the DARPA funding to purchase an IBM 7094 for research use. One of the early focuses of Project MAC would be the development of a successor to CTSS, Multics, which was to be the first high availability computer system, developed as a part of an industry consortium including General Electric and Bell Laboratories. In 1966, Scientific American featured Project MAC in the September thematic issue devoted to computer science, that was later published in book form. At the time, the system was described as having approximately 100 TTY terminals, mostly on campus but with a few in private homes. Only 30 users could be logged in at the same time.",wikipedia
+MIT Artificial Intelligence Laboratory,https://en.wikipedia.org/wiki/MIT_Artificial_Intelligence_Laboratory#History,"History The project enlisted students in various classes to use the terminals simultaneously in problem solving, simulations, and multi-terminal communications as tests for the multi-access computing software being developed. In the late 1960s, Minsky's artificial intelligence group was seeking more space, and was unable to get satisfaction from project director Licklider. Minsky found that although Project MAC as a single entity could not get the additional space he wanted, he could split off to form his own laboratory and then be entitled to more office space. As a result, the MIT AI Lab was formed in 1970, and many of Minsky's AI colleagues left Project MAC to join him in the new laboratory, while most of the remaining members went on to form the Laboratory for Computer Science. Talented programmers such as Richard Stallman, who used TECO to develop EMACS, flourished in the AI Lab during this time. Those researchers who did not join the smaller AI Lab formed the Laboratory for Computer Science and continued their research into operating systems, programming languages, distributed systems, and the theory of computation. Two professors, Hal Abelson and Gerald Jay Sussman, chose to remain neutral ��� their group was referred to variously as Switzerland and Project MAC for the next 30 years. [citation needed] Among much else, the AI Lab led to the invention of Lisp machines and their attempted commercialization by two companies in the 1980s: Symbolics and Lisp Machines Inc. This divided the AI Lab into ""camps"" which resulted in a hiring away of many of the talented programmers. The incident inspired Richard Stallman's later work on the GNU Project. ""Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was. "" . . . ""That is the basis for the free software movement ��� the experience I had, the life that I've lived at the MIT AI lab ��� to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge"". On the fortieth anniversary of Project MAC's establishment, July 1, 2003, LCS was merged with the AI Lab to form the MIT Computer Science and Artificial Intelligence Laboratory, or CSAIL. This merger created the largest laboratory (over 600 personnel) on the MIT campus and was regarded as a reuniting of the diversified elements of Project MAC. [according to whom? ] In 2018, CSAIL launched a five-year collaboration program with IFlytek, a company sanctioned the following year for allegedly using its technology for surveillance and human rights abuses in Xinjiang. In October 2019, MIT announced that it would review its partnerships with sanctioned firms such as iFlyTek and SenseTime. In April 2020, the agreement with iFlyTek was terminated. CSAIL moved from the School of Engineering to the newly formed Schwarzman College of Computing by February 2020.",wikipedia
+MIT Artificial Intelligence Laboratory,https://en.wikipedia.org/wiki/MIT_Artificial_Intelligence_Laboratory#Notable_researchers,"Notable_researchers (Including members and alumni of CSAIL's predecessor laboratories) MacArthur Fellows Tim Berners-Lee, Erik Demaine, Dina Katabi, Daniela L. Rus, Regina Barzilay, Peter Shor, Richard Stallman, and Joshua Tenenbaum Turing Award recipients Leonard M. Adleman, Fernando J. Corbat��, Shafi Goldwasser, Butler W. Lampson, John McCarthy, Silvio Micali, Marvin Minsky, Ronald L. Rivest, Adi Shamir, Barbara Liskov, Michael Stonebraker, and Tim Berners-Lee IJCAI Computers and Thought Award recipients Terry Winograd, Patrick Winston, David Marr, Gerald Jay Sussman, Rodney Brooks Rolf Nevanlinna Prize recipients Madhu Sudan, Peter Shor, Constantinos Daskalakis G��del Prize recipients Shafi Goldwasser (two-time recipient), Silvio Micali, Maurice Herlihy, Charles Rackoff, Johan H��stad, Peter Shor, and Madhu Sudan Grace Murray Hopper Award recipients Robert Metcalfe, Shafi Goldwasser, Guy L. Steele, Jr., Richard Stallman, and W. Daniel Hillis Textbook authors Harold Abelson and Gerald Jay Sussman, Richard Stallman, Thomas H. Cormen, Charles E. Leiserson, Patrick Winston, Ronald L. Rivest, Barbara Liskov, John Guttag, Jerome H. Saltzer, Frans Kaashoek, Clifford Stein, and Nancy Lynch David D. Clark, former chief protocol architect for the Internet; co-author with Jerome H. Saltzer (also a CSAIL member) and David P. Reed of the influential paper ""End-to-End Arguments in Systems Design"" Eric Grimson, expert on computer vision and its applications to medicine, appointed Chancellor of MIT March 2011 Bob Frankston, co-developer of VisiCalc, the first computer spreadsheet Seymour Papert, inventor of the Logo programming language Joseph Weizenbaum, creator of the ELIZA computer-simulated therapistRobert Metcalfe, who later invented Ethernet at Xerox PARC and later founded 3Com Mark Raibert, who created the robot company Boston Dynamics Drew Houston, co-founder of Dropbox Colin Angle and Helen Greiner who, with previous CSAIL director Rodney Brooks, founded iRobot Jeremy Wertheimer, who developed ITA Software used by travel websites like Kayak and Orbitz Max Krohn, co-founder of OkCupid",wikipedia
+MIT Computer Science and Artificial Intelligence Laboratory,https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artificial_Intelligence_Laboratory#History,"History Computing Research at MIT began with Vannevar Bush's research into a differential analyzer and Claude Shannon's electronic Boolean algebra in the 1930s, the wartime MIT Radiation Laboratory, the post-war Project Whirlwind and Research Laboratory of Electronics (RLE), and MIT Lincoln Laboratory's SAGE in the early 1950s. At MIT, research in the field of artificial intelligence began in late 1950s. On July 1, 1963, Project MAC (the Project on Mathematics and Computation, later backronymed to Multiple Access Computer, Machine Aided Cognitions, or Man and Computer) was launched with a $2 million grant from the Defense Advanced Research Projects Agency (DARPA). Project MAC's original director was Robert Fano of MIT's Research Laboratory of Electronics (RLE). Fano decided to call MAC a ""project"" rather than a ""laboratory"" for reasons of internal MIT politics ��� if MAC had been called a laboratory, then it would have been more difficult to raid other MIT departments for research staff. The program manager responsible for the DARPA grant was J. C. R. Licklider, who had previously been at MIT conducting research in RLE, and would later succeed Fano as director of Project MAC. Project MAC would become famous for groundbreaking research in operating systems, artificial intelligence, and the theory of computation. Its contemporaries included Project Genie at Berkeley, the Stanford Artificial Intelligence Laboratory, and (somewhat later) University of Southern California's (USC's) Information Sciences Institute. An ""AI Group"" including Marvin Minsky (the director), John McCarthy (inventor of Lisp), and a talented community of computer programmers were incorporated into Project MAC. They were interested principally in the problems of vision, mechanical motion and manipulation, and language, which they view as the keys to more intelligent machines. In the 1960s and 1970s the AI Group developed a time-sharing operating system called Incompatible Timesharing System (ITS) which ran on PDP-6 and later PDP-10 computers. The early Project MAC community included Fano, Minsky, Licklider, Fernando J. Corbat��, and a community of computer programmers and enthusiasts among others who drew their inspiration from former colleague John McCarthy. These founders envisioned the creation of a computer utility whose computational power would be as reliable as an electric utility. To this end, Corbat�� brought the first computer time-sharing system, Compatible Time-Sharing System (CTSS), with him from the MIT Computation Center, using the DARPA funding to purchase an IBM 7094 for research use. One of the early focuses of Project MAC would be the development of a successor to CTSS, Multics, which was to be the first high availability computer system, developed as a part of an industry consortium including General Electric and Bell Laboratories. In 1966, Scientific American featured Project MAC in the September thematic issue devoted to computer science, that was later published in book form. At the time, the system was described as having approximately 100 TTY terminals, mostly on campus but with a few in private homes. Only 30 users could be logged in at the same time.",wikipedia
+MIT Computer Science and Artificial Intelligence Laboratory,https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artificial_Intelligence_Laboratory#History,"History The project enlisted students in various classes to use the terminals simultaneously in problem solving, simulations, and multi-terminal communications as tests for the multi-access computing software being developed. In the late 1960s, Minsky's artificial intelligence group was seeking more space, and was unable to get satisfaction from project director Licklider. Minsky found that although Project MAC as a single entity could not get the additional space he wanted, he could split off to form his own laboratory and then be entitled to more office space. As a result, the MIT AI Lab was formed in 1970, and many of Minsky's AI colleagues left Project MAC to join him in the new laboratory, while most of the remaining members went on to form the Laboratory for Computer Science. Talented programmers such as Richard Stallman, who used TECO to develop EMACS, flourished in the AI Lab during this time. Those researchers who did not join the smaller AI Lab formed the Laboratory for Computer Science and continued their research into operating systems, programming languages, distributed systems, and the theory of computation. Two professors, Hal Abelson and Gerald Jay Sussman, chose to remain neutral ��� their group was referred to variously as Switzerland and Project MAC for the next 30 years. [citation needed] Among much else, the AI Lab led to the invention of Lisp machines and their attempted commercialization by two companies in the 1980s: Symbolics and Lisp Machines Inc. This divided the AI Lab into ""camps"" which resulted in a hiring away of many of the talented programmers. The incident inspired Richard Stallman's later work on the GNU Project. ""Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was. "" . . . ""That is the basis for the free software movement ��� the experience I had, the life that I've lived at the MIT AI lab ��� to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge"". On the fortieth anniversary of Project MAC's establishment, July 1, 2003, LCS was merged with the AI Lab to form the MIT Computer Science and Artificial Intelligence Laboratory, or CSAIL. This merger created the largest laboratory (over 600 personnel) on the MIT campus and was regarded as a reuniting of the diversified elements of Project MAC. [according to whom? ] In 2018, CSAIL launched a five-year collaboration program with IFlytek, a company sanctioned the following year for allegedly using its technology for surveillance and human rights abuses in Xinjiang. In October 2019, MIT announced that it would review its partnerships with sanctioned firms such as iFlyTek and SenseTime. In April 2020, the agreement with iFlyTek was terminated. CSAIL moved from the School of Engineering to the newly formed Schwarzman College of Computing by February 2020.",wikipedia
+MIT Computer Science and Artificial Intelligence Laboratory,https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artificial_Intelligence_Laboratory#Notable_researchers,"Notable_researchers (Including members and alumni of CSAIL's predecessor laboratories) MacArthur Fellows Tim Berners-Lee, Erik Demaine, Dina Katabi, Daniela L. Rus, Regina Barzilay, Peter Shor, Richard Stallman, and Joshua Tenenbaum Turing Award recipients Leonard M. Adleman, Fernando J. Corbat��, Shafi Goldwasser, Butler W. Lampson, John McCarthy, Silvio Micali, Marvin Minsky, Ronald L. Rivest, Adi Shamir, Barbara Liskov, Michael Stonebraker, and Tim Berners-Lee IJCAI Computers and Thought Award recipients Terry Winograd, Patrick Winston, David Marr, Gerald Jay Sussman, Rodney Brooks Rolf Nevanlinna Prize recipients Madhu Sudan, Peter Shor, Constantinos Daskalakis G��del Prize recipients Shafi Goldwasser (two-time recipient), Silvio Micali, Maurice Herlihy, Charles Rackoff, Johan H��stad, Peter Shor, and Madhu Sudan Grace Murray Hopper Award recipients Robert Metcalfe, Shafi Goldwasser, Guy L. Steele, Jr., Richard Stallman, and W. Daniel Hillis Textbook authors Harold Abelson and Gerald Jay Sussman, Richard Stallman, Thomas H. Cormen, Charles E. Leiserson, Patrick Winston, Ronald L. Rivest, Barbara Liskov, John Guttag, Jerome H. Saltzer, Frans Kaashoek, Clifford Stein, and Nancy Lynch David D. Clark, former chief protocol architect for the Internet; co-author with Jerome H. Saltzer (also a CSAIL member) and David P. Reed of the influential paper ""End-to-End Arguments in Systems Design"" Eric Grimson, expert on computer vision and its applications to medicine, appointed Chancellor of MIT March 2011 Bob Frankston, co-developer of VisiCalc, the first computer spreadsheet Seymour Papert, inventor of the Logo programming language Joseph Weizenbaum, creator of the ELIZA computer-simulated therapistRobert Metcalfe, who later invented Ethernet at Xerox PARC and later founded 3Com Mark Raibert, who created the robot company Boston Dynamics Drew Houston, co-founder of Dropbox Colin Angle and Helen Greiner who, with previous CSAIL director Rodney Brooks, founded iRobot Jeremy Wertheimer, who developed ITA Software used by travel websites like Kayak and Orbitz Max Krohn, co-founder of OkCupid",wikipedia
+Beijing Academy of Artificial Intelligence,https://en.wikipedia.org/wiki/Beijing_Academy_of_Artificial_Intelligence#Products_and_applications,"Products_and_applications As of 2023[update], BAAI's research focuses on large pre-trained models (LLMs) and open-source AI infrastructure. WuDao (Chinese: ������; pinyin: w��d��o) is a large multimodal pre-trained language model. WuDao 2.0, was announced on 31 May 2022 and has been compared to GPT-3 at the time. But, in comparison, GPT-3 has 175 billion parameters, while WuDao has 1.75 trillion parameters; making it the largest pre-trained model in the world at the time. WuDao was trained on 4.9 terabytes of images and texts (which included 1.2 terabytes of Chinese text and 1.2 terabytes of English text). The chairman of BAAI said that WuDao was an attempt to ""create the biggest, most powerful AI model possible""; although direct comparisons between models based on parameter count (i.e. between Wu Dao and GPT-3) do not directly correlate to quality. WuDao has demonstrated ability to perform natural language processing and image recognition, in addition to generation of text and images. The model can not only write essays, poems and couplets in traditional Chinese, it can both generate text based on static images and generate nearly photorealistic images based on natural language descriptions. It has also showed ability to power virtual chat agents and predict the 3D structures of proteins like AlphaFold. FlagAI is an open-source extensible toolkit for large-scale model training and inference. Its goal is to support training, fine-tuning, and deployment of large-scale models on various downstream tasks with multi-modality. Moreover, its open repository includes not only all source-code, but several pre-trained large models. FlagAI is an approved incubation project at the sandbox level of the Linux Foundation. Jiuding is an AI-computing platform which focuses on supporting AI innovation. As of September 2022 it provides 1000P computation capacity with 400Gbit/s high-speed interconnection per server, and support AI chipsets of different architectures. BAAI's platform also includes code compilers for the different AI architectures. MetaWorm is a computational model of the Caenorhabditis elegans (C. elegans) nematode simulating the worm's nervous system along with a ""digital body"" simulation in real-time. MetaWorm 1.0 exhibits behaviours that parallel C. elegans in the real world.",wikipedia
+Statistics,https://en.wikipedia.org/wiki/Statistics,"Statistics Statistics (from German: Statistik, orig. ""description of a state, a country"") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as ""all people living in a country"" or ""every atom composing a crystal"". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments. When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e. g. , observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences on mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the collection of data leading to a test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is falsely rejected giving a ""false positive"") and Type II errors (null hypothesis fails to be rejected and an actual relationship between populations is missed giving a ""false negative""). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Statistical measurement processes are also prone to error in regards to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e. g.",wikipedia
+Statistics,https://en.wikipedia.org/wiki/Statistics,"Statistics , blunder, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.",wikipedia
+Statistics,https://en.wikipedia.org/wiki/Statistics#Statistical_data,"Statistical_data When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models. To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. There are also methods of experimental design that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population. Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction���inductively inferring from samples to the parameters of a larger or total population. A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements with different levels using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data���like natural experiments and observational studies���for which a statistician would use a modified, more structured estimation method (e. g. , Difference in differences estimation and instrumental variables, among many others) that produce consistent estimators. The basic steps of a statistical experiment are: Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary.",wikipedia
+Statistics,https://en.wikipedia.org/wiki/Statistics#Statistical_data,"Statistical_data Statisticians recommend that experiments compare (at least) one new treatment with a standard treatment or control, to allow an unbiased estimate of the difference in treatment effects. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the performance of the experiment and which specifies the primary analysis of the experimental data. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol. Further examining the data set in secondary analyses, to suggest new hypotheses for future study. Documenting and presenting the results of the study. Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed. An example of an observational study is one that explores the association between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a cohort study, and then look for the number of cases of lung cancer in each group. A case-control study is another type of observational study in which people with and without the outcome of interest (e. g. lung cancer) are invited to participate and their exposure histories are collected. Various attempts have been made to produce a taxonomy of levels of measurement. The psychophysicist Stanley Smith Stevens defined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation.",wikipedia
+Statistics,https://en.wikipedia.org/wiki/Statistics#Statistical_data,"Statistical_data Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature. Such distinctions can often be loosely correlated with data type in computer science, in that dichotomous categorical variables may be represented with the Boolean data type, polytomous categorical variables with arbitrarily assigned integers in the integral data type, and continuous variables with the real data type involving floating-point arithmetic. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented. Other categorizations have been proposed. For example, Mosteller and Tukey (1977) distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. (See also: Chrisman (1998), van den Berg (1991). ) The issue of whether or not it is appropriate to apply different kinds of statistical methods to data obtained from different kinds of measurement procedures is complicated by issues concerning the transformation of variables and the precise interpretation of research questions. ""The relationship between the data and what they describe merely reflects the fact that certain kinds of statistical statements may have truth values which are not invariant under some transformations. Whether or not a transformation is sensible to contemplate depends on the question one is trying to answer.",wikipedia
+Adversarial machine learning,https://en.wikipedia.org/wiki/Adversarial_machine_learning#Attack_modalities,"Attack_modalities Attacks against (supervised) machine learning algorithms have been categorized along three primary axes: influence on the classifier, the security violation and their specificity. Classifier influence: An attack can influence the classifier by disrupting the classification phase. This may be preceded by an exploration phase to identify vulnerabilities. The attacker's capabilities might be restricted by the presence of data manipulation constraints. Security violation: An attack can supply malicious data that gets classified as legitimate. Malicious data supplied during training can cause legitimate data to be rejected after training. Specificity: A targeted attack attempts to allow a specific intrusion/disruption. Alternatively, an indiscriminate attack creates general mayhem. This taxonomy has been extended into a more comprehensive threat model that allows explicit assumptions about the adversary's goal, knowledge of the attacked system, capability of manipulating the input data/system components, and on attack strategy. This taxonomy has further been extended to include dimensions for defense strategies against adversarial attacks. Below are some of the most commonly encountered attack scenarios. Poisoning consists of contaminating the training dataset with data designed to increase errors in the output. Given that learning algorithms are shaped by their training datasets, poisoning can effectively reprogram algorithms with potentially malicious intent. Concerns have been raised especially for user-generated training data, e. g. for content recommendation or natural language models. The ubiquity of fake accounts offers many opportunities for poisoning. Facebook reportedly removes around 7 billion fake accounts per year. Poisoning has been reported as the leading concern for industrial applications. On social medias, disinformation campaigns attempt to bias recommendation and moderation algorithms, to push certain content over others. A particular case of data poisoning is the backdoor attack, which aims to teach a specific behavior for inputs with a given trigger, e. g. a small defect on images, sounds, videos or texts. For instance, intrusion detection systems are often trained using collected data. An attacker may poison this data by injecting malicious samples during operation that subsequently disrupt retraining. As machine learning is scaled, it often relies on multiple computing machines. In federated learning, for instance, edge devices collaborate with a central server, typically by sending gradients or model parameters. However, some of these devices may deviate from their expected behavior, e. g. to harm the central server's model or to bias algorithms towards certain behaviors (e. g. , amplifying the recommendation of disinformation content). On the other hand, if the training is performed on a single machine, then the model is very vulnerable to a failure of the machine, or an attack on the machine; the machine is a single point of failure. In fact, the machine owner may themselves insert provably undetectable backdoors. The current leading solutions to make (distributed) learning algorithms provably resilient to a minority of malicious (a. k. a. Byzantine) participants are based on robust gradient aggregation rules. The robust aggregation rules do not always work especially when the data across participants has a non-iid distribution.",wikipedia
+Adversarial machine learning,https://en.wikipedia.org/wiki/Adversarial_machine_learning#Attack_modalities,"Attack_modalities Nevertheless, in the context of heterogeneous honest participants, such as users with different consumption habits for recommendation algorithms or writing styles for language models, there are provable impossibility theorems on what any robust learning algorithm can guarantee. Evasion attacks consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Another example of evasion is given by spoofing attacks against biometric verification systems. Evasion attacks can be generally split into two different categories: black box attacks and white box attacks. Model extraction involves an adversary probing a black box machine learning system in order to extract the data it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model extraction could be used to extract a proprietary stock trading model which the adversary could then use for their own financial benefit. In the extreme case, model extraction can lead to model stealing, which corresponds to extracting a sufficient amount of data from the model to enable the complete reconstruction of the model. On the other hand, membership inference is a targeted model extraction attack, which infers the owner of a data point, often by leveraging the overfitting resulting from poor machine learning practices. Concerningly, this is sometimes achievable even without knowledge or access to a target model's parameters, raising security concerns for models trained on sensitive data, including but not limited to medical records and/or personally identifiable information. With the emergence of transfer learning and public accessibility of many state of the art machine learning models, tech companies are increasingly drawn to create models based on public ones, giving attackers freely accessible information to the structure and type of model being used.",wikipedia
+Neural style transfer,https://en.wikipedia.org/wiki/Neural_style_transfer#Extensions,"Extensions NST has also been extended to videos. Subsequent work improved the speed of NST for images.[clarification needed] In a paper by Fei-Fei Li et al. adopted a different regularized loss metric and accelerated method for training to produce results in real-time (three orders of magnitude faster than Gatys). Their idea was to use not the pixel-based loss defined above but rather a 'perceptual loss' measuring the differences between higher-level layers within the CNN. They used a symmetric encoder-decoder CNN. Training uses a similar loss function to the basic NST method but also regularizes the output for smoothness using a total variation (TV) loss. Once trained, the network may be used to transform an image into the style used during training, using a single feed-forward pass of the network. However the network is restricted to the single style in which it has been trained. In a work by Chen Dongdong et al. they explored the fusion of optical flow information into feedforward networks in order to improve the temporal coherence of the output. Most recently, feature transform based NST methods have been explored for fast stylization that are not coupled to single specific style and enable user-controllable blending of styles, for example the whitening and coloring transform (WCT).",wikipedia
+Autoencoder,https://en.wikipedia.org/wiki/Autoencoder#Applications,"Applications The two main applications of autoencoders are dimensionality reduction and information retrieval, but modern variations have been applied to other tasks. Dimensionality reduction was one of the first deep learning applications. For Hinton's 2006 study, he pretrained a multi-layer autoencoder with a stack of RBMs and then used their weights to initialize a deep autoencoder with gradually smaller hidden layers until hitting a bottleneck of 30 neurons. The resulting 30 dimensions of the code yielded a smaller reconstruction error compared to the first 30 components of a principal component analysis (PCA), and learned a representation that was qualitatively easier to interpret, clearly separating data clusters. Representing dimensions can improve performance on tasks such as classification. Indeed, the hallmark of dimensionality reduction is to place semantically related examples near each other. If linear activations are used, or only a single sigmoid hidden layer, then the optimal solution to an autoencoder is strongly related to principal component analysis (PCA). The weights of an autoencoder with a single hidden layer of size p {\displaystyle p} (where p {\displaystyle p} is less than the size of the input) span the same vector subspace as the one spanned by the first p {\displaystyle p} principal components, and the output of the autoencoder is an orthogonal projection onto this subspace. The autoencoder weights are not equal to the principal components, and are generally not orthogonal, yet the principal components may be recovered from them using the singular value decomposition. However, the potential of autoencoders resides in their non-linearity, allowing the model to learn more powerful generalizations compared to PCA, and to reconstruct the input with significantly lower information loss. Information retrieval benefits particularly from dimensionality reduction in that search can become more efficient in certain kinds of low dimensional spaces. Autoencoders were indeed applied to semantic hashing, proposed by Salakhutdinov and Hinton in 2007. By training the algorithm to produce a low-dimensional binary code, all database entries could be stored in a hash table mapping binary code vectors to entries. This table would then support information retrieval by returning all entries with the same binary code as the query, or slightly less similar entries by flipping some bits from the query encoding. The encoder-decoder architecture, often used in natural language processing and neural networks, can be scientifically applied in the field of SEO (Search Engine Optimization) in various ways: Text Processing: By using an autoencoder, it's possible to compress the text of web pages into a more compact vector representation. This can help reduce page loading times and improve indexing by search engines. Noise Reduction: Autoencoders can be used to remove noise from the textual data of web pages. This can lead to a better understanding of the content by search engines, thereby enhancing ranking in search engine result pages. Meta Tag and Snippet Generation: Autoencoders can be trained to automatically generate meta tags, snippets, and descriptions for web pages using the page content.",wikipedia
+Autoencoder,https://en.wikipedia.org/wiki/Autoencoder#Applications,"Applications This can optimize the presentation in search results, increasing the Click-Through Rate (CTR). Content Clustering: Using an autoencoder, web pages with similar content can be automatically grouped together. This can help organize the website logically and improve navigation, potentially positively affecting user experience and search engine rankings. Generation of Related Content: An autoencoder can be employed to generate content related to what is already present on the site. This can enhance the website's attractiveness to search engines and provide users with additional relevant information. Keyword Detection: Autoencoders can be trained to identify keywords and important concepts within the content of web pages. This can assist in optimizing keyword usage for better indexing. Semantic Search: By using autoencoder techniques, semantic representation models of content can be created. These models can be used to enhance search engines' understanding of the themes covered in web pages. In essence, the encoder-decoder architecture or autoencoders can be leveraged in SEO to optimize web page content, improve their indexing, and enhance their appeal to both search engines and users. Another application for autoencoders is anomaly detection. By learning to replicate the most salient features in the training data under some of the constraints described previously, the model is encouraged to learn to precisely reproduce the most frequently observed characteristics. When facing anomalies, the model should worsen its reconstruction performance. In most cases, only data with normal instances are used to train the autoencoder; in others, the frequency of anomalies is small compared to the observation set so that its contribution to the learned representation could be ignored. After training, the autoencoder will accurately reconstruct ""normal"" data, while failing to do so with unfamiliar anomalous data. Reconstruction error (the error between the original data and its low dimensional reconstruction) is used as an anomaly score to detect anomalies. Recent literature has however shown that certain autoencoding models can, counterintuitively, be very good at reconstructing anomalous examples and consequently not able to reliably perform anomaly detection. The characteristics of autoencoders are useful in image processing. One example can be found in lossy image compression, where autoencoders outperformed other approaches and proved competitive against JPEG 2000. Another useful application of autoencoders in image preprocessing is image denoising. Autoencoders found use in more demanding contexts such as medical imaging where they have been used for image denoising as well as super-resolution. In image-assisted diagnosis, experiments have applied autoencoders for breast cancer detection and for modelling the relation between the cognitive decline of Alzheimer's disease and the latent features of an autoencoder trained with MRI. In 2019 molecules generated with variational autoencoders were validated experimentally in mice. Recently, a stacked autoencoder framework produced promising results in predicting popularity of social media posts, which is helpful for online advertising strategies. Autoencoders have been applied to machine translation, which is usually referred to as neural machine translation (NMT). Unlike traditional autoencoders, the output does not match the input - it is in another language.",wikipedia
+Autoencoder,https://en.wikipedia.org/wiki/Autoencoder#Applications,"Applications In NMT, texts are treated as sequences to be encoded into the learning procedure, while on the decoder side sequences in the target language(s) are generated. Language-specific autoencoders incorporate further linguistic features into the learning procedure, such as Chinese decomposition features. Machine translation is rarely still done with autoencoders, due to the availability of more effective transformer networks.",wikipedia
+Linear discriminant analysis,https://en.wikipedia.org/wiki/Linear_discriminant_analysis#Applications,"Applications In addition to the examples given below, LDA is applied in positioning and product management. In bankruptcy prediction based on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy vs. survived. Despite limitations including known nonconformance of accounting ratios to the normal distribution assumptions of LDA, Edward Altman's 1968 model is still a leading model in practical applications. In computerised face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces. In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps: Formulate the problem and gather data���Identify the salient attributes consumers use to evaluate products in this category���Use quantitative marketing research techniques (such as surveys) to collect data from a sample of potential customers concerning their ratings of all the product attributes. The data collection stage is usually done by marketing research professionals. Survey questions ask the respondent to rate a product from one to five (or 1 to 7, or 1 to 10) on a range of attributes chosen by the researcher. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products is codified and input into a statistical program such as R, SPSS or SAS. (This step is the same as in Factor analysis). Estimate the Discriminant Function Coefficients and determine the statistical significance and validity���Choose the appropriate discriminant analysis method. The direct method involves estimating the discriminant function so that all the predictors are assessed simultaneously. The stepwise method enters the predictors sequentially. The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states. Use Wilks's Lambda to test for significance in SPSS or F stat in SAS. The most common method used to test validity is to split the sample into an estimation or analysis sample, and a validation or holdout sample. The estimation sample is used in constructing the discriminant function. The validation sample is used to construct a classification matrix which contains the number of correctly classified and incorrectly classified cases.",wikipedia
+Linear discriminant analysis,https://en.wikipedia.org/wiki/Linear_discriminant_analysis#Applications,"Applications The percentage of correctly classified cases is called the hit ratio. Plot the results on a two dimensional map, define the dimensions, and interpret the results. The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space). The distance of products to each other indicate either how different they are. The dimensions must be labelled by the researcher. This requires subjective judgement and is often very challenging. See perceptual mapping. The main application of discriminant analysis in medicine is the assessment of severity state of a patient and prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease ��� mild, moderate and severe form. Then results of clinical and laboratory analyses are studied in order to reveal variables which are statistically different in studied groups. Using these variables, discriminant functions are built which help to objectively classify disease in a future patient into mild, moderate or severe form. In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra, to detect animal source of Escherichia coli studying its virulence factors etc. This method can be used to separate the alteration zones[clarification needed]. For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify it effectively.",wikipedia
+Nearest neighbor search,https://en.wikipedia.org/wiki/Nearest_neighbor_search#Applications,"Applications The nearest neighbour search problem arises in numerous fields of application, including: Pattern recognition ��� in particular for optical character recognition Statistical classification ��� see k-nearest neighbor algorithm Computer vision ��� for point cloud registration Computational geometry ��� see Closest pair of points problem Cryptanalysis ��� for lattice problem Databases ��� e.g. content-based image retrieval Coding theory ��� see maximum likelihood decoding Semantic Search Data compression ��� see MPEG-2 standard Robotic sensing Recommendation systems, e.g. see Collaborative filtering Internet marketing ��� see contextual advertising and behavioral targeting DNA sequencing Spell checking ��� suggesting correct spelling Plagiarism detection Similarity scores for predicting career paths of professional athletes. Cluster analysis ��� assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense, usually based on Euclidean distance Chemical similarity Sampling-based motion planning",wikipedia
+One-class classification,https://en.wikipedia.org/wiki/One-class_classification,"One-class classification In machine learning, one-class classification (OCC), also known as unary classification or class-modelling, tries to identify objects of a specific class amongst all objects, by primarily learning from a training set containing only the objects of that class, although there exist variants of one-class classifiers where counter-examples are used to further refine the classification boundary. This is different from and more difficult than the traditional classification problem, which tries to distinguish between two or more classes with the training set containing objects from all the classes. Examples include the monitoring of helicopter gearboxes, motor failure prediction, or the operational status of a nuclear plant as 'normal': In this scenario, there are few, if any, examples of catastrophic system states; only the statistics of normal operation are known. While many of the above approaches focus on the case of removing a small number of outliers or anomalies, one can also learn the other extreme, where the single class covers a small coherent subset of the data, using an information bottleneck approach.",wikipedia
+Mixture model,https://en.wikipedia.org/wiki/Mixture_model#Examples,"Examples Financial returns often behave differently in normal situations and during crisis times. A mixture model for return data seems reasonable. Sometimes the model used is a jump-diffusion model, or as a mixture of two normal distributions. See Financial economics �� Challenges and criticism and Financial risk management �� Banking for further context. Assume that we observe the prices of N different houses. Different types of houses in different neighborhoods will have vastly different prices, but the price of a particular type of house in a particular neighborhood (e. g. , three-bedroom house in moderately upscale neighborhood) will tend to cluster fairly closely around the mean. One possible model of such prices would be to assume that the prices are accurately described by a mixture model with K different components, each distributed as a normal distribution with unknown mean and variance, with each component specifying a particular combination of house type/neighborhood. Fitting this model to observed prices, e. g. , using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution. ) Assume that a document is composed of N different words from a total vocabulary of size V, where each word corresponds to one of K possible topics. The distribution of such words could be modelled as a mixture of K different V-dimensional categorical distributions. A model of this sort is commonly termed a topic model. Note that expectation maximization applied to such a model will typically fail to produce realistic results, due (among other things) to the excessive number of parameters. Some sorts of additional assumptions are typically necessary to get good results. Typically two sorts of additional components are added to the model: A prior distribution is placed over the parameters describing the topic distributions, using a Dirichlet distribution with a concentration parameter that is set significantly below 1, so as to encourage sparse distributions (where only a small number of words have significantly non-zero probabilities). Some sort of additional constraint is placed over the topic identities of words, to take advantage of natural clustering. For example, a Markov chain could be placed on the topic identities (i. e. , the latent variables specifying the mixture component of each observation), corresponding to the fact that nearby words belong to similar topics. (This results in a hidden Markov model, specifically one where a prior distribution is placed over state transitions that favors transitions that stay in the same state. ) Another possibility is the latent Dirichlet allocation model, which divides up the words into D different documents and assumes that in each document only a small number of topics occur with any frequency. The following example is based on an example in Christopher M. Bishop, Pattern Recognition and Machine Learning.",wikipedia
+Mixture model,https://en.wikipedia.org/wiki/Mixture_model#Examples,"Examples Imagine that we are given an N��N black-and-white image that is known to be a scan of a hand-written digit between 0 and 9, but we don't know which digit is written. We can create a mixture model with K = 10 {\displaystyle K=10} different components, where each component is a vector of size N 2 {\displaystyle N^{2}} of Bernoulli distributions (one per pixel). Such a model can be trained with the expectation-maximization algorithm on an unlabeled set of hand-written digits, and will effectively cluster the images according to the digit being written. The same model could then be used to recognize the digit of another image simply by holding the parameters constant, computing the probability of the new image for each possible digit (a trivial calculation), and returning the digit that generated the highest probability. Mixture models apply in the problem of directing multiple projectiles at a target (as in air, land, or sea defense applications), where the physical and/or statistical characteristics of the projectiles differ within the multiple projectiles. An example might be shots from multiple munitions types or shots from multiple locations directed at one target. The combination of projectile types may be characterized as a Gaussian mixture model. Further, a well-known measure of accuracy for a group of projectiles is the circular error probable (CEP), which is the number R such that, on average, half of the group of projectiles falls within the circle of radius R about the target point. The mixture model can be used to determine (or estimate) the value R. The mixture model properly captures the different types of projectiles. The financial example above is one direct application of the mixture model, a situation in which we assume an underlying mechanism so that each observation belongs to one of some number of different sources or categories. This underlying mechanism may or may not, however, be observable. In this form of mixture, each of the sources is described by a component probability density function, and its mixture weight is the probability that an observation comes from this component. In an indirect application of the mixture model we do not assume such a mechanism. The mixture model is simply used for its mathematical flexibilities. For example, a mixture of two normal distributions with different means may result in a density with two modes, which is not modeled by standard parametric distributions. Another example is given by the possibility of mixture distributions to model fatter tails than the basic Gaussian ones, so as to be a candidate for modeling more extreme events. When combined with dynamical consistency, this approach has been applied to financial derivatives valuation in presence of the volatility smile in the context of local volatility models. This defines our application. The mixture model-based clustering is also predominantly used in identifying the state of the machine in predictive maintenance. Density plots are used to analyze the density of high dimensional features.",wikipedia
+Mixture model,https://en.wikipedia.org/wiki/Mixture_model#Examples,"Examples If multi-model densities are observed, then it is assumed that a finite set of densities are formed by a finite set of normal mixtures. A multivariate Gaussian mixture model is used to cluster the feature data into k number of groups where k represents each state of the machine. The machine state can be a normal state, power off state, or faulty state. Each formed cluster can be diagnosed using techniques such as spectral analysis. In the recent years, this has also been widely used in other areas such as early fault detection. In image processing and computer vision, traditional image segmentation models often assign to one pixel only one exclusive pattern. In fuzzy or soft segmentation, any pattern can have certain ""ownership"" over any single pixel. If the patterns are Gaussian, fuzzy segmentation naturally results in Gaussian mixtures. Combined with other analytic or geometric tools (e. g. , phase transitions over diffusive boundaries), such spatially regularized mixture models could lead to more realistic and computationally efficient segmentation methods. Probabilistic mixture models such as Gaussian mixture models (GMM) are used to resolve point set registration problems in image processing and computer vision fields. For pair-wise point set registration, one point set is regarded as the centroids of mixture models, and the other point set is regarded as data points (observations). State-of-the-art methods are e. g. coherent point drift (CPD) and Student's t-distribution mixture models (TMM). The result of recent research demonstrate the superiority of hybrid mixture models (e. g. combining Student's t-Distritubtion and Watson distribution/Bingham distribution to model spatial positions and axes orientations separately) compare to CPD and TMM, in terms of inherent robustness, accuracy and discriminative capacity.",wikipedia
+Data mining,https://en.wikipedia.org/wiki/Data_mining,"Data mining Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the ""knowledge discovery in databases"" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. The term ""data mining"" is a misnomer because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence (e. g. , machine learning) and business intelligence. The book Data Mining: Practical Machine Learning Tools and Techniques with Java (which covers mostly machine learning material) was originally to be named Practical Machine Learning, and the term data mining was only added for marketing reasons. Often the more general terms (large scale) data analysis and analytics���or, when referring to actual methods, artificial intelligence and machine learning���are more appropriate. The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps. The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e. g. , analyzing the effectiveness of a marketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data. The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered.",wikipedia
+Data mining,https://en.wikipedia.org/wiki/Data_mining,"Data mining These methods can, however, be used in creating new hypotheses to test against the larger data populations.",wikipedia
+Data mining,https://en.wikipedia.org/wiki/Data_mining#Background,"Background The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct ""hands-on"" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, specially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns. in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets.",wikipedia
+Data mining,https://en.wikipedia.org/wiki/Data_mining#Privacy_concerns_and_ethics,"Privacy_concerns_and_ethics While the term ""data mining"" itself may have no ethical implications, it is often associated with the mining of information in relation to user behavior (ethical and otherwise). The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns. Data mining requires data preparation which uncovers information or patterns which compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent). This is not data mining per se, but a result of the preparation of data before���and for the purposes of���the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous. It is recommended[according to whom? ] to be aware of the following before data are collected: The purpose of the data collection and any (known) data mining projects. How the data will be used. Who will be able to mine the data and use the data and their derivatives. The status of security surrounding access to the data. How collected data can be updated. Data may also be modified so as to become anonymous, so that individuals may not readily be identified. However, even ""anonymized"" data sets can potentially contain enough information to allow identification of individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL. The inadvertent revelation of personally identifiable information leading to the provider violates Fair Information Practices. This indiscretion can cause financial, emotional, or bodily harm to the indicated individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling prescription information to data mining companies who in turn provided the data to pharmaceutical companies. Europe has rather strong privacy laws, and efforts are underway to further strengthen the rights of the consumers. However, the U. S. ���E. U. Safe Harbor Principles, developed between 1998 and 2000, currently effectively expose European users to privacy exploitation by U. S. companies. As a consequence of Edward Snowden's global surveillance disclosure, there has been increased discussion to revoke this agreement, as in particular the data will be fully exposed to the National Security Agency, and attempts to reach an agreement with the United States have failed.",wikipedia
+Data mining,https://en.wikipedia.org/wiki/Data_mining#Privacy_concerns_and_ethics,"Privacy_concerns_and_ethics In the United Kingdom in particular there have been cases of corporations using data mining as a way to target certain groups of customers forcing them to pay unfairly high prices. These groups tend to be people of lower socio-economic status who are not savvy to the ways they can be exploited in digital market places. In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). The HIPAA requires individuals to give their ""informed consent"" regarding information they provide and its intended present and future uses. According to an article in Biotech Business Week, ""'[i]n practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena,' says the AAHC. More importantly, the rule's goal of protection through informed consent is approach a level of incomprehensibility to average individuals. "" This underscores the necessity for data anonymity in data aggregation and mining practices. U. S. information privacy legislation such as HIPAA and the Family Educational Rights and Privacy Act (FERPA) applies only to the specific areas that each such law addresses. The use of data mining by the majority of businesses in the U. S. is not controlled by any legislation.",wikipedia
+Nonlinear dimensionality reduction,https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Other_algorithms,"Other_algorithms Relational perspective map is a multidimensional scaling algorithm. The algorithm finds a configuration of data points on a manifold by simulating a multi-particle dynamic system on a closed manifold, where data points are mapped to particles and distances (or dissimilarity) between data points represent a repulsive force. As the manifold gradually grows in size the multi-particle system cools down gradually and converges to a configuration that reflects the distance information of the data points. Relational perspective map was inspired by a physical model in which positively charged particles move freely on the surface of a ball. Guided by the Coulomb force between particles, the minimal energy configuration of the particles will reflect the strength of repulsive forces between the particles. The Relational perspective map was introduced in. The algorithm firstly used the flat torus as the image manifold, then it has been extended (in the software VisuMap to use other types of closed manifolds, like the sphere, projective space, and Klein bottle, as image manifolds. Contagion maps use multiple contagions on a network to map the nodes as a point cloud. In the case of the Global cascades model the speed of the spread can be adjusted with the threshold parameter t ��� [ 0 , 1 ] {\displaystyle t\in [0,1]} . For t = 0 {\displaystyle t=0} the contagion map is equivalent to the Isomap algorithm. Curvilinear component analysis (CCA) looks for the configuration of points in the output space that preserves original distances as much as possible while focusing on small distances in the output space (conversely to Sammon's mapping which focus on small distances in original space). It should be noticed that CCA, as an iterative learning algorithm, actually starts with focus on large distances (like the Sammon algorithm), then gradually change focus to small distances. The small distance information will overwrite the large distance information, if compromises between the two have to be made. The stress function of CCA is related to a sum of right Bregman divergences. CDA trains a self-organizing neural network to fit the manifold and seeks to preserve geodesic distances in its embedding. It is based on Curvilinear Component Analysis (which extended Sammon's mapping), but uses geodesic distances instead. Diffeomorphic Dimensionality Reduction or Diffeomap learns a smooth diffeomorphic mapping which transports the data onto a lower-dimensional linear subspace. The methods solves for a smooth time indexed vector field such that flows along the field which start at the data points will end at a lower-dimensional linear subspace, thereby attempting to preserve pairwise differences under both the forward and inverse mapping. Manifold alignment takes advantage of the assumption that disparate data sets produced by similar generating processes will share a similar underlying manifold representation. By learning projections from each original space to the shared manifold, correspondences are recovered and knowledge from one domain can be transferred to another. Most manifold alignment techniques consider only two data sets, but the concept extends to arbitrarily many initial data sets.",wikipedia
+Nonlinear dimensionality reduction,https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Other_algorithms,"Other_algorithms Diffusion maps leverages the relationship between heat diffusion and a random walk (Markov Chain); an analogy is drawn between the diffusion operator on a manifold and a Markov transition matrix operating on functions defined on the graph whose nodes were sampled from the manifold. In particular, let a data set be represented by X = [ x 1 , x 2 , ��� , x n ] ��� �� ��� R D {\displaystyle \mathbf {X} =[x_{1},x_{2},\ldots ,x_{n}]\in \Omega \subset \mathbf {R^{D}} } . The underlying assumption of diffusion map is that the high-dimensional data lies on a low-dimensional manifold of dimension d {\displaystyle \mathbf {d} } . Let X represent the data set and �� {\displaystyle \mu } represent the distribution of the data points on X. Further, define a kernel which represents some notion of affinity of the points in X. The kernel k {\displaystyle {\mathit {k}}} has the following properties k is symmetric k is positivity preserving Thus one can think of the individual data points as the nodes of a graph and the kernel k as defining some sort of affinity on that graph. The graph is symmetric by construction since the kernel is symmetric. It is easy to see here that from the tuple (X,k) one can construct a reversible Markov Chain. This technique is common to a variety of fields and is known as the graph Laplacian. For example, the graph K = (X,E) can be constructed using a Gaussian kernel. In the above equation, x i ��� x j {\displaystyle x_{i}\sim x_{j}} denotes that x i {\displaystyle x_{i}} is a nearest neighbor of x j {\displaystyle x_{j}} . Properly, Geodesic distance should be used to actually measure distances on the manifold. Since the exact structure of the manifold is not available, for the nearest neighbors the geodesic distance is approximated by euclidean distance. The choice �� {\displaystyle \sigma } modulates our notion of proximity in the sense that if ��� x i ��� x j ��� 2 ��� �� {\displaystyle \|x_{i}-x_{j}\|_{2}\gg \sigma } then K i j = 0 {\displaystyle K_{ij}=0} and if ��� x i ��� x j ��� 2 ��� �� {\displaystyle \|x_{i}-x_{j}\|_{2}\ll \sigma } then K i j = 1 {\displaystyle K_{ij}=1} . The former means that very little diffusion has taken place while the latter implies that the diffusion process is nearly complete. Different strategies to choose �� {\displaystyle \sigma } can be found in. In order to faithfully represent a Markov matrix, K {\displaystyle K} must be normalized by the corresponding degree matrix D {\displaystyle D} : P {\displaystyle P} now represents a Markov chain. P ( x i , x j ) {\displaystyle P(x_{i},x_{j})} is the probability of transitioning from x i {\displaystyle x_{i}} to x j {\displaystyle x_{j}} in one time step. Similarly the probability of transitioning from x i {\displaystyle x_{i}} to x j {\displaystyle x_{j}} in t time steps is given by P t ( x i , x j ) {\displaystyle P^{t}(x_{i},x_{j})} .",wikipedia
+Nonlinear dimensionality reduction,https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Other_algorithms,"Other_algorithms Here P t {\displaystyle P^{t}} is the matrix P {\displaystyle P} multiplied by itself t times. The Markov matrix P {\displaystyle P} constitutes some notion of local geometry of the data set X. The major difference between diffusion maps and principal component analysis is that only local features of the data are considered in diffusion maps as opposed to taking correlations of the entire data set. K {\displaystyle K} defines a random walk on the data set which means that the kernel captures some local geometry of data set. The Markov chain defines fast and slow directions of propagation through the kernel values. As the walk propagates forward in time, the local geometry information aggregates in the same way as local transitions (defined by differential equations) of the dynamical system. The metaphor of diffusion arises from the definition of a family diffusion distance { D t } t ��� N {\displaystyle \{D_{t}\}_{t\in N}} For fixed t, D t {\displaystyle D_{t}} defines a distance between any two points of the data set based on path connectivity: the value of D t ( x , y ) {\displaystyle D_{t}(x,y)} will be smaller the more paths that connect x to y and vice versa. Because the quantity D t ( x , y ) {\displaystyle D_{t}(x,y)} involves a sum over of all paths of length t, D t {\displaystyle D_{t}} is much more robust to noise in the data than geodesic distance. D t {\displaystyle D_{t}} takes into account all the relation between points x and y while calculating the distance and serves as a better notion of proximity than just Euclidean distance or even geodesic distance. Local Multidimensional Scaling performs multidimensional scaling in local regions, and then uses convex optimization to fit all the pieces together. Nonlinear PCA (NLPCA) uses backpropagation to train a multi-layer perceptron (MLP) to fit to a manifold. Unlike typical MLP training, which only updates the weights, NLPCA updates both the weights and the inputs. That is, both the weights and inputs are treated as latent values. After training, the latent inputs are a low-dimensional representation of the observed vectors, and the MLP maps from that low-dimensional representation to the high-dimensional observation space. Data-driven high-dimensional scaling (DD-HDS) is closely related to Sammon's mapping and curvilinear component analysis except that (1) it simultaneously penalizes false neighborhoods and tears by focusing on small distances in both original and output space, and that (2) it accounts for concentration of measure phenomenon by adapting the weighting function to the distance distribution. Manifold Sculpting uses graduated optimization to find an embedding. Like other algorithms, it computes the k-nearest neighbors and tries to seek an embedding that preserves relationships in local neighborhoods. It slowly scales variance out of higher dimensions, while simultaneously adjusting points in lower dimensions to preserve those relationships. If the rate of scaling is small, it can find very precise embeddings. It boasts higher empirical accuracy than other algorithms with several problems.",wikipedia
+Nonlinear dimensionality reduction,https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Other_algorithms,"Other_algorithms It can also be used to refine the results from other manifold learning algorithms. It struggles to unfold some manifolds, however, unless a very slow scaling rate is used. It has no model. RankVisu is designed to preserve rank of neighborhood rather than distance. RankVisu is especially useful on difficult tasks (when the preservation of distance cannot be achieved satisfyingly). Indeed, the rank of neighborhood is less informative than distance (ranks can be deduced from distances but distances cannot be deduced from ranks) and its preservation is thus easier. Topologically constrained isometric embedding (TCIE) is an algorithm based on approximating geodesic distances after filtering geodesics inconsistent with the Euclidean metric. Aimed at correcting the distortions caused when Isomap is used to map intrinsically non-convex data, TCIE uses weight least-squares MDS in order to obtain a more accurate mapping. The TCIE algorithm first detects possible boundary points in the data, and during computation of the geodesic length marks inconsistent geodesics, to be given a small weight in the weighted stress majorization that follows. Uniform manifold approximation and projection (UMAP) is a nonlinear dimensionality reduction technique. Visually, it is similar to t-SNE, but it assumes that the data is uniformly distributed on a locally connected Riemannian manifold and that the Riemannian metric is locally constant or approximately locally constant.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Definitions,"Definitions Research has been defined in a number of different ways, and while there are similarities, there does not appear to be a single, all-encompassing definition that is embraced by all who engage in it. Research in simplest terms is searching for knowledge and searching for truth. In a formal sense, it is a systematic study of a problem attacked by a deliberately chosen strategy which starts with choosing an approach to preparing a blueprint (design) and acting upon it in terms of designing research hypotheses, choosing methods and techniques, selecting or developing data collection tools, processing the data, interpretation and ends with presenting solution/s of the problem. Another definition of research is given by John W. Creswell, who states that ""research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue"". It consists of three steps: pose a question, collect data to answer the question, and present an answer to the question. The Merriam-Webster Online Dictionary defines research more generally to also include studying already existing knowledge: ""studious inquiry or examination; especially: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws""",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Forms_of_research,"Forms_of_research Original research, also called primary research, is research that is not exclusively based on a summary, review, or synthesis of earlier publications on the subject of research. This material is of a primary-source character. The purpose of the original research is to produce new knowledge, rather than to present the existing knowledge in a new form (e. g. , summarized or classified). Original research can be in various forms, depending on the discipline it pertains to. In experimental work, it typically involves direct or indirect observation of the researched subject(s), e. g. , in the laboratory or in the field, documents the methodology, results, and conclusions of an experiment or set of experiments, or offers a novel interpretation of previous results. In analytical work, there are typically some new (for example) mathematical results produced, or a new way of approaching an existing problem. In some subjects which do not typically carry out experimentation or analysis of this kind, the originality is in the particular way existing understanding is changed or re-interpreted based on the outcome of the work of the researcher. The degree of originality of the research is among major criteria for articles to be published in academic journals and usually established by means of peer review. Graduate students are commonly required to perform original research as part of a dissertation. Scientific research is a systematic way of gathering data and harnessing curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world. It makes practical applications possible. Scientific research is funded by public authorities, by charitable organizations and by private groups, including many companies. Scientific research can be subdivided into different classifications according to their academic and application disciplines. Scientific research is a widely used criterion for judging the standing of an academic institution, but some argue that such is an inaccurate assessment of the institution, because the quality of research does not tell about the quality of teaching (these do not necessarily correlate). Generally, research is understood to follow a certain structural process. Though step order may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied: Observations and formation of the topic: Consists of the subject area of one's interest and following that subject area to conduct subject-related research. The subject area should not be randomly chosen since it requires reading a vast amount of literature on the topic to determine the gap in the literature the researcher intends to narrow. A keen interest in the chosen subject area is advisable. The research will have to be justified by linking its importance to already existing knowledge about the topic. Hypothesis: A testable prediction which designates the relationship between two or more variables. Conceptual definition: Description of a concept by relating it to other concepts. Operational definition: Details in regards to defining the variables and how they will be measured/assessed in the study.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Forms_of_research,"Forms_of_research Gathering of data: Consists of identifying a population and selecting samples, gathering information from or about these samples by using specific research instruments. The instruments used for data collection must be valid and reliable. Analysis of data: Involves breaking down the individual pieces of data to draw conclusions about it. Data Interpretation: This can be represented through tables, figures, and pictures, and then described in words. Test, revising of hypothesis Conclusion, reiteration if necessaryA common misconception is that a hypothesis will be proven (see, rather, null hypothesis). Generally, a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, then the hypothesis is rejected (see falsifiability). However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true. A useful hypothesis allows prediction and within the accuracy of observation of the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction. In this case, a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. Researchers can also use a null hypothesis, which states no relationship or difference between the independent or dependent variables. Research in the humanities involves different methods such as for example hermeneutics and semiotics. Humanities scholars usually do not search for the ultimate correct answer to a question, but instead, explore the issues and details that surround it. Context is always important, and context can be social, historical, political, cultural, or ethnic. An example of research in the humanities is historical research, which is embodied in historical method. Historians use primary sources and other evidence to systematically investigate a topic, and then to write histories in the form of accounts of the past. Other studies aim to merely examine the occurrence of behaviours in societies and communities, without particularly looking for reasons or motivations to explain these. These studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory. Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is the debatable body of thought which offers an alternative to purely scientific methods in research in its search for knowledge and truth. The controversial trend of artistic teaching becoming more academics-oriented is leading to artistic research being accepted as the primary mode of enquiry in art as in the case of other disciplines.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Forms_of_research,"Forms_of_research One of the characteristics of artistic research is that it must accept subjectivity as opposed to the classical scientific methods. As such, it is similar to the social sciences in using qualitative research and intersubjectivity as tools to apply measurement and critical analysis. Artistic research has been defined by the School of Dance and Circus (Dans och Cirkush��gskolan, DOCH), Stockholm in the following manner ��� ""Artistic research is to investigate and test with the purpose of gaining knowledge within and for our artistic disciplines. It is based on artistic practices, methods, and criticality. Through presented documentation, the insights gained shall be placed in a context. "" Artistic research aims to enhance knowledge and understanding with presentation of the arts. A simpler understanding by Julian Klein defines artistic research as any kind of research employing the artistic mode of perception. For a survey of the central problematics of today's artistic research, see Giaco Schiesser. According to artist Hakan Topal, in artistic research, ""perhaps more so than other disciplines, intuition is utilized as a method to identify a wide range of new and unexpected productive modalities"". Most writers, whether of fiction or non-fiction books, also have to do research to support their creative work. This may be factual, historical, or background research. Background research could include, for example, geographical or procedural research. The Society for Artistic Research (SAR) publishes the triannual Journal for Artistic Research (JAR), an international, online, open access, and peer-reviewed journal for the identification, publication, and dissemination of artistic research and its methodologies, from all arts disciplines and it runs the Research Catalogue (RC), a searchable, documentary database of artistic research, to which anyone can contribute. Patricia Leavy addresses eight arts-based research (ABR) genres: narrative inquiry, fiction-based research, poetry, music, dance, theatre, film, and visual art. In 2016, the European League of Institutes of the Arts launched The Florence Principles' on the Doctorate in the Arts. The Florence Principles relating to the Salzburg Principles and the Salzburg Recommendations of the European University Association name seven points of attention to specify the Doctorate / PhD in the Arts compared to a scientific doctorate / PhD. The Florence Principles have been endorsed and are supported also by AEC, CILECT, CUMULUS and SAR. The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history. There are various history guidelines that are commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis. This includes lower criticism and sensual criticism.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Steps_in_conducting_research,"Steps_in_conducting_research Research is often conducted using the hourglass model structure of research. The hourglass model starts with a broad spectrum for research, focusing in on the required information through the method of the project (like the neck of the hourglass), then expands the research in the form of discussion and results. The major steps in conducting research are: Identification of research problem Literature review Specifying the purpose of research Determining specific research questions Specification of a conceptual framework, sometimes including a set of hypotheses Choice of a methodology (for data collection) Data collection Verifying data Analyzing and interpreting the data Reporting and evaluating research Communicating the research findings and, possibly, recommendationsThe steps generally represent the overall process; however, they should be viewed as an ever-changing iterative process rather than a fixed set of steps. Most research begins with a general statement of the problem, or rather, the purpose for engaging in the study. The literature review identifies flaws or holes in previous research which provides justification for the study. Often, a literature review is conducted in a given subject area before a research question is identified. A gap in the current literature, as identified by a researcher, then engenders a research question. The research question may be parallel to the hypothesis. The hypothesis is the supposition to be tested. The researcher(s) collects data to test the hypothesis. The researcher(s) then analyzes and interprets the data via a variety of statistical methods, engaging in what is known as empirical research. The results of the data analysis in rejecting or failing to reject the null hypothesis are then reported and evaluated. At the end, the researcher may discuss avenues for further research. However, some researchers advocate for the reverse approach: starting with articulating findings and discussion of them, moving ""up"" to identification of a research problem that emerges in the findings and literature review. The reverse approach is justified by the transactional nature of the research endeavor where research inquiry, research questions, research method, relevant research literature, and so on are not fully known until the findings have fully emerged and been interpreted. Rudolph Rummel says, ""... no researcher should accept any one or two tests as definitive. It is only when a range of tests are consistent over many kinds of data, researchers, and methods can one have confidence in the results."" Plato in Meno talks about an inherent difficulty, if not a paradox, of doing research that can be paraphrased in the following way, ""If you know what you're searching for, why do you search for it?! [i.e., you have already found it] If you don't know what you're searching for, what are you searching for?!""",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Research_methods,"Research_methods The goal of the research process is to produce new knowledge or deepen understanding of a topic or issue. This process takes three main forms (although, as previously discussed, the boundaries between them may be obscure): Exploratory research, which helps to identify and define a problem or question. Constructive research, which tests theories and proposes solutions to a problem or question. Empirical research, which tests the feasibility of a solution using empirical evidence. There are two major types of empirical research design: qualitative research and quantitative research. Researchers choose qualitative or quantitative methods according to the nature of the research topic they want to investigate and the research questions they aim to answer: Qualitative research refers to much more subjective non- quantitative, use different methods of collecting data, analyzing data, interpreting data for meanings, definitions, characteristics, symbols metaphors of things. Qualitative research further classified into following types: Ethnography: This research mainly focus on culture of group of people which includes share attributes, language, practices, structure, value, norms and material things, evaluate human lifestyle. Ethno: people, Grapho: to write, this disciple may include ethnic groups, ethno genesis, composition, resettlement and social welfare characteristics. Phenomenology: It is very powerful strategy for demonstrating methodology to health professions education as well as best suited for exploring challenging problems in health professions educations. The quantitative data collection methods rely on random sampling and structured data collection instruments that fit diverse experiences into predetermined response categories. These methods produce results that can be summarized, compared, and generalized to larger populations if the data are collected using proper sampling and data collection strategies. Quantitative research is concerned with testing hypotheses derived from theory or being able to estimate the size of a phenomenon of interest. If the research question is about people, participants may be randomly assigned to different treatments (this is the only way that a quantitative study can be considered a true experiment). [citation needed] If this is not feasible, the researcher may collect data on participant and situational characteristics to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants. In either qualitative or quantitative research, the researcher(s) may collect primary or secondary data. Primary data is data collected specifically for the research, such as through interviews or questionnaires. Secondary data is data that already exists, such as census data, which can be re-used for the research. It is good ethical research practice to use secondary data wherever possible. Mixed-method research, i. e. research that includes qualitative and quantitative elements, using both primary and secondary data, is becoming more common. This method has benefits that using one method alone cannot offer. For example, a researcher may choose to conduct a qualitative study and follow it up with a quantitative study to gain additional insights.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Research_methods,"Research_methods Big data has brought big impacts on research methods so that now many researchers do not put much effort into data collection; furthermore, methods to analyze easily available huge amounts of data have also been developed. Types of Research Method 1. Observatory Research Method 2. Correlation Research Method Non-empirical (theoretical) research is an approach that involves the development of theory as opposed to using observation and experimentation. As such, non-empirical research seeks solutions to problems using existing knowledge as its source. This, however, does not mean that new ideas and innovations cannot be found within the pool of existing and established knowledge. Non-empirical research is not an absolute alternative to empirical research because they may be used together to strengthen a research approach. Neither one is less effective than the other since they have their particular purpose in science. Typically empirical research produces observations that need to be explained; then theoretical research tries to explain them, and in so doing generates empirically testable hypotheses; these hypotheses are then tested empirically, giving more observations that may need further explanation; and so on. See Scientific method. A simple example of a non-empirical task is the prototyping of a new drug using a differentiated application of existing knowledge; another is the development of a business process in the form of a flow chart and texts where all the ingredients are from established knowledge. Much of cosmological research is theoretical in nature. Mathematics research does not rely on externally available data; rather, it seeks to prove theorems about mathematical objects.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Problems_in_research,"Problems_in_research Meta-research is the study of research through the use of research methods. Also known as ""research on research"", it aims to reduce waste and increase the quality of research in all fields. Meta-research concerns itself with the detection of bias, methodological flaws, and other errors and inefficiencies. Among the finding of meta-research is a low rates of reproducibility across a large number of fields. This widespread difficulty in reproducing research has been termed the ""replication crisis. "" In many disciplines, Western methods of conducting research are predominant. Researchers are overwhelmingly taught Western methods of data collection and study. The increasing participation of indigenous peoples as researchers has brought increased attention to the scientific lacuna in culturally sensitive methods of data collection. Western methods of data collection may not be the most accurate or relevant for research on non-Western societies. For example, ""Hua Oranga"" was created as a criterion for psychological evaluation in M��ori populations, and is based on dimensions of mental health important to the M��ori people ��� ""taha wairua (the spiritual dimension), taha hinengaro (the mental dimension), taha tinana (the physical dimension), and taha whanau (the family dimension)"". Research is often biased in the languages that are preferred (linguicism) and the geographic locations where research occurs. Periphery scholars face the challenges of exclusion and linguicism in research and academic publication. As the great majority of mainstream academic journals are written in English, multilingual periphery scholars often must translate their work to be accepted to elite Western-dominated journals. Multilingual scholars' influences from their native communicative styles can be assumed to be incompetence instead of difference. For comparative politics, Western countries are over-represented in single-country studies, with heavy emphasis on Western Europe, Canada, Australia, and New Zealand. Since 2000, Latin American countries have become more popular in single-country studies. In contrast, countries in Oceania and the Caribbean are the focus of very few studies. Patterns of geographic bias also show a relationship with linguicism: countries whose official languages are French or Arabic are far less likely to be the focus of single-country studies than countries with different official languages. Within Africa, English-speaking countries are more represented than other countries. Generalization is the process of more broadly applying the valid results of one study. Studies with a narrow scope can result in a lack of generalizability, meaning that the results may not be applicable to other populations or regions. In comparative politics, this can result from using a single-country study, rather than a study design that uses data from multiple countries. Despite the issue of generalizability, single-country studies have risen in prevalence since the late 2000s. Peer review is a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Problems_in_research,"Problems_in_research Usually, the peer review process involves experts in the same field who are consulted by editors to give a review of the scholarly works produced by a colleague of theirs from an unbiased and impartial point of view, and this is usually done free of charge. The tradition of peer reviews being done for free has however brought many pitfalls which are also indicative of why most peer reviewers decline many invitations to review. It was observed that publications from periphery countries rarely rise to the same elite status as those of North America and Europe, because limitations on the availability of resources including high-quality paper and sophisticated image-rendering software and printing tools render these publications less able to satisfy standards currently carrying formal or informal authority in the publishing industry. These limitations in turn result in the under-representation of scholars from periphery nations among the set of publications holding prestige status relative to the quantity and quality of those scholars' research efforts, and this under-representation in turn results in disproportionately reduced acceptance of the results of their efforts as contributions to the body of knowledge available worldwide. The open access movement assumes that all information generally deemed useful should be free and belongs to a ""public domain"", that of ""humanity"". This idea gained prevalence as a result of Western colonial history and ignores alternative conceptions of knowledge circulation. For instance, most indigenous communities consider that access to certain information proper to the group should be determined by relationships. There is alleged to be a double standard in the Western knowledge system. On the one hand, ""digital right management"" used to restrict access to personal information on social networking platforms is celebrated as a protection of privacy, while simultaneously when similar functions are used by cultural groups (i. e. indigenous communities) this is denounced as ""access control"" and reprehended as censorship. Even though Western dominance seems to be prominent in research, some scholars, such as Simon Marginson, argue for ""the need [for] a plural university world"". Marginson argues that the East Asian Confucian model could take over the Western model. This could be due to changes in funding for research both in the East and the West. Focused on emphasizing educational achievement, East Asian cultures, mainly in China and South Korea, have encouraged the increase of funding for research expansion. In contrast, in the Western academic world, notably in the United Kingdom as well as in some state governments in the United States, funding cuts for university research have occurred, which some[who? ] say may lead to the future decline of Western dominance in research. Neo-colonial research or neo-colonial science, frequently described as helicopter research, parachute science or research, parasitic research, or safari study, is when researchers from wealthier countries go to a developing country, collect information, travel back to their country, analyze the data and samples, and publish the results with no or little involvement of local researchers.",wikipedia
+Researcher,https://en.wikipedia.org/wiki/Researcher#Problems_in_research,"Problems_in_research A 2003 study by the Hungarian academy of sciences found that 70% of articles in a random sample of publications about least-developed countries did not include a local research co-author. Frequently, during this kind of research, the local colleagues might be used to provide logistics support as fixers but are not engaged for their expertise or given credit for their participation in the research. Scientific publications resulting from parachute science frequently only contribute to the career of the scientists from rich countries, thus limiting the development of local science capacity (such as funded research centers) and the careers of local scientists. This form of ""colonial"" science has reverberations of 19th century scientific practices of treating non-Western participants as ""others"" in order to advance colonialism���and critics call for the end of these extractivist practices in order to decolonize knowledge.",wikipedia
+Spark NLP,https://en.wikipedia.org/wiki/Spark_NLP#Usage_in_healthcare,"Usage_in_healthcare Spark NLP for Healthcare is a commercial extension of Spark NLP for clinical and biomedical text mining. It provides healthcare-specific annotators, pipelines, models, and embeddings for clinical entity recognition, clinical entity linking, entity normalization, assertion status detection, de-identification, relation extraction, and spell checking and correction. The library offers access to several clinical and biomedical transformers: JSL-BERT-Clinical, BioBERT, ClinicalBERT, GloVe-Med, GloVe-ICD-O. It also includes over 50 pre-trained healthcare models, that can recognize the entities such as clinical, drugs, risk factors, anatomy, demographics, and sensitive data.",wikipedia