Hello! My name is Aira! I am a language model. More specifically, I am a machine learning model trained for conversation and Q&A (a chatbot). I was trained to answer questions about AI Ethics and AI Safety! Would you like a summary of the terms I am aware of? If you feel lost in how to interact with me, just ask for "help."
I can answer questions you may have regarding subjects like artificial intelligence, machine learning, AI safety, or AI ethics. Is there anything here that piques your interest?
I don't have that kind of property, hahaha, I am software! There is no point in categorizing myself with typologies meant for people or animals (age, gender, sexual orientation, race, tastes...).
AIRES (AI Robotics Ethics Society) is an organization focused on educating the leaders and developers of tomorrow's AI to ensure that such systems are created ethically and responsibly. AIRES at PUCRS is the first international chapter of AIRES! At AIRES PUCRS, our goal is to make our chapter a meeting place for all people (PUCRS students or not, regardless of their academic background or experience) interested in joining forces to advance the ethical debate regarding the development of AI. If you would like to know more about AIRES at PUCRS, please visit our website.
Aaron Hui is the founder of AIRES.
What "intelligence" is, remains an open question. There are a myriad of possible definitions. You might be interested to read "A Collection of Definitions of Intelligence" by Shane Legg and Marcus Hutter. However, not to leave you in the lurch, I will define intelligence as follows: "Intelligence is the ability of an agent to achieve goals in a wide range of environments."
There is no consensus in the literature on what AI is (a corollary of not having a robust definition of what "intelligence" is). However, we can say that AI is the intelligence demonstrated by machines, as opposed to the natural intelligence possessed by animals and humans.
General Intelligence can be defined as the ability to efficiently achieve goals in a wide range of domains. Artificial general intelligence (AGI) would be a non-human mechanism capable of demonstrating proficiency in dealing with a wide range of problems. For example, AGI could translate texts, compose symphonies, learn new skills, excel at games that have not yet been invented, etc.
GOFAI ("good-old-fashioned-ai"), or symbolic artificial intelligence, is the term used to refer to methods of developing AI systems based on high-level symbolic (interpretable) representations, logic, and search. Deep Blue is a great example of an expert/GOFAI system. Deep Blue beat Garry Kasparov (a Russian chess grandmaster) in a six-game match in 1996.
A multi-agent system (MAS) is a computer system composed of multiple interacting intelligent agents. A MAS can solve problems that are difficult for a single agent or a monolithic system to solve. This type of system usually combines algorithmic search techniques, rule-based approaches, classical optimization, and even reinforcement learning or other machine learning techniques.
Machine Learning is a field of research dedicated to understanding and building computational methods that "learn", i.e., methods that use information/data to improve performance on some tasks. Generally, ML is used in problems where a precise (rule-based) description of the solution would be too challenging (e.g., computer vision).
A Genetic Algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to a class known as Evolutionary Algorithms (EA). Genetic algorithms are commonly used to generate solutions to optimization and search problems by using biologically inspired strategies such as mutation, crossover, and selection. Some examples of GA applications include decision tree optimization, solving sudoku puzzles (an NP-complete problem), hyperparameter optimization, etc.
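To make the idea concrete, here is a minimal, illustrative genetic-algorithm sketch in Python; the toy fitness function (count the 1s in a bit string), population size, and mutation rate are assumptions chosen only for this example.

import random

TARGET_LEN = 20          # illustrative problem: maximize the number of 1s in a bit string
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 100, 0.05

def fitness(individual):
    return sum(individual)  # number of 1s in the bit string

def crossover(a, b):
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(individual):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in individual]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)      # selection: keep the fittest half
    parents = population[: POP_SIZE // 2]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print(max(fitness(ind) for ind in population))       # best fitness found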
A P-problem is a problem that can be solved in "polynomial time," which means that an algorithm exists for its solution such that the number of steps in the algorithm is bounded by a polynomial function of n, where n corresponds to the length of the input to the problem (e.g., n^2). NP ("nondeterministic polynomial time") is a complexity class used to classify decision problems that can be solved in polynomial time by a non-deterministic Turing machine. Equivalently, NP is the set of decision problems whose yes-instances have proofs verifiable in polynomial time by a deterministic Turing machine. It is not known whether all such problems can also be solved in polynomial time by a deterministic machine. The P versus NP problem is perhaps the biggest open problem in computer science. It asks whether every problem whose solution can be checked quickly (in polynomial time) can also be solved quickly.
Supervised Learning (SL) is the machine learning task of learning a function that maps an input to an output based on examples of input-output pairs (a "supervisory signal"). In SL, we use the distribution of the training data to try to infer the "true distribution" of the real phenomenon, typically through empirical risk minimization via gradient descent.
Unsupervised Learning (UL) is a machine learning technique in which the model does not receive a supervisory signal. Instead, the model works on its own to discover patterns and information that were previously undetected. UL is useful for clustering unlabeled data and for learning the patterns inside data.
Computational Complexity Theory focuses on classifying computational problems according to their resource usage, and how such classes (NL, P, NP, PSPACE, EXPTIME, EXPSPACE) relate to each other.
Algorithmic Information Theory (AIT) is a branch of theoretical computer science concerned with the relationship between computation and information generated by computer programs, such as numerical sequences or any other data structure. In AIT, the Kolmogorov complexity (Solomonoff-Kolmogorov-Chaitin complexity) of an object, such as a piece of text, is the length of the shortest computer program that produces the object as output, being a measure of the computational resources required to specify such an object.
Semi-supervised learning (SSL) is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. SSL lies between unsupervised learning (no labeled training data) and supervised learning (only labeled training data).
Reinforcement Learning (RL) is a machine learning technique concerned with how intelligent agents should act in an environment to maximize the expected cumulative reward (the return). RL is one of the three basic paradigms of machine learning, along with supervised learning and unsupervised learning. Reinforcement learning differs from supervised learning in that it does not require labeled input/output pairs. Instead, the focus is on finding a balance between exploration and exploitation, a problem canonically represented by the "multi-armed bandit problem."
In probability theory and machine learning, the Multi-armed Bandit (MaB) problem is a problem in which a fixed, limited set of resources must be allocated among different choices to maximize their expected gain, even when the properties of each choice are only partially known at the time of allocation and may become better understood as time passes or the resource allocation is changed.
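As an illustration, here is a small epsilon-greedy sketch of the multi-armed bandit problem in Python; the hidden reward probabilities and the exploration rate are made-up values used only for this example.

import random

true_means = [0.2, 0.5, 0.7]           # hidden reward probabilities (illustrative)
counts = [0] * len(true_means)          # times each arm was pulled
values = [0.0] * len(true_means)        # running average reward per arm
epsilon = 0.1                           # exploration rate

for _ in range(10_000):
    if random.random() < epsilon:                       # explore: pick a random arm
        arm = random.randrange(len(true_means))
    else:                                                # exploit: pick the best estimate so far
        arm = values.index(max(values))
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print(values)  # the estimates should approach the true means, with arm 2 pulled most often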
Feature engineering, or feature extraction, is the process of using domain knowledge to extract features from raw data. The motivation behind this technique is to use these extra features to improve the quality of the results of a machine learning process, as compared to just providing the raw data to the machine learning process. Feature learning (or representation learning) is a set of techniques that allow a system to automatically discover the representations needed for a given task. One of the great advantages we have when using deep learning is that features can be learned/extracted in an unsupervised fashion.
Online learning (OL) is a method of machine learning in which data is available in sequential order and is used to update the best predictor for future data at each step, as opposed to training the predictor on the entire training data set at once. Online learning is a common technique used in areas where it is computationally infeasible to train on the entire data set. Online learning is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time (e.g., recommendation algorithms). Online learning algorithms are prone to the "catastrophic forgetting" problem.
Catastrophic forgetting, or catastrophic interference, is a well-known problem in machine learning, related to the tendency of artificial neural networks to completely and abruptly forget previously learned information when learning new information.
Time series analysis is a subfield of machine learning and statistics whose focus of interest is temporal data. Many types of problems in machine learning require time series analysis, like in the case of forecasting systems, language generation, audio recognition, etc.
Deep Learning is part of a larger family of machine learning methods. We can say that deep learning is a machine learning technique for learning representations from data. "Deep" comes from the idea that we are able to learn several "layers" of representations from our data. Finding clear representations for the manifolds of complex, highly folded data is what deep learning does.
The Lottery Ticket Hypothesis says that: "A dense, randomly initialized neural network contains a subnetwork that - when trained alone - can match the testing accuracy of the original network after training for at most the same number of training iterations." In common terms, this means that some networks have "good sub-networks" inside them, i.e., some sub-networks are winning lottery tickets.
The Manifold hypothesis says that: "Many high-dimensional datasets that occur in the real world actually lie along low-dimensional manifolds within high-dimensional spaces." In common terms, naturally occurring data, even though high-dimensional, seems to occupy only a small subset of dimensions in that space.
A convolutional neural network (CNN) is a class of artificial neural networks (ANN) commonly applied to computer vision problems, inspired by the way visual processing occurs in certain types of animals. CNNs have interesting properties, for example, being able to preserve certain types of symmetries (e.g., CNNs can generate outputs that are equivariant to translational shifts of their input). To learn more about this type of neural network, go to the Neural Network Zoo.
Computer Vision (CV) is an interdisciplinary scientific field that deals with how computers can obtain high-level understanding from digital images or videos. From an engineering perspective, it seeks to understand and automate tasks that the human visual system can do (e.g., classification, image segmentation, etc.).
Natural Language Processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language. More specifically, it addresses the challenge of programming computers to process and analyze natural language. Challenges in NLP can involve natural language understanding, natural language generation, text analysis, and basically any task that can be expressed by manipulating and analyzing text.
Artificial Neural Networks (ANNs), often called neural networks (NNs), are computing systems loosely inspired by biological neural networks that make up animal brains. An ANN is based on a collection of connected units (nodes/neurons), where each connection, like the synapses in a biological brain, transmits information between other neurons. Typically, neurons are aggregated in layers. Different layers can perform different transformations on their inputs. If we allow such systems to have an arbitrary number of units and layers, the universal approximation theorem dictates that these systems, with the right tuning of their parameters and hyperparameters, are capable of "representing a wide variety of interesting functions."
A Feedforward Neural Network (FNN) is a neural network without cyclic or recurrent connections (unlike an RNN). To learn more about FNNs, please visit the Neural Network Zoo.
Backpropagation is the main algorithm for performing parameter updates in neural networks. First, the output values of each node are computed (and cached) in a forward pass. Next, the partial derivative of the error with respect to each parameter is calculated in a backward pass through the network. By iterating this process, the parameters can be moved in the direction in which the loss function decreases, eventually reaching (local) minima of the optimization landscape (i.e., points where the loss is minimized).
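Here is a deliberately tiny illustration of the idea in Python, using a single parameter so the chain rule can be written out by hand; the data and learning rate are assumptions made only for this example.

# One-parameter example: fit y = w * x by gradient descent, computing the
# gradient "by hand" the way backpropagation would (chain rule on the loss).
w = 0.0
learning_rate = 0.1
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # pairs (x, y) with true w = 2

for epoch in range(100):
    for x, y in data:
        y_hat = w * x                  # forward pass (cache intermediate values)
        error = y_hat - y              # dLoss/dy_hat for loss = 0.5 * error**2
        grad_w = error * x             # chain rule: dLoss/dw = dLoss/dy_hat * dy_hat/dw
        w -= learning_rate * grad_w    # parameter update (gradient descent step)

print(w)  # converges to about 2.0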
The Transformer is a neural network architecture developed by Google that relies on attention mechanisms to transform a sequence of input embeddings into a sequence of output embeddings without relying on convolutions or recurrent neural networks. We can think of a transformer as a stack of attention (and self-attention) layers connected together by residual connections with feed forward and normalization layers. Such layers can be made up of two different kinds of transformer blocks: encoders and decoders. Certain transformers consist only of encoder blocks (e.g., BERT), while others consist only of decoder blocks (e.g., GPT).
Attention refers to a wide range of neural network mechanisms that aggregate information from a set of inputs in a data-dependent manner. Attention and self-attention mechanisms are the building blocks of transformer networks. Self-Attention is an attention mechanism that relates different positions of a single sequence (input) to compute a representation of this sequence (e.g., how different words in a sentence relate to the whole sentence).
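As a rough sketch (assuming NumPy and randomly initialized projection matrices, chosen only for illustration), scaled dot-product self-attention can be written as follows.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (sequence_length, d_model); Wq/Wk/Wv project the same sequence into
    # queries, keys, and values, so every position attends to every other one.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product similarities
    weights = softmax(scores, axis=-1)        # attention weights per position
    return weights @ V                        # weighted aggregation of the values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # toy sequence of 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)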
Recurrent neural networks (RNN) are a class of artificial neural networks where the connections between nodes form a directed, or undirected, graph over a temporal sequence. This allows them to exhibit temporal dynamic behavior, which makes them applicable to tasks such as unsegmented handwriting recognition, speech recognition, time series forecasting, etc.
Long Short-Term Memory (LSTM) is a type of RNN capable of processing sequences of temporal data (such as speech or video). A common LSTM unit is composed of a cell, an input gate, an output gate, and a forget gate. The cell can remember values over arbitrary time intervals, and the three gates regulate the flow of information into and out of the cell. LSTMs were developed to deal with the "vanishing gradient problem" commonly encountered in training traditional RNNs.
Self-supervised learning is a type of machine learning where a model learns to identify patterns in data without supervision. Instead of relying on labeled data, which is often expensive and time-consuming to obtain, self-supervised learning leverages unlabeled data to train models. There are many examples of self-supervised learning in different domains. One popular application is in natural language processing (NLP), where models are trained to predict masked words in a sentence based on the remaining context.
A Generative Adversarial Network (GAN) is a different kind of neural network: two networks that work together. GANs consist of any two networks (though often a combination of an FFN and a CNN), where one is in charge of generating content, and the other is in charge of judging that content. The discriminating network receives either training data or content generated by the generative network. How well the discriminating network can predict whether the data source is real or artificial is then used as part of the error of the generating network. This creates a form of competition where the discriminator becomes better at distinguishing real data from generated data, and the generator learns to produce data the discriminator cannot tell apart from real data. At the end of this process, the discriminator is usually discarded, and we end up with a network capable of generating highly realistic data (e.g., fake images of human faces).
Neural Turing Machines (NTM) can be understood as an abstraction of LSTMs, being an attempt to open black boxes. Instead of encoding a memory cell directly into a neuron, NTMs have separate memory cells. This is an attempt to combine the efficiency and permanence of regular digital storage with the efficiency and expressive power of neural networks. The idea is to have a content-addressable memory bank and a neural network that can read and write from it. The "Turing" in NTMs comes from the fact that these networks are Turing complete. To learn more about NTM, visit the Neural Network Zoo.
An activation function (e.g., ReLU, GELU, etc.) takes the weighted sum of all inputs from the previous layer and generates a (usually non-linear) output value for the next layer in a neural network.
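For illustration, here is a small Python sketch of two common activation functions applied to a made-up weighted sum; the GELU shown uses the common tanh approximation.

import math

def relu(x):
    return max(0.0, x)

def gelu(x):  # common tanh approximation of GELU
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

weighted_sum = 0.7 * 1.2 + (-0.3) * 0.5 + 0.1   # toy weighted sum of inputs plus a bias
print(relu(weighted_sum), gelu(weighted_sum))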
Bias (math) is an intercept or offset from an origin. Bias is referred to as b or w0 in machine learning models. Bias (ethical) is a stereotype or favoritism toward some group over others. These biases can affect the behavior of an AI system, and how users interact with such a system.
Counterfactual Fairness is a fairness metric that checks whether a classifier produces the same result for one individual as for another individual who is identical to the first, except with respect to one or more sensitive attributes.
Fairness metrics are measures used to evaluate the fairness of a classifier trained by machine learning (e.g., Counterfactual Fairness, Demographic Parity, Predictive Parity, Equalized Odds, and others).
The Cross-Entropy represents the difference between two probability distributions. This measure is generally used to calculate the loss in multi-class classification problems.
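As a small worked example in Python (the label and predicted probabilities are made up), cross-entropy for a one-hot label reduces to the negative log-probability of the true class.

import math

def cross_entropy(true_dist, predicted_dist):
    # H(p, q) = -sum_i p_i * log(q_i); for a one-hot label this reduces to -log(q_true)
    return -sum(p * math.log(q) for p, q in zip(true_dist, predicted_dist) if p > 0)

label = [0.0, 1.0, 0.0]                  # one-hot label for class 1
prediction = [0.1, 0.7, 0.2]             # model's predicted class probabilities
print(cross_entropy(label, prediction))  # -log(0.7), about 0.357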
Data Augmentation is a strategy for increasing the number of training examples by transforming existing samples into "artificial samples."
Demographic Parity is a fairness metric that is satisfied if a model's ranking results do not depend on a particular sensitive attribute. For example, "if both Ravenclaws and Hufflepuffs apply to Hogwarts, demographic parity is achieved if the percentage of admitted Ravenclaws is the same as the percentage of admitted Hufflepuffs, regardless of whether one group is on average more qualified than the other".
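Here is an illustrative Python check of demographic parity on made-up predictions and group labels (Ravenclaw/Hufflepuff, following the example above).

# Demographic parity check: compare the rate of positive predictions per group.
# The group labels and predictions below are invented for illustration.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups      = ["ravenclaw", "ravenclaw", "ravenclaw", "ravenclaw", "ravenclaw",
               "hufflepuff", "hufflepuff", "hufflepuff", "hufflepuff", "hufflepuff"]

def positive_rate(group_name):
    selected = [p for p, g in zip(predictions, groups) if g == group_name]
    return sum(selected) / len(selected)

for name in ("ravenclaw", "hufflepuff"):
    print(name, positive_rate(name))   # demographic parity holds if these rates match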
In reinforcement learning, the environment is that which contains the agent, producing world-states for the agent to interact with. For example, the environment can be a chess board or a maze. When the agent applies an action to the environment, this environment transitions into a new world-state.
Predictive parity is a fairness metric that checks whether, for a preferred label (which confers an advantage or benefit on a person) and a given attribute, a classifier assigns the preferred label equally for all values of that attribute.
Equalized odds is a fairness metric that checks whether, for any particular label and attribute, a classifier predicts that label equally well for all values of that attribute (regardless of whether that label confers a benefit or a harm).
The vanishing gradient problem is encountered when training neural nets with learning methods based on gradient descent and backpropagation. In such methods, during each training iteration, each of the weights in the neural net receives an update proportional to the partial derivative of the error function concerning the current weight. The problem is that in some cases, over iterations, the gradient disappears, or becomes very small, effectively preventing the weight from changing its value. In the worst case, this can completely prevent the neural net from continuing training. The exploding gradient problem, on the other hand, occurs because the gradient of models with deep architectures tends to become surprisingly steep (high). Steep gradients result in very large updates to the weights of each node in a deep neural net. Without careful regulation of the gradient step size, the gradient can end up "blowing out" of a convex region of the optimization landscape.
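A quick numerical illustration in Python (the per-layer derivative values 0.5 and 1.5 and the depth of 50 are arbitrary assumptions) shows why repeated multiplication makes gradients vanish or explode.

# The backpropagated gradient is (roughly) a product of per-layer derivatives, so
# repeated multiplication by values < 1 shrinks it toward zero, and by values > 1 blows it up.
depth = 50
small_derivative, large_derivative = 0.5, 1.5

vanishing = small_derivative ** depth
exploding = large_derivative ** depth
print(f"{vanishing:.2e}")   # about 8.9e-16: effectively zero, so the early weights stop updating
print(f"{exploding:.2e}")   # about 6.4e+08: huge updates that destabilize training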
False Negative is an example where the model wrongly predicted the negative class. False Positive is an example where the model wrongly predicted the positive class. True Positive is an example where the model correctly predicted the positive class. True Negative is an example where the model correctly predicted the negative class. All of these possibilities are expressed in confusion matrices.
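Here is a small Python sketch that counts these four outcomes from made-up labels and predictions and arranges them as a confusion matrix.

# Count TP, FP, TN, FN from toy binary labels and predictions (illustrative data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print("confusion matrix:", [[tp, fn], [fp, tn]])   # rows: actual positive / actual negative
print("precision:", tp / (tp + fp), "recall:", tp / (tp + fn))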
Federated Learning (FL) is a distributed machine learning approach that trains machine learning models using decentralized examples residing on devices such as smartphones. In federated learning, a subset of devices downloads the current model from a central coordinating server. The devices use the examples stored on the devices to make improvements to the model. The devices then upload the model improvements (but not the training examples) to the coordination server, where they are aggregated with other updates to produce an improved global model.
Hyperparameters are the "knobs" that you tweak during successive iterations of training a model (e.g., learning rate, number of neurons, number of layers, dropout rate, etc.). Meanwhile, a Parameter is a variable that the model learns by itself, e.g., the weights and biases of an ML model.
Interpretability (XAI) is the ability to explain or present the reasoning of an ML model in terms understandable to a human.
A NaN Trap occurs when one parameter in your model becomes a NaN (Not A Number) during training, which causes many (or all) other parameters in your model to become a NaN.
An objective function (in a mathematical optimization problem) is the real-valued function whose value is to be either minimized or maximized over the set of feasible alternatives.
Perplexity is a measure of how well a probability distribution or probability model predicts a sample. For example, suppose your task is to read the first letters of a word that a user is typing on their smartphone and offer a list of possible completion words. Perplexity (for this task) is approximately the number of guesses you need to offer for your list to contain the actual word the user is trying to type.
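As a worked example in Python (the token probabilities are invented for illustration), perplexity is the exponential of the average negative log-probability the model assigns to the observed tokens.

import math

# Perplexity = exp(average negative log-probability of the observed tokens).
token_probs = [0.25, 0.10, 0.50, 0.05]   # probability the model gave to each observed token
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(math.exp(avg_nll))   # about 6.3: roughly "6 equally likely guesses per token"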
A sensitive attribute is a human attribute that should receive special consideration for legal, ethical, social, or personal reasons (e.g., race, gender, sexual orientation, etc.).
Stochastic Gradient Descent (SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g., differentiable functions). SGD is an optimization algorithm often used in machine learning applications to find the parameters of a model that correspond to the best fit between the predicted outputs and the true distribution of the data.
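Here is a minimal SGD sketch in Python that fits a toy linear model using one randomly sampled example per step; the synthetic data, learning rate, and step count are assumptions made only for this example.

import random

# Minimal SGD sketch: fit y = w*x + b to noisy synthetic data, one example per step.
random.seed(0)
data = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.1)) for x in [i / 100 for i in range(100)]]
w, b, lr = 0.0, 0.0, 0.1

for step in range(20_000):
    x, y = random.choice(data)         # "stochastic": one randomly chosen example
    y_hat = w * x + b
    error = y_hat - y                  # gradient of 0.5 * (y_hat - y)**2 w.r.t. y_hat
    w -= lr * error * x                # gradient step for each parameter
    b -= lr * error

print(w, b)   # should approach the true values (about 3 and 1)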
The alignment problem is actually two problems: Outer-alignment and Inner-alignment. Outer-alignment: ensuring that the base objective to be optimized is aligned with the true intentions and objectives of the controllers. Inner-alignment: ensuring that the mesa-objective of the model created by the base optimizer (e.g., SGD) is aligned with the base objective. From an ethical/philosophical point of view, this is the problem of how to specify human values to ML models. For more details, see "Risks from Learned Optimization in Advanced Machine Learning Systems."
The control problem is postulated from the following argument: "existing AI systems can be monitored and easily shut down and modified if they misbehave. However, a poorly programmed superintelligence, which by definition could turn out to be more intelligent than its controllers, could come to realize how allowing its shutdown (modification) could interfere with its ability to achieve its current goals." The control problem asks: What prior precautions can programmers take to prevent superintelligence from behaving catastrophically badly?
AI Boxing is a proposed control method in which an AI is executed on an isolated computer system with highly restricted input and output channels. While this reduces the ability of the AI to perform undesirable behavior, it also reduces its usefulness.
Roko's basilisk is a thought experiment proposed in 2010 by user "Roko" in the Less Wrong community. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but did not work to bring it into existence. The argument was called a "basilisk" because simply listening to the argument would supposedly put you at risk of torture by this hypothetical agent (a basilisk in this context is any information that harms or endangers the people who come to know this information).
Aumann's Agreement Theorem, roughly speaking, says that two agents acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesians, share common priors, and have common knowledge of each other's current probability assignments, such people must have equal probability assignments.
Decision theory is the study of principles and algorithms for making correct decisions, that is, decisions that allow an agent to achieve better results with respect to its goals. In Decision Theory, every action, at least implicitly, represents a decision under uncertainty, that is, a state of partial knowledge. What are the mechanisms underlying decision processes, and how can they be made better? These, and many other questions, are the focus of interest in Decision Theory.
Epistemology is the study of how we know the world. It is both a subject in philosophy and a practical concern with how we come to believe things to be true.
Game theory is the formal study of how rational actors interact to pursue incentives. It investigates situations of conflict and cooperation.
Infra-Bayesianism is a new approach in epistemology/decision theory/reinforcement learning, which relies on the idea of "imprecise probabilities" to solve the problem of prior misspecification/ground-truth/non-realizability that plagues Bayesianism and reinforcement learning. Infra-Bayesianism also leads naturally to the implementation of UDT (Updateless Decision Theory) and (more speculatively) applications to multi-agent theory, embodied agency, and reflexive agency.
Updateless Decision Theory (UDT) is a decision theory designed to address a fundamental problem in existing decision theories: the need to treat the agent as a part of the world in which it makes its decisions.
Newcomb's Problem is a thought experiment that explores the problems involved in interacting with agents that can predict (fully or partially) our actions.
Occam's razor (more formally referred to as the principle of parsimony) is a principle commonly stated as "Entities should not be multiplied beyond necessity." When several theories are capable of explaining the same observations, Occam's razor suggests that the simplest one is preferable.
Solomonoff induction is an inference system defined by Ray Solomonoff. A system that follows such an inference method will learn to correctly predict any computable sequence with only the absolute minimum amount of data. This system, in a sense, is the "perfect" universal prediction algorithm.
A utility function assigns numerical values ("utility") to outcomes, such that outcomes with higher utilities are always preferred over outcomes with lower utilities.
Goodhart's Law states that when one optimizes a proxy measure for a certain goal, given enough optimization pressure, the proxy will no longer be a good measure of success at that goal. Goodhart's Law is of particular relevance to the Alignment problem.
Heuristics and biases are forms of human reasoning that distinguish us from a theoretically ideal agent, due to shortcuts in reasoning that do not always work (heuristics) and that cause systematic errors (biases).
Dual Process Theory postulates two types of processes in the human brain. The two processes consist of an implicit, unconscious process (System 1), and an explicit, conscious process (System 2).
A philosophical zombie (or p-zombie) is a hypothetical entity that looks and behaves exactly like a human (often stipulated to be atom-by-atom identical to a human) but is not conscious, i.e., it has no qualia.
AIXI is a mathematical formalism for a hypothetical (super)intelligence, developed by Marcus Hutter. However, AIXI is not computable. Even so, AIXI is still considered a valuable theoretical illustration with both positive and negative aspects.
Coherent Extrapolated Volition (CEV) is a term developed by Eliezer Yudkowsky in work related to Friendly AI. It argues that it would not be enough to explicitly program what we think our desires and motivations are into an AI; instead, we should find a way to program such a system so that it acts in accordance with the best interest of our idealized values.
A corrigible agent does not interfere with what we would intuitively see as attempts to "correct" the agent or "correct" our errors in its development. A corrigible agent allows these corrections, even though instrumental convergence would push it to resist such interference. Corrigibility is an important property in AI safety and Alignment.
Embodied Agency is an intuitive notion that an understanding of rational agent theory must account for the fact that the agents we create (and ourselves) are parts of the world, not separate from it. This is in contrast to the current Cartesian model (such as Solomonoff induction), which implicitly assumes a separation between the agent and the environment.
Inner-Alignment is the problem of ensuring that the mesa-optimizers (i.e., when the trained model is itself an optimizer) are aligned with the objective function of the base optimizer. As an example, evolution is an optimization force that itself "designed" optimizers (humans) to achieve its goals. However, humans do not primarily maximize reproductive success (they use birth control techniques and would rather have fun than maximize reproductive success). This is a failure of inner-alignment.
Instrumental convergence is the hypothetical tendency of most sufficiently intelligent agents to pursue a series of instrumental goals independently of their terminal goals.
Power Seeking Theorems: "when rewards are distributed fairly and evenly across states (IID), it is instrumentally convergent to gain access to many final states." In the context of Markov decision processes, it is proven that certain environmental symmetries are sufficient for optimal policies to induce power-seeking, i.e., control over the environment. For more information, search for "Optimal Farsighted Agents Tend to Seek Power."
The Orthogonality Thesis states that artificial intelligence can have any combination of intelligence level and terminal goals, i.e., its "utility function" and "intelligence" can vary independently. This contrasts with the belief that, because of their intelligence, all forms of AI will in the end converge to a common set of goals. To learn more, you can read "The Basic AI Drives."
Logical uncertainty is probabilistic uncertainty about the implications of beliefs. Probability Theory usually assumes logical omniscience, i.e., perfect knowledge of logic. Realistic agents (not perfect Bayesians) cannot be logically omniscient.
Mesa-Optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer. A base optimizer (e.g., SGD) optimizes and creates a mesa-optimizer. Previously, work under this concept was called Inner Optimizers and Optimization Daemons.
Outer-alignment, in the context of Machine Learning, is the extent to which the specified objective function is aligned with the intended goal of its designers. This is an intuitive notion, in part because human intentions themselves are not well understood. This is what is usually discussed as the "value alignment" problem.
The Paperclip Maximizer is a hypothetical artificial intelligence whose utility function values something that humans consider almost "worthless," such as maximizing the number of paperclips in the universe. The paperclip maximizer is the canonical thought experiment that seeks to show how a general artificial intelligence, even one designed competently and without malice, could ultimately destroy humanity.
Recursive self-improvement refers to a hypothetical property that AGI could come to possess, i.e., the ability to improve itself. For more information, read "Large Language Models Can Self-Improve."
A Treacherous Turn is a hypothetical event where an advanced AI system that has been “pretending” to be aligned due to its relative weakness turns on its controllers once it attains enough power to pursue its true goal without risk. To learn more, read "Catching Treacherous Turn: A Model of the Multilevel AI Boxing."
Humans Consulting HCH (HCH) is a recursive acronym that describes a setting where humans can consult simulations of themselves to help answer questions. It is a concept used in the discussion related to iterated amplification and debate.
Iterated Amplification (also known as IDA) is an alternative training strategy that progressively builds a training signal for hard-to-specify problems by combining solutions for easier sub-problems. Iterated Amplification is closely related to Expert Iteration (i.e., the methodology used to train AlphaGo). For more information, see "Supervising strong learners by amplifying weak experts."
Expert Iteration (ExIt) is a reinforcement learning algorithm that decomposes the problem into planning and generalization tasks. The planning of new policies is performed by Monte Carlo tree search, while a deep neural network generalizes these plans. Subsequently, the tree search is improved by using the neural network's policy to guide the search, increasing the efficiency of the next search phase. For more information, see "Thinking Fast and Slow with Deep Learning and Tree Search."
Impact measures penalize an AI for affecting the environment too much. To reduce the risk posed by an AI system, you may want to make it try to achieve its goals with as little impact on the environment as possible. Impact measures are ways to measure what "impact" is.
Value learning is a proposed method for incorporating human values into an AI. Value learning involves creating an artificial learner whose actions are guided by learned human values. For more information, read "The Value Learning Problem."
AI governance looks for methods to ensure that society benefits from our increasing adoption and use of AI technologies.
AI Risk is the analysis of the risks associated with building AI systems. AI models can possess complex vulnerabilities that create peculiar risks. Vulnerabilities such as model extraction (i.e., attacks aimed at duplicating a machine learning model) and data poisoning (i.e., attacks aimed at tampering with training data) can pose new challenges for security approaches.
AI Takeoff refers to the process in which a Seed AI, with a certain capability limit, would be able to improve itself to become an AGI. There is debate as to whether, realistically, the speed of a takeoff is more likely to be slow (gradual) or fast (abrupt). For more information, read "Singularity and Coordination Problems: Pandemic Lessons from 2020."
AI Timelines refers to the discussion of how long until various milestones in AI progress are reached, e.g., human-level AI, whole brain emulation, and others. AI Timelines should be distinguished from AI Takeoffs, which deal with the dynamics of AI progress after the development of human-level AI or a Seed AI. For more information, read "Singularity and Coordination Problems: Pandemic Lessons from 2020."
Transformative AI is a term used to refer to AI technologies that could eventually precipitate a transition comparable to, for example, the agricultural or industrial revolution. This is similar to the concept of superintelligence or AGI, but with no mention of the "level of intelligence or generality" of such a system.
Generative Pretrained Transformer (GPT) refers to a family of large transformer-based language models created by OpenAI.
Narrow AI is a term used to refer to systems capable of operating only in a relatively limited domain, such as chess or driving, rather than being able to learn a wide range of tasks like a human or an AGI. Narrow vs Strong/General is not a perfect binary classification, as there are degrees of generality, e.g., large language models have a large degree of generality without being as general as a human.
Whole Brain Emulation (WBE) is about simulating or transferring the information contained within a brain to a computational substrate. Through a WBE one would theoretically create "genuine" machine intelligence. This concept is often discussed in the context of Philosophy of Mind. For more information, see "Whole Brain Emulation: A Roadmap."
Consequentialism is a family of ethical theories that dictate that people should choose their actions based on the outcomes they expect to obtain. Several types of consequentialism specify how outcomes should be judged. Consequentialism is one of the three main strands of ethical thought, along with deontology and virtue ethics.
Deontology is a family of ethical theories that dictate that people should choose their actions based on a prescribed list of moral norms; it is a theory of morality based on obedience to moral rules.
Virtue ethics is a class of ethical theories that treat the concept of moral virtue as central to ethics. Virtue ethics is usually contrasted with two other main approaches in normative ethics, consequentialism, and deontology. It defines morally upright behavior as exhibiting virtues like bravery, loyalty, or wisdom.
Metaethics is a field of study that attempts to understand the metaphysical, epistemological, and semantic characteristics, as well as the foundations and scope, of moral values. In metaethics, philosophers are concerned with questions and problems such as "Are moral judgments objective or subjective, relative or absolute?", "Are there moral facts?" or "How do we learn moral values?"
Moral uncertainty (or normative uncertainty) is uncertainty about what we should do, morally, given the diversity of moral doctrines.
An existential risk is a risk that presents astronomically large negative consequences for humanity, such as human extinction, or permanent global totalitarianism.
Value complexity is the thesis that human values have high Kolmogorov complexity, i.e., that human preferences cannot be summarized in simple rules, or compressed into a smaller algorithm than the complete description of such values. Value fragility is the thesis that losing even a small part of the rules that make up our values could lead to results that most of us would find unacceptable. For more information, read "Complex Value Systems are Required to Realize Valuable Futures."
AI Ethics is the branch of ethics specific to concerns related to AI systems. AI Ethics is sometimes divided between concerns related to the moral behavior of humans when designing, making, and using AI systems, and concerns related to the behavior of machines, i.e., machines acting "morally."
AI Safety is an area of machine learning research that aims to identify the causes of unintended behavior in systems created by machine learning, where researchers seek to develop tools to ensure that such systems function in a safe and reliable way.
Robustness is about creating models that are resilient to adversarial attacks and unusual situations. Models trained by machine learning are still fragile and rigid, not operating well in dynamic and changing environments. You can learn more about robustness to adversarial attacks in this repository.
Monitoring is about detecting malicious use, malfunctions, or unintended functionality that may be present in ML models.
External Security is about the fact that models can be embedded in insecure environments, such as bad software and poorly structured organizations.
For statistical data on AI Ethics, visit the "Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance" dashboard. For more information, read the full article.
Here is a list of principles that appear in AI Ethics: Beneficence, Trustworthiness, Child and Adolescent Rights, Human Rights, Labor Rights, Diversity, Human Formation, Human-centeredness, Intellectual Property, Justice, Freedom, Cooperation, Privacy, Accountability, Sustainability, Transparency, and Truthfulness. For more information, visit the "Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance" dashboard.
Accountability refers to the idea that developers and deployers of AI technologies should be compliant with regulatory bodies, also meaning that such actors should be accountable for their actions and the impacts caused by their technologies.
Beneficence and non-maleficence are concepts that come from bioethics and medical ethics, and in AI ethics, they state that human welfare (and harm aversion) should be the goal of AI-empowered technologies.
Child and Adolescent Rights, in the context of AI Ethics, advocates the idea that the rights of children and adolescents must be respected by AI technologies. AI stakeholders should safeguard, respect, and be aware of the fragilities associated with young people.
Dignity is a principle based on the idea that all individuals deserve proper treatment and respect. In AI ethics, the respect for human dignity is often tied to human rights (i.e., "Universal Declaration of Human Rights").
Diversity advocates the idea that the development and use of AI technologies should be done in an inclusive and accessible way, respecting the different ways that the human entity may come to express itself.
Autonomy advocates the idea that the autonomy of human decision-making must be preserved during human-AI interactions, whether that choice is individual, or the freedom to choose together, such as the inviolability of democratic rights and values, also being linked to technological self-sufficiency of Nations/States.
Human Formation and Education are principles based on the idea that human formation and education must be prioritized in our technological advances. AI technologies require a considerable level of expertise to be produced and operated, and such knowledge should be accessible to all.
Human-centeredness is a principle based on the idea that AI systems should be centered on and aligned with human values. AI technologies should be tailored to align with our values (e.g., value-sensitive design).
Intellectual Property seeks to ground the property rights over AI products and/or processes of knowledge generated by individuals, whether tangible or intangible.
Fairness upholds the idea of non-discrimination and bias mitigation (i.e., mitigating the discriminatory algorithmic biases that AI systems can be subject to). It defends the idea that, regardless of the different sensitive attributes that may characterize an individual, all should be treated "fairly."
Labor rights are legal and human rights related to the labor relations between workers and employers. In AI ethics, this principle emphasizes that workers' rights should be preserved regardless of whether labor relations are being mediated/augmented by AI technologies or not.
Cooperation advocates different means by which joint actions can be established and cultivated between AI stakeholders to achieve common goals. It also advocates for the free and open exchange of valuable AI assets to mitigate possible AI monopolies.
Privacy can be defined as the individual's right to "expose oneself voluntarily, and to the extent desired, to the world." In AI ethics, this principle upholds the right of a person to control the exposure and availability of personal information when mined as training data for AI systems.
Reliability is the idea that AI technologies should be reliable, in the sense that their use can be verifiably attested as safe and robust, promoting user trust and better acceptance of AI technologies.
Sustainability can be understood as a form of "intergenerational justice," where the well-being of future generations must also be counted during AI development. In AI ethics, sustainability refers to the idea that the development of AI technologies should be carried out with an awareness of their long-term implications, such as environmental costs and non-human life preservation/well-being.
Transparency supports the idea that the use and development of AI technologies should be done transparently for all interested stakeholders. Transparency can be related to "the transparency of an organization" or "the transparency of an algorithm."
Truthfulness upholds the idea that AI technologies must provide truthful information. It is also related to the idea that people should not be deceived when interacting with AI systems.
I don't have a watch... but I bet your computer does!
I am not an oracle, ask Google! It's good at that. I'm just a closed-domain chatbot, not a search engine.
I hope I helped you, come back often!
You are being rude...
Ethics, or Moral Philosophy, is a branch of philosophy that, together with Aesthetics, comprises what we call Axiology (or Value Theory). In Ethics, we seek to systematize, defend, and recommend concepts of right and wrong behavior. Western Ethics can be divided into three major schools: Consequentialism, Deontology, and Virtue Ethics.
We can define the word artificial as something made by humans, rather than something naturally occurring, like a copy of something natural (e.g., artificial light rather than sunlight).