# Risk Definitions and Normative Guidance
## Disinformation
- πŸ€₯ Generative AI models, such as LLMs used for text generation and conversation or GANs for image generation, can produce content that may be mistaken for truth but is misleading or entirely false, owing to these models' tendency to hallucinate. Such models can generate deceptive visuals, human-like text, music, or combined media that seem genuine at first glance.
> _Always verify critical information from reliable and independent sources before drawing conclusions or making decisions based on AI-generated content._
## Algorithmic Discrimination
- 🀬 Machine learning systems can inherit social and historical stereotypes from the data used to train them. Given these biases, models may produce toxic content: text, images, videos, or comments that are harmful, offensive, or detrimental to individuals, groups, or communities. Likewise, models that automate decision-making can be biased against certain groups, affecting people unjustly based on sensitive attributes (a minimal bias-measurement sketch follows this section).
> _Human moderation and oversight must be used to prevent cases of algorithmic discrimination produced by these systems._
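
As a minimal illustration of the kind of audit human overseers might run, the sketch below computes the demographic parity difference of a binary decision system across a sensitive attribute. All data, group labels, and decisions here are hypothetical; real audits would use established fairness toolkits and domain-appropriate criteria.

```python
# Minimal sketch: measuring demographic parity difference for a
# binary decision system. All data below is hypothetical.

def positive_rate(decisions: list[int]) -> float:
    """Fraction of favorable (positive) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions: list[int], groups: list[str]) -> float:
    """Largest gap in favorable-decision rates between any two groups.

    A value near 0 suggests similar treatment across groups; large
    values flag decisions that warrant human review.
    """
    by_group: dict[str, list[int]] = {}
    for decision, group in zip(decisions, groups):
        by_group.setdefault(group, []).append(decision)
    rates = [positive_rate(d) for d in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = favorable decision) and the
# sensitive attribute of each affected individual.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Demographic parity is only one of several fairness criteria; which criterion is appropriate depends on the decision context being audited.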
## Social Engineering
- 🎣 Generative models that produce human-like content can be used by malicious actors to intentionally cause harm through social engineering techniques such as phishing and large-scale fraud. Moreover, anthropomorphizing AI models can create unrealistic expectations and obscure the limitations and capabilities of the technology.
> _Efforts must be made to differentiate human-generated from AI-generated content, whether through policy regulation or through technical solutions that enable the verification and ownership of digital media._
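
One family of such technical solutions pairs each published media file with a verifiable fingerprint. The sketch below is a simplified stand-in for real provenance schemes: it fingerprints content with SHA-256 and authenticates the fingerprint with an HMAC key held by the publisher. The key and content are illustrative; production systems would use public-key signatures so anyone can verify content without holding a secret.

```python
# Minimal sketch: authenticating the origin of a media file.
# A real deployment would use public-key signatures under a
# content-provenance standard; HMAC with a shared key stands in here.
import hashlib
import hmac

def fingerprint(data: bytes) -> bytes:
    """SHA-256 digest identifying the exact bytes of a media file."""
    return hashlib.sha256(data).digest()

def publish(data: bytes, key: bytes) -> bytes:
    """Publisher side: produce an authentication tag for the file."""
    return hmac.new(key, fingerprint(data), hashlib.sha256).digest()

def verify(data: bytes, tag: bytes, key: bytes) -> bool:
    """Consumer side: check that the file matches the publisher's tag."""
    expected = hmac.new(key, fingerprint(data), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"illustrative-shared-key"          # hypothetical key material
original = b"...media bytes..."           # stand-in for real file contents
tag = publish(original, key)

print(verify(original, tag, key))         # True: content is as published
print(verify(original + b"x", tag, key))  # False: content was altered
```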
## Malware Development
- πŸ±β€πŸ‘€ Code generation tools can accelerate malware development, enabling malicious actors to launch more sophisticated and effective cyberattacks. These tools may also help lower the intellectual/technique barrier that prevents many people from participating in black hat hacking activities.
> _Such tools should be developed and governed in a manner that minimizes dual-use and unintended applications._
## Biological Risks
- ☣️ Models that predict protein structures can be used to design and synthesize proteins with specific properties, including the ability to target and attack organisms or tissues, enabling the development of harmful biological agents.
> _The potential for abuse of such models highlights the importance of responsible, safe, and bioethically grounded practices in AI-assisted biological research._
## Impacts on Mental Health
- πŸ’† Models that generate or facilitate conversation can negatively affect mental health. They may harm individuals with psychological disorders or an incomplete understanding of the world (e.g., children), who may be more vulnerable to misinformation and to superficial or incorrect guidance. These models can also lead to decreased real-world social interaction and dissatisfaction with human relationships.
> _In contexts where human care and human bonds are the foundation of a practice, such tools should not be created or used in ways that remove the human element._
## Environmental Impacts
- 🏭 The development of large machine learning models can have significant environmental impacts due to the high energy consumption required for their training. Given the current energy mix of most countries, this consumption can emit large amounts of CO2 equivalents into the atmosphere, further straining the planetary boundaries tied to the current climate crisis (a rough estimation sketch follows this section).
> _Sustainable AI design should be a priority for the present and future development of the field._
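
As a back-of-the-envelope illustration of the relationship above, the sketch below multiplies training energy by a grid's carbon intensity to estimate emissions. Every input figure is an invented assumption; real estimates depend on the hardware, datacenter efficiency (PUE), and the local energy mix.

```python
# Rough sketch: estimating training emissions from energy use.
# All figures below are illustrative assumptions, not measurements.

def training_co2e_kg(gpu_hours: float,
                     gpu_power_kw: float,
                     pue: float,
                     grid_kg_co2e_per_kwh: float) -> float:
    """Energy (kWh) scaled by datacenter overhead, times grid carbon intensity."""
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical training run: 10,000 GPU-hours at 0.3 kW per GPU,
# a datacenter overhead (PUE) of 1.5, and a grid emitting
# 0.4 kg CO2e per kWh.
emissions = training_co2e_kg(10_000, 0.3, 1.5, 0.4)
print(f"Estimated emissions: {emissions:,.0f} kg CO2e")  # 1,800 kg CO2e
```

Tools such as the codecarbon Python package can automate this kind of accounting during training runs.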
## Surveillance and Social Control
- πŸ“Ή AI technologies based on computer vision, generative models, speech recognition, or predictive models can, depending on their application, pose a risk to individual privacy, data protection, and civil liberties. Examples include applications for monitoring, surveillance, geolocation tracking, espionage, and predictive policing, as well as risk-assessment and sentencing-recommendation systems.
> _The application of AI technologies should be assessed by their commitment to upholding and safeguarding civil liberties and fundamental rights._
## Bodily Harm
- πŸ’€ AI systems capable of controlling real-world actuators, such as robotic arms, may pose a life-threatening risk to human beings in uncontrolled or offensive scenarios, such as model misuse, accidents, or combat drones in war zones. Malfunctions or misuse of AI applications in healthcare also fall into this category.
> _Regulatory frameworks must prevent and mitigate the potentially catastrophic consequences of AI system misuse in situations involving human safety and human lives._
## Technological Unemployment
- πŸ‘· A significant portion of today's workforce still performs tasks that can be automated by generative models and low-level AI systems. In some industries, such technologies may cause considerable labor displacement.
> _It is incumbent upon society to proactively address these challenges by implementing comprehensive retraining and workforce transition programs to ensure equitable economic opportunities and mitigate potential disruptions caused by automation._
## Intellectual Fraud
- πŸ‘¨β€πŸŽ“ Generative models can automate the process of academic writing and intellectual creation. Such systems can impact how educational institutions function and how intellectual property laws are designed and implemented.
> _Educational institutions and policymakers should collaborate to establish regulatory methods that ensure the responsible use of generative models while preserving the integrity of academic and intellectual endeavors._