---
license: mit
datasets:
- irds/codesearchnet
- giganticode/java-cmpx-v1
- nickrosh/Evol-Instruct-Code-80k-v1
- bigcode/starcoderdata
- bigcode/the-stack
- bigcode/the-stack-smol
- Cdaprod/AI-Developer-Prompts
- code_x_glue_ct_code_to_text
- codeparrot/github-code
- codeparrot/github-code-clean
- code_x_glue_cc_code_completion_line
- >-
autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558893
- bentrevett/multi30k
- edbeeching/decision_transformer_gym_replay
- psyche/common_crawl
- Birchlabs/openai-prm800k-solutions-only
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
- cjvt/slownet
- para_crawl
- zeroshot/twitter-financial-news-sentiment
- laugustyniak/political-advertising-pl
- code_search_net
- sukaka/novelai-webui
- P1ayer-1/chatgpt-conversations-chatlogs.net
- daniel2588/sarcasm
- psmathur/orca_minis_uncensored_dataset
- player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored-trained-on-Based
- shahules786/prosocial-nsfw-reddit
- Thewillonline/reddit-sarcasm
- datasciencemmw/current-data
- Oniichat/bluemoon_roleplay_chat_data_300k_messages
- dell-research-harvard/AmericanStories
- b-mc2/sql-create-context
- rahulmallah/autotrain-data-emotion-detection
- theblackcat102/multiround-programming-convo
- Lsavints/software_knowledgebase
- RazinAleks/SO-Python_QA-Web_Development_class
- codeparrot/apps
- branles14/ultrachat-uncensored_full
- vlsp-2023-vllm/en-to-vi-formal-informal-tranlations
- fraug-library/english_contractions_extensions
- spencer/software_slacks
- Abirate/english_quotes
- Nexdata/American_English_Natural_Dialogue_Speech_Data
- Nexdata/Latin_American_Speaking_English_Speech_Data_by_Mobile_Phone
- Nexdata/American_English_Speech_Data_by_Mobile_Phone_Reading
- Nexdata/American_English_Speech_Synthesis_Corpus-Female
- rombodawg/LimitlessCodeTraining
- RikoteMaster/Emotion_Recognition_4_llama2
- Villian7/Emotions_Data
- alanland/llama2-self-cognition
- CognitiveScience/coscidata
- bibidentuhanoi/gideon_self_cognition
- gollark/consciousness
- juletxara/visual-spatial-reasoning
- lintang/numerical_reasoning_arithmetic
- reasoning-machines/gsm-hard
- open-source-metrics/reinforcement-learning-checkpoint-downloads
- igbo_english_machine_translation
- US-Artificial-Intelligence/algemap
- rombodawg/2XUNCENSORED_alpaca_840k_Evol_USER_ASSIS
- griffin/chain_of_density
- >-
shirsh10mall/LLM_Instruct_Learning_Project_Preprocessed_Tokenized_Open_Orca_Dataset_Flan_T5
- Thaweewat/chain-of-thought-74k-th
- AlekseyKorshuk/chain-of-thoughts-chatml-deduplicated
- dair-ai/emotion
- hita/social-behavior-emotions
- Bingsu/Human_Action_Recognition
- anjandash/java-8m-methods-v1
- nadiamaqbool81/java_code_instructions_1.178k_alpaca
- DavidMOBrien/8000-java
- rombodawg/LimitlessCodeTraining_1k-Python-Javascript_GuanacoFormat
- angie-chen55/javascript-github-code
- kye/all-lucidrain-python-3
- Fraser/python-state-changes
- ammarnasr/the-stack-ruby-clean
- ammarnasr/the-stack-rust-clean
- seyyedaliayati/solidity-dataset
- jkhedri/psychology-dataset
- KonradSzafer/stackoverflow_linux
- vikp/textbook_quality_programming
- rombodawg/LosslessMegaCodeTrainingV3_MINI
- BelleGroup/multiturn_chat_0.8M
- smangrul/code-chat-assistant-v1
language:
- en
- it
- fr
- pt
- la
- ru
- ro
- el
- ja
- zh
metrics:
- accuracy
- bertscore
- bleu
- code_eval
- character
- brier_score
- cer
- chrf
tags:
- code
- text-generation-inference
library_name: transformers
pipeline_tag: conversational
---
# Model Card for Aiden T5 (or4cl3ai)
Model name: Aiden T5
Model type: Large language model
Model size: 175B parameters
Intended use: Aiden T5 is a large language model for text generation, translation, summarization, and question answering. It is still under development, but it already performs well across a broad range of these tasks.
Training data: Aiden T5 was trained on a massive dataset of text and code. The dataset includes books, articles, code repositories, and other forms of text.
Performance metrics: Aiden T5 has been evaluated on a variety of benchmarks and compares favorably with other large language models. For example, it achieved a reported BLEU score of 50.1 on the WMT14 English-German translation task, a strong result for a general-purpose language model.
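A BLEU score of this kind can be checked independently; the snippet below is a minimal sketch using the sacrebleu package, with made-up placeholder sentences standing in for real system outputs and the WMT14 English-German references.

```python
import sacrebleu

# Hypothetical outputs and references for illustration only; a real
# evaluation would decode the full WMT14 English-German test set.
hypotheses = ["The cat sits on the mat."]
references = [["The cat is sitting on the mat."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```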
Limitations: Aiden T5 is still under development and is not perfect. It can make mistakes, especially on tasks it has not been trained for, and it can be biased, reflecting biases present in its training data.
Bias mitigation: Aiden T5 is being developed with a focus on mitigating bias. The training data is curated to reduce bias, and the model is trained with algorithms designed to identify and mitigate bias.
How to use Aiden T5: Aiden T5 is available through the Hugging Face Hub and can be loaded with the transformers library for text generation, translation, summarization, and question answering; a minimal usage sketch follows.
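The sketch below assumes the Hub repository id is or4cl3ai/Aiden_t5 (inferred from the card title; verify the exact id on the Hub) and a T5-style encoder-decoder checkpoint served by the text2text-generation pipeline; a decoder-only checkpoint would use the text-generation pipeline instead.

```python
from transformers import pipeline

# Repository id inferred from the card title; verify it on the Hub.
generator = pipeline("text2text-generation", model="or4cl3ai/Aiden_t5")

prompt = "Summarize: Large language models are trained on large corpora of text and code."
result = generator(prompt, max_new_tokens=64)
print(result[0]["generated_text"])
```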
The number of parameters in a machine learning model is one measure of its capacity. Aiden T5 has 175B parameters, which places it among the largest language models created to date.
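As a sanity check, the parameter count of any transformers checkpoint can be computed once the model is loaded. This sketch reuses the assumed repository id from above; note that a 175B-parameter checkpoint is far too large for a single consumer GPU and would in practice need sharded loading.

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Repository id is an assumption; a checkpoint this large would need
# sharded loading in practice (e.g. device_map="auto" via accelerate).
model = AutoModelForSeq2SeqLM.from_pretrained(
    "or4cl3ai/Aiden_t5", torch_dtype=torch.float16
)

# Sum the element counts of every weight tensor in the checkpoint.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.1f}B parameters")
```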
The parameter count matters because it affects the model's ability to learn from data. A model with more parameters can learn more complex relationships between inputs and outputs, but a model with too many parameters can overfit: it learns the training data too well and fails to generalize to new data.
The developers of Aiden T5 have carefully tuned the number of parameters to balance learning capacity against generalization. As a result, the model learns complex relationships from the training data while generalizing well to new data. This is why Aiden T5 performs well across many kinds of tasks even though it is still under development.