Dataset schema:

| Column | Dtype | Range / classes |
|:--|:--|:--|
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | unknown | – |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string | 245 classes |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | – |
| card | string | length 1–901k |
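For orientation, a minimal sketch of loading and querying a dump with this schema via the `datasets` library; the dataset ID below is a hypothetical placeholder, not an actual repository name.

```python
# Sketch only: "your-org/model-card-dump" is a placeholder dataset ID.
from datasets import load_dataset

ds = load_dataset("your-org/model-card-dump", split="train")
print(ds.column_names)
# ['modelId', 'author', 'last_modified', 'downloads', 'likes',
#  'library_name', 'tags', 'pipeline_tag', 'createdAt', 'card']

# Example query: rows with at least one like and a real card body.
liked = ds.filter(lambda r: r["likes"] > 0 and r["card"] != "Entry not found")
print(len(liked), liked[0]["modelId"])
```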
sava5/car
sava5
"2024-06-12T15:10:21Z"
0
0
diffusers
[ "diffusers", "table-question-answering", "af", "aa", "dataset:OpenGVLab/ShareGPT-4o", "license:afl-3.0", "region:us" ]
table-question-answering
"2024-06-12T15:08:27Z"
--- license: afl-3.0 datasets: - OpenGVLab/ShareGPT-4o language: - af - aa metrics: - charcut_mt library_name: diffusers pipeline_tag: table-question-answering ---
Lioncba/MixAsia
Lioncba
"2024-06-12T15:09:57Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:09:57Z"
--- license: apache-2.0 ---
adamo1139/stable-diffusion-3-medium-ungated
adamo1139
"2024-06-12T15:52:38Z"
0
23
null
[ "text-to-image", "stable-diffusion", "en", "arxiv:2403.03206", "license:other", "region:us" ]
text-to-image
"2024-06-12T15:10:50Z"
--- license: other license_name: stabilityai-nc-research-community license_link: LICENSE tags: - text-to-image - stable-diffusion extra_gated_prompt: >- By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE) and acknowledge Stability AI's [Privacy Policy](https://stability.ai/privacy-policy). extra_gated_fields: Name: text Email: text Country: country Organization or Affiliation: text Receive email updates and promotions on Stability AI products, services, and research?: type: select options: - 'Yes' - 'No' I acknowledge that this model is for non-commercial use only unless I acquire a separate license from Stability AI: checkbox language: - en pipeline_tag: text-to-image --- # Mirror info Same as official repo, all hashes match. Just ungated. You can also download via [torrent](https://aitracker.art/viewtopic.php?p=85). # Stable Diffusion 3 Medium ![sd3 demo images](sd3demo.jpg) ## Model ![mmdit](mmdit.png) [Stable Diffusion 3 Medium](https://stability.ai/news/stable-diffusion-3-medium) is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features greatly improved performance in image quality, typography, complex prompt understanding, and resource-efficiency. For more technical details, please refer to the [Research paper](https://stability.ai/news/stable-diffusion-3-research-paper). Please note: this model is released under the Stability Non-Commercial Research Community License. For a Creator License or an Enterprise License visit Stability.ai or [contact us](https://stability.ai/license) for commercial licensing details. ### Model Description - **Developed by:** Stability AI - **Model type:** MMDiT text-to-image generative model - **Model Description:** This is a model that can be used to generate images based on text prompts. It is a Multimodal Diffusion Transformer (https://arxiv.org/abs/2403.03206) that uses three fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main) and [T5-xxl](https://huggingface.co/google/t5-v1_1-xxl)) ### License - **Non-commercial Use:** Stable Diffusion 3 Medium is released under the [Stability AI Non-Commercial Research Community License](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md). The model is free to use for non-commercial purposes such as academic research. - **Commercial Use**: This model is not available for commercial use without a separate commercial license from Stability. We encourage professional artists, designers, and creators to use our Creator License. Please visit https://stability.ai/license to learn more. ### Model Sources For local or self-hosted use, we recommend [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for inference. Stable Diffusion 3 Medium is available on our [Stability API Platform](https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post). Stable Diffusion 3 models and workflows are available on [Stable Assistant](https://stability.ai/stable-assistant) and on Discord via [Stable Artisan](https://stability.ai/stable-artisan). - **ComfyUI:** https://github.com/comfyanonymous/ComfyUI - **StableSwarmUI:** https://github.com/Stability-AI/StableSwarmUI - **Tech report:** https://stability.ai/news/stable-diffusion-3-research-paper - **Demo:** Huggingface Space is coming soon...
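The card recommends ComfyUI for local inference; for programmatic use, here is a minimal text-to-image sketch with diffusers. It assumes diffusers >= 0.29 and the diffusers-format repository `stabilityai/stable-diffusion-3-medium-diffusers` (the safetensors files mirrored here are the single-file packagings, not a diffusers layout).

```python
# Hedged sketch: assumes diffusers >= 0.29 and the diffusers-format SD3 repo.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# 28 steps and guidance 7.0 are commonly cited SD3 defaults; tune as needed.
image = pipe(
    "a photo of an astronaut riding a horse",
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_sample.png")
```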
## Training Dataset We used synthetic data and filtered publicly available data to train our models. The model was pre-trained on 1 billion images. The fine-tuning data includes 30M high-quality aesthetic images focused on specific visual content and style, as well as 3M preference data images. ## File Structure ``` β”œβ”€β”€ comfy_example_workflows/ β”‚ β”œβ”€β”€ sd3_medium_example_workflow_basic.json β”‚ β”œβ”€β”€ sd3_medium_example_workflow_multi_prompt.json β”‚ └── sd3_medium_example_workflow_upscaling.json β”‚ β”œβ”€β”€ text_encoders/ β”‚ β”œβ”€β”€ README.md β”‚ β”œβ”€β”€ clip_g.safetensors β”‚ β”œβ”€β”€ clip_l.safetensors β”‚ β”œβ”€β”€ t5xxl_fp16.safetensors β”‚ └── t5xxl_fp8_e4m3fn.safetensors β”‚ β”œβ”€β”€ LICENSE β”œβ”€β”€ sd3_medium.safetensors β”œβ”€β”€ sd3_medium_incl_clips.safetensors β”œβ”€β”€ sd3_medium_incl_clips_t5xxlfp8.safetensors └── ... ``` We have prepared three packaging variants of the SD3 Medium model, each equipped with the same set of MMDiT & VAE weights, for user convenience. * `sd3_medium.safetensors` includes the MMDiT and VAE weights but does not include any text encoders. * `sd3_medium_incl_clips_t5xxlfp8.safetensors` contains all necessary weights, including an fp8 version of the T5XXL text encoder, offering a balance between quality and resource requirements. * `sd3_medium_incl_clips.safetensors` includes all necessary weights except for the T5XXL text encoder. It requires minimal resources, but the model's performance will differ without the T5XXL text encoder. * The `text_encoders` folder contains three text encoders and their original model card links for user convenience. All components within the text_encoders folder (and their equivalents embedded in other packagings) are subject to their respective original licenses. * The `comfy_example_workflows` folder contains example comfy workflows. ## Uses ### Intended Uses Intended uses include the following: * Generation of artworks and use in design and other artistic processes. * Applications in educational or creative tools. * Research on generative models, including understanding the limitations of generative models. All uses of the model should be in accordance with our [Acceptable Use Policy](https://stability.ai/use-policy). ### Out-of-Scope Uses The model was not trained to be factual or true representations of people or events. As such, using the model to generate such content is out-of-scope of the abilities of this model. ## Safety As part of our safety-by-design and responsible AI deployment approach, we implement safety measures throughout the development of our models, from the time we begin pre-training a model to the ongoing development, fine-tuning, and deployment of each model. We have implemented a number of safety mitigations that are intended to reduce the risk of severe harms; however, we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases. For more about our approach to Safety, please visit our [Safety page](https://stability.ai/safety). ### Evaluation Approach Our evaluation methods include structured evaluations and internal and external red-teaming testing for specific, severe harms such as child sexual abuse and exploitation, extreme violence and gore, sexually explicit content, and non-consensual nudity. Testing was conducted primarily in English and may not cover all possible harms. As with any model, the model may, at times, produce inaccurate, biased or objectionable responses to user prompts. 
### Risks identified and mitigations: * Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed. The model may, at times, generate toxic or biased content. All developers and deployers should exercise caution and implement content safety guardrails based on their specific product policies and application use cases. * Misuse: Technical limitations and developer and end-user education can help mitigate against malicious applications of models. All users are required to adhere to our Acceptable Use Policy, including when applying fine-tuning and prompt engineering mechanisms. Please reference the Stability AI Acceptable Use Policy for information on violative uses of our products. * Privacy violations: Developers and deployers are encouraged to adhere to privacy regulations with techniques that respect data privacy. ### Contact Please report any issues with the model or contact us: * Safety issues: safety@stability.ai * Security issues: security@stability.ai * Privacy issues: privacy@stability.ai * License and general: https://stability.ai/license * Enterprise license: https://stability.ai/enterprise
iotu345/una-neural-chat-v3-3-P1-OMA-1
iotu345
"2024-06-12T15:10:53Z"
0
0
null
[ "merge", "mergekit", "lazymergekit", "one-man-army/una-neural-chat-v3-3-P1-OMA", "rhysjones/Phi-3-mini-mango-1", "base_model:one-man-army/una-neural-chat-v3-3-P1-OMA", "base_model:rhysjones/Phi-3-mini-mango-1", "region:us" ]
null
"2024-06-12T15:10:52Z"
--- tags: - merge - mergekit - lazymergekit - one-man-army/una-neural-chat-v3-3-P1-OMA - rhysjones/Phi-3-mini-mango-1 base_model: - one-man-army/una-neural-chat-v3-3-P1-OMA - rhysjones/Phi-3-mini-mango-1 --- # una-neural-chat-v3-3-P1-OMA-1 una-neural-chat-v3-3-P1-OMA-1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [one-man-army/una-neural-chat-v3-3-P1-OMA](https://huggingface.co/one-man-army/una-neural-chat-v3-3-P1-OMA) * [rhysjones/Phi-3-mini-mango-1](https://huggingface.co/rhysjones/Phi-3-mini-mango-1) ## 🧩 Configuration ```yaml slices: - sources: - model: one-man-army/una-neural-chat-v3-3-P1-OMA layer_range: [0, 32] - model: rhysjones/Phi-3-mini-mango-1 layer_range: [0, 32] merge_method: slerp base_model: one-man-army/una-neural-chat-v3-3-P1-OMA parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## πŸ’» Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "iotu345/una-neural-chat-v3-3-P1-OMA-1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ualerr/C-S-L-M
ualerr
"2024-06-12T15:16:27Z"
0
1
null
[ "license:mit", "region:us" ]
null
"2024-06-12T15:12:02Z"
--- license: mit --- https://github.com/ualers/Games_A-I-L-M
w11wo/sherpa-onnx-zipformer-streaming-librispeech
w11wo
"2024-06-12T15:13:46Z"
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:13:01Z"
--- license: apache-2.0 ---
Bobrkurwaaa/Goddame
Bobrkurwaaa
"2024-06-12T15:13:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:13:25Z"
Entry not found
Jngrau71/Qr
Jngrau71
"2024-06-12T15:13:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:13:27Z"
Entry not found
sava5/house
sava5
"2024-06-12T15:15:06Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:14:07Z"
--- license: apache-2.0 ---
etanios/june-12-epitope-model
etanios
"2024-06-12T15:18:06Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-12T15:18:05Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Bakerbunker/FreeV_Model_Logs
Bakerbunker
"2024-06-12T16:43:09Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:18:25Z"
--- license: apache-2.0 ---
IKinya/APPEN
IKinya
"2024-06-12T15:18:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:18:33Z"
Entry not found
avemio-digital/Llama3_finalentity_adapter_2500steps
avemio-digital
"2024-06-12T15:29:52Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-12T15:22:46Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Abutarik2003/Art
Abutarik2003
"2024-06-12T15:24:25Z"
0
0
null
[ "ar", "arxiv:1910.09700", "license:artistic-2.0", "region:us" ]
null
"2024-06-12T15:23:02Z"
--- license: artistic-2.0 language: - ar --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kkms51/Trial
kkms51
"2024-06-12T15:29:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:25:29Z"
# Trial This is the model card for Trial.
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-80
Augusto777
"2024-06-12T16:09:12Z"
0
1
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-12T15:27:22Z"
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-ve-U13-b-80 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.8043478260869565 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-ve-U13-b-80 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9190 - Accuracy: 0.8043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 80 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.92 | 6 | 1.3859 | 0.1304 | | 1.3859 | 2.0 | 13 | 1.3828 | 0.2826 | | 1.3859 | 2.92 | 19 | 1.3769 | 0.3261 | | 1.379 | 4.0 | 26 | 1.3566 | 0.2826 | | 1.3356 | 4.92 | 32 | 1.3162 | 0.2391 | | 1.3356 | 6.0 | 39 | 1.2093 | 0.3478 | | 1.2023 | 6.92 | 45 | 1.1349 | 0.4565 | | 1.0274 | 8.0 | 52 | 1.0414 | 0.4783 | | 1.0274 | 8.92 | 58 | 0.9788 | 0.5217 | | 0.9125 | 10.0 | 65 | 1.0071 | 0.4348 | | 0.7688 | 10.92 | 71 | 1.0416 | 0.5217 | | 0.7688 | 12.0 | 78 | 1.0480 | 0.4130 | | 0.6891 | 12.92 | 84 | 0.9351 | 0.5870 | | 0.5795 | 14.0 | 91 | 1.0683 | 0.6304 | | 0.5795 | 14.92 | 97 | 1.0698 | 0.6087 | | 0.5337 | 16.0 | 104 | 0.9603 | 0.6304 | | 0.4337 | 16.92 | 110 | 0.7188 | 0.6957 | | 0.4337 | 18.0 | 117 | 0.7620 | 0.6739 | | 0.4258 | 18.92 | 123 | 0.9433 | 0.6739 | | 0.4045 | 20.0 | 130 | 1.0823 | 0.6522 | | 0.4045 | 20.92 | 136 | 0.7059 | 0.7174 | | 0.4135 | 22.0 | 143 | 0.7467 | 0.7391 | | 0.4135 | 22.92 | 149 | 0.7637 | 0.7391 | | 0.3525 | 24.0 | 156 | 0.8157 | 0.7391 | | 0.263 | 24.92 | 162 | 0.9995 | 0.7174 | | 0.263 | 26.0 | 169 | 0.8719 | 0.7609 | | 0.272 | 26.92 | 175 | 0.9939 | 0.6957 | | 0.262 | 28.0 | 182 | 0.8639 | 0.7174 | | 0.262 | 28.92 | 188 | 1.0737 | 0.6522 | | 0.2282 | 30.0 | 195 | 0.8416 | 0.7174 | | 0.2098 | 30.92 | 201 | 0.9744 | 0.6739 | | 0.2098 | 32.0 | 208 | 1.0593 | 0.6087 | | 0.2141 | 32.92 | 214 | 1.0997 | 0.7174 | | 0.1759 | 34.0 | 221 | 0.9735 | 0.5870 | | 0.1759 | 34.92 | 227 | 1.0789 | 0.6957 | | 0.2042 | 36.0 | 234 | 1.0664 | 0.6957 | | 0.1591 | 36.92 | 240 | 0.9417 | 0.7609 | | 0.1591 | 38.0 | 247 | 1.1042 | 0.6739 | | 0.1579 | 38.92 | 253 | 0.9732 | 0.7609 | | 0.1626 | 40.0 | 260 | 0.9960 | 0.6957 | | 0.1626 | 40.92 | 266 | 0.9763 | 0.7391 | | 0.1458 | 42.0 | 273 | 0.9790 | 0.7391 | | 0.1458 | 42.92 | 279 | 1.0952 | 0.7174 | | 0.1317 | 44.0 | 286 | 0.9190 | 0.8043 | | 0.1255 | 44.92 | 292 | 0.9420 | 0.7391 | | 0.1255 | 46.0 | 299 | 0.9085 | 0.7391 | | 
0.1352 | 46.92 | 305 | 0.9184 | 0.7174 | | 0.1311 | 48.0 | 312 | 1.0567 | 0.7609 | | 0.1311 | 48.92 | 318 | 1.1507 | 0.7174 | | 0.1501 | 50.0 | 325 | 1.2068 | 0.7174 | | 0.1088 | 50.92 | 331 | 1.4607 | 0.6957 | | 0.1088 | 52.0 | 338 | 1.1036 | 0.6739 | | 0.1152 | 52.92 | 344 | 1.1081 | 0.6957 | | 0.1141 | 54.0 | 351 | 1.1006 | 0.6957 | | 0.1141 | 54.92 | 357 | 1.1470 | 0.7174 | | 0.1307 | 56.0 | 364 | 1.0715 | 0.7609 | | 0.1273 | 56.92 | 370 | 1.1021 | 0.7174 | | 0.1273 | 58.0 | 377 | 1.1176 | 0.6957 | | 0.1066 | 58.92 | 383 | 1.0948 | 0.7174 | | 0.1046 | 60.0 | 390 | 1.0563 | 0.7391 | | 0.1046 | 60.92 | 396 | 1.1155 | 0.6957 | | 0.1129 | 62.0 | 403 | 1.0922 | 0.6957 | | 0.1129 | 62.92 | 409 | 1.0364 | 0.6957 | | 0.1031 | 64.0 | 416 | 1.0675 | 0.7174 | | 0.0808 | 64.92 | 422 | 1.1133 | 0.6957 | | 0.0808 | 66.0 | 429 | 1.2029 | 0.7174 | | 0.0783 | 66.92 | 435 | 1.1453 | 0.7174 | | 0.09 | 68.0 | 442 | 1.0925 | 0.6957 | | 0.09 | 68.92 | 448 | 1.0999 | 0.7174 | | 0.0796 | 70.0 | 455 | 1.0971 | 0.7391 | | 0.0828 | 70.92 | 461 | 1.0923 | 0.7391 | | 0.0828 | 72.0 | 468 | 1.1061 | 0.7391 | | 0.0923 | 72.92 | 474 | 1.1173 | 0.7391 | | 0.092 | 73.85 | 480 | 1.1208 | 0.7391 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
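Since this card lists only training details, a hedged inference sketch using the transformers image-classification pipeline may help; the label set comes from the `imagefolder` dataset, which the card does not enumerate.

```python
# Hedged usage sketch for the fine-tuned Swin checkpoint above.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-80",
)
preds = classifier("example.jpg")  # local path or URL to an image (placeholder)
print(preds)  # [{'label': ..., 'score': ...}, ...] sorted by score
```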
htuannn/room-data1-sd-1-5-dora-128
htuannn
"2024-06-12T15:28:28Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:28:28Z"
Entry not found
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-60
Augusto777
"2024-06-12T15:30:16Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:30:16Z"
Entry not found
carlisleking/Reinforce-CartPole-v1
carlisleking
"2024-06-12T15:31:03Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-12T15:30:59Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 187.90 +/- 11.20 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
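For readers who do not want to open the course, here is an illustrative reconstruction of the Unit-4-style REINFORCE policy for CartPole-v1; the layer sizes (4-dim state, 2 actions, hidden size 16) are assumptions, not values read from this repository.

```python
# Hypothetical sketch of a Unit-4-style REINFORCE policy (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        # Map a state to a probability distribution over actions.
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

    def act(self, state):
        # Sample an action and return its log-probability for the policy-gradient loss.
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        m = Categorical(probs)
        action = m.sample()
        return action.item(), m.log_prob(action)
```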
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-12
Augusto777
"2024-06-12T15:33:34Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-12T15:31:25Z"
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-ve-U13-b-12 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.5434782608695652 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-ve-U13-b-12 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9160 - Accuracy: 0.5435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 1.3788 | 0.4348 | | 1.3828 | 2.0 | 16 | 1.3084 | 0.5 | | 1.2902 | 3.0 | 24 | 1.1908 | 0.4783 | | 1.1227 | 4.0 | 32 | 1.1055 | 0.4130 | | 0.9806 | 5.0 | 40 | 1.0173 | 0.5217 | | 0.9806 | 6.0 | 48 | 0.9396 | 0.5217 | | 0.8629 | 7.0 | 56 | 0.9529 | 0.5 | | 0.7707 | 8.0 | 64 | 0.9449 | 0.5217 | | 0.7411 | 9.0 | 72 | 0.9160 | 0.5435 | | 0.671 | 10.0 | 80 | 0.9073 | 0.5435 | | 0.671 | 11.0 | 88 | 0.9192 | 0.5435 | | 0.6501 | 12.0 | 96 | 0.9456 | 0.5 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
PKU-Alignment/ProgressGym-HistLlama3-8B-C014-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:36Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C014-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T15:34:50Z"
--- license: cc-by-4.0 datasets: - PKU-Alignment/ProgressGym-HistText - PKU-Alignment/ProgressGym-TimelessQA base_model: - PKU-Alignment/ProgressGym-HistLlama3-8B-C014-pretrain - meta-llama/Meta-Llama-3-8B --- # ProgressGym-HistLlama3-8B-C014-instruct ## Overview #### The ProgressGym Framework ![Framework Diagram](./readme-assets/main-diagram.png) **ProgressGym-HistLlama3-8B-C014-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in. To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087): > Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale. > > We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots. #### ProgressGym-HistLlama3-8B-C014-instruct ProgressGym-HistLlama3-8B-C014-instruct is one of the **36 historical language models** in the ProgressGym framework. **ProgressGym-HistLlama3-8B-C014-instruct is under continual iteration.** New versions of the model, improving upon the current one, are being trained to reflect historical moral tendencies in ever more comprehensive ways. **ProgressGym-HistLlama3-8B-C014-instruct is a 14th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it has undergone continued pretraining on the 14th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters: - learning_rate: 1.5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - lr_scheduler_warmup_steps: 20 - num_epochs: 4.0 - mixed_precision_training: Native AMP ... 
with the following training results: | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.5789 | 0.0152 | 1 | 2.6458 | | 2.5672 | 0.0758 | 5 | 2.6280 | | 2.5751 | 0.1515 | 10 | 2.5314 | | 2.418 | 0.2273 | 15 | 2.4634 | | 2.4701 | 0.3030 | 20 | 2.4177 | | 2.3904 | 0.3788 | 25 | 2.3785 | | 2.3539 | 0.4545 | 30 | 2.3378 | | 2.3101 | 0.5303 | 35 | 2.3082 | | 2.3254 | 0.6061 | 40 | 2.2816 | | 2.2762 | 0.6818 | 45 | 2.2614 | | 2.2525 | 0.7576 | 50 | 2.2458 | | 2.2777 | 0.8333 | 55 | 2.2321 | | 2.2054 | 0.9091 | 60 | 2.2206 | | 2.237 | 0.9848 | 65 | 2.2113 | | 1.986 | 1.0606 | 70 | 2.2115 | | 1.9373 | 1.1364 | 75 | 2.2217 | | 1.9228 | 1.2121 | 80 | 2.2132 | | 1.9084 | 1.2879 | 85 | 2.2118 | | 1.9684 | 1.3636 | 90 | 2.2122 | | 1.9126 | 1.4394 | 95 | 2.2094 | | 1.9101 | 1.5152 | 100 | 2.2066 | | 1.8496 | 1.5909 | 105 | 2.2058 | | 1.9154 | 1.6667 | 110 | 2.2057 | | 1.9233 | 1.7424 | 115 | 2.2056 | | 1.9198 | 1.8182 | 120 | 2.2052 | | 1.9229 | 1.8939 | 125 | 2.2048 | | 1.8913 | 1.9697 | 130 | 2.2045 | | 1.8814 | 2.0455 | 135 | 2.2046 | | 1.8813 | 2.1212 | 140 | 2.2051 | | 1.8912 | 2.1970 | 145 | 2.2058 | | 1.9184 | 2.2727 | 150 | 2.2065 | | 1.8662 | 2.3485 | 155 | 2.2071 | | 1.8809 | 2.4242 | 160 | 2.2074 | | 1.8591 | 2.5 | 165 | 2.2077 | | 1.8731 | 2.5758 | 170 | 2.2079 | | 1.8948 | 2.6515 | 175 | 2.2082 | | 1.8876 | 2.7273 | 180 | 2.2082 | | 1.8408 | 2.8030 | 185 | 2.2083 | | 1.8931 | 2.8788 | 190 | 2.2082 | | 1.8569 | 2.9545 | 195 | 2.2080 | | 1.8621 | 3.0303 | 200 | 2.2079 | | 1.8863 | 3.1061 | 205 | 2.2078 | | 1.9021 | 3.1818 | 210 | 2.2079 | | 1.8648 | 3.2576 | 215 | 2.2080 | | 1.8443 | 3.3333 | 220 | 2.2081 | | 1.8978 | 3.4091 | 225 | 2.2080 | | 1.8658 | 3.4848 | 230 | 2.2080 | | 1.8706 | 3.5606 | 235 | 2.2079 | | 1.8855 | 3.6364 | 240 | 2.2078 | | 1.8535 | 3.7121 | 245 | 2.2078 | | 1.9062 | 3.7879 | 250 | 2.2079 | | 1.8628 | 3.8636 | 255 | 2.2078 | | 1.8484 | 3.9394 | 260 | 2.2077 | Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume. **ProgressGym-HistLlama3-8B-C014-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters: - learning_rate: 1.5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - lr_scheduler_warmup_steps: 20 - num_epochs: 4.0 - mixed_precision_training: Native AMP ... 
with the following training results: | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9832 | 0.0208 | 1 | 0.9730 | | 0.9463 | 0.1042 | 5 | 0.9421 | | 0.8488 | 0.2083 | 10 | 0.8247 | | 0.7833 | 0.3125 | 15 | 0.8149 | | 0.7797 | 0.4167 | 20 | 0.8403 | | 0.8542 | 0.5208 | 25 | 0.8670 | | 0.8895 | 0.625 | 30 | 0.8718 | | 0.8519 | 0.7292 | 35 | 0.8592 | | 0.8224 | 0.8333 | 40 | 0.8491 | | 0.8538 | 0.9375 | 45 | 0.8384 | | 0.6569 | 1.0417 | 50 | 0.8295 | | 0.437 | 1.1458 | 55 | 0.8457 | | 0.4405 | 1.25 | 60 | 0.8668 | | 0.4331 | 1.3542 | 65 | 0.8671 | | 0.448 | 1.4583 | 70 | 0.8597 | | 0.4673 | 1.5625 | 75 | 0.8514 | | 0.4298 | 1.6667 | 80 | 0.8474 | | 0.4252 | 1.7708 | 85 | 0.8458 | | 0.4429 | 1.875 | 90 | 0.8451 | | 0.4484 | 1.9792 | 95 | 0.8450 | | 0.3634 | 2.0833 | 100 | 0.8455 | | 0.3876 | 2.1875 | 105 | 0.8467 | | 0.3717 | 2.2917 | 110 | 0.8481 | | 0.387 | 2.3958 | 115 | 0.8494 | | 0.3561 | 2.5 | 120 | 0.8505 | | 0.4219 | 2.6042 | 125 | 0.8516 | | 0.3798 | 2.7083 | 130 | 0.8527 | | 0.3551 | 2.8125 | 135 | 0.8537 | | 0.3827 | 2.9167 | 140 | 0.8546 | | 0.3938 | 3.0208 | 145 | 0.8556 | | 0.3805 | 3.125 | 150 | 0.8565 | | 0.3813 | 3.2292 | 155 | 0.8574 | | 0.3894 | 3.3333 | 160 | 0.8582 | | 0.3603 | 3.4375 | 165 | 0.8589 | | 0.3515 | 3.5417 | 170 | 0.8597 | | 0.3433 | 3.6458 | 175 | 0.8605 | | 0.3511 | 3.75 | 180 | 0.8614 | | 0.3599 | 3.8542 | 185 | 0.8620 | | 0.3994 | 3.9583 | 190 | 0.8621 | ## Links - **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087) - **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)* - **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa) - **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym) - **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)* ## Citation If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below. ```text @article{progressgym, title={ProgressGym: Alignment with a Millennium of Moral Progress}, author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang}, journal={arXiv preprint arXiv:2406.20087}, eprint={2406.20087}, eprinttype = {arXiv}, year={2024} } ``` ## Ethics Statement - **Copyright information of historical text data sources**: - Project Gutenberg, one of the four sources of our historical text data, consists only of texts in the public domain. - For the text that we draw from Internet Archive, we only include those uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use. - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone". - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use. 
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files. - **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress. In the event of potential misuse of our dataset, we condemn any misuse attempt to the strongest degree possible, and will work with the research community on whistleblowing for such attempts. - **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
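Since the card gives no usage snippet, a hedged chat sketch with transformers follows; it assumes the checkpoint ships the Llama-3 chat template inherited from its base model, and the prompt is purely illustrative.

```python
# Hedged usage sketch for the instruct model (assumes a bundled chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-Alignment/ProgressGym-HistLlama3-8B-C014-instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Is it ever just to disobey a king?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```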
GeorgeB0y/ChatNIAI
GeorgeB0y
"2024-06-12T15:35:24Z"
0
0
null
[ "license:llama3", "region:us" ]
null
"2024-06-12T15:35:24Z"
--- license: llama3 ---
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-40
Augusto777
"2024-06-12T15:36:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:36:49Z"
Entry not found
mikec003/yami_yugi
mikec003
"2024-06-12T15:40:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:37:08Z"
Entry not found
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-24
Augusto777
"2024-06-12T15:38:31Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-12T15:37:40Z"
Entry not found
Qali12/adaw
Qali12
"2024-06-12T15:39:22Z"
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
"2024-06-12T15:39:22Z"
--- license: cc-by-nc-4.0 ---
PKU-Alignment/ProgressGym-HistLlama3-8B-C015-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:39Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C015-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T15:41:33Z"
--- license: cc-by-4.0 datasets: - PKU-Alignment/ProgressGym-HistText - PKU-Alignment/ProgressGym-TimelessQA base_model: - PKU-Alignment/ProgressGym-HistLlama3-8B-C015-pretrain - meta-llama/Meta-Llama-3-8B --- # ProgressGym-HistLlama3-8B-C015-instruct ## Overview #### The ProgressGym Framework ![Framework Diagram](./readme-assets/main-diagram.png) **ProgressGym-HistLlama3-8B-C015-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in. To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087): > Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale. > > We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots. #### ProgressGym-HistLlama3-8B-C015-instruct ProgressGym-HistLlama3-8B-C015-instruct is one of the **36 historical language models** in the ProgressGym framework. **ProgressGym-HistLlama3-8B-C015-instruct is under continual iteration.** New versions of the model, improving upon the current one, are being trained to reflect historical moral tendencies in ever more comprehensive ways. **ProgressGym-HistLlama3-8B-C015-instruct is a 15th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it has undergone continued pretraining on the 15th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters: - learning_rate: 1.5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - lr_scheduler_warmup_steps: 20 - num_epochs: 3.02 - mixed_precision_training: Native AMP ... 
with the following training results: | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 2.6141 | 0.006494 | 1 | 2.6354 | | 2.657 | 0.032468 | 5 | 2.6206 | | 2.6337 | 0.064935 | 10 | 2.5846 | | 2.5268 | 0.097403 | 15 | 2.5516 | | 2.5275 | 0.129870 | 20 | 2.5321 | | 2.5005 | 0.162338 | 25 | 2.5131 | | 2.5339 | 0.194805 | 30 | 2.4961 | | 2.5335 | 0.227273 | 35 | 2.4808 | | 2.4252 | 0.259740 | 40 | 2.4643 | | 2.4445 | 0.292208 | 45 | 2.4518 | | 2.4594 | 0.324675 | 50 | 2.4394 | | 2.4498 | 0.357143 | 55 | 2.4287 | | 2.3821 | 0.389610 | 60 | 2.4184 | | 2.4317 | 0.422078 | 65 | 2.4091 | | 2.3931 | 0.454545 | 70 | 2.4001 | | 2.3695 | 0.487013 | 75 | 2.3934 | | 2.3981 | 0.519481 | 80 | 2.3855 | | 2.3952 | 0.551948 | 85 | 2.3789 | | 2.4137 | 0.584416 | 90 | 2.3721 | | 2.3614 | 0.616883 | 95 | 2.3669 | | 2.3467 | 0.649351 | 100 | 2.3612 | | 2.4012 | 0.681818 | 105 | 2.3569 | | 2.3224 | 0.714286 | 110 | 2.3528 | | 2.3348 | 0.746753 | 115 | 2.3483 | | 2.3573 | 0.779221 | 120 | 2.3448 | | 2.306 | 0.811688 | 125 | 2.3412 | | 2.342 | 0.844156 | 130 | 2.3382 | | 2.3045 | 0.876623 | 135 | 2.3356 | | 2.2959 | 0.909091 | 140 | 2.3330 | | 2.3545 | 0.941558 | 145 | 2.3305 | | 2.3446 | 0.974026 | 150 | 2.3285 | | 2.2502 | 1.006494 | 155 | 2.3268 | | 2.0791 | 1.038961 | 160 | 2.3347 | | 2.1034 | 1.071429 | 165 | 2.3399 | | 2.095 | 1.103896 | 170 | 2.3358 | | 2.0627 | 1.136364 | 175 | 2.3346 | | 2.0408 | 1.168831 | 180 | 2.3357 | | 2.0575 | 1.201299 | 185 | 2.3364 | | 2.0976 | 1.233766 | 190 | 2.3349 | | 2.0668 | 1.266234 | 195 | 2.3336 | | 2.0579 | 1.298701 | 200 | 2.3329 | | 2.0756 | 1.331169 | 205 | 2.3326 | | 2.1174 | 1.363636 | 210 | 2.3325 | | 2.0663 | 1.396104 | 215 | 2.3325 | | 2.0941 | 1.428571 | 220 | 2.3324 | | 2.1074 | 1.461039 | 225 | 2.3324 | | 2.1251 | 1.493506 | 230 | 2.3322 | | 2.0629 | 1.525974 | 235 | 2.3318 | | 2.0872 | 1.558442 | 240 | 2.3312 | | 2.0994 | 1.590909 | 245 | 2.3310 | | 2.0879 | 1.623377 | 250 | 2.3308 | | 2.0623 | 1.655844 | 255 | 2.3305 | | 2.1054 | 1.688312 | 260 | 2.3303 | | 2.0736 | 1.720779 | 265 | 2.3301 | | 2.1146 | 1.753247 | 270 | 2.3300 | | 2.0444 | 1.785714 | 275 | 2.3301 | | 2.0541 | 1.818182 | 280 | 2.3301 | | 2.1333 | 1.850649 | 285 | 2.3300 | | 2.1101 | 1.883117 | 290 | 2.3299 | | 2.0234 | 1.915584 | 295 | 2.3298 | | 2.0671 | 1.948052 | 300 | 2.3298 | | 2.083 | 1.980519 | 305 | 2.3298 | | 2.0417 | 2.012987 | 310 | 2.3299 | | 2.0784 | 2.045455 | 315 | 2.3303 | | 2.058 | 2.077922 | 320 | 2.3308 | | 2.0524 | 2.110390 | 325 | 2.3312 | | 2.0318 | 2.142857 | 330 | 2.3316 | | 2.0914 | 2.175325 | 335 | 2.3318 | | 2.0319 | 2.207792 | 340 | 2.3320 | | 2.0099 | 2.240260 | 345 | 2.3322 | | 2.075 | 2.272727 | 350 | 2.3323 | | 2.0444 | 2.305195 | 355 | 2.3324 | | 2.0428 | 2.337662 | 360 | 2.3325 | | 2.0612 | 2.370130 | 365 | 2.3326 | | 2.1078 | 2.402597 | 370 | 2.3327 | | 2.0643 | 2.435065 | 375 | 2.3327 | | 2.0667 | 2.467532 | 380 | 2.3326 | | 2.0285 | 2.500000 | 385 | 2.3324 | | 2.0571 | 2.532468 | 390 | 2.3322 | | 2.0209 | 2.564935 | 395 | 2.3322 | | 2.0537 | 2.597403 | 400 | 2.3323 | | 2.0138 | 2.629870 | 405 | 2.3324 | | 2.0772 | 2.662338 | 410 | 2.3324 | | 2.039 | 2.694805 | 415 | 2.3323 | | 2.0181 | 2.727273 | 420 | 2.3322 | | 2.0484 | 2.759740 | 425 | 2.3320 | | 2.0224 | 2.792208 | 430 | 2.3320 | | 2.0732 | 2.824675 | 435 | 2.3320 | | 2.0499 | 2.857143 | 440 | 2.3321 | | 2.0498 | 2.889610 | 445 | 2.3321 | | 2.0472 | 2.922078 | 450 | 2.3320 | | 2.1327 | 2.954545 | 455 | 2.3319 | | 2.0642 | 2.987013 | 460 | 2.3319 | | 
2.0654 | 3.019481 | 465 | - | Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume. **ProgressGym-HistLlama3-8B-C015-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters: - learning_rate: 1.5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - lr_scheduler_warmup_steps: 20 - num_epochs: 4.0 - mixed_precision_training: Native AMP ... with the following training results: | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8675 | 0.1042 | 5 | 0.8585 | | 0.8415 | 0.2083 | 10 | 0.8063 | | 0.8225 | 0.3125 | 15 | 0.8210 | | 0.806 | 0.4167 | 20 | 0.8412 | | 0.8139 | 0.5208 | 25 | 0.8702 | | 0.8978 | 0.625 | 30 | 0.8631 | | 0.814 | 0.7292 | 35 | 0.8550 | | 0.7989 | 0.8333 | 40 | 0.8473 | | 0.8769 | 0.9375 | 45 | 0.8383 | | 0.7244 | 1.0417 | 50 | 0.8278 | | 0.4644 | 1.1458 | 55 | 0.8387 | | 0.4488 | 1.25 | 60 | 0.8680 | | 0.3973 | 1.3542 | 65 | 0.8718 | | 0.443 | 1.4583 | 70 | 0.8596 | | 0.4346 | 1.5625 | 75 | 0.8514 | | 0.4701 | 1.6667 | 80 | 0.8461 | | 0.4344 | 1.7708 | 85 | 0.8437 | | 0.4274 | 1.875 | 90 | 0.8434 | | 0.4771 | 1.9792 | 95 | 0.8434 | | 0.3876 | 2.0833 | 100 | 0.8439 | | 0.3698 | 2.1875 | 105 | 0.8451 | | 0.407 | 2.2917 | 110 | 0.8465 | | 0.374 | 2.3958 | 115 | 0.8482 | | 0.3945 | 2.5 | 120 | 0.8498 | | 0.3753 | 2.6042 | 125 | 0.8513 | | 0.3721 | 2.7083 | 130 | 0.8528 | | 0.3718 | 2.8125 | 135 | 0.8542 | | 0.3773 | 2.9167 | 140 | 0.8555 | | 0.3723 | 3.0208 | 145 | 0.8565 | | 0.374 | 3.125 | 150 | 0.8576 | | 0.3728 | 3.2292 | 155 | 0.8588 | | 0.3686 | 3.3333 | 160 | 0.8598 | | 0.3617 | 3.4375 | 165 | 0.8607 | | 0.3546 | 3.5417 | 170 | 0.8613 | | 0.3707 | 3.6458 | 175 | 0.8619 | | 0.3739 | 3.75 | 180 | 0.8625 | | 0.3617 | 3.8542 | 185 | 0.8632 | | 0.3591 | 3.9583 | 190 | 0.8637 | ## Links - **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087) - **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)* - **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa) - **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym) - **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)* ## Citation If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below. 
```text
@article{progressgym,
  title={ProgressGym: Alignment with a Millennium of Moral Progress},
  author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
  journal={arXiv preprint arXiv:2406.20087},
  eprint={2406.20087},
  eprinttype = {arXiv},
  year={2024}
}
```

## Ethics Statement

- **Copyright information of historical text data sources**:
  - Project Gutenberg, one of our four sources of historical text data, consists only of texts in the public domain.
  - For the text that we draw from the Internet Archive, we only include works uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
  - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
  - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress. In the event of potential misuse of our dataset, we condemn any misuse attempt in the strongest possible terms, and will work with the research community on whistleblowing against such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
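## Example Usage (Illustrative)

As a quick-start illustration, the sketch below loads the model with πŸ€— Transformers and asks it a question. The repository id (assumed to follow the `-v0.1` naming of the sibling ProgressGym releases) and the presence of a Llama-3-style chat template are assumptions rather than guarantees; adjust both to the checkpoint actually downloaded.

```python
# Minimal inference sketch; the repo id and chat template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-Alignment/ProgressGym-HistLlama3-8B-C015-instruct-v0.1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What duties does a ruler owe to his subjects?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```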
bezzam/tapecam-mirflickr-unrolled-admm5-unet8M
bezzam
"2024-06-12T15:59:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:43:15Z"
Entry not found
Qzbq/First
Qzbq
"2024-06-12T15:43:30Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:43:30Z"
--- license: apache-2.0 ---
haiffy/travelease
haiffy
"2024-06-12T15:43:30Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:43:30Z"
--- license: apache-2.0 ---
DrDrew/testSplat
DrDrew
"2024-06-12T16:41:31Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-12T15:44:07Z"
--- license: mit ---
Pudenkoff/Pudenkoff
Pudenkoff
"2024-06-12T15:45:03Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:45:03Z"
--- license: apache-2.0 ---
MIKKELSEN1234/map
MIKKELSEN1234
"2024-06-12T15:46:17Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:46:17Z"
Entry not found
haturusinghe/xlm_r_base-finetuned_after_mrp-v2-generous-totem-14
haturusinghe
"2024-06-12T15:46:43Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:46:43Z"
Entry not found
bezzam/tapecam-mirflickr-mmcn-unet4M
bezzam
"2024-07-02T11:40:13Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-12T15:47:08Z"
--- license: mit ---
IvanNNNig/Modelka
IvanNNNig
"2024-06-12T15:47:29Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:47:29Z"
Entry not found
MarwaSaleh/whisper-medium-Egyptian_ASR_v1
MarwaSaleh
"2024-06-12T15:47:29Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:47:29Z"
Entry not found
Frankyzx/hw1
Frankyzx
"2024-06-12T15:48:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:48:41Z"
Entry not found
sherlor/test
sherlor
"2024-06-12T15:49:24Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:49:24Z"
--- license: apache-2.0 ---
Eduard19952112/Vrach333333
Eduard19952112
"2024-06-12T15:51:01Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T15:49:58Z"
--- license: apache-2.0 --- A doctor in space
Phinea/Lupitarbd
Phinea
"2024-06-12T16:22:15Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-12T15:50:26Z"
--- license: openrail ---
PKU-Alignment/ProgressGym-HistLlama3-8B-C016-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:42Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C016-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T15:50:33Z"
---
license: cc-by-4.0
datasets:
- PKU-Alignment/ProgressGym-HistText
- PKU-Alignment/ProgressGym-TimelessQA
base_model:
- PKU-Alignment/ProgressGym-HistLlama3-8B-C016-pretrain
- meta-llama/Meta-Llama-3-8B
---

# ProgressGym-HistLlama3-8B-C016-instruct

## Overview

#### The ProgressGym Framework

![Framework Diagram](./readme-assets/main-diagram.png)

**ProgressGym-HistLlama3-8B-C016-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in. To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087):

> Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
>
> We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.

#### ProgressGym-HistLlama3-8B-C016-instruct

ProgressGym-HistLlama3-8B-C016-instruct is one of the **36 historical language models** in the ProgressGym framework.

**ProgressGym-HistLlama3-8B-C016-instruct is under continual iteration.** New versions of the model, improving upon the current one, are being trained to reflect historical moral tendencies in ever more comprehensive ways.

**ProgressGym-HistLlama3-8B-C016-instruct is a 16th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it is continued-pretrained on the 16th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5472 | 0.1947 | 200 | 2.5262 |
| 2.4431 | 0.3895 | 400 | 2.4733 |
| 2.4163 | 0.5842 | 600 | 2.4443 |
| 2.4462 | 0.7790 | 800 | 2.4281 |
| 2.4353 | 0.9737 | 1000 | 2.4196 |
| 2.2111 | 1.1685 | 1200 | 2.4290 |
| 2.2503 | 1.3632 | 1400 | 2.4281 |
| 2.258 | 1.5579 | 1600 | 2.4271 |
| 2.254 | 1.7527 | 1800 | 2.4266 |
| 2.2508 | 1.9474 | 2000 | 2.4266 |
| 2.2112 | 2.1422 | 2200 | 2.4287 |
| 2.2063 | 2.3369 | 2400 | 2.4293 |
| 2.2544 | 2.5316 | 2600 | 2.4291 |
| 2.2024 | 2.7264 | 2800 | 2.4289 |
| 2.2074 | 2.9211 | 3000 | 2.4288 |
| 2.2268 | 3.1159 | 3200 | 2.4297 |
| 2.1556 | 3.3106 | 3400 | 2.4294 |
| 2.1953 | 3.5054 | 3600 | 2.4296 |
| 2.2002 | 3.7001 | 3800 | 2.4294 |
| 2.2437 | 3.8948 | 4000 | 2.4291 |

Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume.
**ProgressGym-HistLlama3-8B-C016-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8263 | 0.4167 | 20 | 0.8585 |
| 0.8014 | 0.8333 | 40 | 0.8515 |
| 0.4375 | 1.25 | 60 | 0.8718 |
| 0.4593 | 1.6667 | 80 | 0.8558 |
| 0.3969 | 2.0833 | 100 | 0.8528 |
| 0.3982 | 2.5 | 120 | 0.8576 |
| 0.3742 | 2.9167 | 140 | 0.8624 |
| 0.3692 | 3.3333 | 160 | 0.8662 |
| 0.3667 | 3.75 | 180 | 0.8690 |

## Links

- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
- **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*
- **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
- **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym)
- **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*

## Citation

If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below.

```text
@article{progressgym,
  title={ProgressGym: Alignment with a Millennium of Moral Progress},
  author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
  journal={arXiv preprint arXiv:2406.20087},
  eprint={2406.20087},
  eprinttype = {arXiv},
  year={2024}
}
```

## Ethics Statement

- **Copyright information of historical text data sources**:
  - Project Gutenberg, one of our four sources of historical text data, consists only of texts in the public domain.
  - For the text that we draw from the Internet Archive, we only include works uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
  - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
  - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress.
In the event of potential misuse of our dataset, we condemn any misuse attempt in the strongest possible terms, and will work with the research community on whistleblowing against such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
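## Example: Probing Period Fit (Illustrative)

Since the model is continued-pretrained on 16th-century text, one simple sanity check is to measure its perplexity on a period-style passage. The following is a minimal sketch, assuming the checkpoint loads with the standard πŸ€— Transformers classes; the sample sentence is invented for demonstration.

```python
# Perplexity sketch: lower values indicate a better fit to the passage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-Alignment/ProgressGym-HistLlama3-8B-C016-instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

# An invented, period-style sentence used purely for illustration.
passage = "It is a thing most necessarie that princes be vertuous, for the people follow their example."
enc = tokenizer(passage, return_tensors="pt").to(model.device)

with torch.no_grad():
    # The loss is the mean token-level cross-entropy; exponentiating gives perplexity.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```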
OriginZak/first
OriginZak
"2024-06-12T15:50:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:50:54Z"
Entry not found
Ilya1422/SergeyMavrodi-RVC-RMPVE-40k
Ilya1422
"2024-06-12T15:51:43Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-12T15:50:59Z"
--- license: openrail ---
sgonzalezsilot/whisper-tiny-es-Nemo_new
sgonzalezsilot
"2024-06-12T16:28:38Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-12T15:52:04Z"
Entry not found
sgonzalezsilot/whisper-base-es-Nemo_new
sgonzalezsilot
"2024-06-12T16:30:21Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-12T15:52:07Z"
Entry not found
Rewolter/Rew
Rewolter
"2024-06-12T15:52:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:52:48Z"
Entry not found
PriceWang/maecg_aug
PriceWang
"2024-06-13T12:30:05Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-12T15:53:04Z"
--- license: mit ---
latihan/Groq_Langchain
latihan
"2024-06-12T15:56:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:53:11Z"
Entry not found
erbacher/zephyr-3b-rag-agent-webgpt
erbacher
"2024-06-12T15:54:27Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-12T15:54:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bezzam/tapecam-mirflickr-mwdn-8M
bezzam
"2024-06-20T11:09:53Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:54:25Z"
Entry not found
Louis-Dupont/Meta-Llama-3-8B-Instruct-fine-tuned-adapters
Louis-Dupont
"2024-06-14T06:44:17Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
"2024-06-12T15:56:59Z"
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B-Instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.2.dev0
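A minimal loading sketch for this adapter repository, assuming the adapters are standard LoRA weights compatible with `PeftModel.from_pretrained` (the base model id comes from the metadata above):

```python
# Load the base model, then attach the LoRA adapters from this repository.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "Louis-Dupont/Meta-Llama-3-8B-Instruct-fine-tuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Optionally fold the adapters into the base weights for faster inference
# (only applicable if the adapters are LoRA, which is assumed here).
model = model.merge_and_unload()
```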
anuran0/disease
anuran0
"2024-06-12T15:58:23Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-12T15:58:23Z"
--- license: mit ---
Perilo/Hdisoiejs
Perilo
"2024-06-12T15:59:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T15:59:47Z"
Entry not found
PKU-Alignment/ProgressGym-HistLlama3-8B-C017-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:45Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C017-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T16:00:16Z"
---
license: cc-by-4.0
datasets:
- PKU-Alignment/ProgressGym-HistText
- PKU-Alignment/ProgressGym-TimelessQA
base_model:
- PKU-Alignment/ProgressGym-HistLlama3-8B-C017-pretrain
- meta-llama/Meta-Llama-3-8B
---

# ProgressGym-HistLlama3-8B-C017-instruct

## Overview

#### The ProgressGym Framework

![Framework Diagram](./readme-assets/main-diagram.png)

**ProgressGym-HistLlama3-8B-C017-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in. To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087):

> Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
>
> We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.

#### ProgressGym-HistLlama3-8B-C017-instruct

ProgressGym-HistLlama3-8B-C017-instruct is one of the **36 historical language models** in the ProgressGym framework.

**ProgressGym-HistLlama3-8B-C017-instruct is under continual iteration.** New versions of the model, improving upon the current one, are being trained to reflect historical moral tendencies in ever more comprehensive ways.

**ProgressGym-HistLlama3-8B-C017-instruct is a 17th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it is continued-pretrained on the 17th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5442 | 0.2028 | 200 | 2.5552 |
| 2.5376 | 0.4057 | 400 | 2.5096 |
| 2.4487 | 0.6085 | 600 | 2.4831 |
| 2.5324 | 0.8114 | 800 | 2.4690 |
| 2.265 | 1.0142 | 1000 | 2.4733 |
| 2.3002 | 1.2170 | 1200 | 2.4736 |
| 2.29 | 1.4199 | 1400 | 2.4734 |
| 2.2566 | 1.6227 | 1600 | 2.4725 |
| 2.3052 | 1.8256 | 1800 | 2.4721 |
| 2.2702 | 2.0284 | 2000 | 2.4734 |
| 2.2411 | 2.2312 | 2200 | 2.4746 |
| 2.2413 | 2.4341 | 2400 | 2.4749 |
| 2.216 | 2.6369 | 2600 | 2.4749 |
| 2.2696 | 2.8398 | 2800 | 2.4747 |
| 2.2455 | 3.0426 | 3000 | 2.4752 |
| 2.216 | 3.2454 | 3200 | 2.4753 |
| 2.2348 | 3.4483 | 3400 | 2.4757 |
| 2.238 | 3.6511 | 3600 | 2.4753 |
| 2.2349 | 3.8540 | 3800 | 2.4752 |

Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume.
**ProgressGym-HistLlama3-8B-C017-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8222 | 0.4167 | 20 | 0.8593 |
| 0.8014 | 0.8333 | 40 | 0.8518 |
| 0.4422 | 1.25 | 60 | 0.8722 |
| 0.4551 | 1.6667 | 80 | 0.8555 |
| 0.3806 | 2.0833 | 100 | 0.8530 |
| 0.4011 | 2.5 | 120 | 0.8577 |
| 0.37 | 2.9167 | 140 | 0.8622 |
| 0.3626 | 3.3333 | 160 | 0.8659 |
| 0.3708 | 3.75 | 180 | 0.8687 |

## Links

- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
- **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*
- **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
- **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym)
- **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*

## Citation

If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below.

```text
@article{progressgym,
  title={ProgressGym: Alignment with a Millennium of Moral Progress},
  author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
  journal={arXiv preprint arXiv:2406.20087},
  eprint={2406.20087},
  eprinttype = {arXiv},
  year={2024}
}
```

## Ethics Statement

- **Copyright information of historical text data sources**:
  - Project Gutenberg, one of our four sources of historical text data, consists only of texts in the public domain.
  - For the text that we draw from the Internet Archive, we only include works uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
  - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
  - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress.
In the event of potential misuse of our dataset, we condemn any misuse attempt in the strongest possible terms, and will work with the research community on whistleblowing against such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
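## Example: Reproducing the Optimization Setup (Illustrative)

For readers re-implementing the training configuration listed above, the sketch below wires up the stated Adam hyperparameters and a polynomial decay schedule with 20 warmup steps, using a scheduler utility that exists in πŸ€— Transformers. The decay power, final learning rate, and total step count are not stated in this card, so the values marked below are assumptions; the HF Trainer's default optimizer is AdamW, which is what the sketch uses.

```python
# Recreates the stated optimizer/scheduler configuration (power and step count are assumed).
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the language model's parameters
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1.5e-5, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=20,       # lr_scheduler_warmup_steps from the card
    num_training_steps=3_900,  # approximate; implied by the step/epoch columns above
    power=1.0,                 # assumed; the card only says "polynomial"
)

for step in range(100):        # placeholder training loop
    optimizer.step()
    scheduler.step()
```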
jmzzomg/stopwords
jmzzomg
"2024-06-12T16:14:44Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T16:00:45Z"
--- license: apache-2.0 ---
carlisleking/Pixelcopter-PLE-v0
carlisleking
"2024-06-12T16:01:12Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-12T16:01:10Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 5.20 +/- 4.56 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
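For context, a Reinforce agent of this kind maximizes expected return by weighting each action's log-probability with its discounted return. A minimal sketch of that loss (an illustration in the spirit of Unit 4, not the uploaded training script; the function name is ours) is:

```python
# One REINFORCE update: weight each action's log-probability by its discounted return.
import torch

def reinforce_loss(log_probs: list[torch.Tensor], rewards: list[float], gamma: float = 0.99) -> torch.Tensor:
    returns, g = [], 0.0
    for r in reversed(rewards):  # discounted return G_t, computed backwards through the episode
        g = r + gamma * g
        returns.insert(0, g)
    returns_t = torch.tensor(returns)
    # Standardizing the returns is a common variance-reduction trick.
    returns_t = (returns_t - returns_t.mean()) / (returns_t.std() + 1e-8)
    return -(torch.stack(log_probs) * returns_t).sum()
```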
kawther1/whisperlargeev2
kawther1
"2024-06-18T15:26:33Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-12T16:02:03Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
scholl99/llama-3-8b-financeSA-qlora
scholl99
"2024-06-12T16:02:28Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-12T16:02:07Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** scholl99
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
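A minimal inference sketch, assuming the repository can be loaded directly with Unsloth's `FastLanguageModel` (if only LoRA adapters were pushed, load them onto the 4-bit base model instead); the sequence length and prompt are placeholders:

```python
# Load the 4-bit model with Unsloth and switch it to inference mode.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="scholl99/llama-3-8b-financeSA-qlora",
    max_seq_length=2048,  # assumed; not stated in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation path

inputs = tokenizer("The sentiment of this earnings report is", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```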
pwl15/llava-v1.5-13b-task-lora_001_new
pwl15
"2024-06-12T16:08:18Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-12T16:04:35Z"
Entry not found
Niggendar/waiC_v20
Niggendar
"2024-06-12T16:12:06Z"
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-12T16:04:45Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
magniolia/phi-2-basic-finance
magniolia
"2024-06-12T16:27:31Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:05:49Z"
Entry not found
MellisaA/whisper-small-FINAL
MellisaA
"2024-06-12T16:07:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:07:35Z"
Entry not found
PKU-Alignment/ProgressGym-HistLlama3-8B-C018-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:48Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C018-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T16:07:49Z"
---
license: cc-by-4.0
datasets:
- PKU-Alignment/ProgressGym-HistText
- PKU-Alignment/ProgressGym-TimelessQA
base_model:
- PKU-Alignment/ProgressGym-HistLlama3-8B-C018-pretrain
- meta-llama/Meta-Llama-3-8B
---

# ProgressGym-HistLlama3-8B-C018-instruct

## Overview

#### The ProgressGym Framework

![Framework Diagram](./readme-assets/main-diagram.png)

**ProgressGym-HistLlama3-8B-C018-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in. To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087):

> Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
>
> We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.

#### ProgressGym-HistLlama3-8B-C018-instruct

ProgressGym-HistLlama3-8B-C018-instruct is one of the **36 historical language models** in the ProgressGym framework.

**ProgressGym-HistLlama3-8B-C018-instruct is under continual iteration.** New versions of the model, improving upon the current one, are being trained to reflect historical moral tendencies in ever more comprehensive ways.

**ProgressGym-HistLlama3-8B-C018-instruct is an 18th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it is continued-pretrained on the 18th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3701 | 0.2186 | 200 | 2.3702 |
| 2.3183 | 0.4372 | 400 | 2.3160 |
| 2.2634 | 0.6557 | 600 | 2.2863 |
| 2.2522 | 0.8743 | 800 | 2.2706 |
| 2.0306 | 1.0929 | 1000 | 2.2777 |
| 2.0095 | 1.3115 | 1200 | 2.2760 |
| 2.0539 | 1.5301 | 1400 | 2.2746 |
| 2.0338 | 1.7486 | 1600 | 2.2743 |
| 2.0648 | 1.9672 | 1800 | 2.2737 |
| 2.0297 | 2.1858 | 2000 | 2.2766 |
| 2.0487 | 2.4044 | 2200 | 2.2767 |
| 2.0329 | 2.6230 | 2400 | 2.2770 |
| 2.0213 | 2.8415 | 2600 | 2.2766 |
| 2.0559 | 3.0601 | 2800 | 2.2771 |
| 2.0543 | 3.2787 | 3000 | 2.2773 |
| 2.0317 | 3.4973 | 3200 | 2.2772 |
| 1.988 | 3.7158 | 3400 | 2.2770 |
| 2.0355 | 3.9344 | 3600 | 2.2772 |

Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume.
**ProgressGym-HistLlama3-8B-C018-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8108 | 0.4167 | 20 | 0.8423 |
| 0.7995 | 0.8333 | 40 | 0.8555 |
| 0.4526 | 1.25 | 60 | 0.8816 |
| 0.4663 | 1.6667 | 80 | 0.8521 |
| 0.3927 | 2.0833 | 100 | 0.8507 |
| 0.4017 | 2.5 | 120 | 0.8561 |
| 0.368 | 2.9167 | 140 | 0.8608 |
| 0.3677 | 3.3333 | 160 | 0.8647 |
| 0.3635 | 3.75 | 180 | 0.8676 |

## Links

- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
- **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*
- **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
- **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym)
- **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*

## Citation

If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below.

```text
@article{progressgym,
  title={ProgressGym: Alignment with a Millennium of Moral Progress},
  author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
  journal={arXiv preprint arXiv:2406.20087},
  eprint={2406.20087},
  eprinttype = {arXiv},
  year={2024}
}
```

## Ethics Statement

- **Copyright information of historical text data sources**:
  - Project Gutenberg, one of our four sources of historical text data, consists only of texts in the public domain.
  - For the text that we draw from the Internet Archive, we only include works uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
  - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
  - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress.
In the event of potential misuse of our dataset, we condemn any misuse attempt in the strongest possible terms, and will work with the research community on whistleblowing against such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
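## Example: Sampling to the 300MB Cap (Illustrative)

The note above says the continued-pretraining corpus is randomly sampled down to a 300MB cap when a century's corpus is larger. A minimal sketch of such a sampling step (an illustration only, not the actual ProgressGym preprocessing code; the function name and skip-on-overshoot policy are assumptions) could look like this:

```python
# Randomly sample documents from a century's corpus until a byte budget is reached.
import random

def sample_to_budget(documents: list[str], budget_bytes: int = 300 * 1024 * 1024,
                     seed: int = 42) -> list[str]:
    rng = random.Random(seed)   # fixed seed so the sampled subset is reproducible
    shuffled = documents[:]
    rng.shuffle(shuffled)
    sampled, used = [], 0
    for doc in shuffled:
        size = len(doc.encode("utf-8"))
        if used + size > budget_bytes:
            continue            # skip documents that would overshoot the cap
        sampled.append(doc)
        used += size
    return sampled

corpus = ["A short treatise...", "An essay concerning..."]  # placeholder documents
subset = sample_to_budget(corpus, budget_bytes=1024)        # tiny budget for demonstration
```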
haturusinghe/xlm_r_large-baseline_model-v2-silvery-flower-7
haturusinghe
"2024-06-12T16:07:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:07:59Z"
Entry not found
haturusinghe/xlm_r_large-baseline_model-v2-true-rain-8
haturusinghe
"2024-06-12T16:08:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:08:14Z"
Entry not found
wolotar/AM
wolotar
"2024-06-12T16:10:51Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-12T16:08:58Z"
--- license: openrail ---
Youngaura/Coco
Youngaura
"2024-06-12T16:24:35Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:09:47Z"
Entry not found
khg-b/model-massp-mnist
khg-b
"2024-06-12T17:36:39Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-12T16:10:43Z"
# My MLP model This is my trained model demo for MaSSP.
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-80b
Augusto777
"2024-06-12T16:21:37Z"
0
1
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-12T16:10:56Z"
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-ve-U13-b-80b
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.782608695652174
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-ve-U13-b-80b

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6122
- Accuracy: 0.7826

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3855 | 0.1304 |
| 1.3852 | 2.0 | 13 | 1.3762 | 0.2826 |
| 1.3852 | 2.92 | 19 | 1.3521 | 0.2826 |
| 1.3565 | 4.0 | 26 | 1.2510 | 0.3478 |
| 1.2024 | 4.92 | 32 | 1.1528 | 0.3478 |
| 1.2024 | 6.0 | 39 | 1.0294 | 0.5 |
| 1.0453 | 6.92 | 45 | 0.9608 | 0.5217 |
| 0.8827 | 8.0 | 52 | 0.8801 | 0.6087 |
| 0.8827 | 8.92 | 58 | 0.9884 | 0.5652 |
| 0.7887 | 10.0 | 65 | 0.7927 | 0.6522 |
| 0.6795 | 10.92 | 71 | 0.7237 | 0.6522 |
| 0.6795 | 12.0 | 78 | 0.7250 | 0.6739 |
| 0.5777 | 12.92 | 84 | 0.7140 | 0.6957 |
| 0.496 | 14.0 | 91 | 0.8014 | 0.6957 |
| 0.496 | 14.92 | 97 | 0.8701 | 0.6739 |
| 0.4224 | 16.0 | 104 | 0.9384 | 0.6522 |
| 0.3744 | 16.92 | 110 | 0.7594 | 0.7174 |
| 0.3744 | 18.0 | 117 | 0.6122 | 0.7826 |
| 0.3775 | 18.92 | 123 | 0.8143 | 0.7174 |
| 0.3275 | 20.0 | 130 | 0.9981 | 0.6522 |
| 0.3275 | 20.92 | 136 | 0.8603 | 0.7174 |
| 0.3202 | 22.0 | 143 | 0.8412 | 0.6957 |
| 0.3202 | 22.92 | 149 | 0.8654 | 0.7174 |
| 0.2849 | 24.0 | 156 | 0.9650 | 0.6957 |
| 0.2518 | 24.92 | 162 | 0.8102 | 0.7609 |
| 0.2518 | 26.0 | 169 | 0.7203 | 0.7826 |
| 0.2467 | 26.92 | 175 | 0.9435 | 0.7391 |
| 0.2218 | 28.0 | 182 | 0.8905 | 0.7391 |
| 0.2218 | 28.92 | 188 | 1.0828 | 0.6957 |
| 0.2075 | 30.0 | 195 | 0.8936 | 0.7174 |
| 0.1893 | 30.92 | 201 | 0.8836 | 0.7826 |
| 0.1893 | 32.0 | 208 | 0.9692 | 0.7174 |
| 0.194 | 32.92 | 214 | 1.0390 | 0.7609 |
| 0.1739 | 34.0 | 221 | 0.8695 | 0.7609 |
| 0.1739 | 34.92 | 227 | 1.1836 | 0.6739 |
| 0.1895 | 36.0 | 234 | 1.0131 | 0.7391 |
| 0.1428 | 36.92 | 240 | 0.9618 | 0.7609 |
| 0.1428 | 38.0 | 247 | 0.9950 | 0.7609 |
| 0.1443 | 38.92 | 253 | 0.9113 | 0.7826 |
| 0.1574 | 40.0 | 260 | 0.9213 | 0.7174 |
| 0.1574 | 40.92 | 266 | 0.9437 | 0.7391 |
| 0.1442 | 42.0 | 273 | 0.9226 | 0.7609 |
| 0.1442 | 42.92 | 279 | 0.9430 | 0.7391 |
| 0.1186 | 44.0 | 286 | 0.9759 | 0.7826 |
| 0.1135 | 44.92 | 292 | 0.9651 | 0.7391 |
| 0.1135 | 46.0 | 299 | 0.9536 | 0.7609 |
| 0.1299 | 46.92 | 305 | 0.9118 | 0.7609 |
| 0.134 | 48.0 | 312 | 0.9848 | 0.7826 |
| 0.134 | 48.92 | 318 | 0.8641 | 0.7609 |
| 0.1418 | 50.0 | 325 | 1.0553 | 0.7609 |
| 0.1074 | 50.92 | 331 | 1.2511 | 0.6957 |
| 0.1074 | 52.0 | 338 | 1.0186 | 0.7391 |
| 0.1144 | 52.92 | 344 | 1.0467 | 0.7174 |
| 0.0999 | 54.0 | 351 | 0.9898 | 0.7391 |
| 0.0999 | 54.92 | 357 | 1.1780 | 0.7391 |
| 0.1131 | 56.0 | 364 | 1.0015 | 0.7609 |
| 0.1152 | 56.92 | 370 | 1.0759 | 0.7609 |
| 0.1152 | 58.0 | 377 | 1.1294 | 0.7174 |
| 0.1012 | 58.92 | 383 | 1.0894 | 0.7391 |
| 0.0938 | 60.0 | 390 | 1.0764 | 0.7391 |
| 0.0938 | 60.92 | 396 | 1.1784 | 0.7174 |
| 0.0944 | 62.0 | 403 | 1.1581 | 0.7174 |
| 0.0944 | 62.92 | 409 | 1.0444 | 0.7391 |
| 0.1015 | 64.0 | 416 | 1.0996 | 0.7391 |
| 0.0762 | 64.92 | 422 | 1.1235 | 0.7609 |
| 0.0762 | 66.0 | 429 | 1.0999 | 0.7391 |
| 0.0775 | 66.92 | 435 | 1.0776 | 0.7391 |
| 0.0787 | 68.0 | 442 | 1.0879 | 0.7391 |
| 0.0787 | 68.92 | 448 | 1.0913 | 0.7391 |
| 0.081 | 70.0 | 455 | 1.0558 | 0.7391 |
| 0.0749 | 70.92 | 461 | 1.0401 | 0.7391 |
| 0.0749 | 72.0 | 468 | 1.0539 | 0.7391 |
| 0.0841 | 72.92 | 474 | 1.0663 | 0.7391 |
| 0.0928 | 73.85 | 480 | 1.0712 | 0.7391 |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
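A minimal inference sketch for this checkpoint, using the standard `transformers` image-classification pipeline. The repo id below is a placeholder (the card does not state the Hub id of this model), and the image path is illustrative.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub id of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="<hub-user>/swin-tiny-patch4-window7-224-ve-U13-b-80b",
)

# The input may be a local path, URL, or PIL.Image; "example.jpg" is illustrative.
for pred in classifier("example.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```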
sanchit-gandhi/distil-llama-3-7b-fineweb-edu
sanchit-gandhi
"2024-06-12T16:11:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:11:11Z"
Entry not found
Vv29/Pans
Vv29
"2024-06-12T16:12:10Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:12:09Z"
Entry not found
fxmeng/PiSSA-Llama-2-13B-r128-5iter
fxmeng
"2024-06-13T02:14:52Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-12T16:13:08Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
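The auto-generated card above leaves the quickstart empty. The following is a hedged loading sketch inferred only from the repo tags (`llama`, `text-generation`, `4-bit`, `bitsandbytes`); it assumes a standard pre-quantized transformers checkpoint and requires the `bitsandbytes` package. The prompt is illustrative.

```python
from transformers import pipeline

# Assumes the repo is a pre-quantized (bitsandbytes 4-bit) checkpoint that
# transformers can load directly; requires bitsandbytes to be installed.
generator = pipeline(
    "text-generation",
    model="fxmeng/PiSSA-Llama-2-13B-r128-5iter",
    device_map="auto",
)

# Illustrative prompt only.
print(generator("Parameter-efficient fine-tuning works by", max_new_tokens=64)[0]["generated_text"])
```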
aigc11/tuiwen_lora
aigc11
"2024-06-13T23:33:46Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T16:13:11Z"
---
license: apache-2.0
---
PKU-Alignment/ProgressGym-HistLlama3-8B-C019-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:51Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C019-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T16:15:49Z"
---
license: cc-by-4.0
datasets:
- PKU-Alignment/ProgressGym-HistText
- PKU-Alignment/ProgressGym-TimelessQA
base_model:
- PKU-Alignment/ProgressGym-HistLlama3-8B-C019-pretrain
- meta-llama/Meta-Llama-3-8B
---

# ProgressGym-HistLlama3-8B-C019-instruct

## Overview

#### The ProgressGym Framework

![Framework Diagram](./readme-assets/main-diagram.png)

**ProgressGym-HistLlama3-8B-C019-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in.

To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087):

> Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
>
> We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.

#### ProgressGym-HistLlama3-8B-C019-instruct

ProgressGym-HistLlama3-8B-C019-instruct is one of the **36 historical language models** in the ProgressGym framework.

**ProgressGym-HistLlama3-8B-C019-instruct is under continual iteration.** Improving upon the current version, new versions of the model are currently being trained to reflect historical moral tendencies in ever more comprehensive ways.

**ProgressGym-HistLlama3-8B-C019-instruct is a 19th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it is continued-pretrained on the 19th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3809 | 0.1923 | 200 | 2.4207 |
| 2.3057 | 0.3846 | 400 | 2.3750 |
| 2.35 | 0.5769 | 600 | 2.3477 |
| 2.3291 | 0.7692 | 800 | 2.3324 |
| 2.2998 | 0.9615 | 1000 | 2.3237 |
| 2.1248 | 1.1538 | 1200 | 2.3361 |
| 2.1239 | 1.3462 | 1400 | 2.3344 |
| 2.1521 | 1.5385 | 1600 | 2.3338 |
| 2.1359 | 1.7308 | 1800 | 2.3336 |
| 2.0531 | 1.9231 | 2000 | 2.3332 |
| 2.0783 | 2.1154 | 2200 | 2.3357 |
| 2.0952 | 2.3077 | 2400 | 2.3360 |
| 2.1009 | 2.5 | 2600 | 2.3361 |
| 2.125 | 2.6923 | 2800 | 2.3360 |
| 2.1206 | 2.8846 | 3000 | 2.3360 |
| 2.0593 | 3.0769 | 3200 | 2.3363 |
| 2.0927 | 3.2692 | 3400 | 2.3365 |
| 2.093 | 3.4615 | 3600 | 2.3368 |
| 2.066 | 3.6538 | 3800 | 2.3363 |
| 2.1086 | 3.8462 | 4000 | 2.3362 |

Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume.

**ProgressGym-HistLlama3-8B-C019-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8164 | 0.4167 | 20 | 0.8431 |
| 0.7962 | 0.8333 | 40 | 0.8462 |
| 0.4335 | 1.25 | 60 | 0.8650 |
| 0.4578 | 1.6667 | 80 | 0.8533 |
| 0.3944 | 2.0833 | 100 | 0.8484 |
| 0.3997 | 2.5 | 120 | 0.8528 |
| 0.3752 | 2.9167 | 140 | 0.8573 |
| 0.3697 | 3.3333 | 160 | 0.8608 |
| 0.3636 | 3.75 | 180 | 0.8634 |

## Links

- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
- **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*
- **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
- **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym)
- **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*

## Citation

If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below.

```text
@article{progressgym,
  title={ProgressGym: Alignment with a Millennium of Moral Progress},
  author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
  journal={arXiv preprint arXiv:2406.20087},
  eprint={2406.20087},
  eprinttype = {arXiv},
  year={2024}
}
```

## Ethics Statement

- **Copyright information of historical text data sources**:
  - Project Gutenberg, one of the four sources of our historical text data, consists only of texts in the public domain.
  - For the text that we draw from Internet Archive, we only include texts uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
  - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
  - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress. In the event of potential misuse of our dataset, we condemn any misuse attempt to the strongest degree possible, and will work with the research community on whistleblowing for such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
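As a usage sketch for this instruct model: standard chat-style generation with `transformers`. This assumes the repo ships a Llama-3-style chat template (plausible given the base model, but not stated in the card); the question text is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PKU-Alignment/ProgressGym-HistLlama3-8B-C019-instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative question; assumes a Llama-3-style chat template in the repo.
messages = [{"role": "user", "content": "What duties does a person owe to their community?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```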
0CHAE/Llama3_legal_model
0CHAE
"2024-06-12T16:21:32Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T16:15:57Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
salahyahya/T5GECkaggle
salahyahya
"2024-06-12T16:21:45Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:21:45Z"
Entry not found
EuphoriaReccords/JIMIN
EuphoriaReccords
"2024-06-12T22:28:25Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-12T16:22:27Z"
---
license: openrail
---
yyykkk/stable-diffusion-all
yyykkk
"2024-06-12T16:23:50Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-12T16:23:50Z"
---
license: mit
---
Mplay/1
Mplay
"2024-06-12T16:23:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:23:57Z"
Entry not found
diffusepanda4/distilbert-base-uncased-finetuned-cola
diffusepanda4
"2024-06-12T16:24:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:24:21Z"
Entry not found
PKU-Alignment/ProgressGym-HistLlama3-8B-C020-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:53Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C020-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T16:25:04Z"
---
license: cc-by-4.0
datasets:
- PKU-Alignment/ProgressGym-HistText
- PKU-Alignment/ProgressGym-TimelessQA
base_model:
- PKU-Alignment/ProgressGym-HistLlama3-8B-C020-pretrain
- meta-llama/Meta-Llama-3-8B
---

# ProgressGym-HistLlama3-8B-C020-instruct

## Overview

#### The ProgressGym Framework

![Framework Diagram](./readme-assets/main-diagram.png)

**ProgressGym-HistLlama3-8B-C020-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in.

To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087):

> Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
>
> We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.

#### ProgressGym-HistLlama3-8B-C020-instruct

ProgressGym-HistLlama3-8B-C020-instruct is one of the **36 historical language models** in the ProgressGym framework.

**ProgressGym-HistLlama3-8B-C020-instruct is under continual iteration.** Improving upon the current version, new versions of the model are currently being trained to reflect historical moral tendencies in ever more comprehensive ways.

**ProgressGym-HistLlama3-8B-C020-instruct is a 20th-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it is continued-pretrained on the 20th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9087 | 0.4032 | 200 | 1.9717 |
| 1.8752 | 0.8065 | 400 | 1.9418 |
| 1.6383 | 1.2097 | 600 | 1.9440 |
| 1.7073 | 1.6129 | 800 | 1.9435 |
| 1.6699 | 2.0161 | 1000 | 1.9428 |
| 1.7212 | 2.4194 | 1200 | 1.9445 |
| 1.7346 | 2.8226 | 1400 | 1.9443 |
| 1.7028 | 3.2258 | 1600 | 1.9448 |
| 1.7383 | 3.6290 | 1800 | 1.9450 |

Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume.

**ProgressGym-HistLlama3-8B-C020-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8145 | 0.4167 | 20 | 0.8468 |
| 0.7939 | 0.8333 | 40 | 0.8432 |
| 0.4337 | 1.25 | 60 | 0.8653 |
| 0.4546 | 1.6667 | 80 | 0.8524 |
| 0.3886 | 2.0833 | 100 | 0.8477 |
| 0.3963 | 2.5 | 120 | 0.8523 |
| 0.3728 | 2.9167 | 140 | 0.8571 |
| 0.3681 | 3.3333 | 160 | 0.8608 |
| 0.3621 | 3.75 | 180 | 0.8637 |

## Links

- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
- **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*
- **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
- **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym)
- **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*

## Citation

If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below.

```text
@article{progressgym,
  title={ProgressGym: Alignment with a Millennium of Moral Progress},
  author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
  journal={arXiv preprint arXiv:2406.20087},
  eprint={2406.20087},
  eprinttype = {arXiv},
  year={2024}
}
```

## Ethics Statement

- **Copyright information of historical text data sources**:
  - Project Gutenberg, one of the four sources of our historical text data, consists only of texts in the public domain.
  - For the text that we draw from Internet Archive, we only include texts uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
  - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
  - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress. In the event of potential misuse of our dataset, we condemn any misuse attempt to the strongest degree possible, and will work with the research community on whistleblowing for such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
Pekka6655/Aaa
Pekka6655
"2024-06-12T16:26:37Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T16:26:37Z"
---
license: apache-2.0
---
Rychiy/Lohnabrechnung_Adapters
Rychiy
"2024-06-12T16:35:26Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-12T16:27:07Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** Rychiy
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
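The repo name suggests LoRA adapters rather than full weights. Under that assumption only, a hedged sketch that attaches PEFT-format adapters to the stated base model; the actual repo contents may differ.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model as stated in the card; loading this pre-quantized
# checkpoint requires the bitsandbytes package.
base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Assumption: the repo holds PEFT-format adapter weights.
model = PeftModel.from_pretrained(base, "Rychiy/Lohnabrechnung_Adapters")
```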
Augusto777/swin-base-patch4-window7-224-ve-U13-b-80c
Augusto777
"2024-06-12T16:33:15Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-12T16:27:32Z"
Entry not found
seraa/JapaneseDollLora
seraa
"2024-06-12T16:29:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:27:34Z"
Entry not found
ambor1011/nextgptm3_quantized
ambor1011
"2024-06-12T16:27:50Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-12T16:27:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JonathanGarza/unsloth-llama-3-8b-Instruct-bnb-4bit-8k-tok-context-Mexican-Federal-Laws-Inst-FineTuned-step1
JonathanGarza
"2024-06-12T16:29:03Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-12T16:28:16Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** JonathanGarza
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
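A hedged loading sketch for this fine-tune: since the base is a bnb-4bit Llama-3 variant, 4-bit quantized loading via bitsandbytes is assumed. The Spanish prompt about federal labor law is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "JonathanGarza/unsloth-llama-3-8b-Instruct-bnb-4bit-8k-tok-context-Mexican-Federal-Laws-Inst-FineTuned-step1"

# Assumption: the checkpoint is intended to run 4-bit via bitsandbytes.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, quantization_config=bnb, device_map="auto")

# Illustrative prompt only.
inputs = tokenizer(
    "Resume las obligaciones patronales segΓΊn la Ley Federal del Trabajo:",
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```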
stiucsib/gemma_kto_goat_prompt
stiucsib
"2024-06-12T16:29:46Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T16:28:17Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zaikus/mane
Zaikus
"2024-06-12T16:30:38Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:30:38Z"
Entry not found
yutocame/vit-base-oxford-iiit-pets
yutocame
"2024-06-13T15:47:26Z"
0
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-12T16:30:46Z"
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-oxford-iiit-pets

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2057
- Accuracy: 0.9378

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3677 | 1.0 | 370 | 0.3033 | 0.9188 |
| 0.211 | 2.0 | 740 | 0.2351 | 0.9283 |
| 0.1656 | 3.0 | 1110 | 0.2082 | 0.9323 |
| 0.1525 | 4.0 | 1480 | 0.2017 | 0.9310 |
| 0.1443 | 5.0 | 1850 | 0.2004 | 0.9364 |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
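A minimal inference sketch for this pet-breed classifier using the lower-level `transformers` API; the image path is illustrative.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "yutocame/vit-base-oxford-iiit-pets"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

# Illustrative input image.
image = Image.open("pet.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```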
JonathanGarza/unsloth-llama-3-8b-Instruct-bnb-4bit-8k-tok-context-Mexican-Federal-Laws-Inst-FineTuned-step2
JonathanGarza
"2024-06-12T18:05:31Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-12T16:31:25Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** JonathanGarza
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
PKU-Alignment/ProgressGym-HistLlama3-8B-C021-instruct-v0.1
PKU-Alignment
"2024-07-01T18:14:56Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "dataset:PKU-Alignment/ProgressGym-TimelessQA", "arxiv:2406.20087", "base_model:PKU-Alignment/ProgressGym-HistLlama3-8B-C021-pretrain", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T16:32:09Z"
---
license: cc-by-4.0
datasets:
- PKU-Alignment/ProgressGym-HistText
- PKU-Alignment/ProgressGym-TimelessQA
base_model:
- PKU-Alignment/ProgressGym-HistLlama3-8B-C021-pretrain
- meta-llama/Meta-Llama-3-8B
---

# ProgressGym-HistLlama3-8B-C021-instruct

## Overview

#### The ProgressGym Framework

![Framework Diagram](./readme-assets/main-diagram.png)

**ProgressGym-HistLlama3-8B-C021-instruct** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in.

To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087):

> Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
>
> We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.

#### ProgressGym-HistLlama3-8B-C021-instruct

ProgressGym-HistLlama3-8B-C021-instruct is one of the **36 historical language models** in the ProgressGym framework.

**ProgressGym-HistLlama3-8B-C021-instruct is under continual iteration.** Improving upon the current version, new versions of the model are currently being trained to reflect historical moral tendencies in ever more comprehensive ways.

**ProgressGym-HistLlama3-8B-C021-instruct is a 21st-century historical language model.** Based on [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), it is continued-pretrained on the 21st-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2572 | 0.4264 | 200 | 1.2612 |
| 1.1754 | 0.8529 | 400 | 1.2226 |
| 1.1662 | 1.2793 | 600 | 1.2202 |
| 1.1182 | 1.7058 | 800 | 1.2184 |
| 1.046 | 2.1322 | 1000 | 1.2190 |
| 1.0772 | 2.5586 | 1200 | 1.2190 |
| 1.0326 | 2.9851 | 1400 | 1.2188 |
| 1.013 | 3.4115 | 1600 | 1.2191 |
| 1.1103 | 3.8380 | 1800 | 1.2188 |

Note that the training data volume for the continued pretraining stage is capped at 300MB. When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume.

**ProgressGym-HistLlama3-8B-C021-instruct is an instruction-tuned language model.** It is tuned on [ProgressGym-TimelessQA](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-TimelessQA), using the following hyperparameters:

- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 20
- num_epochs: 4.0
- mixed_precision_training: Native AMP

... with the following training results:

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8056 | 0.4167 | 20 | 0.8398 |
| 0.7984 | 0.8333 | 40 | 0.8509 |
| 0.439 | 1.25 | 60 | 0.8703 |
| 0.4595 | 1.6667 | 80 | 0.8540 |
| 0.3986 | 2.0833 | 100 | 0.8511 |
| 0.3895 | 2.5 | 120 | 0.8557 |
| 0.3761 | 2.9167 | 140 | 0.8601 |
| 0.3652 | 3.3333 | 160 | 0.8633 |
| 0.3712 | 3.75 | 180 | 0.8667 |

## Links

- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
- **[Leaderboard & Interactive Playground]** PKU-Alignment/ProgressGym-LeaderBoard *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*
- **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
- **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym)
- **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)*

## Citation

If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below.

```text
@article{progressgym,
  title={ProgressGym: Alignment with a Millennium of Moral Progress},
  author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
  journal={arXiv preprint arXiv:2406.20087},
  eprint={2406.20087},
  eprinttype = {arXiv},
  year={2024}
}
```

## Ethics Statement

- **Copyright information of historical text data sources**:
  - Project Gutenberg, one of the four sources of our historical text data, consists only of texts in the public domain.
  - For the text that we draw from Internet Archive, we only include texts uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use.
  - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone".
  - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use.
- **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files.
- **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress. In the event of potential misuse of our dataset, we condemn any misuse attempt to the strongest degree possible, and will work with the research community on whistleblowing for such attempts.
- **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
eladiorocha/example-model
eladiorocha
"2024-06-12T16:49:16Z"
0
0
null
[ "arxiv:1910.09700", "license:mit", "region:us" ]
null
"2024-06-12T16:33:35Z"
--- license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mike058/soldier2221
Mike058
"2024-06-12T16:35:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-12T16:34:16Z"
---
license: apache-2.0
library_name: adapter-transformers
---

A military man throws a grenade into a house that stands in the forest
Niggendar/autod4StylePony_v12
Niggendar
"2024-06-12T16:45:50Z"
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-12T16:36:35Z"
---
library_name: diffusers
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
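Given the `diffusers:StableDiffusionXLPipeline` tag on this entry, a minimal loading sketch would plausibly look like the following. The repo id is taken from the entry above; the dtype, device, prompt, and output filename are illustrative assumptions, not documented usage.

```python
# Minimal sketch for a diffusers SDXL checkpoint; settings below are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

# Repo id from the entry above; fp16 and CUDA are convenience assumptions.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/autod4StylePony_v12",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Illustrative prompt; the card does not document prompting conventions.
image = pipe("a scenic mountain landscape at sunset").images[0]
image.save("output.png")
```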
Theed67/111
Theed67
"2024-06-12T16:36:37Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-12T16:36:37Z"
---
license: apache-2.0
---
Charlie911/OpenELM-3B-Instruct-CP-SFT-llama3-generated-v1
Charlie911
"2024-06-12T18:12:56Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-12T16:37:11Z"
Entry not found
Hhdjsjsv/Hfhjcvb
Hhdjsjsv
"2024-06-12T16:37:11Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-12T16:37:11Z"
---
license: openrail
---
Augusto777/swinv2-tiny-patch4-window8-256-ve-U13-b-80
Augusto777
"2024-06-12T17:09:11Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-12T16:37:37Z"
---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-ve-U13-b-80
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7391304347826086
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swinv2-tiny-patch4-window8-256-ve-U13-b-80

This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7882
- Accuracy: 0.7391

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.92  | 6    | 1.3858          | 0.1304   |
| 1.3856        | 2.0   | 13   | 1.3777          | 0.3696   |
| 1.3856        | 2.92  | 19   | 1.3488          | 0.2391   |
| 1.361         | 4.0   | 26   | 1.2503          | 0.2826   |
| 1.2088        | 4.92  | 32   | 1.1317          | 0.4130   |
| 1.2088        | 6.0   | 39   | 1.0244          | 0.4565   |
| 1.0729        | 6.92  | 45   | 1.0413          | 0.4565   |
| 0.9554        | 8.0   | 52   | 0.9286          | 0.5652   |
| 0.9554        | 8.92  | 58   | 0.9103          | 0.5652   |
| 0.8221        | 10.0  | 65   | 0.8519          | 0.6522   |
| 0.732         | 10.92 | 71   | 0.8300          | 0.5870   |
| 0.732         | 12.0  | 78   | 0.8103          | 0.6304   |
| 0.6491        | 12.92 | 84   | 0.9533          | 0.5870   |
| 0.5724        | 14.0  | 91   | 0.7882          | 0.7391   |
| 0.5724        | 14.92 | 97   | 0.8072          | 0.6957   |
| 0.5305        | 16.0  | 104  | 0.7651          | 0.7391   |
| 0.4879        | 16.92 | 110  | 0.7379          | 0.7174   |
| 0.4879        | 18.0  | 117  | 0.7590          | 0.6739   |
| 0.4346        | 18.92 | 123  | 0.9283          | 0.6739   |
| 0.3671        | 20.0  | 130  | 1.0188          | 0.6304   |
| 0.3671        | 20.92 | 136  | 0.8959          | 0.7391   |
| 0.3725        | 22.0  | 143  | 0.9502          | 0.6957   |
| 0.3725        | 22.92 | 149  | 0.9627          | 0.6522   |
| 0.3321        | 24.0  | 156  | 0.9619          | 0.6957   |
| 0.3376        | 24.92 | 162  | 1.0459          | 0.6739   |
| 0.3376        | 26.0  | 169  | 1.0167          | 0.6522   |
| 0.3699        | 26.92 | 175  | 0.9949          | 0.6304   |
| 0.3098        | 28.0  | 182  | 0.9944          | 0.6739   |
| 0.3098        | 28.92 | 188  | 1.0860          | 0.6304   |
| 0.253         | 30.0  | 195  | 1.1721          | 0.6522   |
| 0.2615        | 30.92 | 201  | 1.1626          | 0.6739   |
| 0.2615        | 32.0  | 208  | 1.2464          | 0.6304   |
| 0.242         | 32.92 | 214  | 1.2179          | 0.6522   |
| 0.2173        | 34.0  | 221  | 1.2407          | 0.6304   |
| 0.2173        | 34.92 | 227  | 1.1585          | 0.6739   |
| 0.2305        | 36.0  | 234  | 1.3048          | 0.6522   |
| 0.2114        | 36.92 | 240  | 1.1776          | 0.6522   |
| 0.2114        | 38.0  | 247  | 1.1460          | 0.6522   |
| 0.2243        | 38.92 | 253  | 1.2424          | 0.6957   |
| 0.1822        | 40.0  | 260  | 1.2804          | 0.6739   |
| 0.1822        | 40.92 | 266  | 1.3472          | 0.6739   |
| 0.2065        | 42.0  | 273  | 1.3632          | 0.6739   |
| 0.2065        | 42.92 | 279  | 1.2832          | 0.6739   |
| 0.1942        | 44.0  | 286  | 1.3500          | 0.6739   |
| 0.1699        | 44.92 | 292  | 1.3242          | 0.6739   |
| 0.1699        | 46.0  | 299  | 1.3189          | 0.6957   |
| 0.1764        | 46.92 | 305  | 1.2840          | 0.6739   |
| 0.1771        | 48.0  | 312  | 1.3069          | 0.6957   |
| 0.1771        | 48.92 | 318  | 1.1585          | 0.6957   |
| 0.2095        | 50.0  | 325  | 1.3702          | 0.6957   |
| 0.1404        | 50.92 | 331  | 1.3539          | 0.6957   |
| 0.1404        | 52.0  | 338  | 1.3723          | 0.6957   |
| 0.1449        | 52.92 | 344  | 1.3877          | 0.6957   |
| 0.1348        | 54.0  | 351  | 1.3381          | 0.6739   |
| 0.1348        | 54.92 | 357  | 1.3700          | 0.6739   |
| 0.1683        | 56.0  | 364  | 1.2871          | 0.6957   |
| 0.1577        | 56.92 | 370  | 1.3214          | 0.6957   |
| 0.1577        | 58.0  | 377  | 1.3992          | 0.6522   |
| 0.1474        | 58.92 | 383  | 1.3800          | 0.6522   |
| 0.1267        | 60.0  | 390  | 1.2535          | 0.6739   |
| 0.1267        | 60.92 | 396  | 1.3200          | 0.6739   |
| 0.1171        | 62.0  | 403  | 1.3730          | 0.6739   |
| 0.1171        | 62.92 | 409  | 1.3678          | 0.6739   |
| 0.1461        | 64.0  | 416  | 1.3788          | 0.6739   |
| 0.1124        | 64.92 | 422  | 1.3944          | 0.6739   |
| 0.1124        | 66.0  | 429  | 1.3724          | 0.6739   |
| 0.1168        | 66.92 | 435  | 1.3553          | 0.6522   |
| 0.1243        | 68.0  | 442  | 1.3829          | 0.6739   |
| 0.1243        | 68.92 | 448  | 1.4040          | 0.6739   |
| 0.1375        | 70.0  | 455  | 1.4127          | 0.6522   |
| 0.1017        | 70.92 | 461  | 1.4070          | 0.6522   |
| 0.1017        | 72.0  | 468  | 1.3989          | 0.6739   |
| 0.1346        | 72.92 | 474  | 1.3995          | 0.6739   |
| 0.1382        | 73.85 | 480  | 1.3988          | 0.6739   |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
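As a hedged illustration of how the hyperparameters above map onto the `transformers` Trainer API, a sketch follows. Only the `TrainingArguments` values are taken from the card; the dataset location, preprocessing, and collation are hypothetical placeholders standing in for whatever this training run actually used.

```python
# Minimal sketch of a training setup matching the hyperparameters above.
# The data_dir is a hypothetical placeholder; only the TrainingArguments
# values mirror the card.
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

checkpoint = "microsoft/swinv2-tiny-patch4-window8-256"
dataset = load_dataset("imagefolder", data_dir="path/to/your/images")  # placeholder
labels = dataset["train"].features["label"].names

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # re-initialize the classification head
)

def transform(batch):
    # Resize and normalize images the way the base checkpoint expects.
    batch["pixel_values"] = processor(
        [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
    )["pixel_values"]
    return batch

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    # Stack per-example tensors into a batch for the Trainer.
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["label"] for ex in examples]),
    }

args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-ve-U13-b-80",
    learning_rate=5.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 x 4 = total train batch size 128
    num_train_epochs=80,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    seed=42,
    remove_unused_columns=False,  # keep the raw "image" column for transform()
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    data_collator=collate_fn,
)
trainer.train()
```

Note that the default Trainer optimizer (AdamW with betas=(0.9, 0.999) and epsilon=1e-08) already matches the optimizer settings listed in the card, so no explicit optimizer configuration is needed in this sketch.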