Dataset Viewer
Auto-converted to Parquet

| Column | Dtype | Range / Values |
|:--|:--|:--|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-02 06:27:36 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 407 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-02 06:27:06 |
| card | string | length 11 to 1.01M |
Setpember/Jon_GPT2L_DPO_3props_epi_point5
Setpember
"2025-03-25T22:07:29"
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "trl", "dpo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-25T22:05:54"
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
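The getting-started section of this auto-generated card is empty. Going only by the repo's tags (`transformers`, `gpt2`, `text-generation`), a minimal loading sketch might look like the following; the prompt and generation settings are illustrative assumptions, not documented values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes a standard transformers text-generation checkpoint, as the tags suggest.
model_id = "Setpember/Jon_GPT2L_DPO_3props_epi_point5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # max_new_tokens is an arbitrary choice
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```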
ModelCloud/GRIN-MoE-gptq-4bit
ModelCloud
"2024-09-19T10:17:43"
5
6
null
[ "safetensors", "grinmoe", "gptq", "4bit", "int4", "gptqmodel", "modelcloud", "custom_code", "4-bit", "region:us" ]
null
"2024-09-19T09:51:48"
--- tags: - gptq - 4bit - int4 - gptqmodel - modelcloud --- This model has been quantized using [GPTQModel](https://github.com/ModelCloud/GPTQModel). - **bits**: 4 - **group_size**: 128 - **desc_act**: false - **static_groups**: false - **sym**: true - **lm_head**: false - **damp_percent**: 0.0025 - **damp_auto_increment**: 0.0015 - **true_sequential**: true - **model_name_or_path**: "" - **model_file_base_name**: "model" - **quant_method**: "gptq" - **checkpoint_format**: "gptq" - **meta**: - **quantizer**: "gptqmodel:1.0.3-dev0" ## Example: ```python from transformers import AutoTokenizer from gptqmodel import GPTQModel model_name = "ModelCloud/GRIN-MoE-gptq-4bit" prompt = [ {"role": "system", "content": "You are GRIN-MoE model from microsoft, a helpful assistant."}, {"role": "user", "content": "I am in Shanghai, preparing to visit the natural history museum. Can you tell me the best way to"} ] tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = GPTQModel.from_quantized(model_name, trust_remote_code=True) input_tensor = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors="pt") outputs = model.generate(input_ids=input_tensor.to(model.device), max_new_tokens=100) result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True) print(result) ``` ## Lm_eval result: | Tasks | Metric | | GRIN-MoE | GRIN-MoE-gptq-4bit | | ------------------------------------- | ---------- | --- | -------- | ------------------ | | arc_challenge | acc | ↑ | 0.6408 | 0.6425 | | | acc_norm | ↑ | 0.6561 | 0.6587 | | arc_easy | acc | ↑ | 0.8645 | 0.8683 | | | acc_norm | ↑ | 0.8422 | 0.846 | | boolq | acc | ↑ | 0.8820 | 0.8765 | | hellaswag | acc | ↑ | 0.6972 | 0.6891 | | | acc_norm | ↑ | 0.8518 | 0.8486 | | lambada_openai | acc | ↑ | 0.7058 | 0.7068 | | | perplexity | ↓ | 3.4568 | 3.5732 | | mmlu | acc | ↑ | 0.7751 | 0.7706 | | - humanities | acc | ↑ | 0.7394 | 0.7384 | | - formal_logic | acc | ↑ | 0.6429 | 0.6746 | | - high_school_european_history | acc | ↑ | 0.8606 | 0.8364 | | - high_school_us_history | acc | ↑ | 0.9118 | 0.902 | | - high_school_world_history | acc | ↑ | 0.8903 | 0.8734 | | - international_law | acc | ↑ | 0.9256 | 0.9091 | | - jurisprudence | acc | ↑ | 0.8426 | 0.8519 | | - logical_fallacies | acc | ↑ | 0.8344 | 0.8528 | | - moral_disputes | acc | ↑ | 0.7977 | 0.8208 | | - moral_scenarios | acc | ↑ | 0.6961 | 0.6849 | | - philosophy | acc | ↑ | 0.8199 | 0.8071 | | - prehistory | acc | ↑ | 0.8457 | 0.8426 | | - professional_law | acc | ↑ | 0.6173 | 0.6193 | | - world_religions | acc | ↑ | 0.8480 | 0.8655 | | - other | acc | ↑ | 0.8130 | 0.805 | | - business_ethics | acc | ↑ | 0.8100 | 0.78 | | - clinical_knowledge | acc | ↑ | 0.8415 | 0.8302 | | - college_medicine | acc | ↑ | 0.7514 | 0.7457 | | - global_facts | acc | ↑ | 0.5700 | 0.54 | | - human_aging | acc | ↑ | 0.7803 | 0.7668 | | - management | acc | ↑ | 0.8447 | 0.8447 | | - marketing | acc | ↑ | 0.9145 | 0.9103 | | - medical_genetics | acc | ↑ | 0.9200 | 0.89 | | - miscellaneous | acc | ↑ | 0.8966 | 0.8927 | | - nutrition | acc | ↑ | 0.8333 | 0.8268 | | - professional_accounting | acc | ↑ | 0.6489 | 0.656 | | - professional_medicine | acc | ↑ | 0.8750 | 0.8603 | | - virology | acc | ↑ | 0.5422 | 0.5361 | | - social sciences | acc | ↑ | 0.8638 | 0.8544 | | - econometrics | acc | ↑ | 0.5789 | 0.5789 | | - high_school_geography | acc | ↑ | 0.9091 | 0.8788 | | - high_school_government_and_politics | acc | ↑ | 0.9585 | 0.943 | | - high_school_macroeconomics | 
acc | ↑ | 0.8308 | 0.8103 | | - high_school_microeconomics | acc | ↑ | 0.9328 | 0.9286 | | - high_school_psychology | acc | ↑ | 0.9321 | 0.9303 | | - human_sexuality | acc | ↑ | 0.8779 | 0.8626 | | - professional_psychology | acc | ↑ | 0.8382 | 0.8219 | | - public_relations | acc | ↑ | 0.7545 | 0.7727 | | - security_studies | acc | ↑ | 0.7878 | 0.7918 | | - sociology | acc | ↑ | 0.8905 | 0.8955 | | - us_foreign_policy | acc | ↑ | 0.9000 | 0.88 | | - stem | acc | ↑ | 0.7044 | 0.7031 | | - abstract_algebra | acc | ↑ | 0.5000 | 0.45 | | - anatomy | acc | ↑ | 0.7407 | 0.7481 | | - astronomy | acc | ↑ | 0.8618 | 0.8618 | | - college_biology | acc | ↑ | 0.8889 | 0.875 | | - college_chemistry | acc | ↑ | 0.6100 | 0.59 | | - college_computer_science | acc | ↑ | 0.7100 | 0.67 | | - college_mathematics | acc | ↑ | 0.5100 | 0.58 | | - college_physics | acc | ↑ | 0.4608 | 0.4608 | | - computer_security | acc | ↑ | 0.8200 | 0.82 | | - conceptual_physics | acc | ↑ | 0.7787 | 0.766 | | - electrical_engineering | acc | ↑ | 0.6828 | 0.6828 | | - elementary_mathematics | acc | ↑ | 0.7566 | 0.7593 | | - high_school_biology | acc | ↑ | 0.9000 | 0.9097 | | - high_school_chemistry | acc | ↑ | 0.6650 | 0.665 | | - high_school_computer_science | acc | ↑ | 0.8700 | 0.86 | | - high_school_mathematics | acc | ↑ | 0.4370 | 0.4296 | | - high_school_physics | acc | ↑ | 0.5960 | 0.5894 | | - high_school_statistics | acc | ↑ | 0.7176 | 0.7222 | | - machine_learning | acc | ↑ | 0.6071 | 0.6339 | | openbookqa | acc | ↑ | 0.3920 | 0.386 | | | acc_norm | ↑ | 0.4900 | 0.486 | | piqa | acc | ↑ | 0.8183 | 0.8166 | | | acc_norm | ↑ | 0.8205 | 0.8177 | | rte | acc | ↑ | 0.8014 | 0.7834 | | truthfulqa_mc1 | acc | ↑ | 0.3880 | 0.399 | | winogrande | acc | ↑ | 0.7940 | 0.768 | | | | | | | | Groups | Metric | | Value | Value | | mmlu | acc | ↑ | 0.7751 | 0.7706 | | - humanities | acc | ↑ | 0.7394 | 0.7384 | | - other | acc | ↑ | 0.8130 | 0.805 | | - social sciences | acc | ↑ | 0.8638 | 0.8544 | | - stem | acc | ↑ | 0.7044 | 0.7031 |
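For reference, the quantization settings listed at the top of this card could be reproduced with GPTQModel roughly as follows. This is a sketch, not the authors' script: the exact API differs across GPTQModel versions, the base model id `microsoft/GRIN-MoE` is an assumption, and `calibration_dataset` is a placeholder you must supply.

```python
from gptqmodel import GPTQModel, QuantizeConfig

# Mirrors the settings listed in the card above.
quant_config = QuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False,
    static_groups=False,
    sym=True,
    damp_percent=0.0025,
)

# Base model id is an assumption; calibration_dataset is a placeholder
# (a list of calibration samples for the quantizer).
model = GPTQModel.from_pretrained("microsoft/GRIN-MoE", quant_config, trust_remote_code=True)
model.quantize(calibration_dataset)
model.save_quantized("GRIN-MoE-gptq-4bit")
```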
NikolayKozloff/PLLuM-12B-nc-instruct-Q6_K-GGUF
NikolayKozloff
"2025-02-24T13:46:13"
0
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "pl", "base_model:CYFRAGOVPL/PLLuM-12B-nc-instruct", "base_model:quantized:CYFRAGOVPL/PLLuM-12B-nc-instruct", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-24T13:45:29"
--- license: cc-by-nc-4.0 language: - pl tags: - llama-cpp - gguf-my-repo base_model: CYFRAGOVPL/PLLuM-12B-nc-instruct --- # NikolayKozloff/PLLuM-12B-nc-instruct-Q6_K-GGUF This model was converted to GGUF format from [`CYFRAGOVPL/PLLuM-12B-nc-instruct`](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-nc-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-nc-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/PLLuM-12B-nc-instruct-Q6_K-GGUF --hf-file pllum-12b-nc-instruct-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/PLLuM-12B-nc-instruct-Q6_K-GGUF --hf-file pllum-12b-nc-instruct-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/PLLuM-12B-nc-instruct-Q6_K-GGUF --hf-file pllum-12b-nc-instruct-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/PLLuM-12B-nc-instruct-Q6_K-GGUF --hf-file pllum-12b-nc-instruct-q6_k.gguf -c 2048 ```
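As an alternative to the CLI and server invocations above, the same file can be loaded from Python. This is a sketch assuming the llama-cpp-python bindings are installed (`pip install llama-cpp-python`, plus `huggingface_hub` for the download).

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use.
llm = Llama.from_pretrained(
    repo_id="NikolayKozloff/PLLuM-12B-nc-instruct-Q6_K-GGUF",
    filename="pllum-12b-nc-instruct-q6_k.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```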
uproai/Rose-2x7B-GGUF
uproai
"2024-02-26T11:55:30"
2
0
null
[ "gguf", "moe", "frankenmoe", "merge", "mergekit", "uproai/ros-7b-v1", "WizardLM/WizardMath-7B-V1.1", "base_model:WizardLMTeam/WizardMath-7B-V1.1", "base_model:merge:WizardLMTeam/WizardMath-7B-V1.1", "base_model:uproai/ros-7b-v1", "base_model:merge:uproai/ros-7b-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-02-26T11:50:52"
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - uproai/ros-7b-v1 - WizardLM/WizardMath-7B-V1.1 base_model: - uproai/ros-7b-v1 - WizardLM/WizardMath-7B-V1.1 --- # Rose-2x7B-GGUF Rose-2x7B-GGUF is the GGUF version of [Rose-2x7B](https://huggingface.co/uproai/Rose-2x7B), which is a Mixture of Experts (MoE) made with the following models using [Mergekit](https://github.com/cg123/mergekit): * [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP) * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) ## 🧩 Configuration ```yaml base_model: uproai/ros-7b-v1 experts: - source_model: maywell/PiVoT-0.1-Starling-LM-RP positive_prompts: - "storywriting" - "write" - "scene" - "story" - "character" - source_model: WizardLM/WizardMath-7B-V1.1 positive_prompts: - "reason" - "math" - "mathematics" - "solve" - "count" tokenizer_source: union ```
AppleHarem/keli
AppleHarem
"2023-12-28T09:28:46"
0
0
null
[ "art", "text-to-image", "license:mit", "region:us" ]
text-to-image
"2023-12-28T08:21:50"
--- license: mit pipeline_tag: text-to-image tags: - art --- # LoRA of keli This model was trained with [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs), and the WebUI panel is provided by [LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [LittleApple-fp16/SpiritForeseerMix](https://huggingface.co/LittleApple-fp16/SpiritForeseerMix). The trigger words are: 1. `keli` 2. `bangs, klee_\(genshin_impact\), twintails, ahoge, pointy_ears, low_twintails, long_hair, hair_between_eyes, hat, smile, sidelocks, red_headwear, open_mouth, clover_print, hat_feather, cabbie_hat, light_brown_hair, bag, :d, backpack, hat_ornament, red_eyes, orange_eyes, blonde_hair` This model is not recommended for the following groups, with our regrets: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 6. Individuals who feel that writing a WebUI is meaningless, or who are impatient with it. 
These are available epochs: | Epochs | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | cheongsam | free | hanfu | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-----------------------------------------------------|:---------------------------------------------|:----------------------------------------------------|:---------------------------------------------------|:-----------------------------------------|:-------------------------------------------|:-----------------------------------------|:-----------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-----------------------------------------|:---------------------------------------------| | **10** | **0.965** | [**Download**](000010/keli-000010.safetensors) | ![pattern_1-000010](000010/previews/pattern_1.png) | ![pattern_2-000010](000010/previews/pattern_2.png) | ![pattern_3-000010](000010/previews/pattern_3.png) | ![pattern_4-000010](000010/previews/pattern_4.png) | ![pattern_5-000010](000010/previews/pattern_5.png) | ![pattern_6-000010](000010/previews/pattern_6.png) | ![pattern_7-000010](000010/previews/pattern_7.png) | ![pattern_8-000010](000010/previews/pattern_8.png) | ![pattern_9-000010](000010/previews/pattern_9.png) | ![pattern_10-000010](000010/previews/pattern_10.png) | ![bikini-000010](000010/previews/bikini.png) | [<NSFW, click to see>](000010/previews/bondage.png) | ![cheongsam-000010](000010/previews/cheongsam.png) | ![free-000010](000010/previews/free.png) | ![hanfu-000010](000010/previews/hanfu.png) | ![maid-000010](000010/previews/maid.png) | ![miko-000010](000010/previews/miko.png) | [<NSFW, click to see>](000010/previews/nude.png) | [<NSFW, click to see>](000010/previews/nude2.png) | ![suit-000010](000010/previews/suit.png) | ![yukata-000010](000010/previews/yukata.png) | | 9 | 0.965 | [Download](000009/keli-000009.safetensors) | ![pattern_1-000009](000009/previews/pattern_1.png) | ![pattern_2-000009](000009/previews/pattern_2.png) | ![pattern_3-000009](000009/previews/pattern_3.png) | ![pattern_4-000009](000009/previews/pattern_4.png) | ![pattern_5-000009](000009/previews/pattern_5.png) | ![pattern_6-000009](000009/previews/pattern_6.png) | ![pattern_7-000009](000009/previews/pattern_7.png) | ![pattern_8-000009](000009/previews/pattern_8.png) | ![pattern_9-000009](000009/previews/pattern_9.png) | ![pattern_10-000009](000009/previews/pattern_10.png) | ![bikini-000009](000009/previews/bikini.png) | [<NSFW, click to see>](000009/previews/bondage.png) | ![cheongsam-000009](000009/previews/cheongsam.png) | ![free-000009](000009/previews/free.png) | ![hanfu-000009](000009/previews/hanfu.png) | ![maid-000009](000009/previews/maid.png) | ![miko-000009](000009/previews/miko.png) | [<NSFW, click to see>](000009/previews/nude.png) | [<NSFW, click to see>](000009/previews/nude2.png) | 
![suit-000009](000009/previews/suit.png) | ![yukata-000009](000009/previews/yukata.png) | | 8 | 0.965 | [Download](000008/keli-000008.safetensors) | ![pattern_1-000008](000008/previews/pattern_1.png) | ![pattern_2-000008](000008/previews/pattern_2.png) | ![pattern_3-000008](000008/previews/pattern_3.png) | ![pattern_4-000008](000008/previews/pattern_4.png) | ![pattern_5-000008](000008/previews/pattern_5.png) | ![pattern_6-000008](000008/previews/pattern_6.png) | ![pattern_7-000008](000008/previews/pattern_7.png) | ![pattern_8-000008](000008/previews/pattern_8.png) | ![pattern_9-000008](000008/previews/pattern_9.png) | ![pattern_10-000008](000008/previews/pattern_10.png) | ![bikini-000008](000008/previews/bikini.png) | [<NSFW, click to see>](000008/previews/bondage.png) | ![cheongsam-000008](000008/previews/cheongsam.png) | ![free-000008](000008/previews/free.png) | ![hanfu-000008](000008/previews/hanfu.png) | ![maid-000008](000008/previews/maid.png) | ![miko-000008](000008/previews/miko.png) | [<NSFW, click to see>](000008/previews/nude.png) | [<NSFW, click to see>](000008/previews/nude2.png) | ![suit-000008](000008/previews/suit.png) | ![yukata-000008](000008/previews/yukata.png) | | 7 | 0.965 | [Download](000007/keli-000007.safetensors) | ![pattern_1-000007](000007/previews/pattern_1.png) | ![pattern_2-000007](000007/previews/pattern_2.png) | ![pattern_3-000007](000007/previews/pattern_3.png) | ![pattern_4-000007](000007/previews/pattern_4.png) | ![pattern_5-000007](000007/previews/pattern_5.png) | ![pattern_6-000007](000007/previews/pattern_6.png) | ![pattern_7-000007](000007/previews/pattern_7.png) | ![pattern_8-000007](000007/previews/pattern_8.png) | ![pattern_9-000007](000007/previews/pattern_9.png) | ![pattern_10-000007](000007/previews/pattern_10.png) | ![bikini-000007](000007/previews/bikini.png) | [<NSFW, click to see>](000007/previews/bondage.png) | ![cheongsam-000007](000007/previews/cheongsam.png) | ![free-000007](000007/previews/free.png) | ![hanfu-000007](000007/previews/hanfu.png) | ![maid-000007](000007/previews/maid.png) | ![miko-000007](000007/previews/miko.png) | [<NSFW, click to see>](000007/previews/nude.png) | [<NSFW, click to see>](000007/previews/nude2.png) | ![suit-000007](000007/previews/suit.png) | ![yukata-000007](000007/previews/yukata.png) | | 6 | 0.964 | [Download](000006/keli-000006.safetensors) | ![pattern_1-000006](000006/previews/pattern_1.png) | ![pattern_2-000006](000006/previews/pattern_2.png) | ![pattern_3-000006](000006/previews/pattern_3.png) | ![pattern_4-000006](000006/previews/pattern_4.png) | ![pattern_5-000006](000006/previews/pattern_5.png) | ![pattern_6-000006](000006/previews/pattern_6.png) | ![pattern_7-000006](000006/previews/pattern_7.png) | ![pattern_8-000006](000006/previews/pattern_8.png) | ![pattern_9-000006](000006/previews/pattern_9.png) | ![pattern_10-000006](000006/previews/pattern_10.png) | ![bikini-000006](000006/previews/bikini.png) | [<NSFW, click to see>](000006/previews/bondage.png) | ![cheongsam-000006](000006/previews/cheongsam.png) | ![free-000006](000006/previews/free.png) | ![hanfu-000006](000006/previews/hanfu.png) | ![maid-000006](000006/previews/maid.png) | ![miko-000006](000006/previews/miko.png) | [<NSFW, click to see>](000006/previews/nude.png) | [<NSFW, click to see>](000006/previews/nude2.png) | ![suit-000006](000006/previews/suit.png) | ![yukata-000006](000006/previews/yukata.png) | | 5 | 0.964 | [Download](000005/keli-000005.safetensors) | ![pattern_1-000005](000005/previews/pattern_1.png) | 
![pattern_2-000005](000005/previews/pattern_2.png) | ![pattern_3-000005](000005/previews/pattern_3.png) | ![pattern_4-000005](000005/previews/pattern_4.png) | ![pattern_5-000005](000005/previews/pattern_5.png) | ![pattern_6-000005](000005/previews/pattern_6.png) | ![pattern_7-000005](000005/previews/pattern_7.png) | ![pattern_8-000005](000005/previews/pattern_8.png) | ![pattern_9-000005](000005/previews/pattern_9.png) | ![pattern_10-000005](000005/previews/pattern_10.png) | ![bikini-000005](000005/previews/bikini.png) | [<NSFW, click to see>](000005/previews/bondage.png) | ![cheongsam-000005](000005/previews/cheongsam.png) | ![free-000005](000005/previews/free.png) | ![hanfu-000005](000005/previews/hanfu.png) | ![maid-000005](000005/previews/maid.png) | ![miko-000005](000005/previews/miko.png) | [<NSFW, click to see>](000005/previews/nude.png) | [<NSFW, click to see>](000005/previews/nude2.png) | ![suit-000005](000005/previews/suit.png) | ![yukata-000005](000005/previews/yukata.png) | | 4 | 0.963 | [Download](000004/keli-000004.safetensors) | ![pattern_1-000004](000004/previews/pattern_1.png) | ![pattern_2-000004](000004/previews/pattern_2.png) | ![pattern_3-000004](000004/previews/pattern_3.png) | ![pattern_4-000004](000004/previews/pattern_4.png) | ![pattern_5-000004](000004/previews/pattern_5.png) | ![pattern_6-000004](000004/previews/pattern_6.png) | ![pattern_7-000004](000004/previews/pattern_7.png) | ![pattern_8-000004](000004/previews/pattern_8.png) | ![pattern_9-000004](000004/previews/pattern_9.png) | ![pattern_10-000004](000004/previews/pattern_10.png) | ![bikini-000004](000004/previews/bikini.png) | [<NSFW, click to see>](000004/previews/bondage.png) | ![cheongsam-000004](000004/previews/cheongsam.png) | ![free-000004](000004/previews/free.png) | ![hanfu-000004](000004/previews/hanfu.png) | ![maid-000004](000004/previews/maid.png) | ![miko-000004](000004/previews/miko.png) | [<NSFW, click to see>](000004/previews/nude.png) | [<NSFW, click to see>](000004/previews/nude2.png) | ![suit-000004](000004/previews/suit.png) | ![yukata-000004](000004/previews/yukata.png) | | 3 | 0.963 | [Download](000003/keli-000003.safetensors) | ![pattern_1-000003](000003/previews/pattern_1.png) | ![pattern_2-000003](000003/previews/pattern_2.png) | ![pattern_3-000003](000003/previews/pattern_3.png) | ![pattern_4-000003](000003/previews/pattern_4.png) | ![pattern_5-000003](000003/previews/pattern_5.png) | ![pattern_6-000003](000003/previews/pattern_6.png) | ![pattern_7-000003](000003/previews/pattern_7.png) | ![pattern_8-000003](000003/previews/pattern_8.png) | ![pattern_9-000003](000003/previews/pattern_9.png) | ![pattern_10-000003](000003/previews/pattern_10.png) | ![bikini-000003](000003/previews/bikini.png) | [<NSFW, click to see>](000003/previews/bondage.png) | ![cheongsam-000003](000003/previews/cheongsam.png) | ![free-000003](000003/previews/free.png) | ![hanfu-000003](000003/previews/hanfu.png) | ![maid-000003](000003/previews/maid.png) | ![miko-000003](000003/previews/miko.png) | [<NSFW, click to see>](000003/previews/nude.png) | [<NSFW, click to see>](000003/previews/nude2.png) | ![suit-000003](000003/previews/suit.png) | ![yukata-000003](000003/previews/yukata.png) | | 2 | 0.962 | [Download](000002/keli-000002.safetensors) | ![pattern_1-000002](000002/previews/pattern_1.png) | ![pattern_2-000002](000002/previews/pattern_2.png) | ![pattern_3-000002](000002/previews/pattern_3.png) | ![pattern_4-000002](000002/previews/pattern_4.png) | ![pattern_5-000002](000002/previews/pattern_5.png) | 
![pattern_6-000002](000002/previews/pattern_6.png) | ![pattern_7-000002](000002/previews/pattern_7.png) | ![pattern_8-000002](000002/previews/pattern_8.png) | ![pattern_9-000002](000002/previews/pattern_9.png) | ![pattern_10-000002](000002/previews/pattern_10.png) | ![bikini-000002](000002/previews/bikini.png) | [<NSFW, click to see>](000002/previews/bondage.png) | ![cheongsam-000002](000002/previews/cheongsam.png) | ![free-000002](000002/previews/free.png) | ![hanfu-000002](000002/previews/hanfu.png) | ![maid-000002](000002/previews/maid.png) | ![miko-000002](000002/previews/miko.png) | [<NSFW, click to see>](000002/previews/nude.png) | [<NSFW, click to see>](000002/previews/nude2.png) | ![suit-000002](000002/previews/suit.png) | ![yukata-000002](000002/previews/yukata.png) | | 1 | 0.962 | [Download](000001/keli-000001.safetensors) | ![pattern_1-000001](000001/previews/pattern_1.png) | ![pattern_2-000001](000001/previews/pattern_2.png) | ![pattern_3-000001](000001/previews/pattern_3.png) | ![pattern_4-000001](000001/previews/pattern_4.png) | ![pattern_5-000001](000001/previews/pattern_5.png) | ![pattern_6-000001](000001/previews/pattern_6.png) | ![pattern_7-000001](000001/previews/pattern_7.png) | ![pattern_8-000001](000001/previews/pattern_8.png) | ![pattern_9-000001](000001/previews/pattern_9.png) | ![pattern_10-000001](000001/previews/pattern_10.png) | ![bikini-000001](000001/previews/bikini.png) | [<NSFW, click to see>](000001/previews/bondage.png) | ![cheongsam-000001](000001/previews/cheongsam.png) | ![free-000001](000001/previews/free.png) | ![hanfu-000001](000001/previews/hanfu.png) | ![maid-000001](000001/previews/maid.png) | ![miko-000001](000001/previews/miko.png) | [<NSFW, click to see>](000001/previews/nude.png) | [<NSFW, click to see>](000001/previews/nude2.png) | ![suit-000001](000001/previews/suit.png) | ![yukata-000001](000001/previews/yukata.png) |
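As a usage sketch for the checkpoints above: the safetensors files can be attached to a Stable Diffusion pipeline with diffusers. The base model below is an illustrative stand-in (the card's actual preview base is LittleApple-fp16/SpiritForeseerMix), and the prompt simply uses the first trigger word.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model here is an assumption for illustration; any SD1.5-compatible
# diffusers checkpoint should work the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Weight path matches the epoch-10 download link in the table above.
pipe.load_lora_weights("AppleHarem/keli", weight_name="000010/keli-000010.safetensors")
image = pipe("keli, 1girl, smile").images[0]
image.save("keli.png")
```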
luaqi/sn29_01041
luaqi
"2025-01-04T02:36:52"
238
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-04T02:30:32"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sb3/ppo-MiniGrid-Empty-Random-5x5-v0
sb3
"2023-03-31T18:11:08"
262
0
stable-baselines3
[ "stable-baselines3", "MiniGrid-Empty-Random-5x5-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-03-28T12:23:13"
--- library_name: stable-baselines3 tags: - MiniGrid-Empty-Random-5x5-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: MiniGrid-Empty-Random-5x5-v0 type: MiniGrid-Empty-Random-5x5-v0 metrics: - type: mean_reward value: 0.97 +/- 0.01 name: mean_reward verified: false --- # **PPO** Agent playing **MiniGrid-Empty-Random-5x5-v0** This is a trained model of a **PPO** agent playing **MiniGrid-Empty-Random-5x5-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -orga sb3 -f logs/ python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -orga sb3 -f logs/ python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/ -orga sb3 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('clip_range', 0.2), ('ent_coef', 0.0), ('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'), ('gae_lambda', 0.95), ('gamma', 0.99), ('learning_rate', 0.00025), ('n_envs', 8), ('n_epochs', 10), ('n_steps', 128), ('n_timesteps', 100000.0), ('normalize', True), ('policy', 'MlpPolicy'), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
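Outside the RL Zoo scripts, the checkpoint can also be pulled down and loaded directly. This is a sketch assuming the `huggingface_sb3` helper and the conventional zoo zip name (the exact filename inside the repo is an assumption; check the repo's file list). Note the zoo applies `FlatObsWrapper` to the environment, so the same wrapper is needed before running the policy.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename follows the usual RL Zoo convention; adjust if the repo differs.
checkpoint = load_from_hub(
    repo_id="sb3/ppo-MiniGrid-Empty-Random-5x5-v0",
    filename="ppo-MiniGrid-Empty-Random-5x5-v0.zip",
)
model = PPO.load(checkpoint)
```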
mradermacher/astrollama-3-8b-chat_aic-i1-GGUF
mradermacher
"2024-09-30T12:49:06"
27
0
transformers
[ "transformers", "gguf", "llama-3", "astronomy", "astrophysics", "arxiv", "en", "base_model:AstroMLab/astrollama-3-8b-chat_aic", "base_model:quantized:AstroMLab/astrollama-3-8b-chat_aic", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
null
"2024-09-30T11:36:04"
--- base_model: AstroMLab/astrollama-3-8b-chat_aic language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - llama-3 - astronomy - astrophysics - arxiv --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/AstroMLab/astrollama-3-8b-chat_aic <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q4_0_4_4.gguf) 
| i1-Q4_0_4_4 | 4.8 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/astrollama-3-8b-chat_aic-i1-GGUF/resolve/main/astrollama-3-8b-chat_aic.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
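To fetch one of the files above programmatically rather than through a GGUF-aware runtime, `huggingface_hub` can be used; a minimal sketch using the Q4_K_M file recommended in the table.

```python
from huggingface_hub import hf_hub_download

# Downloads the single-file i1-Q4_K_M quant listed in the table above.
path = hf_hub_download(
    repo_id="mradermacher/astrollama-3-8b-chat_aic-i1-GGUF",
    filename="astrollama-3-8b-chat_aic.i1-Q4_K_M.gguf",
)
print(path)
```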
theoracle/autotrain-kaggle
theoracle
"2024-03-29T20:16:16"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "autotrain", "kaggle-qa", "text-generation", "peft", "conversational", "dataset:custom", "license:other", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-29T09:49:18"
--- title: Kaggle Q&A Gemma Model tags: - autotrain - kaggle-qa - text-generation - peft datasets: - custom library_name: transformers widget: - messages: - role: user content: How do I submit to a Kaggle competition? license: other --- ## Overview Developed with the cutting-edge AutoTrain and PEFT technologies, this model is specifically trained to provide detailed answers to questions about Kaggle. Whether you're wondering how to get started, how to submit to a competition, or how to navigate the datasets, this model is equipped to assist. ## Key Features - **Kaggle-Specific Knowledge**: Designed to offer insights and guidance on using Kaggle, from competition submissions to data exploration. - **Powered by AutoTrain**: Utilizes Hugging Face's AutoTrain for efficient and effective training, ensuring high-quality responses. - **PEFT Enhanced**: Benefits from PEFT for improved performance and efficiency, making it highly scalable and robust. ## Usage The following Python code snippet illustrates how to use this model to answer your Kaggle-related questions: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "theoracle/autotrain-kaggle" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() tokenizer.pad_token = tokenizer.eos_token prompt = ''' ### How do I prepare for Kaggle competitions?\n ### Answer: ''' encoding = tokenizer(prompt, return_tensors='pt', padding=True, truncation=True, max_length=500, add_special_tokens=True) input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] output_ids = model.generate( input_ids.to('cuda'), attention_mask=attention_mask.to('cuda'), max_new_tokens=300, pad_token_id=tokenizer.eos_token_id ) response = tokenizer.decode(output_ids[0], skip_special_tokens=True) print(response) ``` ## Application Scenarios This model is particularly useful for: - Kaggle competitors seeking advice on strategy and submissions. - Educators and students looking for a tool to facilitate learning through Kaggle competitions. - Data scientists requiring quick access to information about Kaggle datasets and competitions. ## About AutoTrain and PEFT AutoTrain by Hugging Face streamlines the model training process, making it easier and more efficient to develop state-of-the-art models. PEFT enhances this by providing a framework for efficient model training and deployment. Together, they enable this model to deliver fast and accurate responses to your Kaggle inquiries. ## License This model is distributed under an "other" license, allowing diverse applications while encouraging users to review the license terms for compliance with their project requirements.
ron5569/lamma_7b_test_perf
ron5569
"2023-11-05T14:27:27"
0
0
peft
[ "peft", "region:us" ]
null
"2023-11-05T14:18:19"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
great0001/7d81fc6a-cb85-4191-8616-1d619cd8f0e3
great0001
"2025-01-19T05:23:46"
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1", "license:mit", "region:us" ]
null
"2025-01-19T05:21:18"
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1 tags: - axolotl - generated_from_trainer model-index: - name: 7d81fc6a-cb85-4191-8616-1d619cd8f0e3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Capybara-7B-V1 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 780a12a658d8c0ff_train_data.json ds_type: json format: custom path: /workspace/input_data/780a12a658d8c0ff_train_data.json type: field_instruction: prompt field_output: good_res format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/7d81fc6a-cb85-4191-8616-1d619cd8f0e3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/780a12a658d8c0ff_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e1187ce8-ef5e-4551-b9e9-17e5e13814ab wandb_project: Birthday-SN56-14-Gradients-On-Demand wandb_run: your_name wandb_runid: e1187ce8-ef5e-4551-b9e9-17e5e13814ab warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 7d81fc6a-cb85-4191-8616-1d619cd8f0e3 This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0006 | 1 | nan | | 0.0 | 0.0018 | 3 | nan | | 0.0 | 0.0036 | 6 | nan | | 0.0 | 0.0053 | 9 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
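The card gives no inference snippet. Since this repo is a LoRA adapter for NousResearch/Nous-Capybara-7B-V1, a hedged loading sketch with peft follows; note the training log above reports nan loss, so output quality is unverified.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Nous-Capybara-7B-V1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "great0001/7d81fc6a-cb85-4191-8616-1d619cd8f0e3")
```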
yfarm01/sn29_dec22_c1
yfarm01
"2024-11-22T12:50:58"
36
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-22T12:48:20"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
maulairfani/autocomplete_gpt2
maulairfani
"2023-10-05T09:00:37"
147
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-10-05T08:59:41"
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: autocomplete_gpt2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # autocomplete_gpt2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Tokenizers 0.14.0
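The card above leaves usage unspecified; a minimal sketch, assuming the standard `transformers` text-generation pipeline and the repo id shown in this row:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint for autocomplete-style generation.
generator = pipeline("text-generation", model="maulairfani/autocomplete_gpt2")

# Ask for a short continuation of a prompt prefix.
print(generator("The quick brown fox", max_new_tokens=20)[0]["generated_text"])
```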
TinyUD/ko-gemma-2-9b-it-IQ3_M-GGUF
TinyUD
"2025-01-17T07:43:54"
23
0
transformers
[ "transformers", "gguf", "conversational", "llama-cpp", "gguf-my-repo", "text-generation", "ko", "base_model:rtzr/ko-gemma-2-9b-it", "base_model:quantized:rtzr/ko-gemma-2-9b-it", "license:gemma", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
"2025-01-17T07:43:31"
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license tags: - conversational - llama-cpp - gguf-my-repo base_model: rtzr/ko-gemma-2-9b-it language: - ko --- # TinyUD/ko-gemma-2-9b-it-IQ3_M-GGUF This model was converted to GGUF format from [`rtzr/ko-gemma-2-9b-it`](https://huggingface.co/rtzr/ko-gemma-2-9b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/rtzr/ko-gemma-2-9b-it) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo TinyUD/ko-gemma-2-9b-it-IQ3_M-GGUF --hf-file ko-gemma-2-9b-it-iq3_m-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo TinyUD/ko-gemma-2-9b-it-IQ3_M-GGUF --hf-file ko-gemma-2-9b-it-iq3_m-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo TinyUD/ko-gemma-2-9b-it-IQ3_M-GGUF --hf-file ko-gemma-2-9b-it-iq3_m-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo TinyUD/ko-gemma-2-9b-it-IQ3_M-GGUF --hf-file ko-gemma-2-9b-it-iq3_m-imat.gguf -c 2048 ```
Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q4_K_S-GGUF
Triangle104
"2025-02-04T07:42:27"
23
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Statuo/Deepseeker-Kunou-Qwen2.5-14b", "base_model:quantized:Statuo/Deepseeker-Kunou-Qwen2.5-14b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-04T07:41:44"
--- base_model: Statuo/Deepseeker-Kunou-Qwen2.5-14b library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo license: apache-2.0 --- # Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q4_K_S-GGUF This model was converted to GGUF format from [`Statuo/Deepseeker-Kunou-Qwen2.5-14b`](https://huggingface.co/Statuo/Deepseeker-Kunou-Qwen2.5-14b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Statuo/Deepseeker-Kunou-Qwen2.5-14b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q4_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q4_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q4_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q4_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q4_k_s.gguf -c 2048 ```
zJuu/Qwen-Qwen2-0.5B-1719269708
zJuu
"2024-06-24T22:55:10"
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2-0.5B", "base_model:adapter:Qwen/Qwen2-0.5B", "region:us" ]
null
"2024-06-24T22:55:08"
--- library_name: peft base_model: Qwen/Qwen2-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
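The template's "How to Get Started" section is empty; a minimal sketch, assuming this repo is a PEFT (LoRA-style) adapter for the base model named in its front matter:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's PEFT adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "zJuu/Qwen-Qwen2-0.5B-1719269708")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```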
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_wnli_256
gokuls
"2023-01-30T02:17:26"
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-01-30T02:16:36"
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_logit_kd_wnli_256 results: - task: name: Text Classification type: text-classification dataset: name: GLUE WNLI type: glue config: wnli split: validation args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_logit_kd_wnli_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3436 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3511 | 1.0 | 3 | 0.3436 | 0.5634 | | 0.3479 | 2.0 | 6 | 0.3457 | 0.5634 | | 0.3474 | 3.0 | 9 | 0.3462 | 0.5634 | | 0.3477 | 4.0 | 12 | 0.3442 | 0.5634 | | 0.3486 | 5.0 | 15 | 0.3442 | 0.5634 | | 0.3479 | 6.0 | 18 | 0.3455 | 0.5634 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
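As a hedged illustration of how this WNLI sentence-pair classifier could be called (assuming the standard `transformers` text-classification pipeline; the example pair below is illustrative, not from the card):

```python
from transformers import pipeline

# Load the distilled WNLI sentence-pair classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_logit_kd_wnli_256",
)

# WNLI pairs a premise with a hypothesis; pass them as text/text_pair.
result = classifier({
    "text": "The trophy doesn't fit in the suitcase because it is too big.",
    "text_pair": "The trophy is too big.",
})
print(result)  # e.g. a label with its confidence score
```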
davidschulte/ESM_hope_edi_tamil
davidschulte
"2025-03-26T14:34:02"
16
0
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:dravidianlangtech/hope_edi", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
null
"2024-12-05T17:10:06"
--- base_model: bert-base-multilingual-uncased datasets: - dravidianlangtech/hope_edi license: apache-2.0 tags: - embedding_space_map - BaseLM:bert-base-multilingual-uncased --- # ESM dravidianlangtech/hope_edi <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ESM - **Developed by:** David Schulte - **Model type:** ESM - **Base Model:** bert-base-multilingual-uncased - **Intermediate Task:** dravidianlangtech/hope_edi - **ESM architecture:** linear - **ESM embedding dimension:** 768 - **Language(s) (NLP):** [More Information Needed] - **License:** Apache-2.0 license - **ESM version:** 0.1.0 ## Training Details ### Intermediate Task - **Task ID:** dravidianlangtech/hope_edi - **Subset [optional]:** tamil - **Text Column:** text - **Label Column:** label - **Dataset Split:** train - **Sample size [optional]:** 10000 - **Sample seed [optional]:** 42 ### Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Language Model Training Hyperparameters [optional] - **Epochs:** 3 - **Batch size:** 32 - **Learning rate:** 2e-05 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### ESM Training Hyperparameters [optional] - **Epochs:** 10 - **Batch size:** 32 - **Learning rate:** 0.001 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### Additional training details [optional] ## Model evaluation ### Evaluation of fine-tuned language model [optional] ### Evaluation of ESM [optional] MSE: ### Additional evaluation details [optional] ## What are Embedding Space Maps used for? Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME: ### You don't have enough training data for your problem If you don't have enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process. 1. Fine-tune a language model on a suitable intermediate dataset. 2. Fine-tune the resulting model on your target dataset. This workflow is called intermediate task transfer learning and it can significantly improve the target performance. But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task. ### You want to find similar datasets to your target dataset ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity. ## How can I use ESM-LogME / ESMs? [![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector) We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps. **hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking

# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)

# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)

# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1.  davanstrien/test_imdb_embedd2       Score: -0.618529
2.  davanstrien/test_imdb_embedd        Score: -0.618644
3.  davanstrien/test1                   Score: -0.619334
4.  stanfordnlp/imdb                    Score: -0.619454
5.  stanfordnlp/sst                     Score: -0.62995
```

| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |

For more information on how to use ESMs, please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.

## How do Embedding Space Maps work?

<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow.

## How can I use Embedding Space Maps for Intermediate Task Selection?

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
LarryAIDraw/kirika_towa_alma_v1
LarryAIDraw
"2023-05-13T18:26:08"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-05-13T18:15:14"
--- license: creativeml-openrail-m --- https://civitai.com/models/64069/kirika-towa-alma-shining-resonance
bartowski/Athene-V2-Agent-GGUF
bartowski
"2024-11-15T02:52:52"
411
6
null
[ "gguf", "RLHF", "Nexusflow", "Athene", "Function Calling", "Agent", "Extraction", "text-generation", "en", "base_model:Nexusflow/Athene-V2-Agent", "base_model:quantized:Nexusflow/Athene-V2-Agent", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
"2024-11-14T16:28:58"
--- quantized_by: bartowski pipeline_tag: text-generation language: - en tags: - RLHF - Nexusflow - Athene - Function Calling - Agent - Extraction base_model: Nexusflow/Athene-V2-Agent license: other --- ## Llamacpp imatrix Quantizations of Athene-V2-Agent Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4058">b4058</a> for quantization. Original model: https://huggingface.co/Nexusflow/Athene-V2-Agent All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Athene-V2-Agent-Q8_0.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/tree/main/Athene-V2-Agent-Q8_0) | Q8_0 | 77.26GB | true | Extremely high quality, generally unneeded but max available quant. | | [Athene-V2-Agent-Q6_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/tree/main/Athene-V2-Agent-Q6_K_L) | Q6_K_L | 64.94GB | true | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Athene-V2-Agent-Q6_K.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/tree/main/Athene-V2-Agent-Q6_K) | Q6_K | 64.34GB | true | Very high quality, near perfect, *recommended*. | | [Athene-V2-Agent-Q5_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/tree/main/Athene-V2-Agent-Q5_K_L) | Q5_K_L | 55.21GB | true | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Athene-V2-Agent-Q5_K_M.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/tree/main/Athene-V2-Agent-Q5_K_M) | Q5_K_M | 54.44GB | true | High quality, *recommended*. | | [Athene-V2-Agent-Q5_K_S.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/tree/main/Athene-V2-Agent-Q5_K_S) | Q5_K_S | 51.37GB | true | High quality, *recommended*. | | [Athene-V2-Agent-Q4_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q4_K_L.gguf) | Q4_K_L | 48.33GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Athene-V2-Agent-Q4_K_M.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q4_K_M.gguf) | Q4_K_M | 47.41GB | false | Good quality, default size for most use cases, *recommended*. | | [Athene-V2-Agent-Q4_K_S.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q4_K_S.gguf) | Q4_K_S | 43.88GB | false | Slightly lower quality with more space savings, *recommended*. | | [Athene-V2-Agent-Q4_0.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q4_0.gguf) | Q4_0 | 41.38GB | false | Legacy format, generally not worth using over similarly sized formats | | [Athene-V2-Agent-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q4_0_8_8.gguf) | Q4_0_8_8 | 41.23GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. | | [Athene-V2-Agent-Q3_K_XL.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q3_K_XL.gguf) | Q3_K_XL | 40.59GB | false | Uses Q8_0 for embed and output weights. 
Lower quality but usable, good for low RAM availability. | | [Athene-V2-Agent-IQ4_XS.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ4_XS.gguf) | IQ4_XS | 39.70GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Athene-V2-Agent-Q3_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q3_K_L.gguf) | Q3_K_L | 39.50GB | false | Lower quality but usable, good for low RAM availability. | | [Athene-V2-Agent-Q3_K_M.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q3_K_M.gguf) | Q3_K_M | 37.69GB | false | Low quality. | | [Athene-V2-Agent-IQ3_M.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ3_M.gguf) | IQ3_M | 35.50GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Athene-V2-Agent-Q3_K_S.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q3_K_S.gguf) | Q3_K_S | 34.48GB | false | Low quality, not recommended. | | [Athene-V2-Agent-IQ3_XS.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ3_XS.gguf) | IQ3_XS | 32.84GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Athene-V2-Agent-Q2_K_L.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q2_K_L.gguf) | Q2_K_L | 31.02GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [Athene-V2-Agent-Q2_K.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-Q2_K.gguf) | Q2_K | 29.81GB | false | Very low quality but surprisingly usable. | | [Athene-V2-Agent-IQ2_M.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ2_M.gguf) | IQ2_M | 29.34GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [Athene-V2-Agent-IQ2_S.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ2_S.gguf) | IQ2_S | 27.94GB | false | Low quality, uses SOTA techniques to be usable. | | [Athene-V2-Agent-IQ2_XS.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ2_XS.gguf) | IQ2_XS | 27.05GB | false | Low quality, uses SOTA techniques to be usable. | | [Athene-V2-Agent-IQ2_XXS.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | false | Very low quality, uses SOTA techniques to be usable. | | [Athene-V2-Agent-IQ1_M.gguf](https://huggingface.co/bartowski/Athene-V2-Agent-GGUF/blob/main/Athene-V2-Agent-IQ1_M.gguf) | IQ1_M | 23.74GB | false | Extremely low quality, *not* recommended. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! 
## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/Athene-V2-Agent-GGUF --include "Athene-V2-Agent-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Athene-V2-Agent-GGUF --include "Athene-V2-Agent-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (Athene-V2-Agent-Q8_0) or download them all in place (./)

## Q4_0_X_X

These are *NOT* for Metal (Apple) offloading, only ARM chips.

If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)

To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).

## Which file should I choose?

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
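To make the sizing rule above concrete, here is a hypothetical helper; the file sizes come from the quant table in this card, and the 1.5 GB headroom is an assumption within the card's suggested 1-2 GB range:

```python
# Pick the largest quant that fits under your memory budget with some headroom.
QUANTS = [  # (name, file size in GB), largest first, from the table above
    ("Q8_0", 77.26), ("Q6_K", 64.34), ("Q5_K_M", 54.44),
    ("Q4_K_M", 47.41), ("IQ4_XS", 39.70), ("IQ3_M", 35.50),
    ("Q2_K", 29.81), ("IQ1_M", 23.74),
]

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    budget = vram_gb - headroom_gb
    for name, size_gb in QUANTS:
        if size_gb <= budget:
            return name
    return QUANTS[-1][0]  # fall back to the smallest listed quant

print(pick_quant(50.0))  # -> Q4_K_M on a 50 GB budget
```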
wisejiyoon/bert-finetuned-ner
wisejiyoon
"2023-12-08T05:48:43"
8
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:cc-by-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-12-07T08:14:37"
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.8597087378640776 - name: Recall type: recall value: 0.8941433860652979 - name: F1 type: f1 value: 0.8765880217785844 - name: Accuracy type: accuracy value: 0.9760991339759331 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0943 - Precision: 0.8597 - Recall: 0.8941 - F1: 0.8766 - Accuracy: 0.9761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1321 | 1.0 | 1756 | 0.1003 | 0.8010 | 0.8514 | 0.8254 | 0.9687 | | 0.0654 | 2.0 | 3512 | 0.0927 | 0.8331 | 0.8862 | 0.8588 | 0.9739 | | 0.0382 | 3.0 | 5268 | 0.0943 | 0.8597 | 0.8941 | 0.8766 | 0.9761 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
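A minimal inference sketch, assuming the standard `transformers` token-classification pipeline and the repo id from this row (the example sentence is illustrative):

```python
from transformers import pipeline

# Group sub-token predictions into whole entities with aggregation_strategy.
ner = pipeline(
    "token-classification",
    model="wisejiyoon/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```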
Rodo-Sami/196927de-72a3-4a4c-abf9-e082559e4708
Rodo-Sami
"2025-02-13T04:07:25"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-14B-Chat", "base_model:adapter:Qwen/Qwen1.5-14B-Chat", "license:other", "region:us" ]
null
"2025-02-13T03:26:40"
--- library_name: peft license: other base_model: Qwen/Qwen1.5-14B-Chat tags: - axolotl - generated_from_trainer model-index: - name: 196927de-72a3-4a4c-abf9-e082559e4708 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.6.0` ```yaml adapter: lora base_model: Qwen/Qwen1.5-14B-Chat bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 956404501872e743_train_data.json ds_type: json format: custom path: /workspace/input_data/956404501872e743_train_data.json type: field_input: eval_persona field_instruction: eval_question field_output: eval_whole_desc format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: Rodo-Sami/196927de-72a3-4a4c-abf9-e082559e4708 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/956404501872e743_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: offline wandb_name: 68e338b5-c920-4bfc-ba14-6ee6beade006 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 68e338b5-c920-4bfc-ba14-6ee6beade006 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 196927de-72a3-4a4c-abf9-e082559e4708 This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3674 | 0.0736 | 50 | 1.5408 | | 1.2695 | 0.1471 | 100 | 1.4699 | | 1.2745 | 0.2207 | 150 | 1.4033 | | 1.2681 | 0.2942 | 200 | 1.3864 | ### Framework versions - PEFT 0.14.0 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
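As a hedged sketch of how the resulting adapter could be used for inference (assuming a LoRA adapter over the base model named in the axolotl config above; the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the trained adapter to the base chat model from the config.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-14B-Chat")
model = PeftModel.from_pretrained(base, "Rodo-Sami/196927de-72a3-4a4c-abf9-e082559e4708")

# Optionally fold the LoRA weights into the base model for adapter-free serving.
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat")
inputs = tokenizer("Describe this persona:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```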
kenhktsui/fasttext_test
kenhktsui
"2024-08-04T18:09:05"
11
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-30T14:39:16"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shanhy/xlm-roberta-base_seed42_original_amh-esp-eng_train
shanhy
"2024-02-15T00:53:53"
90
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-15T00:53:13"
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base_seed42_original_amh-esp-eng_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base_seed42_original_amh-esp-eng_train This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0175 - Spearman Corr: 0.8510 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.76 | 200 | 0.0231 | 0.8199 | | 0.0378 | 3.52 | 400 | 0.0142 | 0.8523 | | 0.021 | 5.29 | 600 | 0.0142 | 0.8544 | | 0.0157 | 7.05 | 800 | 0.0144 | 0.8553 | | 0.0125 | 8.81 | 1000 | 0.0159 | 0.8538 | | 0.0104 | 10.57 | 1200 | 0.0156 | 0.8515 | | 0.0083 | 12.33 | 1400 | 0.0158 | 0.8503 | | 0.0067 | 14.1 | 1600 | 0.0143 | 0.8510 | | 0.0067 | 15.86 | 1800 | 0.0183 | 0.8493 | | 0.0059 | 17.62 | 2000 | 0.0175 | 0.8510 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
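Since the card reports Spearman correlation, the head appears to be regression-style; a tentative usage sketch (the single-score output format and the example pair are assumptions, not confirmed by the card):

```python
from transformers import pipeline

# Score a sentence pair; a regression head typically yields one unnamed score.
scorer = pipeline(
    "text-classification",
    model="shanhy/xlm-roberta-base_seed42_original_amh-esp-eng_train",
)
print(scorer({"text": "A man is playing a guitar.",
              "text_pair": "Someone is playing an instrument."}))
```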
mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF
mradermacher
"2025-03-05T11:00:19"
322
1
transformers
[ "transformers", "gguf", "en", "base_model:PurpleAILAB/Llama-3.1-8B-uncensored_SQLi", "base_model:quantized:PurpleAILAB/Llama-3.1-8B-uncensored_SQLi", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-01T18:02:11"
--- base_model: PurpleAILAB/Llama-3.1-8B-uncensored_SQLi language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/PurpleAILAB/Llama-3.1-8B-uncensored_SQLi <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF/resolve/main/Llama-3.1-8B-uncensored_SQLi.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
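For completeness, a small sketch of fetching one of the quant files listed above with `huggingface_hub` (the filename comes from the Q4_K_M link in the table; pass the resulting path to a GGUF runtime such as llama.cpp):

```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_K_M quant to the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/Llama-3.1-8B-uncensored_SQLi-GGUF",
    filename="Llama-3.1-8B-uncensored_SQLi.Q4_K_M.gguf",
)
print(path)  # local file path to hand to a GGUF loader such as llama-cli
```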
bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1-Q4_K_M-GGUF
bunnycore
"2024-10-13T19:23:43"
6
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1", "base_model:quantized:bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1", "endpoints_compatible", "region:us", "imatrix" ]
null
"2024-10-13T19:23:14"
--- base_model: bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1-Q4_K_M-GGUF This model was converted to GGUF format from [`bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1`](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-titanfusion-mix-2.1-q4_k_m-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-titanfusion-mix-2.1-q4_k_m-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-titanfusion-mix-2.1-q4_k_m-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-titanfusion-mix-2.1-q4_k_m-imat.gguf -c 2048 ```
Triangle104/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B-Q6_K-GGUF
Triangle104
"2025-01-14T10:47:20"
26
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:grimjim/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B", "base_model:quantized:grimjim/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-01-14T10:46:51"
--- base_model: grimjim/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B library_name: transformers pipeline_tag: text-generation tags: - mergekit - merge - llama-cpp - gguf-my-repo license: llama3.1 --- # Triangle104/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B-Q6_K-GGUF This model was converted to GGUF format from [`grimjim/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B`](https://huggingface.co/grimjim/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/grimjim/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B-Q6_K-GGUF --hf-file llama3.1-supernovalite-huatuoskywork-o1-8b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B-Q6_K-GGUF --hf-file llama3.1-supernovalite-huatuoskywork-o1-8b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B-Q6_K-GGUF --hf-file llama3.1-supernovalite-huatuoskywork-o1-8b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Llama3.1-SuperNovaLite-HuatuoSkywork-o1-8B-Q6_K-GGUF --hf-file llama3.1-supernovalite-huatuoskywork-o1-8b-q6_k.gguf -c 2048 ```
lesso17/94469e1a-cdb2-4fd5-968a-881849e6b91d
lesso17
"2025-03-09T02:40:05"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Maykeye/TinyLLama-v0", "base_model:adapter:Maykeye/TinyLLama-v0", "license:apache-2.0", "region:us" ]
null
"2025-03-09T02:32:04"
--- library_name: peft license: apache-2.0 base_model: Maykeye/TinyLLama-v0 tags: - axolotl - generated_from_trainer model-index: - name: 94469e1a-cdb2-4fd5-968a-881849e6b91d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Maykeye/TinyLLama-v0 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 23247f1a767c02a5_train_data.json ds_type: json format: custom path: /workspace/input_data/23247f1a767c02a5_train_data.json type: field_input: context field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso17/94469e1a-cdb2-4fd5-968a-881849e6b91d hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000217 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/23247f1a767c02a5_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 170 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 6a5e844d-6526-4037-938e-5e00a4fd1f37 wandb_project: 17a wandb_run: your_name wandb_runid: 6a5e844d-6526-4037-938e-5e00a4fd1f37 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 94469e1a-cdb2-4fd5-968a-881849e6b91d This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 7.5584 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000217 - train_batch_size: 4 - eval_batch_size: 4 - seed: 170 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0012 | 1 | 10.3653 | | 7.5451 | 0.5823 | 500 | 7.5584 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
RayLandNika/Belle_RVC_2
RayLandNika
"2024-05-31T07:32:01"
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
"2024-05-31T07:32:01"
--- license: bigscience-openrail-m ---
huggingtweets/java_jigga
huggingtweets
"2021-05-22T09:15:54"
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05"
--- language: en thumbnail: https://www.huggingtweets.com/java_jigga/1617788084385/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1377879160173993987/20XH6CdP_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cool Narcissist 🤖 AI Bot </div> <div style="font-size: 15px">@java_jigga bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@java_jigga's tweets](https://twitter.com/java_jigga). | Data | Quantity | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 313 | | Short tweets | 426 | | Tweets kept | 2507 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kvpyc8u1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @java_jigga's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6p3ishch) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6p3ishch/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/java_jigga') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
PriyankSisodia/bloom_3B_test20ep
PriyankSisodia
"2023-11-20T09:32:35"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:bigscience/bloom-3b", "base_model:adapter:bigscience/bloom-3b", "region:us" ]
null
"2023-11-20T09:32:30"
--- library_name: peft base_model: bigscience/bloom-3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
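The quick-start section above is left as [More Information Needed]. As a hedged sketch only (not the author's documented usage), a PEFT adapter like this one is typically attached to its base model as follows, assuming the repo hosts a standard adapter for bigscience/bloom-3b:

```python
# Hedged sketch: attach the adapter to its base model and generate.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b")
model = PeftModel.from_pretrained(base, "PriyankSisodia/bloom_3B_test20ep")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```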
luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2
luomingshuang
"2022-07-19T11:56:33"
0
0
null
[ "region:us" ]
null
"2022-05-16T08:24:41"
Note: This recipe was trained with the code from this PR https://github.com/k2-fsa/icefall/pull/355 and the SpecAugment code from this PR https://github.com/lhotse-speech/lhotse/pull/604. # Pre-trained Transducer-Stateless2 models for the Aidatatang_200zh dataset with icefall. The model was trained on the full [Aidatatang_200zh](https://www.openslr.org/62) dataset with the scripts in [icefall](https://github.com/k2-fsa/icefall), based on the latest version of k2. ## Training procedure The main repositories are listed below; we will update the training and decoding scripts as new versions are released. k2: https://github.com/k2-fsa/k2 icefall: https://github.com/k2-fsa/icefall lhotse: https://github.com/lhotse-speech/lhotse * Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation; the latest versions should work. Please also install the requirements listed in icefall. * Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit mentioned above. ``` git clone https://github.com/k2-fsa/icefall cd icefall ``` * Preparing the data. ``` cd egs/aidatatang_200zh/ASR bash ./prepare.sh ``` * Training ``` export CUDA_VISIBLE_DEVICES="0,1" ./pruned_transducer_stateless2/train.py \ --world-size 2 \ --num-epochs 30 \ --start-epoch 0 \ --exp-dir pruned_transducer_stateless2/exp \ --lang-dir data/lang_char \ --max-duration 250 ``` ## Evaluation results The decoding results (WER%) on Aidatatang_200zh (dev and test) are listed below; these results were obtained by averaging models from epochs 11 to 29. The WERs are | | dev | test | comment | |------------------------------------|------------|------------|------------------------------------------| | greedy search | 5.53 | 6.59 | --epoch 29, --avg 19, --max-duration 100 | | modified beam search (beam size 4) | 5.28 | 6.32 | --epoch 29, --avg 19, --max-duration 100 | | fast beam search (set as default) | 5.29 | 6.33 | --epoch 29, --avg 19, --max-duration 1500|
emilykang/Phi_medmcqa_question_generation-pharmacology_lora
emilykang
"2024-05-18T18:35:45"
4
0
peft
[ "peft", "safetensors", "phi", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
"2024-05-18T17:16:48"
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-2 datasets: - generator model-index: - name: Phi_medmcqa_question_generation-pharmacology_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi_medmcqa_question_generation-pharmacology_lora This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
WALIDALI/marimrev
WALIDALI
"2023-07-10T05:32:46"
2
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-10T05:29:09"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### marimrev Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
1231czx/9b_raft_iter1
1231czx
"2024-07-14T13:50:09"
5
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-14T13:41:30"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zon5000/MedLLM_merged-GGUF
zon5000
"2024-04-06T21:12:11"
4
0
transformers
[ "transformers", "gguf", "mistral", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2024-03-31T06:17:04"
--- license: mit language: - en pipeline_tag: text-generation ---
flax-community/arabic-t5-small
flax-community
"2023-11-29T15:17:26"
44
8
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "t5", "text2text-generation", "ar", "dataset:mc4", "dataset:oscar", "dataset:arabic_billion_words", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05"
--- language: - ar datasets: - mc4 - oscar - arabic_billion_words --- # arabic-t5-small This is a T5v1.1 (small) model trained on the concatenation of the Arabic Billion Words corpus and the Arabic subsets of the mC4 and Oscar datasets. The model could only be trained on about `10%` of the whole dataset due to time limitations. This is equivalent to `22'000` steps or about `4.3` billion tokens. ## Training parameters | | | | :-------------------: | :-----------: | | Training batch size | `384` | | Evaluation batch size | `768` | | learning rate | `1e-2` | | dtype | `jnp.float32` | ## Preprocessing and the tokenizer We tried to keep the preprocessing to a bare minimum. We only replaced URLs, emails and social media user mentions with fixed tokens. Unlike other pretrained Arabic LMs, we decided not to strip the Arabic diacritics and to keep them part of the vocabulary. The tokenizer was trained on `5%` of the training set, with a vocabulary size of `64'000`. For more details about preprocessing, check the [tokenizer code](https://huggingface.co/flax-community/arabic-t5-small/blob/main/t5_tokenizer_model.py). ## Data The model was trained on the concatenation of the Arabic Billion Words corpus and the Arabic subsets of the mC4 and Oscar datasets. A random `0.1%` subset of the data was reserved for evaluation and the rest for training. ## Results | | | | :-----------------: | :-----------: | | Evaluation accuracy | `56.84%` | | Evaluation Loss | `2.423` | | Training Loss | `2.392` | | Training Time | `22h 23m 51s` | ## Note for finetuning This model was pretrained with dropout turned off, so the default `dropout_rate` in the model config is `0`. To finetune the model, dropout should be turned back on, like this: ```python model = T5ForConditionalGeneration.from_pretrained("flax-community/arabic-t5-small", dropout_rate=0.1) ``` or, ```python model = AutoModelForSeq2SeqLM.from_pretrained("flax-community/arabic-t5-small", dropout_rate=0.1) ```
hai-minh-son/lstm-attention-nwp-model3
hai-minh-son
"2025-04-02T05:50:28"
0
0
null
[ "pytorch", "region:us" ]
null
"2025-04-01T04:55:57"
# LSTM_ATTENTION model for Next Word Prediction ## Model information - Name: lstm_attention - Training date: 2025-04-02 05:50:16 - Embedding size: 256 - Hidden size: 512 - Number of layers: 2 - Dropout rate: 0.3 - Percentage of data used: 1.0% - Epochs: 10 - Device: cuda - Batch size: 63 ## Model performance - final_train_loss: 4.7111 - final_val_loss: 4.7266 - final_train_acc: 0.2527 - final_val_acc: 0.2542
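The card above lists architecture hyperparameters but no code. As a hedged sketch only, an LSTM-with-attention next-word predictor matching those numbers could look like the following; the vocabulary size and the dot-product attention variant are assumptions, since neither is documented in the card.

```python
# Hedged sketch of an LSTM + attention next-word predictor with the card's
# hyperparameters (embedding 256, hidden 512, 2 layers, dropout 0.3).
import torch
import torch.nn as nn

VOCAB_SIZE = 30_000  # hypothetical; not stated in the card

class LSTMAttentionNWP(nn.Module):
    def __init__(self, vocab_size=VOCAB_SIZE, embed_dim=256, hidden_dim=512,
                 num_layers=2, dropout=0.3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers,
                            batch_first=True, dropout=dropout)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                    # tokens: (batch, seq)
        h, _ = self.lstm(self.embed(tokens))      # (batch, seq, hidden)
        query = h[:, -1:, :]                      # last state attends over all steps
        scores = torch.softmax(query @ h.transpose(1, 2), dim=-1)
        context = (scores @ h).squeeze(1)         # (batch, hidden)
        return self.out(context)                  # (batch, vocab) logits

logits = LSTMAttentionNWP()(torch.randint(0, VOCAB_SIZE, (2, 16)))
print(logits.shape)  # torch.Size([2, 30000])
```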
DevQuasar/nvidia.Minitron-4B-Base-GGUF
DevQuasar
"2025-02-17T17:46:51"
0
0
null
[ "gguf", "text-generation", "base_model:nvidia/Minitron-4B-Base", "base_model:quantized:nvidia/Minitron-4B-Base", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-17T17:17:58"
--- base_model: - nvidia/Minitron-4B-Base pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [nvidia/Minitron-4B-Base](https://huggingface.co/nvidia/Minitron-4B-Base) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
tanatapanun/fine-tuned-flan-t5-20-epochs
tanatapanun
"2023-12-28T05:05:46"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-12-28T04:03:04"
--- license: apache-2.0 base_model: google/flan-t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: fine-tuned-flan-t5-20-epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-flan-t5-20-epochs This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7842 - Rouge1: 0.2614 - Rouge2: 0.0824 - Rougel: 0.226 - Rougelsum: 0.2273 - Gen Len: 14.54 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 301 | 1.8551 | 0.1314 | 0.0425 | 0.1139 | 0.1139 | 11.4 | | 2.7128 | 2.0 | 602 | 0.9826 | 0.1868 | 0.065 | 0.1564 | 0.1571 | 15.06 | | 2.7128 | 3.0 | 903 | 0.8569 | 0.2079 | 0.0718 | 0.1716 | 0.1722 | 15.05 | | 1.1113 | 4.0 | 1204 | 0.8300 | 0.2141 | 0.0705 | 0.181 | 0.181 | 14.59 | | 0.9116 | 5.0 | 1505 | 0.8204 | 0.2254 | 0.0837 | 0.1943 | 0.1945 | 14.92 | | 0.9116 | 6.0 | 1806 | 0.8116 | 0.243 | 0.0807 | 0.2074 | 0.2072 | 15.03 | | 0.8732 | 7.0 | 2107 | 0.8082 | 0.2376 | 0.0752 | 0.2015 | 0.2016 | 14.83 | | 0.8732 | 8.0 | 2408 | 0.8007 | 0.2345 | 0.0735 | 0.2015 | 0.2021 | 14.41 | | 0.8336 | 9.0 | 2709 | 0.7968 | 0.2456 | 0.0757 | 0.2081 | 0.2081 | 14.4 | | 0.8151 | 10.0 | 3010 | 0.7942 | 0.2544 | 0.0752 | 0.2134 | 0.2146 | 14.58 | | 0.8151 | 11.0 | 3311 | 0.7924 | 0.2497 | 0.0783 | 0.2118 | 0.2124 | 14.5 | | 0.8187 | 12.0 | 3612 | 0.7907 | 0.2552 | 0.0769 | 0.2189 | 0.2191 | 14.43 | | 0.8187 | 13.0 | 3913 | 0.7891 | 0.258 | 0.077 | 0.2197 | 0.2199 | 14.37 | | 0.8028 | 14.0 | 4214 | 0.7867 | 0.2511 | 0.0801 | 0.2146 | 0.2147 | 14.71 | | 0.7793 | 15.0 | 4515 | 0.7852 | 0.2551 | 0.0777 | 0.2175 | 0.2177 | 14.67 | | 0.7793 | 16.0 | 4816 | 0.7858 | 0.2594 | 0.0774 | 0.2219 | 0.2219 | 14.47 | | 0.7872 | 17.0 | 5117 | 0.7850 | 0.2609 | 0.0803 | 0.2233 | 0.2244 | 14.56 | | 0.7872 | 18.0 | 5418 | 0.7843 | 0.2599 | 0.0811 | 0.2242 | 0.2256 | 14.55 | | 0.7756 | 19.0 | 5719 | 0.7844 | 0.261 | 0.0824 | 0.2256 | 0.2271 | 14.55 | | 0.7752 | 20.0 | 6020 | 0.7842 | 0.2614 | 0.0824 | 0.226 | 0.2273 | 14.54 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
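The card lists the training hyperparameters but not the script that set them. As a rough, non-authoritative illustration, they map onto `transformers` training arguments like this; `output_dir` is a placeholder, and the Adam settings shown in the card are the library defaults.

```python
# Hedged reconstruction of the stated hyperparameters; not the original script.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-flan-t5-20-epochs",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```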
itlwas/Reasoning-Llama-1b-v0.1-Q4_K_M-GGUF
itlwas
"2024-12-29T14:37:50"
19
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "reasoning", "llama-3", "llama-cpp", "gguf-my-repo", "en", "dataset:KingNish/reasoning-base-20k", "base_model:KingNish/Reasoning-Llama-1b-v0.1", "base_model:quantized:KingNish/Reasoning-Llama-1b-v0.1", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-29T14:37:42"
--- base_model: KingNish/Reasoning-Llama-1b-v0.1 datasets: - KingNish/reasoning-base-20k language: - en license: llama3.2 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft - reasoning - llama-3 - llama-cpp - gguf-my-repo --- # itlwas/Reasoning-Llama-1b-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`KingNish/Reasoning-Llama-1b-v0.1`](https://huggingface.co/KingNish/Reasoning-Llama-1b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/KingNish/Reasoning-Llama-1b-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo itlwas/Reasoning-Llama-1b-v0.1-Q4_K_M-GGUF --hf-file reasoning-llama-1b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo itlwas/Reasoning-Llama-1b-v0.1-Q4_K_M-GGUF --hf-file reasoning-llama-1b-v0.1-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo itlwas/Reasoning-Llama-1b-v0.1-Q4_K_M-GGUF --hf-file reasoning-llama-1b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo itlwas/Reasoning-Llama-1b-v0.1-Q4_K_M-GGUF --hf-file reasoning-llama-1b-v0.1-q4_k_m.gguf -c 2048 ```
Babivill/leidirocha
Babivill
"2023-05-16T09:39:41"
35
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-12-22T19:58:20"
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: leidirocha --- ### leidirocha Dreambooth model trained by Babivill with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: leidirocha (use that in your prompt) ![leidirocha 0](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%281%29.jpg)![leidirocha 1](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%282%29.jpg)![leidirocha 2](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%283%29.jpg)![leidirocha 3](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%284%29.jpg)![leidirocha 4](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%285%29.jpg)![leidirocha 5](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%286%29.jpg)![leidirocha 6](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%287%29.jpg)![leidirocha 7](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%288%29.jpg)![leidirocha 8](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%289%29.jpg)![leidirocha 9](https://huggingface.co/Babivill/leidirocha/resolve/main/concept_images/leidirocha_%2810%29.jpg)
utahnlp/snli_facebook_opt-350m_seed-2
utahnlp
"2024-04-06T02:12:20"
104
0
transformers
[ "transformers", "safetensors", "opt", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-06T02:11:32"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aegunal/FT_IPD_gemma7b
aegunal
"2024-03-11T18:02:53"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-03-11T18:02:50"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DouglasPontes/2020-Q2-full_tweets_combined90
DouglasPontes
"2024-01-22T22:10:02"
19
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-2019-90m", "base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-01-19T13:28:14"
--- license: mit base_model: cardiffnlp/twitter-roberta-base-2019-90m tags: - generated_from_trainer model-index: - name: 2020-Q2-full_tweets_combined90 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2020-Q2-full_tweets_combined90 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.1e-07 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2400000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | No log | 0.02 | 8000 | 2.2220 | | 2.4154 | 0.03 | 16000 | 2.1427 | | 2.4154 | 0.05 | 24000 | 2.1028 | | 2.2273 | 0.07 | 32000 | 2.0824 | | 2.2273 | 0.08 | 40000 | 2.0645 | | 2.1774 | 0.1 | 48000 | 2.0478 | | 2.1774 | 0.12 | 56000 | 2.0327 | | 2.1569 | 0.13 | 64000 | 2.0248 | | 2.1569 | 0.15 | 72000 | 2.0209 | | 2.1439 | 0.17 | 80000 | 2.0049 | | 2.1439 | 0.19 | 88000 | 2.0113 | | 2.1271 | 0.2 | 96000 | 2.0038 | | 2.1271 | 0.22 | 104000 | 2.0065 | | 2.1211 | 0.24 | 112000 | 1.9987 | | 2.1211 | 0.25 | 120000 | 1.9929 | | 2.1194 | 0.27 | 128000 | 1.9922 | | 2.1194 | 0.29 | 136000 | 1.9917 | | 2.1118 | 0.3 | 144000 | 1.9885 | | 2.1118 | 0.32 | 152000 | 1.9870 | | 2.1047 | 0.34 | 160000 | 1.9843 | | 2.1047 | 0.35 | 168000 | 1.9827 | | 2.1015 | 0.37 | 176000 | 1.9826 | | 2.1015 | 0.39 | 184000 | 1.9774 | | 2.1042 | 0.4 | 192000 | 1.9771 | | 2.1042 | 0.42 | 200000 | 1.9770 | | 2.0919 | 0.44 | 208000 | 1.9752 | | 2.0919 | 0.45 | 216000 | 1.9775 | | 2.0953 | 0.47 | 224000 | 1.9684 | | 2.0953 | 0.49 | 232000 | 1.9748 | | 2.0848 | 0.51 | 240000 | 1.9714 | | 2.0848 | 0.52 | 248000 | 1.9781 | | 2.0882 | 0.54 | 256000 | 1.9709 | | 2.0882 | 0.56 | 264000 | 1.9660 | | 2.0922 | 0.57 | 272000 | 1.9651 | | 2.0922 | 0.59 | 280000 | 1.9678 | | 2.0938 | 0.61 | 288000 | 1.9667 | | 2.0938 | 0.62 | 296000 | 1.9630 | | 2.095 | 0.64 | 304000 | 1.9642 | | 2.095 | 0.66 | 312000 | 1.9624 | | 2.0908 | 0.67 | 320000 | 1.9603 | | 2.0908 | 0.69 | 328000 | 1.9649 | | 2.0927 | 0.71 | 336000 | 1.9641 | | 2.0927 | 0.72 | 344000 | 1.9603 | | 2.0931 | 0.74 | 352000 | 1.9590 | | 2.0931 | 0.76 | 360000 | 1.9644 | | 2.087 | 0.77 | 368000 | 1.9635 | | 2.087 | 0.79 | 376000 | 1.9614 | | 2.0792 | 0.81 | 384000 | 1.9591 | | 2.0792 | 0.83 | 392000 | 1.9575 | | 2.0899 | 0.84 | 400000 | 1.9592 | | 2.0899 | 0.86 | 408000 | 1.9619 | | 2.0812 | 0.88 | 416000 | 1.9582 | | 2.0812 | 0.89 | 424000 | 1.9580 | | 2.0948 | 0.91 | 432000 | 1.9587 | | 2.0948 | 0.93 | 440000 | 1.9593 | | 2.0895 | 0.94 | 448000 | 1.9608 | | 2.0895 | 0.96 | 456000 | 1.9566 | | 2.0756 | 0.98 | 464000 | 1.9525 | | 2.0756 | 0.99 | 472000 | 1.9541 | | 2.0842 | 1.01 | 480000 | 1.9601 | | 2.0842 | 1.03 | 488000 | 1.9564 | | 2.0935 | 1.04 | 496000 | 1.9522 | | 2.0935 | 1.06 | 504000 | 1.9532 | | 2.0836 | 1.08 | 512000 | 1.9537 | | 2.0836 | 1.09 | 520000 | 1.9553 | | 2.0876 | 1.11 | 
528000 | 1.9469 | | 2.0876 | 1.13 | 536000 | 1.9497 | | 2.0778 | 1.15 | 544000 | 1.9542 | | 2.0778 | 1.16 | 552000 | 1.9516 | | 2.0829 | 1.18 | 560000 | 1.9506 | | 2.0829 | 1.2 | 568000 | 1.9505 | | 2.0864 | 1.21 | 576000 | 1.9531 | | 2.0864 | 1.23 | 584000 | 1.9455 | | 2.0893 | 1.25 | 592000 | 1.9471 | | 2.0893 | 1.26 | 600000 | 1.9539 | | 2.0808 | 1.28 | 608000 | 1.9455 | | 2.0808 | 1.3 | 616000 | 1.9497 | | 2.0838 | 1.31 | 624000 | 1.9466 | | 2.0838 | 1.33 | 632000 | 1.9498 | | 2.0812 | 1.35 | 640000 | 1.9510 | | 2.0812 | 1.36 | 648000 | 1.9526 | | 2.0793 | 1.38 | 656000 | 1.9471 | | 2.0793 | 1.4 | 664000 | 1.9469 | | 2.0789 | 1.41 | 672000 | 1.9455 | | 2.0789 | 1.43 | 680000 | 1.9469 | | 2.0883 | 1.45 | 688000 | 1.9439 | | 2.0883 | 1.47 | 696000 | 1.9439 | | 2.09 | 1.48 | 704000 | 1.9416 | | 2.09 | 1.5 | 712000 | 1.9492 | | 2.0845 | 1.52 | 720000 | 1.9430 | | 2.0845 | 1.53 | 728000 | 1.9484 | | 2.0742 | 1.55 | 736000 | 1.9456 | | 2.0742 | 1.57 | 744000 | 1.9380 | | 2.0839 | 1.58 | 752000 | 1.9418 | | 2.0839 | 1.6 | 760000 | 1.9434 | | 2.0806 | 1.62 | 768000 | 1.9450 | | 2.0806 | 1.63 | 776000 | 1.9426 | | 2.0805 | 1.65 | 784000 | 1.9441 | | 2.0805 | 1.67 | 792000 | 1.9459 | | 2.0833 | 1.68 | 800000 | 1.9435 | | 2.0833 | 1.7 | 808000 | 1.9455 | | 2.0763 | 1.72 | 816000 | 1.9421 | | 2.0763 | 1.73 | 824000 | 1.9438 | | 2.0758 | 1.75 | 832000 | 1.9371 | | 2.0758 | 1.77 | 840000 | 1.9432 | | 2.0888 | 1.79 | 848000 | 1.9414 | | 2.0888 | 1.8 | 856000 | 1.9444 | | 2.0786 | 1.82 | 864000 | 1.9408 | | 2.0786 | 1.84 | 872000 | 1.9397 | | 2.079 | 1.85 | 880000 | 1.9406 | | 2.079 | 1.87 | 888000 | 1.9442 | | 2.0817 | 1.89 | 896000 | 1.9404 | | 2.0817 | 1.9 | 904000 | 1.9450 | | 2.0792 | 1.92 | 912000 | 1.9380 | | 2.0792 | 1.94 | 920000 | 1.9385 | | 2.0741 | 1.95 | 928000 | 1.9449 | | 2.0741 | 1.97 | 936000 | 1.9414 | | 2.0832 | 1.99 | 944000 | 1.9402 | | 2.0832 | 2.0 | 952000 | 1.9410 | | 2.0695 | 2.02 | 960000 | 1.9371 | | 2.0695 | 2.04 | 968000 | 1.9342 | | 2.0813 | 2.05 | 976000 | 1.9376 | | 2.0813 | 2.07 | 984000 | 1.9397 | | 2.0804 | 2.09 | 992000 | 1.9394 | | 2.0804 | 2.11 | 1000000 | 1.9370 | | 2.0789 | 2.12 | 1008000 | 1.9350 | | 2.0789 | 2.14 | 1016000 | 1.9327 | | 2.0754 | 2.16 | 1024000 | 1.9421 | | 2.0754 | 2.17 | 1032000 | 1.9371 | | 2.0774 | 2.19 | 1040000 | 1.9411 | | 2.0774 | 2.21 | 1048000 | 1.9337 | | 2.0766 | 2.22 | 1056000 | 1.9387 | | 2.0766 | 2.24 | 1064000 | 1.9334 | | 2.079 | 2.26 | 1072000 | 1.9386 | | 2.079 | 2.27 | 1080000 | 1.9335 | | 2.068 | 2.29 | 1088000 | 1.9363 | | 2.068 | 2.31 | 1096000 | 1.9420 | | 2.0786 | 2.32 | 1104000 | 1.9331 | | 2.0786 | 2.34 | 1112000 | 1.9327 | | 2.0734 | 2.36 | 1120000 | 1.9391 | | 2.0734 | 2.37 | 1128000 | 1.9363 | | 2.0787 | 2.39 | 1136000 | 1.9321 | | 2.0787 | 2.41 | 1144000 | 1.9333 | | 2.0731 | 2.43 | 1152000 | 1.9369 | | 2.0731 | 2.44 | 1160000 | 1.9357 | | 2.0816 | 2.46 | 1168000 | 1.9353 | | 2.0816 | 2.48 | 1176000 | 1.9319 | | 2.0758 | 2.49 | 1184000 | 1.9366 | | 2.0758 | 2.51 | 1192000 | 1.9301 | | 2.0725 | 2.53 | 1200000 | 1.9329 | | 2.0725 | 2.54 | 1208000 | 1.9370 | | 2.085 | 2.56 | 1216000 | 1.9251 | | 2.085 | 2.58 | 1224000 | 1.9369 | | 2.0809 | 2.59 | 1232000 | 1.9377 | | 2.0809 | 2.61 | 1240000 | 1.9398 | | 2.0742 | 2.63 | 1248000 | 1.9368 | | 2.0742 | 2.64 | 1256000 | 1.9389 | | 2.0743 | 2.66 | 1264000 | 1.9287 | | 2.0743 | 2.68 | 1272000 | 1.9337 | | 2.0822 | 2.69 | 1280000 | 1.9323 | | 2.0822 | 2.71 | 1288000 | 1.9348 | | 2.0845 | 2.73 | 1296000 | 1.9328 | | 2.0845 | 2.75 | 1304000 | 1.9324 | | 2.0706 | 2.76 | 1312000 
| 1.9304 | | 2.0706 | 2.78 | 1320000 | 1.9322 | | 2.0813 | 2.8 | 1328000 | 1.9320 | | 2.0813 | 2.81 | 1336000 | 1.9379 | | 2.0768 | 2.83 | 1344000 | 1.9283 | | 2.0768 | 2.85 | 1352000 | 1.9352 | | 2.0776 | 2.86 | 1360000 | 1.9266 | | 2.0776 | 2.88 | 1368000 | 1.9339 | | 2.0776 | 2.9 | 1376000 | 1.9371 | | 2.0776 | 2.91 | 1384000 | 1.9353 | | 2.072 | 2.93 | 1392000 | 1.9290 | | 2.072 | 2.95 | 1400000 | 1.9337 | | 2.077 | 2.96 | 1408000 | 1.9318 | | 2.077 | 2.98 | 1416000 | 1.9326 | | 2.0777 | 3.0 | 1424000 | 1.9338 | | 2.0777 | 3.01 | 1432000 | 1.9307 | | 2.0846 | 3.03 | 1440000 | 1.9305 | | 2.0846 | 3.05 | 1448000 | 1.9312 | | 2.0744 | 3.07 | 1456000 | 1.9332 | | 2.0744 | 3.08 | 1464000 | 1.9313 | | 2.0767 | 3.1 | 1472000 | 1.9311 | | 2.0767 | 3.12 | 1480000 | 1.9322 | | 2.082 | 3.13 | 1488000 | 1.9362 | | 2.082 | 3.15 | 1496000 | 1.9329 | | 2.0774 | 3.17 | 1504000 | 1.9335 | | 2.0774 | 3.18 | 1512000 | 1.9342 | | 2.0793 | 3.2 | 1520000 | 1.9326 | | 2.0793 | 3.22 | 1528000 | 1.9313 | | 2.0834 | 3.23 | 1536000 | 1.9302 | | 2.0834 | 3.25 | 1544000 | 1.9299 | | 2.0698 | 3.27 | 1552000 | 1.9288 | | 2.0698 | 3.28 | 1560000 | 1.9311 | | 2.0721 | 3.3 | 1568000 | 1.9262 | | 2.0721 | 3.32 | 1576000 | 1.9320 | | 2.0742 | 3.33 | 1584000 | 1.9278 | | 2.0742 | 3.35 | 1592000 | 1.9333 | | 2.0774 | 3.37 | 1600000 | 1.9252 | | 2.0774 | 3.39 | 1608000 | 1.9301 | | 2.0766 | 3.4 | 1616000 | 1.9344 | | 2.0766 | 3.42 | 1624000 | 1.9320 | | 2.0702 | 3.44 | 1632000 | 1.9307 | | 2.0702 | 3.45 | 1640000 | 1.9304 | | 2.0772 | 3.47 | 1648000 | 1.9280 | | 2.0772 | 3.49 | 1656000 | 1.9324 | | 2.0757 | 3.5 | 1664000 | 1.9343 | | 2.0757 | 3.52 | 1672000 | 1.9312 | | 2.0747 | 3.54 | 1680000 | 1.9304 | | 2.0747 | 3.55 | 1688000 | 1.9360 | | 2.068 | 3.57 | 1696000 | 1.9297 | | 2.068 | 3.59 | 1704000 | 1.9337 | | 2.0825 | 3.6 | 1712000 | 1.9293 | | 2.0825 | 3.62 | 1720000 | 1.9295 | | 2.0811 | 3.64 | 1728000 | 1.9315 | | 2.0811 | 3.65 | 1736000 | 1.9279 | | 2.0844 | 3.67 | 1744000 | 1.9289 | | 2.0844 | 3.69 | 1752000 | 1.9279 | | 2.0827 | 3.71 | 1760000 | 1.9283 | | 2.0827 | 3.72 | 1768000 | 1.9295 | | 2.0684 | 3.74 | 1776000 | 1.9281 | | 2.0684 | 3.76 | 1784000 | 1.9330 | | 2.0724 | 3.77 | 1792000 | 1.9294 | | 2.0724 | 3.79 | 1800000 | 1.9276 | | 2.074 | 3.81 | 1808000 | 1.9227 | | 2.074 | 3.82 | 1816000 | 1.9320 | | 2.0801 | 3.84 | 1824000 | 1.9275 | | 2.0801 | 3.86 | 1832000 | 1.9302 | | 2.0783 | 3.87 | 1840000 | 1.9333 | | 2.0783 | 3.89 | 1848000 | 1.9296 | | 2.0787 | 3.91 | 1856000 | 1.9302 | | 2.0787 | 3.92 | 1864000 | 1.9347 | | 2.0733 | 3.94 | 1872000 | 1.9298 | | 2.0733 | 3.96 | 1880000 | 1.9302 | | 2.0742 | 3.97 | 1888000 | 1.9279 | | 2.0742 | 3.99 | 1896000 | 1.9258 | | 2.0769 | 4.01 | 1904000 | 1.9255 | | 2.0769 | 4.03 | 1912000 | 1.9282 | | 2.0736 | 4.04 | 1920000 | 1.9298 | | 2.0736 | 4.06 | 1928000 | 1.9325 | | 2.0713 | 4.08 | 1936000 | 1.9296 | | 2.0713 | 4.09 | 1944000 | 1.9293 | | 2.0825 | 4.11 | 1952000 | 1.9345 | | 2.0825 | 4.13 | 1960000 | 1.9346 | | 2.0828 | 4.14 | 1968000 | 1.9311 | | 2.0828 | 4.16 | 1976000 | 1.9307 | | 2.0821 | 4.18 | 1984000 | 1.9336 | | 2.0821 | 4.19 | 1992000 | 1.9265 | | 2.0768 | 4.21 | 2000000 | 1.9284 | | 2.0768 | 4.23 | 2008000 | 1.9290 | | 2.0695 | 4.24 | 2016000 | 1.9306 | | 2.0695 | 4.26 | 2024000 | 1.9299 | | 2.0698 | 4.28 | 2032000 | 1.9230 | | 2.0698 | 4.29 | 2040000 | 1.9272 | | 2.0776 | 4.31 | 2048000 | 1.9306 | | 2.0776 | 4.33 | 2056000 | 1.9243 | | 2.0797 | 4.35 | 2064000 | 1.9266 | | 2.0797 | 4.36 | 2072000 | 1.9249 | | 2.0808 | 4.38 | 2080000 | 1.9279 | | 2.0808 | 
4.4 | 2088000 | 1.9262 | | 2.0776 | 4.41 | 2096000 | 1.9350 | | 2.0776 | 4.43 | 2104000 | 1.9297 | | 2.0805 | 4.45 | 2112000 | 1.9337 | | 2.0805 | 4.46 | 2120000 | 1.9302 | | 2.0791 | 4.48 | 2128000 | 1.9337 | | 2.0791 | 4.5 | 2136000 | 1.9298 | | 2.0771 | 4.51 | 2144000 | 1.9268 | | 2.0771 | 4.53 | 2152000 | 1.9370 | | 2.0807 | 4.55 | 2160000 | 1.9307 | | 2.0807 | 4.56 | 2168000 | 1.9292 | | 2.0856 | 4.58 | 2176000 | 1.9300 | | 2.0856 | 4.6 | 2184000 | 1.9329 | | 2.0744 | 4.61 | 2192000 | 1.9319 | | 2.0744 | 4.63 | 2200000 | 1.9352 | | 2.0839 | 4.65 | 2208000 | 1.9368 | | 2.0839 | 4.67 | 2216000 | 1.9343 | | 2.0706 | 4.68 | 2224000 | 1.9290 | | 2.0706 | 4.7 | 2232000 | 1.9347 | | 2.0745 | 4.72 | 2240000 | 1.9294 | | 2.0745 | 4.73 | 2248000 | 1.9255 | | 2.0767 | 4.75 | 2256000 | 1.9271 | | 2.0767 | 4.77 | 2264000 | 1.9296 | | 2.0753 | 4.78 | 2272000 | 1.9268 | | 2.0753 | 4.8 | 2280000 | 1.9292 | | 2.0716 | 4.82 | 2288000 | 1.9310 | | 2.0716 | 4.83 | 2296000 | 1.9267 | | 2.0778 | 4.85 | 2304000 | 1.9301 | | 2.0778 | 4.87 | 2312000 | 1.9280 | | 2.0724 | 4.88 | 2320000 | 1.9283 | | 2.0724 | 4.9 | 2328000 | 1.9289 | | 2.0811 | 4.92 | 2336000 | 1.9315 | | 2.0811 | 4.93 | 2344000 | 1.9268 | | 2.0816 | 4.95 | 2352000 | 1.9304 | | 2.0816 | 4.97 | 2360000 | 1.9302 | | 2.0775 | 4.99 | 2368000 | 1.9292 | | 2.0775 | 5.0 | 2376000 | 1.9274 | | 2.0807 | 5.02 | 2384000 | 1.9317 | | 2.0807 | 5.04 | 2392000 | 1.9298 | | 2.0668 | 5.05 | 2400000 | 1.9349 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.0
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s108_v4_l4_v100
KingKazma
"2023-08-13T15:46:55"
0
0
peft
[ "peft", "region:us" ]
null
"2023-08-13T15:46:54"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
CyantifiCQ/ppo-Huggy_01
CyantifiCQ
"2022-12-25T17:46:34"
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2022-12-25T17:46:24"
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **play directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: CyantifiCQ/ppo-Huggy_01 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
camilomj/ROSEDEBUT
camilomj
"2024-05-16T23:22:56"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-05-16T23:22:05"
--- license: apache-2.0 ---

Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. The dataset is updated daily and includes publicly available models on the Hugging Face Hub.

This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope it will support research on model cards and their use, but its format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.

Dataset Details

Uses

There are a number of potential uses for this dataset, including the following (a minimal loading sketch appears after the list):

  • text mining to find common themes in model cards
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata
  • training language models on model cards
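As referenced above, here is a minimal loading sketch for these uses. It assumes the Hub dataset id `librarian-bots/model_cards_with_metadata` and that the single split is named `train`; both are assumptions rather than facts stated in this card.

```python
# Minimal sketch: load the dataset and tally the most common libraries,
# a simple instance of the metadata-analysis use case listed above.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")
library_counts = Counter(row["library_name"] for row in ds if row["library_name"])
print(library_counts.most_common(10))
```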

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.

Dataset Creation

Curation Rationale

The dataset was created to assist people in working with model cards. In particular, it was created to support research on model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly; that option may be preferable if you have a very specific use case or require a different format.
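For the API-based route mentioned above, a minimal sketch using the `huggingface_hub` client library to fetch a single model card; the repo id `gpt2` is only an illustrative example.

```python
# Minimal sketch: download and parse one model card with huggingface_hub.
from huggingface_hub import ModelCard

card = ModelCard.load("gpt2")  # any model repo id works here
print(card.data)               # parsed YAML metadata (tags, license, ...)
print(card.text[:300])         # the start of the README body
```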

Source Data

The source data consists of the README.md files of models hosted on the Hugging Face Hub. We do not include any other supplementary files that may sit alongside the model card in the repository.

Data Collection and Processing

The data is downloaded by a cron job that runs daily.

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. They include a broad variety of people from the community, ranging from large companies to individual researchers. This repository does not record who created each model card, although that information can be retrieved from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. While we do not expect the majority of model cards to contain personal or sensitive information, some may. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community, and we have no control over their content. We do not review the cards, and we make no claims about the accuracy of the information they contain. Some model cards discuss bias themselves, sometimes by giving examples of bias in the training data or in the model's responses; as a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
