| Column | Type | Range |
|:--|:--|:--|
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0–75.3M |
| likes | int64 | 0–10.6k |
| library_name | string | 189 classes |
| tags | sequence | lengths 1–1.84k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | — |
| card | string | lengths 1–901k |
leimu/24
leimu
"2024-04-15T01:54:24Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T01:54:24Z"
Entry not found
Sarojj/plcalbkmistral-GG8
Sarojj
"2024-04-15T02:01:39Z"
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-15T01:57:49Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---

# Uploaded model

- **Developed by:** Sarojj
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
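Since the repo ships GGUF weights, a minimal loading sketch with `llama-cpp-python` is shown below. The `.gguf` filename inside the repository is an assumption; the prompt format follows the base Mistral-instruct template.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from the repo; the filename is hypothetical --
# check the repository's file list for the actual quantization you want.
gguf_path = hf_hub_download(
    repo_id="Sarojj/plcalbkmistral-GG8",
    filename="model.gguf",  # assumption: replace with the real file name
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("[INST] What is GGUF? [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```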
Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45
Santp98
"2024-04-15T01:58:11Z"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "dataset:Santp98/query_generated-title-secop2", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-15T01:58:03Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- Santp98/query_generated-title-secop2
---

# Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45')
model = AutoModel.from_pretrained('Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 1178 with parameters:

```
{'batch_size': 86, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`src.models.utils.custom_parts.CustomMultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-method:

```
{
    "epochs": 6,
    "evaluation_steps": 500,
    "evaluator": "src.models.utils.custom_parts.CustomEmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 1e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```

## Citing & Authors
SaiSaketh/my_awesome_qa_model
SaiSaketh
"2024-04-15T01:58:46Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-15T01:58:12Z"
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: bert-base-uncased
model-index:
- name: my_awesome_qa_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_qa_model

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.1231
- Accuracy: 0.69

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 1.5999          | 0.558    |
| 1.85          | 2.0   | 500  | 1.2074          | 0.662    |
| 1.85          | 3.0   | 750  | 1.1231          | 0.69     |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Tokenizers 0.15.2
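Despite the "qa" name, the repo's tags mark this as a text-classification checkpoint. A minimal inference sketch follows; the label set was not documented, so the outputs are assumed to be generic `LABEL_0`/`LABEL_1`-style ids.

```python
from transformers import pipeline

# Hypothetical usage: the repo is tagged text-classification, but the
# label names were not documented, so expect generic LABEL_* ids.
classifier = pipeline("text-classification", model="SaiSaketh/my_awesome_qa_model")
print(classifier("An example sentence to score."))
```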
Sumail/Ame1
Sumail
"2024-04-15T02:01:59Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "base_model:gotchachurchkhela/SN6-23", "base_model:tom-brady/sn6_200", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-15T01:58:53Z"
---
base_model:
- gotchachurchkhela/SN6-23
- tom-brady/sn6_200
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [gotchachurchkhela/SN6-23](https://huggingface.co/gotchachurchkhela/SN6-23)
* [tom-brady/sn6_200](https://huggingface.co/tom-brady/sn6_200)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: gotchachurchkhela/SN6-23
        layer_range: [0, 24]
      - model: tom-brady/sn6_200
        layer_range: [0, 24]
merge_method: slerp
base_model: gotchachurchkhela/SN6-23
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
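Reproducing the merge is a matter of saving the YAML above (e.g. as `config.yaml`) and running mergekit's `mergekit-yaml config.yaml ./merged` CLI; the output then loads like any other transformers checkpoint. A minimal loading sketch, with the dtype mirroring the config:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The merge was produced in bfloat16, so load it the same way.
tokenizer = AutoTokenizer.from_pretrained("Sumail/Ame1")
model = AutoModelForCausalLM.from_pretrained("Sumail/Ame1", torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```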
zzttbrdd/sn6_6m
zzttbrdd
"2024-04-15T02:08:15Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-15T01:59:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ikozlov/MobileDiffusion
ikozlov
"2024-04-15T02:01:41Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-15T02:01:41Z"
--- license: openrail ---
udit-k/Mistral7BTamil
udit-k
"2024-04-15T02:01:47Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:01:47Z"
Entry not found
leimu/25
leimu
"2024-04-15T02:03:05Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:03:05Z"
Entry not found
yongsun-shim/eeve-8bit
yongsun-shim
"2024-04-15T02:05:58Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:05:58Z"
Entry not found
Fizzarolli/lust-7b-GGUF
Fizzarolli
"2024-04-15T02:20:40Z"
0
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2024-04-15T02:06:08Z"
---
license: apache-2.0
---

# lust 7b

Yeah yeah, you get the drill: it's just the gargamels (the GGUF files). Proper quantizations coming sometime soon.
hundredl/diffusers-train
hundredl
"2024-04-15T02:24:48Z"
0
0
null
[ "tensorboard", "region:us" ]
null
"2024-04-15T02:06:33Z"
Entry not found
yongsun-shim/eeve-8bit-test
yongsun-shim
"2024-04-15T02:20:19Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
text-generation
"2024-04-15T02:06:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
inXistant/MPOClassification
inXistant
"2024-04-15T02:20:45Z"
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-15T02:06:53Z"
Entry not found
Sumail/Ame2
Sumail
"2024-04-15T02:10:53Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "base_model:gotchachurchkhela/SN6-23", "base_model:GamblerOnTrain/danke20a", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-15T02:07:47Z"
---
base_model:
- gotchachurchkhela/SN6-23
- GamblerOnTrain/danke20a
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [gotchachurchkhela/SN6-23](https://huggingface.co/gotchachurchkhela/SN6-23)
* [GamblerOnTrain/danke20a](https://huggingface.co/GamblerOnTrain/danke20a)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: gotchachurchkhela/SN6-23
        layer_range: [0, 24]
      - model: GamblerOnTrain/danke20a
        layer_range: [0, 24]
merge_method: slerp
base_model: gotchachurchkhela/SN6-23
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
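As with any mergekit output, the merged weights behave like a normal causal LM. A quick generation sketch via the pipeline API; whether the tokenizer ships a chat template is an assumption, so this sticks to plain text completion:

```python
import torch
from transformers import pipeline

# Plain text-completion sketch; bfloat16 matches the merge config.
generator = pipeline("text-generation", model="Sumail/Ame2", torch_dtype=torch.bfloat16)
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```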
Shalazary/ruBert-base-sberquad-0.005-filtered
Shalazary
"2024-04-15T02:11:35Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
"2024-04-15T02:11:32Z"
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: ai-forever/ruBert-base
model-index:
- name: ruBert-base-sberquad-0.005-filtered
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ruBert-base-sberquad-0.005-filtered

This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000

### Training results

### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
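Because this repo is a PEFT adapter rather than full weights, it loads on top of the base model. A minimal sketch; the task head is undocumented, so `AutoModelForQuestionAnswering` is an assumption inferred from "sberquad" in the repo name:

```python
from peft import PeftModel
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Assumption: a QA head, inferred from the SberQuAD-style repo name.
base = AutoModelForQuestionAnswering.from_pretrained("ai-forever/ruBert-base")
model = PeftModel.from_pretrained(base, "Shalazary/ruBert-base-sberquad-0.005-filtered")
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")
```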
sjhpark/dolly-v2-3b-finetuned-medmcqa
sjhpark
"2024-04-15T02:11:35Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:11:35Z"
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: databricks/dolly-v2-3b
model-index:
- name: dolly-v2-3b-finetuned-medmcqa
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dolly-v2-3b-finetuned-medmcqa

This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 20
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False

### Framework versions

- PEFT 0.6.2
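The appended `bitsandbytes` block records how the base model was quantized during training. A sketch reconstructing that load configuration; mapping the recorded fields onto `BitsAndBytesConfig` this way is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config recorded above: 4-bit NF4,
# float16 compute, no double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-3b",
    quantization_config=bnb_config,
    device_map="auto",
)
```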
BrandonM001/bert-finetuned-ner2
BrandonM001
"2024-04-15T02:23:56Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-04-15T02:13:24Z"
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner2

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9332
- Recall: 0.9517
- F1: 0.9423
- Accuracy: 0.9864

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0747        | 1.0   | 1756 | 0.0679          | 0.8990    | 0.9307 | 0.9146 | 0.9807   |
| 0.0346        | 2.0   | 3512 | 0.0641          | 0.9331    | 0.9478 | 0.9404 | 0.9857   |
| 0.0233        | 3.0   | 5268 | 0.0603          | 0.9332    | 0.9517 | 0.9423 | 0.9864   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
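For token classification, the usual entry point is the NER pipeline. A minimal sketch; the entity label set follows whatever the (undocumented) fine-tuning data used, so the tag scheme below is an assumption:

```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="BrandonM001/bert-finetuned-ner2",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```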
Guilherme34/Samantha-mixtraldolphin-GGUF-q2
Guilherme34
"2024-04-15T02:27:24Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-04-15T02:14:31Z"
Entry not found
jetx/ih9suy5
jetx
"2024-04-15T02:17:31Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-15T02:15:01Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
f0ster/PhotographyLoRA
f0ster
"2024-04-15T02:25:36Z"
0
1
null
[ "art", "text-to-image", "en", "region:us" ]
text-to-image
"2024-04-15T02:15:41Z"
--- language: - en pipeline_tag: text-to-image tags: - art ---
lurenbai/gemma-7b-it-pytorch
lurenbai
"2024-04-15T02:16:49Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:16:49Z"
Entry not found
ai-er/llama-2-medi-dialog-mini-finetuned
ai-er
"2024-04-15T02:21:17Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
"2024-04-15T02:17:00Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DBOETTGE/DANILO
DBOETTGE
"2024-04-15T02:19:36Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:19:35Z"
Entry not found
Enagamirzayev/whisper-small-llm-lingo-adapters_m
Enagamirzayev
"2024-04-15T02:20:09Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-15T02:19:58Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anologicon/q-FrozenLake-v1-4x4-noSlippery
anologicon
"2024-04-15T02:44:40Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-04-15T02:20:08Z"
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="anologicon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
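`load_from_hub` is not part of any library; it is the small helper defined in the Hugging Face Deep RL course notebooks that these cards come from. A sketch of what it looks like, reconstructed from the course material, so treat the details as an assumption:

```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table dict (qtable, env_id, ...) from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```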
aipib/karasu-1.1B-merge1
aipib
"2024-04-15T02:21:26Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector", "conversational", "base_model:lightblue/karasu-1.1B", "base_model:niryuu/Karasu-1.1b-chat-vector", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-15T02:20:13Z"
Temporary Redirect. Redirecting to /aipib/karasu-1.1B-linear2/resolve/main/README.md
liminerity/Bitnet-M7-70m
liminerity
"2024-04-15T02:25:12Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Mistral", "1bit", "bitnet", "abideen", "M7", "Liminerity", "dataset:abideen/Cosmopedia-100k-pretrain", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-15T02:21:00Z"
---
datasets:
- abideen/Cosmopedia-100k-pretrain
tags:
- Mistral
- 1bit
- bitnet
- abideen
- M7
- Liminerity
---

This is my second attempt at converting a float16 model to 1.5-bit. I used my model liminerity/M7-7b as the base model, trained on the abideen/Cosmopedia-100k-pretrain dataset, and used his Google Colab project to make this.

# Example inference code from abideen's Colab project

```python
import torch
from torch import nn
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import *

# Load a pretrained BitNet model
model_id = "liminerity/Bitnet-M7-70m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def activation_quant(x):
    # Per-token 8-bit absmax quantization of activations
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
    y = (x * scale).round().clamp_(-128, 127)
    y = y / scale
    return y

def weight_quant(w):
    # Ternary absmean quantization of weights
    scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
    u = (w * scale).round().clamp_(-1, 1)
    u = u / scale
    return u

class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight  # a weight tensor with shape [d, k]
        x = x.to(w.device)
        RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
        x_norm = RMSNorm(x)
        # A trick for implementing the Straight-Through Estimator (STE) using detach()
        x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_quant = w + (weight_quant(w) - w).detach()
        y = F.linear(x_quant, w_quant)
        return y

def convert_to_bitnet(model, copy_weights):
    for name, module in model.named_modules():
        # Replace linear layers with BitNet
        if isinstance(module, LlamaSdpaAttention) or isinstance(module, LlamaMLP):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, nn.Linear):
                    bitlinear = BitLinear(child_module.in_features,
                                          child_module.out_features,
                                          child_module.bias is not None).to(device="cuda:0")
                    if copy_weights:
                        bitlinear.weight = child_module.weight
                        if child_module.bias is not None:
                            bitlinear.bias = child_module.bias
                    setattr(module, child_name, bitlinear)
        # Remove redundant input_layernorms
        elif isinstance(module, LlamaDecoderLayer):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
                    setattr(module, child_name, nn.Identity().to(device="cuda:0"))

convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")

prompt = "What is Machine Learning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generate_ids = model.generate(inputs.input_ids, max_length=50)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True,
                             clean_up_tokenization_spaces=False)[0])
```
notzero/qlora_mistral
notzero
"2024-04-15T02:21:37Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-15T02:21:08Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tom-brady/sn6_247
tom-brady
"2024-04-15T02:46:28Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-15T02:21:17Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yixuan-chia/mistral-7b-v0.2-l40-trtllm-0.8.0
yixuan-chia
"2024-04-15T02:21:48Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:21:47Z"
Entry not found
mergekit-community/mergekit-slerp-fwhqbxq
mergekit-community
"2024-04-15T02:21:55Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:21:54Z"
Invalid username or password.
jdeklerk10/DS-6.7B-schema_2
jdeklerk10
"2024-04-15T02:38:30Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-04-15T02:22:29Z"
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: deepseek-ai/deepseek-coder-6.7b-instruct model-index: - name: DS-6.7B-schema_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DS-6.7B-schema_2 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0235 | 0.19 | 50 | 0.2107 | | 0.0528 | 0.38 | 100 | 0.1890 | | 0.055 | 0.57 | 150 | 0.1867 | | 0.053 | 0.76 | 200 | 0.1722 | | 0.0843 | 0.95 | 250 | 0.1718 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
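As a quick-start sketch (not part of the original card), the LoRA adapter produced by this training run can be attached to the base model with PEFT; the adapter repo id and the prompt below are illustrative assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model this adapter was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")

# Attach the fine-tuned adapter (repo id assumed to be this model's repo).
model = PeftModel.from_pretrained(base, "jdeklerk10/DS-6.7B-schema_2")

prompt = "Write a SQL schema for a simple blog."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```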
Enagamirzayev/whisper-small-llm-lingo_m
Enagamirzayev
"2024-04-15T02:26:35Z"
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-04-15T02:22:54Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
m-aliabbas1/medicare_idrak
m-aliabbas1
"2024-04-15T02:25:07Z"
0
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "region:us" ]
text-classification
"2024-04-15T02:24:33Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: i didn't quite hear you can you repeat your previous message - text: well no i have no interest in that right now - text: hey how's life treating you - text: sorry i can't pick up the call leave your message after the beep and i'll respond - text: i have already reached a decision on the matter pipeline_tag: text-classification inference: true base_model: sentence-transformers/paraphrase-mpnet-base-v2 --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 29 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | DNQ | <ul><li>'not a good match'</li><li>"i'm not suitable"</li><li>'not a good fit'</li></ul> | | GreetBack | <ul><li>"hello i'm well how's your energy"</li><li>'greetings how may i be of service to you'</li><li>"hey there how's your day turning out"</li></ul> | | Not_Interested | <ul><li>"i really don't think we're interested"</li><li>"i don't need two of them"</li><li>"i'm not ready to take on another expense so no thank you"</li></ul> | | DNC | <ul><li>'please keep an eye on your list'</li><li>"you're calling me on the same different numbers all the time"</li><li>'taking off your goddamn list'</li></ul> | | abusive | <ul><li>'let the fuck up the dog home and fucking phone asshole'</li><li>'i fucking said no'</li><li>"you're a line sack of shit scammer"</li></ul> | | can_you_email | <ul><li>'can you provide more details via email'</li><li>'is email the primary method of 
correspondence'</li><li>'can you send an email with the details'</li></ul> | | interested | <ul><li>"i'm all in"</li><li>"i'm keen to learn more about this topic"</li><li>'this is compelling'</li></ul> | | greetings | <ul><li>'hi there trust you enjoyed your day'</li><li>'hello'</li><li>'night'</li></ul> | | answering_machine | <ul><li>"this is george can't come to the phone right now they've a message or call that way"</li><li>'this is the automated assistant kindly leave your message'</li><li>"if you're satisfied with the message press 1 to listen to your message press 2 to erase and rerecord press 3 to continue recording where you left off press 4"</li></ul> | | where_are_you_calling_from | <ul><li>'is india where your organization is based'</li><li>'can you confirm if your company originates from the philippines'</li><li>"what's the city of your company's primary office"</li></ul> | | scam | <ul><li>'is this call legitimate or could it be a scam'</li><li>"i'm not going to give you that information"</li><li>"i don't give out my age"</li></ul> | | provide_age | <ul><li>'i was born on september 3 1998'</li><li>'my year of birth is 1995'</li><li>'i hereby confirm that i am 32 years old'</li></ul> | | who_are_you | <ul><li>'what name should i address you by'</li><li>"can you inform me of your company's name"</li><li>'state your name and purpose'</li></ul> | | weather | <ul><li>'describe the weather conditions currently'</li><li>"what's the temperature like right now"</li><li>"how's the weather forecast for the month"</li></ul> | | affirmation | <ul><li>"yes but i'll note that it says yes to the moment thank you you can call me"</li><li>'yeah what'</li><li>'yeah sure yep'</li></ul> | | not_decision_maker | <ul><li>"i wish i could help but i can't decide"</li><li>"i don't have the final say on this matter"</li><li>'decisions about this are not in my hands'</li></ul> | | calling_about | <ul><li>'why do you want to speak with me'</li><li>'what do you hope to accomplish with this call'</li><li>"what's the central purpose of your call"</li></ul> | | where_get_number | <ul><li>'who provided you with my phone number'</li><li>"i don't recall sharing my number with you clarify"</li><li>'where did you get my number'</li></ul> | | BUSY | <ul><li>"i'm occupied with work can we talk later"</li><li>"sorry busy right now let's talk later"</li><li>"well i'm just fixing to walk out the door i'm sorry i don't have time to talk"</li></ul> | | decline | <ul><li>"i don't really need any of that"</li><li>"no i don't work with that"</li><li>'not in my repertoire'</li></ul> | | are_you_bot | <ul><li>'are you a virtual assistant'</li><li>"can you confirm if you're a bot"</li><li>"can you tell me if you're a robot or not"</li></ul> | | language_barrier | <ul><li>'espaol please translate'</li><li>'not english speaker'</li><li>"i'm no english i'm sorry"</li></ul> | | complain_calls | <ul><li>'you call me everyday'</li><li>'stop interfering with my life with these calls'</li><li>"this is harassment and i won't stand for it"</li></ul> | | already | <ul><li>'your inquiry was already forwarded to the relevant department'</li><li>'already resolved'</li><li>'i am pleased to report that this has been taken care of already'</li></ul> | | hold_a_sec | <ul><li>'hold for just a moment while i verify that'</li><li>"i'll be right back stay on the line"</li><li>'hang on a second'</li></ul> | | transfer_request | <ul><li>"i'm requesting to speak with your superior"</li><li>'transfer my call to someone with more 
authority'</li><li>'transfer my call to your manager please'</li></ul> | | other | <ul><li>"oh i'm just getting on the roof"</li><li>"i've been curious about different religious practices and beliefs around the world"</li><li>"i'm planning to attend a workshop on mindfulness and meditation"</li></ul> | | say_again | <ul><li>'i missed the last part can you repeat it'</li><li>"i didn't hear you properly say it again please"</li><li>"i'm struggling to hear you say it again"</li></ul> | | sorry_greeting | <ul><li>"it's been a bit of a rough patch"</li><li>"sorry i'm feeling a bit down today"</li><li>"i'm not really feeling upbeat today"</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("m-aliabbas1/medicare_idrak") # Run inference preds = model("hey how's life treating you") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 1 | 7.3731 | 109 | | Label | Training Sample Count | |:---------------------------|:----------------------| | BUSY | 530 | | DNC | 583 | | DNQ | 102 | | GreetBack | 228 | | Not_Interested | 499 | | abusive | 135 | | affirmation | 213 | | already | 69 | | answering_machine | 329 | | are_you_bot | 216 | | calling_about | 141 | | can_you_email | 125 | | complain_calls | 67 | | decline | 399 | | greetings | 81 | | hold_a_sec | 76 | | interested | 89 | | language_barrier | 164 | | not_decision_maker | 85 | | other | 54 | | provide_age | 351 | | say_again | 81 | | scam | 106 | | sorry_greeting | 94 | | transfer_request | 74 | | weather | 140 | | where_are_you_calling_from | 251 | | where_get_number | 127 | | who_are_you | 219 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 40 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0000 | 1 | 0.2008 | - | | 0.0018 | 50 | 0.2072 | - | | 0.0036 | 100 | 0.1592 | - | | 0.0053 | 150 | 0.1905 | - | | 0.0071 | 200 | 0.1583 | - | | 0.0089 | 250 | 0.1144 | - | | 0.0107 | 300 | 0.121 | - | | 0.0124 | 350 | 0.1347 | - | | 0.0142 | 400 | 0.0711 | - | | 0.0160 | 450 | 0.0998 | - | | 0.0178 | 500 | 0.1193 | - | | 0.0195 | 550 | 0.131 | - | | 0.0213 | 600 | 0.1147 | - | | 0.0231 | 650 | 0.0851 | - | | 0.0249 | 700 | 0.0704 | - | | 0.0267 | 750 | 0.0456 | - | | 0.0284 | 800 | 0.0505 | - | | 0.0302 | 850 | 0.0357 | - | | 0.0320 | 900 | 0.0476 | - | | 0.0338 | 950 | 0.0374 | - | | 
0.0355 | 1000 | 0.0262 | - | | 0.0373 | 1050 | 0.1045 | - | | 0.0391 | 1100 | 0.0428 | - | | 0.0409 | 1150 | 0.0036 | - | | 0.0426 | 1200 | 0.0161 | - | | 0.0444 | 1250 | 0.0494 | - | | 0.0462 | 1300 | 0.0418 | - | | 0.0480 | 1350 | 0.0219 | - | | 0.0498 | 1400 | 0.0087 | - | | 0.0515 | 1450 | 0.0075 | - | | 0.0533 | 1500 | 0.0548 | - | | 0.0551 | 1550 | 0.0101 | - | | 0.0569 | 1600 | 0.0465 | - | | 0.0586 | 1650 | 0.0144 | - | | 0.0604 | 1700 | 0.0091 | - | | 0.0622 | 1750 | 0.0169 | - | | 0.0640 | 1800 | 0.0483 | - | | 0.0657 | 1850 | 0.0431 | - | | 0.0675 | 1900 | 0.0066 | - | | 0.0693 | 1950 | 0.0591 | - | | 0.0711 | 2000 | 0.0238 | - | | 0.0729 | 2050 | 0.0022 | - | | 0.0746 | 2100 | 0.0446 | - | | 0.0764 | 2150 | 0.006 | - | | 0.0782 | 2200 | 0.0046 | - | | 0.0800 | 2250 | 0.0065 | - | | 0.0817 | 2300 | 0.0054 | - | | 0.0835 | 2350 | 0.0035 | - | | 0.0853 | 2400 | 0.006 | - | | 0.0871 | 2450 | 0.01 | - | | 0.0888 | 2500 | 0.0055 | - | | 0.0906 | 2550 | 0.0053 | - | | 0.0924 | 2600 | 0.0898 | - | | 0.0942 | 2650 | 0.0077 | - | | 0.0959 | 2700 | 0.0057 | - | | 0.0977 | 2750 | 0.0016 | - | | 0.0995 | 2800 | 0.0493 | - | | 0.1013 | 2850 | 0.0009 | - | | 0.1031 | 2900 | 0.0046 | - | | 0.1048 | 2950 | 0.0027 | - | | 0.1066 | 3000 | 0.0041 | - | | 0.1084 | 3050 | 0.0038 | - | | 0.1102 | 3100 | 0.0038 | - | | 0.1119 | 3150 | 0.0019 | - | | 0.1137 | 3200 | 0.0011 | - | | 0.1155 | 3250 | 0.0466 | - | | 0.1173 | 3300 | 0.0037 | - | | 0.1190 | 3350 | 0.0006 | - | | 0.1208 | 3400 | 0.0429 | - | | 0.1226 | 3450 | 0.0662 | - | | 0.1244 | 3500 | 0.0223 | - | | 0.1262 | 3550 | 0.0624 | - | | 0.1279 | 3600 | 0.0154 | - | | 0.1297 | 3650 | 0.0454 | - | | 0.1315 | 3700 | 0.0009 | - | | 0.1333 | 3750 | 0.0017 | - | | 0.1350 | 3800 | 0.0025 | - | | 0.1368 | 3850 | 0.0551 | - | | 0.1386 | 3900 | 0.0412 | - | | 0.1404 | 3950 | 0.0028 | - | | 0.1421 | 4000 | 0.017 | - | | 0.1439 | 4050 | 0.0009 | - | | 0.1457 | 4100 | 0.0013 | - | | 0.1475 | 4150 | 0.0008 | - | | 0.1493 | 4200 | 0.0009 | - | | 0.1510 | 4250 | 0.0012 | - | | 0.1528 | 4300 | 0.0007 | - | | 0.1546 | 4350 | 0.0029 | - | | 0.1564 | 4400 | 0.0045 | - | | 0.1581 | 4450 | 0.0014 | - | | 0.1599 | 4500 | 0.0405 | - | | 0.1617 | 4550 | 0.0209 | - | | 0.1635 | 4600 | 0.0065 | - | | 0.1652 | 4650 | 0.0414 | - | | 0.1670 | 4700 | 0.0009 | - | | 0.1688 | 4750 | 0.001 | - | | 0.1706 | 4800 | 0.0052 | - | | 0.1724 | 4850 | 0.0005 | - | | 0.1741 | 4900 | 0.0005 | - | | 0.1759 | 4950 | 0.0007 | - | | 0.1777 | 5000 | 0.0015 | - | | 0.1795 | 5050 | 0.0004 | - | | 0.1812 | 5100 | 0.0504 | - | | 0.1830 | 5150 | 0.0008 | - | | 0.1848 | 5200 | 0.0086 | - | | 0.1866 | 5250 | 0.0015 | - | | 0.1883 | 5300 | 0.0009 | - | | 0.1901 | 5350 | 0.0557 | - | | 0.1919 | 5400 | 0.0004 | - | | 0.1937 | 5450 | 0.0008 | - | | 0.1955 | 5500 | 0.0007 | - | | 0.1972 | 5550 | 0.0081 | - | | 0.1990 | 5600 | 0.0008 | - | | 0.2008 | 5650 | 0.0037 | - | | 0.2026 | 5700 | 0.0614 | - | | 0.2043 | 5750 | 0.0087 | - | | 0.2061 | 5800 | 0.014 | - | | 0.2079 | 5850 | 0.0035 | - | | 0.2097 | 5900 | 0.0097 | - | | 0.2114 | 5950 | 0.0009 | - | | 0.2132 | 6000 | 0.0135 | - | | 0.2150 | 6050 | 0.0003 | - | | 0.2168 | 6100 | 0.0009 | - | | 0.2186 | 6150 | 0.001 | - | | 0.2203 | 6200 | 0.0014 | - | | 0.2221 | 6250 | 0.0042 | - | | 0.2239 | 6300 | 0.0002 | - | | 0.2257 | 6350 | 0.0009 | - | | 0.2274 | 6400 | 0.0016 | - | | 0.2292 | 6450 | 0.0007 | - | | 0.2310 | 6500 | 0.0009 | - | | 0.2328 | 6550 | 0.001 | - | | 0.2345 | 6600 | 0.0002 | - | | 0.2363 | 6650 | 0.0005 | - | | 0.2381 | 6700 | 0.0621 | - | 
| 0.2399 | 6750 | 0.0004 | - | | 0.2416 | 6800 | 0.0003 | - | | 0.2434 | 6850 | 0.0812 | - | | 0.2452 | 6900 | 0.0004 | - | | 0.2470 | 6950 | 0.0004 | - | | 0.2488 | 7000 | 0.0001 | - | | 0.2505 | 7050 | 0.0002 | - | | 0.2523 | 7100 | 0.0067 | - | | 0.2541 | 7150 | 0.0004 | - | | 0.2559 | 7200 | 0.0021 | - | | 0.2576 | 7250 | 0.0291 | - | | 0.2594 | 7300 | 0.0142 | - | | 0.2612 | 7350 | 0.0002 | - | | 0.2630 | 7400 | 0.0003 | - | | 0.2647 | 7450 | 0.001 | - | | 0.2665 | 7500 | 0.0001 | - | | 0.2683 | 7550 | 0.0006 | - | | 0.2701 | 7600 | 0.0002 | - | | 0.2719 | 7650 | 0.005 | - | | 0.2736 | 7700 | 0.0003 | - | | 0.2754 | 7750 | 0.0003 | - | | 0.2772 | 7800 | 0.0003 | - | | 0.2790 | 7850 | 0.0002 | - | | 0.2807 | 7900 | 0.0006 | - | | 0.2825 | 7950 | 0.0002 | - | | 0.2843 | 8000 | 0.0044 | - | | 0.2861 | 8050 | 0.0003 | - | | 0.2878 | 8100 | 0.0001 | - | | 0.2896 | 8150 | 0.0006 | - | | 0.2914 | 8200 | 0.0002 | - | | 0.2932 | 8250 | 0.0023 | - | | 0.2950 | 8300 | 0.0067 | - | | 0.2967 | 8350 | 0.0338 | - | | 0.2985 | 8400 | 0.0002 | - | | 0.3003 | 8450 | 0.0003 | - | | 0.3021 | 8500 | 0.0013 | - | | 0.3038 | 8550 | 0.0025 | - | | 0.3056 | 8600 | 0.0002 | - | | 0.3074 | 8650 | 0.0434 | - | | 0.3092 | 8700 | 0.0002 | - | | 0.3109 | 8750 | 0.0003 | - | | 0.3127 | 8800 | 0.0003 | - | | 0.3145 | 8850 | 0.0002 | - | | 0.3163 | 8900 | 0.0002 | - | | 0.3181 | 8950 | 0.0005 | - | | 0.3198 | 9000 | 0.0005 | - | | 0.3216 | 9050 | 0.0005 | - | | 0.3234 | 9100 | 0.0005 | - | | 0.3252 | 9150 | 0.0008 | - | | 0.3269 | 9200 | 0.0024 | - | | 0.3287 | 9250 | 0.0002 | - | | 0.3305 | 9300 | 0.0004 | - | | 0.3323 | 9350 | 0.0449 | - | | 0.3340 | 9400 | 0.0062 | - | | 0.3358 | 9450 | 0.0004 | - | | 0.3376 | 9500 | 0.0004 | - | | 0.3394 | 9550 | 0.0003 | - | | 0.3412 | 9600 | 0.0002 | - | | 0.3429 | 9650 | 0.0002 | - | | 0.3447 | 9700 | 0.0003 | - | | 0.3465 | 9750 | 0.0003 | - | | 0.3483 | 9800 | 0.0001 | - | | 0.3500 | 9850 | 0.0002 | - | | 0.3518 | 9900 | 0.0003 | - | | 0.3536 | 9950 | 0.0344 | - | | 0.3554 | 10000 | 0.0581 | - | | 0.3571 | 10050 | 0.0001 | - | | 0.3589 | 10100 | 0.0027 | - | | 0.3607 | 10150 | 0.0002 | - | | 0.3625 | 10200 | 0.0004 | - | | 0.3643 | 10250 | 0.0004 | - | | 0.3660 | 10300 | 0.0579 | - | | 0.3678 | 10350 | 0.0007 | - | | 0.3696 | 10400 | 0.0617 | - | | 0.3714 | 10450 | 0.0334 | - | | 0.3731 | 10500 | 0.0004 | - | | 0.3749 | 10550 | 0.0002 | - | | 0.3767 | 10600 | 0.0003 | - | | 0.3785 | 10650 | 0.0029 | - | | 0.3802 | 10700 | 0.0004 | - | | 0.3820 | 10750 | 0.0002 | - | | 0.3838 | 10800 | 0.0001 | - | | 0.3856 | 10850 | 0.0002 | - | | 0.3873 | 10900 | 0.0002 | - | | 0.3891 | 10950 | 0.0002 | - | | 0.3909 | 11000 | 0.0001 | - | | 0.3927 | 11050 | 0.0003 | - | | 0.3945 | 11100 | 0.0004 | - | | 0.3962 | 11150 | 0.0159 | - | | 0.3980 | 11200 | 0.0005 | - | | 0.3998 | 11250 | 0.0003 | - | | 0.4016 | 11300 | 0.0009 | - | | 0.4033 | 11350 | 0.0002 | - | | 0.4051 | 11400 | 0.0002 | - | | 0.4069 | 11450 | 0.0011 | - | | 0.4087 | 11500 | 0.0002 | - | | 0.4104 | 11550 | 0.0023 | - | | 0.4122 | 11600 | 0.0001 | - | | 0.4140 | 11650 | 0.0002 | - | | 0.4158 | 11700 | 0.0003 | - | | 0.4176 | 11750 | 0.0002 | - | | 0.4193 | 11800 | 0.0001 | - | | 0.4211 | 11850 | 0.0002 | - | | 0.4229 | 11900 | 0.0002 | - | | 0.4247 | 11950 | 0.0001 | - | | 0.4264 | 12000 | 0.0003 | - | | 0.4282 | 12050 | 0.0002 | - | | 0.4300 | 12100 | 0.0001 | - | | 0.4318 | 12150 | 0.0001 | - | | 0.4335 | 12200 | 0.0615 | - | | 0.4353 | 12250 | 0.0002 | - | | 0.4371 | 12300 | 0.0008 | - | | 0.4389 | 12350 | 0.0002 | - | | 
0.4407 | 12400 | 0.0004 | - | | 0.4424 | 12450 | 0.0002 | - | | 0.4442 | 12500 | 0.0002 | - | | 0.4460 | 12550 | 0.0001 | - | | 0.4478 | 12600 | 0.0002 | - | | 0.4495 | 12650 | 0.0002 | - | | 0.4513 | 12700 | 0.0019 | - | | 0.4531 | 12750 | 0.0001 | - | | 0.4549 | 12800 | 0.056 | - | | 0.4566 | 12850 | 0.0011 | - | | 0.4584 | 12900 | 0.0001 | - | | 0.4602 | 12950 | 0.0005 | - | | 0.4620 | 13000 | 0.0002 | - | | 0.4638 | 13050 | 0.0001 | - | | 0.4655 | 13100 | 0.0001 | - | | 0.4673 | 13150 | 0.0001 | - | | 0.4691 | 13200 | 0.0035 | - | | 0.4709 | 13250 | 0.0002 | - | | 0.4726 | 13300 | 0.0055 | - | | 0.4744 | 13350 | 0.0002 | - | | 0.4762 | 13400 | 0.0001 | - | | 0.4780 | 13450 | 0.0546 | - | | 0.4797 | 13500 | 0.0008 | - | | 0.4815 | 13550 | 0.0023 | - | | 0.4833 | 13600 | 0.0269 | - | | 0.4851 | 13650 | 0.0046 | - | | 0.4869 | 13700 | 0.0002 | - | | 0.4886 | 13750 | 0.0001 | - | | 0.4904 | 13800 | 0.0001 | - | | 0.4922 | 13850 | 0.0018 | - | | 0.4940 | 13900 | 0.0001 | - | | 0.4957 | 13950 | 0.0002 | - | | 0.4975 | 14000 | 0.0002 | - | | 0.4993 | 14050 | 0.0002 | - | | 0.5011 | 14100 | 0.0099 | - | | 0.5028 | 14150 | 0.0001 | - | | 0.5046 | 14200 | 0.0386 | - | | 0.5064 | 14250 | 0.0003 | - | | 0.5082 | 14300 | 0.0001 | - | | 0.5100 | 14350 | 0.0001 | - | | 0.5117 | 14400 | 0.0003 | - | | 0.5135 | 14450 | 0.0001 | - | | 0.5153 | 14500 | 0.0002 | - | | 0.5171 | 14550 | 0.0001 | - | | 0.5188 | 14600 | 0.0004 | - | | 0.5206 | 14650 | 0.0001 | - | | 0.5224 | 14700 | 0.0003 | - | | 0.5242 | 14750 | 0.0002 | - | | 0.5259 | 14800 | 0.0002 | - | | 0.5277 | 14850 | 0.0001 | - | | 0.5295 | 14900 | 0.0004 | - | | 0.5313 | 14950 | 0.0001 | - | | 0.5330 | 15000 | 0.025 | - | | 0.5348 | 15050 | 0.0018 | - | | 0.5366 | 15100 | 0.0001 | - | | 0.5384 | 15150 | 0.0001 | - | | 0.5402 | 15200 | 0.0003 | - | | 0.5419 | 15250 | 0.0001 | - | | 0.5437 | 15300 | 0.0003 | - | | 0.5455 | 15350 | 0.0001 | - | | 0.5473 | 15400 | 0.0001 | - | | 0.5490 | 15450 | 0.0001 | - | | 0.5508 | 15500 | 0.0004 | - | | 0.5526 | 15550 | 0.0001 | - | | 0.5544 | 15600 | 0.0002 | - | | 0.5561 | 15650 | 0.0005 | - | | 0.5579 | 15700 | 0.0012 | - | | 0.5597 | 15750 | 0.0003 | - | | 0.5615 | 15800 | 0.0001 | - | | 0.5633 | 15850 | 0.0001 | - | | 0.5650 | 15900 | 0.0001 | - | | 0.5668 | 15950 | 0.0001 | - | | 0.5686 | 16000 | 0.0001 | - | | 0.5704 | 16050 | 0.0002 | - | | 0.5721 | 16100 | 0.0001 | - | | 0.5739 | 16150 | 0.0001 | - | | 0.5757 | 16200 | 0.0001 | - | | 0.5775 | 16250 | 0.0005 | - | | 0.5792 | 16300 | 0.0515 | - | | 0.5810 | 16350 | 0.0003 | - | | 0.5828 | 16400 | 0.0001 | - | | 0.5846 | 16450 | 0.0001 | - | | 0.5864 | 16500 | 0.017 | - | | 0.5881 | 16550 | 0.0001 | - | | 0.5899 | 16600 | 0.0001 | - | | 0.5917 | 16650 | 0.0003 | - | | 0.5935 | 16700 | 0.0001 | - | | 0.5952 | 16750 | 0.0001 | - | | 0.5970 | 16800 | 0.0002 | - | | 0.5988 | 16850 | 0.0001 | - | | 0.6006 | 16900 | 0.0001 | - | | 0.6023 | 16950 | 0.0001 | - | | 0.6041 | 17000 | 0.0063 | - | | 0.6059 | 17050 | 0.0001 | - | | 0.6077 | 17100 | 0.0002 | - | | 0.6095 | 17150 | 0.0001 | - | | 0.6112 | 17200 | 0.0001 | - | | 0.6130 | 17250 | 0.0002 | - | | 0.6148 | 17300 | 0.0001 | - | | 0.6166 | 17350 | 0.0001 | - | | 0.6183 | 17400 | 0.0001 | - | | 0.6201 | 17450 | 0.0002 | - | | 0.6219 | 17500 | 0.0001 | - | | 0.6237 | 17550 | 0.0001 | - | | 0.6254 | 17600 | 0.0003 | - | | 0.6272 | 17650 | 0.0003 | - | | 0.6290 | 17700 | 0.0002 | - | | 0.6308 | 17750 | 0.0002 | - | | 0.6326 | 17800 | 0.0002 | - | | 0.6343 | 17850 | 0.0009 | - | | 0.6361 | 17900 | 0.0003 | - | | 
0.6379 | 17950 | 0.0003 | - | | 0.6397 | 18000 | 0.0006 | - | | 0.6414 | 18050 | 0.0008 | - | | 0.6432 | 18100 | 0.0011 | - | | 0.6450 | 18150 | 0.0001 | - | | 0.6468 | 18200 | 0.0001 | - | | 0.6485 | 18250 | 0.0031 | - | | 0.6503 | 18300 | 0.0001 | - | | 0.6521 | 18350 | 0.0001 | - | | 0.6539 | 18400 | 0.0001 | - | | 0.6557 | 18450 | 0.0002 | - | | 0.6574 | 18500 | 0.0001 | - | | 0.6592 | 18550 | 0.0008 | - | | 0.6610 | 18600 | 0.0002 | - | | 0.6628 | 18650 | 0.0002 | - | | 0.6645 | 18700 | 0.0001 | - | | 0.6663 | 18750 | 0.0002 | - | | 0.6681 | 18800 | 0.0001 | - | | 0.6699 | 18850 | 0.0001 | - | | 0.6716 | 18900 | 0.0001 | - | | 0.6734 | 18950 | 0.0001 | - | | 0.6752 | 19000 | 0.0001 | - | | 0.6770 | 19050 | 0.0001 | - | | 0.6787 | 19100 | 0.0003 | - | | 0.6805 | 19150 | 0.0002 | - | | 0.6823 | 19200 | 0.0001 | - | | 0.6841 | 19250 | 0.0001 | - | | 0.6859 | 19300 | 0.0009 | - | | 0.6876 | 19350 | 0.0002 | - | | 0.6894 | 19400 | 0.0001 | - | | 0.6912 | 19450 | 0.0001 | - | | 0.6930 | 19500 | 0.0004 | - | | 0.6947 | 19550 | 0.0006 | - | | 0.6965 | 19600 | 0.0001 | - | | 0.6983 | 19650 | 0.0001 | - | | 0.7001 | 19700 | 0.0001 | - | | 0.7018 | 19750 | 0.0001 | - | | 0.7036 | 19800 | 0.0001 | - | | 0.7054 | 19850 | 0.0004 | - | | 0.7072 | 19900 | 0.0001 | - | | 0.7090 | 19950 | 0.0001 | - | | 0.7107 | 20000 | 0.0001 | - | | 0.7125 | 20050 | 0.0001 | - | | 0.7143 | 20100 | 0.0006 | - | | 0.7161 | 20150 | 0.0001 | - | | 0.7178 | 20200 | 0.0001 | - | | 0.7196 | 20250 | 0.0002 | - | | 0.7214 | 20300 | 0.0465 | - | | 0.7232 | 20350 | 0.0003 | - | | 0.7249 | 20400 | 0.0002 | - | | 0.7267 | 20450 | 0.0001 | - | | 0.7285 | 20500 | 0.0001 | - | | 0.7303 | 20550 | 0.0004 | - | | 0.7321 | 20600 | 0.0002 | - | | 0.7338 | 20650 | 0.0001 | - | | 0.7356 | 20700 | 0.0001 | - | | 0.7374 | 20750 | 0.0003 | - | | 0.7392 | 20800 | 0.0001 | - | | 0.7409 | 20850 | 0.0016 | - | | 0.7427 | 20900 | 0.0001 | - | | 0.7445 | 20950 | 0.0001 | - | | 0.7463 | 21000 | 0.0003 | - | | 0.7480 | 21050 | 0.0001 | - | | 0.7498 | 21100 | 0.0001 | - | | 0.7516 | 21150 | 0.0026 | - | | 0.7534 | 21200 | 0.0003 | - | | 0.7552 | 21250 | 0.0001 | - | | 0.7569 | 21300 | 0.0001 | - | | 0.7587 | 21350 | 0.0002 | - | | 0.7605 | 21400 | 0.0001 | - | | 0.7623 | 21450 | 0.0001 | - | | 0.7640 | 21500 | 0.0001 | - | | 0.7658 | 21550 | 0.0001 | - | | 0.7676 | 21600 | 0.0023 | - | | 0.7694 | 21650 | 0.0001 | - | | 0.7711 | 21700 | 0.0001 | - | | 0.7729 | 21750 | 0.0001 | - | | 0.7747 | 21800 | 0.0001 | - | | 0.7765 | 21850 | 0.0001 | - | | 0.7783 | 21900 | 0.0002 | - | | 0.7800 | 21950 | 0.0001 | - | | 0.7818 | 22000 | 0.0001 | - | | 0.7836 | 22050 | 0.0001 | - | | 0.7854 | 22100 | 0.0001 | - | | 0.7871 | 22150 | 0.0002 | - | | 0.7889 | 22200 | 0.0001 | - | | 0.7907 | 22250 | 0.0001 | - | | 0.7925 | 22300 | 0.0001 | - | | 0.7942 | 22350 | 0.0001 | - | | 0.7960 | 22400 | 0.0012 | - | | 0.7978 | 22450 | 0.0001 | - | | 0.7996 | 22500 | 0.0001 | - | | 0.8014 | 22550 | 0.0004 | - | | 0.8031 | 22600 | 0.0001 | - | | 0.8049 | 22650 | 0.0001 | - | | 0.8067 | 22700 | 0.0001 | - | | 0.8085 | 22750 | 0.0003 | - | | 0.8102 | 22800 | 0.0001 | - | | 0.8120 | 22850 | 0.0009 | - | | 0.8138 | 22900 | 0.0001 | - | | 0.8156 | 22950 | 0.0 | - | | 0.8173 | 23000 | 0.0006 | - | | 0.8191 | 23050 | 0.0001 | - | | 0.8209 | 23100 | 0.0001 | - | | 0.8227 | 23150 | 0.0001 | - | | 0.8244 | 23200 | 0.0029 | - | | 0.8262 | 23250 | 0.0001 | - | | 0.8280 | 23300 | 0.0001 | - | | 0.8298 | 23350 | 0.0 | - | | 0.8316 | 23400 | 0.0001 | - | | 0.8333 | 23450 | 0.0001 | - | | 0.8351 | 
23500 | 0.0001 | - | | 0.8369 | 23550 | 0.0001 | - | | 0.8387 | 23600 | 0.0001 | - | | 0.8404 | 23650 | 0.0001 | - | | 0.8422 | 23700 | 0.0001 | - | | 0.8440 | 23750 | 0.0001 | - | | 0.8458 | 23800 | 0.0 | - | | 0.8475 | 23850 | 0.0001 | - | | 0.8493 | 23900 | 0.0001 | - | | 0.8511 | 23950 | 0.0001 | - | | 0.8529 | 24000 | 0.0001 | - | | 0.8547 | 24050 | 0.0001 | - | | 0.8564 | 24100 | 0.0001 | - | | 0.8582 | 24150 | 0.0002 | - | | 0.8600 | 24200 | 0.0005 | - | | 0.8618 | 24250 | 0.0024 | - | | 0.8635 | 24300 | 0.0001 | - | | 0.8653 | 24350 | 0.0001 | - | | 0.8671 | 24400 | 0.0025 | - | | 0.8689 | 24450 | 0.0001 | - | | 0.8706 | 24500 | 0.0001 | - | | 0.8724 | 24550 | 0.0001 | - | | 0.8742 | 24600 | 0.0013 | - | | 0.8760 | 24650 | 0.0001 | - | | 0.8778 | 24700 | 0.0001 | - | | 0.8795 | 24750 | 0.0001 | - | | 0.8813 | 24800 | 0.0001 | - | | 0.8831 | 24850 | 0.0001 | - | | 0.8849 | 24900 | 0.0001 | - | | 0.8866 | 24950 | 0.0001 | - | | 0.8884 | 25000 | 0.0001 | - | | 0.8902 | 25050 | 0.0001 | - | | 0.8920 | 25100 | 0.0001 | - | | 0.8937 | 25150 | 0.0001 | - | | 0.8955 | 25200 | 0.0001 | - | | 0.8973 | 25250 | 0.0001 | - | | 0.8991 | 25300 | 0.0001 | - | | 0.9009 | 25350 | 0.0001 | - | | 0.9026 | 25400 | 0.0001 | - | | 0.9044 | 25450 | 0.0001 | - | | 0.9062 | 25500 | 0.0001 | - | | 0.9080 | 25550 | 0.0001 | - | | 0.9097 | 25600 | 0.0001 | - | | 0.9115 | 25650 | 0.0001 | - | | 0.9133 | 25700 | 0.0001 | - | | 0.9151 | 25750 | 0.0001 | - | | 0.9168 | 25800 | 0.0003 | - | | 0.9186 | 25850 | 0.0001 | - | | 0.9204 | 25900 | 0.0001 | - | | 0.9222 | 25950 | 0.0001 | - | | 0.9240 | 26000 | 0.0001 | - | | 0.9257 | 26050 | 0.0001 | - | | 0.9275 | 26100 | 0.0001 | - | | 0.9293 | 26150 | 0.0001 | - | | 0.9311 | 26200 | 0.0002 | - | | 0.9328 | 26250 | 0.0001 | - | | 0.9346 | 26300 | 0.0001 | - | | 0.9364 | 26350 | 0.0004 | - | | 0.9382 | 26400 | 0.0001 | - | | 0.9399 | 26450 | 0.0001 | - | | 0.9417 | 26500 | 0.0001 | - | | 0.9435 | 26550 | 0.0001 | - | | 0.9453 | 26600 | 0.0001 | - | | 0.9471 | 26650 | 0.0001 | - | | 0.9488 | 26700 | 0.0001 | - | | 0.9506 | 26750 | 0.0001 | - | | 0.9524 | 26800 | 0.0001 | - | | 0.9542 | 26850 | 0.0001 | - | | 0.9559 | 26900 | 0.0001 | - | | 0.9577 | 26950 | 0.0002 | - | | 0.9595 | 27000 | 0.0001 | - | | 0.9613 | 27050 | 0.0001 | - | | 0.9630 | 27100 | 0.0001 | - | | 0.9648 | 27150 | 0.0001 | - | | 0.9666 | 27200 | 0.0003 | - | | 0.9684 | 27250 | 0.0001 | - | | 0.9701 | 27300 | 0.0001 | - | | 0.9719 | 27350 | 0.0001 | - | | 0.9737 | 27400 | 0.0002 | - | | 0.9755 | 27450 | 0.0001 | - | | 0.9773 | 27500 | 0.0001 | - | | 0.9790 | 27550 | 0.0001 | - | | 0.9808 | 27600 | 0.0001 | - | | 0.9826 | 27650 | 0.0001 | - | | 0.9844 | 27700 | 0.0001 | - | | 0.9861 | 27750 | 0.0001 | - | | 0.9879 | 27800 | 0.0001 | - | | 0.9897 | 27850 | 0.0001 | - | | 0.9915 | 27900 | 0.0001 | - | | 0.9932 | 27950 | 0.0001 | - | | 0.9950 | 28000 | 0.0001 | - | | 0.9968 | 28050 | 0.0001 | - | | 0.9986 | 28100 | 0.0001 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.6.1 - Transformers: 4.38.2 - PyTorch: 2.2.1+cu121 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information 
sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
yongsun-shim/eeve-4bit-test
yongsun-shim
"2024-04-15T02:30:44Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
"2024-04-15T02:25:48Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sarojj/plcalbk-GEMMA2V-VLLM
Sarojj
"2024-04-15T02:28:43Z"
0
0
transformers
[ "transformers", "pytorch", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-15T02:25:54Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl - sft base_model: unsloth/gemma-2b-it-bnb-4bit --- # Uploaded model - **Developed by:** Sarojj - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
K00B404/BioStral
K00B404
"2024-04-15T02:26:03Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:26:03Z"
Invalid username or password.
maverickrzw/ct_detection
maverickrzw
"2024-04-15T02:34:43Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-15T02:26:16Z"
--- license: apache-2.0 ---
paytonison/your-model
paytonison
"2024-04-15T02:26:42Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:26:42Z"
Entry not found
mergekit-community/mergekit-slerp-cnjisco
mergekit-community
"2024-04-15T02:27:48Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:27:47Z"
Invalid username or password.
joeldabest638/rudytabootie-chalkzone
joeldabest638
"2024-04-15T02:29:00Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-15T02:28:35Z"
--- license: openrail ---
ahforoughi/PPO-LunarLander-v2
ahforoughi
"2024-04-15T02:29:02Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-04-15T02:28:38Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 237.63 +/- 48.55 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3)

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
# The filename below is an assumption; adjust it to the zip actually stored in this repo.
checkpoint = load_from_hub("ahforoughi/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
solidrust/Mistral-22B-v0.2-AWQ
solidrust
"2024-04-15T02:48:23Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "quantized", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "en", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "region:us" ]
text-generation
"2024-04-15T02:28:46Z"
--- tags: - quantized - 4-bit - AWQ - autotrain_compatible - endpoints_compatible - text-generation-inference license: apache-2.0 language: - en base_model: mistral-community/Mixtral-8x22B-v0.1 model_creator: Vezora model_name: Mistral-22B-v0.2 model_type: mistral pipeline_tag: text-generation inference: false --- # Vezora/Mistral-22B-v0.2 AWQ - Model creator: [Vezora](https://huggingface.co/Vezora) - Original model: [Mistral-22B-v0.2](https://huggingface.co/Vezora/Mistral-22B-v0.2) ## Model Summary - Just two days after our release of **Mistral-22b-v0.1**, we are excited to introduce our handcrafted experimental model, **Mistral-22b-v0.2**. This model is the culmination of knowledge distilled equally from all experts into a single, dense 22b model. It is not a single trained expert; rather, it is a compressed MoE model turned into a dense 22b model. This is the first working MoE-to-dense model conversion. - v0.2 was trained on 8x more data than v0.1! ## How to use **GUANACO PROMPT FORMAT** YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELOW. Not using this prompt format will lead to suboptimal results. - This model requires a specific chat template; as the training format was Guanaco, it looks like this: - "### System: You are a helpful assistant. ### Human: Give me the best chili recipe you can ### Assistant: Here is the best chili recipe..."
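A minimal inference sketch (not from the original card), assuming the AWQ checkpoint loads through transformers' AWQ integration (autoawq installed) and using the Guanaco prompt format described above; exact whitespace around the markers may need adjustment.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Mistral-22B-v0.2-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Guanaco prompt format, per the card above.
prompt = (
    "### System: You are a helpful assistant. "
    "### Human: Give me the best chili recipe you can "
    "### Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```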
sidnarsipur/controlnet_models
sidnarsipur
"2024-04-15T02:29:42Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:29:42Z"
Entry not found
mooo16/gemini-all-data20240415_022942
mooo16
"2024-04-15T02:29:53Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:29:52Z"
Entry not found
Extrabass/test_trainer
Extrabass
"2024-04-15T02:30:37Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-chinese", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-15T02:29:56Z"
--- base_model: google-bert/bert-base-chinese tags: - generated_from_trainer metrics: - accuracy model-index: - name: test_trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0253 - Accuracy: 0.9973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 214 | 0.0540 | 0.9905 | | No log | 2.0 | 428 | 0.0606 | 0.9932 | | 0.0648 | 3.0 | 642 | 0.0253 | 0.9973 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.1
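As a usage sketch (not in the original card), the fine-tuned classifier can be called through the transformers pipeline; the label names depend on the (unknown) training data.

```python
from transformers import pipeline

# Repo id taken from this record; output labels (e.g. LABEL_0/LABEL_1) depend on the training data.
classifier = pipeline("text-classification", model="Extrabass/test_trainer")
print(classifier("这部电影非常好看"))  # hypothetical Chinese input, matching the bert-base-chinese base
```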
mradermacher/collosus_120b-i1-GGUF
mradermacher
"2024-04-15T02:30:13Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:30:05Z"
--- exported_from: ibivibiv/collosus_120b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/ibivibiv/collosus_120b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/collosus_120b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q2_K.gguf) | i1-Q2_K | 43.3 | IQ3_XXS probably better | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
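As a usage sketch (not in the original card), the listed i1-Q2_K file can be loaded with llama-cpp-python; the filename comes from the quant table above, and a ~43 GB file needs correspondingly large RAM/VRAM.

```python
from llama_cpp import Llama

# Downloads the quant file from the Hub on first use (filename taken from the table above).
llm = Llama.from_pretrained(
    repo_id="mradermacher/collosus_120b-i1-GGUF",
    filename="collosus_120b.i1-Q2_K.gguf",
    n_ctx=4096,
)
print(llm("Hello, world!", max_tokens=64)["choices"][0]["text"])
```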
algograp-Inc/algograpV4
algograp-Inc
"2024-04-15T02:31:00Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:30:59Z"
Invalid username or password.
cilantro9246/m3bryby
cilantro9246
"2024-04-15T02:35:42Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-15T02:33:22Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aga-group/gate_0415_v1
aga-group
"2024-04-15T02:41:45Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-04-15T02:34:36Z"
--- license: other license_name: taide-l-models-community-license-agreement license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view extra_gated_heading: 您需要先同意授權條款才能使用此模型 extra_gated_fields: 姓名(Name): text 生日(Date of birth): date_picker 國家(Country): country 所屬單位(Affiliation): text geo: ip_location 按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox extra_gated_prompt: >- * ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view) * ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view) extra_gated_button_content: 送出(Submit) ---
taolu/test
taolu
"2024-04-15T02:36:00Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:36:00Z"
Invalid username or password.
Sarojj/plcalbk-GEMMA2V-GG8
Sarojj
"2024-04-15T02:37:24Z"
0
0
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-15T02:36:02Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - gguf base_model: unsloth/gemma-2b-it-bnb-4bit --- # Uploaded model - **Developed by:** Sarojj - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
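As a rough usage sketch for a GGUF export like this one, it can be loaded locally with `llama-cpp-python`; the filename glob below stands in for the quantized file, which this card does not name:

```python
# Hedged sketch: assumes `pip install llama-cpp-python huggingface_hub`
# and that this repo ships a GGUF file; the filename glob is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Sarojj/plcalbk-GEMMA2V-GG8",
    filename="*.gguf",  # matches whatever quantized file the repo contains
    n_ctx=2048,         # context window; tune to your hardware
)

out = llm("Explain LoRA fine-tuning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```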
MuradA/Transformers_Project
MuradA
"2024-04-15T02:43:53Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-15T02:37:42Z"
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall base_model: distilbert-base-cased model-index: - name: Transformers_Project results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model predicts Suicide vs. Non-Suicide: label 1 is Suicide and label 0 is Non-Suicide. # Transformers_Project This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1389 - Accuracy: 0.9672 - F1: 0.9672 - Precision: 0.9676 - Recall: 0.9667 - Zero One Loss: 0.0328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Zero One Loss | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:| | 0.2495 | 1.0 | 875 | 0.1397 | 0.9552 | 0.9563 | 0.9320 | 0.982 | 0.0448 | | 0.0865 | 2.0 | 1750 | 0.1163 | 0.9692 | 0.9692 | 0.9696 | 0.9687 | 0.0308 | | 0.0344 | 3.0 | 2625 | 0.1389 | 0.9672 | 0.9672 | 0.9676 | 0.9667 | 0.0328 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
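A minimal inference sketch for the label mapping above — the `LABEL_0`/`LABEL_1` names assume the auto-generated config defaults, which this card does not confirm:

```python
# Hedged sketch: per the card, label 1 = Suicide and label 0 = Non-Suicide;
# the exact id2label strings in the model config are an assumption.
from transformers import pipeline

clf = pipeline("text-classification", model="MuradA/Transformers_Project")
print(clf("I feel hopeful about tomorrow."))
# e.g. [{'label': 'LABEL_0', 'score': 0.99}]  (illustrative output only)
```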
Herry443/SOLAR-10.7B-KNUT-ref-voice-V0.2-script
Herry443
"2024-04-15T02:39:02Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:39:02Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Key1111/BERTTTTT
Key1111
"2024-04-15T02:39:16Z"
0
0
null
[ "doi:10.57967/hf/2066", "region:us" ]
null
"2024-04-15T02:39:16Z"
Entry not found
Herry443/SOLAR-10.7B-KNUT-ref-voice-V0.3-script
Herry443
"2024-04-15T02:43:10Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:43:10Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SuperPowerMz/Mistral-7B-OLoRA-Peft
SuperPowerMz
"2024-04-15T02:47:05Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-15T02:43:22Z"
Invalid username or password.
aidiary/outputs
aidiary
"2024-04-15T02:44:24Z"
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-1.1-7b-it", "license:gemma", "region:us" ]
null
"2024-04-15T02:43:50Z"
Invalid username or password.
Piyush2512/wav2vec_base
Piyush2512
"2024-04-15T02:47:10Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "endpoints_compatible", "region:us" ]
audio-classification
"2024-04-15T02:44:13Z"
Entry not found
deepaknh/falcon7B_FineTuning_ReExperiment_1_QLORA_7perParam_ILR_increased_v4
deepaknh
"2024-04-15T02:46:12Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:vilsonrodrigues/falcon-7b-instruct-sharded", "region:us" ]
null
"2024-04-15T02:45:16Z"
--- library_name: peft base_model: vilsonrodrigues/falcon-7b-instruct-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.1
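The quantization block above maps one-to-one onto a `BitsAndBytesConfig`; below is a hedged sketch of reloading the base model the same way and attaching this adapter (the repo ids come from this card, everything else is assumed):

```python
# Hedged sketch: rebuild the 4-bit NF4 config listed above, then attach the adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,        # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)

base = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-instruct-sharded",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # often needed for Falcon checkpoints; an assumption here
)
model = PeftModel.from_pretrained(
    base,
    "deepaknh/falcon7B_FineTuning_ReExperiment_1_QLORA_7perParam_ILR_increased_v4",
)
tokenizer = AutoTokenizer.from_pretrained("vilsonrodrigues/falcon-7b-instruct-sharded")
```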
APLunch/dqn-SpaceInvadersNoFrameskip-v4
APLunch
"2024-04-15T02:46:02Z"
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-04-15T02:45:29Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 654.00 +/- 223.01 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga APLunch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga APLunch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga APLunch ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Apple0000/Text-Video
Apple0000
"2024-04-15T02:46:14Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-04-15T02:46:14Z"
--- license: other license_name: commune license_link: LICENSE ---
OmnicromsBrain/NeuralStar_AlphaWriter_4x7b
OmnicromsBrain
"2024-04-15T02:46:54Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:46:54Z"
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - mlabonne/AlphaMonarch-7B - FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B - SanjiWatsuki/Kunoichi-DPO-v2-7B - OmnicromsBrain/NeuralStar-7b-Lazy base_model: - mlabonne/AlphaMonarch-7B - FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B - SanjiWatsuki/Kunoichi-DPO-v2-7B - OmnicromsBrain/NeuralStar-7b-Lazy --- # NeuralStar_AlphaWriter_4x7b NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) * [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) * [OmnicromsBrain/NeuralStar-7b-Lazy](https://huggingface.co/OmnicromsBrain/NeuralStar-7b-Lazy) ## 🧩 Configuration ```yaml base_model: mlabonne/AlphaMonarch-7B experts: - source_model: mlabonne/AlphaMonarch-7B positive_prompts: - "chat" - "assistant" - "tell me" - "explain" - "I want" - source_model: FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B positive_prompts: - "edit" - "rewrite" - "evaluate" - "spelling" - "grammer" - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B positive_prompts: - "storywriting" - "write" - "scene" - "prose" - "character" - source_model: OmnicromsBrain/NeuralStar-7b-Lazy positive_prompts: - "codex" - "plot" - "outline" - "scenebeat" - "count" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "OmnicromsBrain/NeuralStar_AlphaWriter_4x7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mergekit-community/mergekit-slerp-wmtrqox
mergekit-community
"2024-04-15T02:46:59Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:46:59Z"
Invalid username or password.
ShushantLLM/LLama_music_generator
ShushantLLM
"2024-04-15T02:47:21Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:47:21Z"
--- base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - missing lyric Llama2 - generated_from_trainer - missing lyric Llama2 1 datasets: - generator model-index: - name: LLama_music_generator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LLama_music_generator This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_ratio: 0.04 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
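A minimal generation sketch, assuming the pushed weights load directly with `transformers` (if only a PEFT adapter was uploaded — the card does not say — it would need `peft` instead, and the Llama-2 base may require gated access):

```python
# Hedged sketch: plain text-generation with the fine-tuned model.
from transformers import pipeline

gen = pipeline("text-generation", model="ShushantLLM/LLama_music_generator")
print(gen("Complete the missing lyric:", max_new_tokens=64)[0]["generated_text"])
```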
alexchen4ai/octopus3-1
alexchen4ai
"2024-04-15T02:47:56Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:47:55Z"
Invalid username or password.
alpindale/Mistral-7B-Instruct-v0.2-EETQ
alpindale
"2024-04-15T02:48:37Z"
0
0
null
[ "region:us" ]
null
"2024-04-15T02:48:37Z"
Model quantized with a modified [EETQ](https://github.com/NetEase-FuXi/EETQ) repo. Work is ongoing to decouple its kernels from CUTLASS to make this easier to use. Weights are quantized to 8 bits.
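For reference, a hedged loading sketch; it assumes the EETQ integration shipped in recent `transformers` releases (which may postdate this upload) and that the quantization config is stored in the repo itself:

```python
# Hedged sketch: loading a pre-quantized EETQ checkpoint.
# Assumes `pip install eetq` plus a transformers version with EETQ support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "alpindale/Mistral-7B-Instruct-v0.2-EETQ",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("alpindale/Mistral-7B-Instruct-v0.2-EETQ")
```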