Dataset columns (dtypes and value ranges, as reported by the dataset viewer):

| column | dtype | min | max |
|---|---|---|---|
| modelId | stringlengths | 5 | 122 |
| author | stringlengths | 2 | 42 |
| last_modified | unknown | | |
| downloads | int64 | 0 | 738M |
| likes | int64 | 0 | 11k |
| library_name | stringclasses | 245 values | |
| tags | sequencelengths | 1 | 4.05k |
| pipeline_tag | stringclasses | 48 values | |
| createdAt | unknown | | |
| card | stringlengths | 1 | 901k |
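For orientation, the columns above can be explored with pandas. A minimal sketch follows, using two inline rows as stand-ins for the full dump (the row values are copied from entries below; the analysis itself is illustrative):

```python
import pandas as pd

# Two rows standing in for the full dump (values taken from entries below).
rows = [
    {"modelId": "wdli/llama3-instruct_depression_4", "author": "wdli",
     "downloads": 0, "likes": 0, "library_name": "transformers",
     "pipeline_tag": None},
    {"modelId": "RAY2L/pythia-410m-deduped-SimPOW-2-single_v1", "author": "RAY2L",
     "downloads": 0, "likes": 0, "library_name": "transformers",
     "pipeline_tag": "feature-extraction"},
]
df = pd.DataFrame(rows)

per_library = df["library_name"].value_counts()               # rows per library
untagged = df[df["pipeline_tag"].isna()]["modelId"].tolist()  # rows missing a pipeline_tag
print(per_library.to_dict(), untagged)
```

The same two aggregations (rows per `library_name`, rows with no `pipeline_tag`) scale directly to the full dump once it is loaded into a DataFrame.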
wdli/llama3-instruct_depression_4
wdli
"2024-07-03T00:07:34Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-02T23:54:10Z"
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** wdli
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. The model was trained on the reddit_depression_dataset for 3 epochs. Training uses a dialog format, but the user's input is ignored. For example:

```python
def formatting_prompts_func(examples):
    texts_dataset = examples['text']
    formatted_prompts = []
    for text in texts_dataset:
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            # {"role": "user", "content": ""},
            {"role": "assistant", "content": text}
        ]
        formatted_prompt = tokenizer.apply_chat_template(dialog, tokenize=False, add_generation_prompt=False)
        formatted_prompts.append(formatted_prompt)
    return {"text": formatted_prompts}
```
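The card's function requires a loaded tokenizer, so here is a dependency-free sketch of the mapping it performs. `format_dialog` is a hypothetical stand-in for `tokenizer.apply_chat_template(..., tokenize=False)`, and the Llama-3-style header tokens below are an assumption, not taken from this checkpoint:

```python
def format_dialog(dialog):
    # Stand-in for tokenizer.apply_chat_template with tokenize=False:
    # wraps each turn in Llama-3-style headers (assumed token format).
    parts = ["<|begin_of_text|>"]
    for turn in dialog:
        parts.append(
            f"<|start_header_id|>{turn['role']}<|end_header_id|>\n\n{turn['content']}<|eot_id|>"
        )
    return "".join(parts)

def formatting_prompts_func(examples):
    # Mirrors the card's function: system persona plus assistant text,
    # with the user turn omitted.
    formatted = []
    for text in examples["text"]:
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            {"role": "assistant", "content": text},
        ]
        formatted.append(format_dialog(dialog))
    return {"text": formatted}

out = formatting_prompts_func({"text": ["I have been feeling down lately."]})
print(out["text"][0])
```

The point of the mapping is that each dataset row becomes a full chat transcript in which the assistant turn carries the original text, so supervised fine-tuning teaches the model to produce that text in the assistant role.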
jfranklin-foundry/Qwen-Qwen1.5-4B-1719964486
jfranklin-foundry
"2024-07-02T23:54:18Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-4B", "region:us" ]
null
"2024-07-02T23:54:15Z"
--- base_model: Qwen/Qwen1.5-4B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
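The row above is a PEFT adapter for Qwen/Qwen1.5-4B with an otherwise blank template card. For context, here is a minimal NumPy sketch of the low-rank update that PEFT's LoRA method applies to a frozen weight matrix (shapes and hyperparameters are illustrative, not taken from this checkpoint):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                           # hidden size and LoRA rank (illustrative)
W = rng.normal(size=(d, d))           # frozen base weight
A = rng.normal(size=(r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                  # trainable up-projection, zero-initialized
alpha = 16                            # LoRA scaling hyperparameter

# Effective weight at inference: base weight plus scaled low-rank update.
W_eff = W + (alpha / r) * (B @ A)

x = rng.normal(size=(d,))
# With B zero-initialized, the adapter is a no-op before any training.
assert np.allclose(W_eff @ x, W @ x)
```

Only `A` and `B` are trained, which is why an adapter repository like this one is tiny compared to the 4B-parameter base model it modifies.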
RAY2L/pythia-410m-deduped-SimPOW-2-single_v1
RAY2L
"2024-07-02T23:55:07Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
"2024-07-02T23:54:32Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LuisVargasIGG/comunicacion-aviso-donut-cord
LuisVargasIGG
"2024-07-03T01:24:52Z"
0
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-02T23:54:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf
RichardErkhov
"2024-07-03T00:03:02Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-02T23:55:30Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) tinyllama-730M-test - GGUF - Model creator: https://huggingface.co/Josephgflowers/ - Original model: https://huggingface.co/Josephgflowers/tinyllama-730M-test/ | Name | Quant method | Size | | ---- | ---- | ---- | | [tinyllama-730M-test.Q2_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q2_K.gguf) | Q2_K | 0.28GB | | [tinyllama-730M-test.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.IQ3_XS.gguf) | IQ3_XS | 0.31GB | | [tinyllama-730M-test.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.IQ3_S.gguf) | IQ3_S | 0.32GB | | [tinyllama-730M-test.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q3_K_S.gguf) | Q3_K_S | 0.32GB | | [tinyllama-730M-test.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.IQ3_M.gguf) | IQ3_M | 0.33GB | | [tinyllama-730M-test.Q3_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q3_K.gguf) | Q3_K | 0.35GB | | [tinyllama-730M-test.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q3_K_M.gguf) | Q3_K_M | 0.35GB | | [tinyllama-730M-test.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q3_K_L.gguf) | Q3_K_L | 0.38GB | | [tinyllama-730M-test.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.IQ4_XS.gguf) | IQ4_XS | 0.39GB | | 
[tinyllama-730M-test.Q4_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q4_0.gguf) | Q4_0 | 0.41GB | | [tinyllama-730M-test.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.IQ4_NL.gguf) | IQ4_NL | 0.41GB | | [tinyllama-730M-test.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q4_K_S.gguf) | Q4_K_S | 0.41GB | | [tinyllama-730M-test.Q4_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q4_K.gguf) | Q4_K | 0.43GB | | [tinyllama-730M-test.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q4_K_M.gguf) | Q4_K_M | 0.43GB | | [tinyllama-730M-test.Q4_1.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q4_1.gguf) | Q4_1 | 0.45GB | | [tinyllama-730M-test.Q5_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q5_0.gguf) | Q5_0 | 0.49GB | | [tinyllama-730M-test.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q5_K_S.gguf) | Q5_K_S | 0.49GB | | [tinyllama-730M-test.Q5_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q5_K.gguf) | Q5_K | 0.5GB | | [tinyllama-730M-test.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q5_K_M.gguf) | Q5_K_M | 0.5GB | | [tinyllama-730M-test.Q5_1.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q5_1.gguf) | Q5_1 | 0.53GB | | 
[tinyllama-730M-test.Q6_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q6_K.gguf) | Q6_K | 0.57GB | | [tinyllama-730M-test.Q8_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_tinyllama-730M-test-gguf/blob/main/tinyllama-730M-test.Q8_0.gguf) | Q8_0 | 0.74GB | Original model description: --- license: mit widget: - text: '<|system|> You are a helpful assistant</s> <|user|> What is your name? Tell me about yourself.</s> <|assistant|>' model-index: - name: tinyllama-730M-test results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 25.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 33.82 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 24.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.9 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test name: Open LLM Leaderboard - task: 
type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 51.07 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test name: Open LLM Leaderboard --- I cut my TinyLlama 1.1B Cinder v2 down from 22 layers to 14. At 14 layers there was no coherent text, but there were emerging ideas of a response. 1000 steps on the step-by-step dataset, 6000 on Reason-with-cinder. The loss was still over 1 and the learning rate was still over 4. This model needs significant training. I am putting it up as a base model that needs work. If you continue training, please let me know on the TinyLlama Discord; I have some interesting plans for this model. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__tinyllama-730M-test) | Metric |Value| |---------------------------------|----:| |Avg. |29.55| |AI2 Reasoning Challenge (25-Shot)|25.09| |HellaSwag (10-Shot) |33.82| |MMLU (5-Shot) |24.43| |TruthfulQA (0-shot) |42.90| |Winogrande (5-shot) |51.07| |GSM8k (5-shot) | 0.00|
lalafenno/falcon-7b-HA
lalafenno
"2024-07-03T00:47:56Z"
0
0
null
[ "region:us" ]
null
"2024-07-02T23:58:26Z"
Testing
GalaktischeGurke/Tess-v2.5.2-Qwen2-72B-4Bit-GPTQ
GalaktischeGurke
"2024-07-03T00:14:24Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2024-07-02T23:59:26Z"
Entry not found
Bazor99/mistral-7b-new-finetune
Bazor99
"2024-07-03T00:20:52Z"
0
0
transformers
[ "transformers", "safetensors", "en", "dataset:bitext/Bitext-customer-support-llm-chatbot-training-dataset", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:05:03Z"
--- library_name: transformers license: mit datasets: - bitext/Bitext-customer-support-llm-chatbot-training-dataset language: - en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [Kingsley Odenigbo] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [CausalLM] - **Language(s) (NLP):** [English] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [Mistral-7b] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/ArthurZ_-_mamba-130m-gguf
RichardErkhov
"2024-07-03T00:08:19Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T00:05:42Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mamba-130m - GGUF - Model creator: https://huggingface.co/ArthurZ/ - Original model: https://huggingface.co/ArthurZ/mamba-130m/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mamba-130m.Q2_K.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q2_K.gguf) | Q2_K | 0.08GB | | [mamba-130m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.IQ3_XS.gguf) | IQ3_XS | 0.09GB | | [mamba-130m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.IQ3_S.gguf) | IQ3_S | 0.09GB | | [mamba-130m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q3_K_S.gguf) | Q3_K_S | 0.09GB | | [mamba-130m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.IQ3_M.gguf) | IQ3_M | 0.09GB | | [mamba-130m.Q3_K.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q3_K.gguf) | Q3_K | 0.09GB | | [mamba-130m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [mamba-130m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q3_K_L.gguf) | Q3_K_L | 0.09GB | | [mamba-130m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.IQ4_XS.gguf) | IQ4_XS | 0.09GB | | [mamba-130m.Q4_0.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q4_0.gguf) | Q4_0 | 0.1GB | | [mamba-130m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | 
[mamba-130m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [mamba-130m.Q4_K.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q4_K.gguf) | Q4_K | 0.1GB | | [mamba-130m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q4_K_M.gguf) | Q4_K_M | 0.1GB | | [mamba-130m.Q4_1.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q4_1.gguf) | Q4_1 | 0.1GB | | [mamba-130m.Q5_0.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q5_0.gguf) | Q5_0 | 0.11GB | | [mamba-130m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [mamba-130m.Q5_K.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q5_K.gguf) | Q5_K | 0.11GB | | [mamba-130m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q5_K_M.gguf) | Q5_K_M | 0.11GB | | [mamba-130m.Q5_1.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q5_1.gguf) | Q5_1 | 0.11GB | | [mamba-130m.Q6_K.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q6_K.gguf) | Q6_K | 0.12GB | | [mamba-130m.Q8_0.gguf](https://huggingface.co/RichardErkhov/ArthurZ_-_mamba-130m-gguf/blob/main/mamba-130m.Q8_0.gguf) | Q8_0 | 0.14GB | Original model description: --- library_name: transformers tags: [] ---
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b", padding_side="left")
>>> tokenizer.pad_token = tokenizer.eos_token

>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m", vocab_size=50280, num_hidden_layers=24, torch_dtype=torch.float32)
>>> model.config.use_cache = True
>>> input_ids = tokenizer(["Hey how are you doing?", "Explain how soy sauce is made"], padding=True, return_tensors="pt")["input_ids"]

>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["<|endoftext|>Hey how are you doing?\n\nI'm a newbie to the game", 'Explain how soy sauce is made.\n\n1. Add the soy sauce to']
```
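The table above pairs each quant file with its size on disk. As a purely illustrative sketch (the sizes are transcribed from the table, one representative per tier, and the helper name is ours, not part of this repo), picking the highest-quality quant that fits a given RAM/VRAM budget could look like:

```python
# Illustrative only: (name, size in GB) pairs transcribed from the quant table
# above, one representative per size tier; see the table for the full list.
QUANTS = [
    ("Q2_K", 0.08), ("Q3_K_M", 0.09), ("Q4_K_M", 0.10),
    ("Q5_K_M", 0.11), ("Q6_K", 0.12), ("Q8_0", 0.14),
]

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant file that still fits within the budget."""
    fitting = [(size, name) for name, size in QUANTS if size <= budget_gb]
    if not fitting:
        raise ValueError(f"no quant fits in {budget_gb} GB")
    return max(fitting)[1]  # largest size wins

print(pick_quant(0.12))   # Q6_K
print(pick_quant(0.085))  # Q2_K
```

Larger quants generally trade more memory for lower quantization error, so "largest that fits" is a reasonable default when choosing among these files.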
sachit56/bertscamdetection2
sachit56
"2024-07-03T01:24:43Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-07-03T00:10:18Z"
Entry not found
bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-2-checkpoints-tmp
bobox
"2024-07-03T00:11:41Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T00:11:41Z"
Entry not found
SQAI/bge-embedding-model4
SQAI
"2024-07-03T00:12:52Z"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:39", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:SQAI/bge-embedding-model3", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-07-03T00:12:31Z"
--- base_model: SQAI/bge-embedding-model3 datasets: [] language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:39 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: [] model-index: - name: BGE base Financial Matryoshka results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.0 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.27224251686702133 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.12023809523809523 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.12550125313283206 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0 name: Cosine Accuracy@3 - type: 
cosine_accuracy@5 value: 0.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.0 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.27224251686702133 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.12023809523809523 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.12550125313283206 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.0 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.26409405480256265 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.11190476190476191 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.11880131362889983 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 
value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.0 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.27224251686702133 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.12023809523809523 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.12550125313283206 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.2 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.2 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.6 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.06666666666666667 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.04 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.06000000000000001 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.2 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.6 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.26409405480256265 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.1619047619047619 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.1838345864661654 name: Cosine Map@100 --- 
# BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [SQAI/bge-embedding-model3](https://huggingface.co/SQAI/bge-embedding-model3). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [SQAI/bge-embedding-model3](https://huggingface.co/SQAI/bge-embedding-model3) <!-- at revision c44f1df1b9e196562e31804bc857fa677db5002c --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the πŸ€— Hub model = SentenceTransformer("SQAI/bge-embedding-model4") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0 | | cosine_accuracy@10 | 0.8 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0 | | cosine_precision@10 | 0.08 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0 | | cosine_recall@10 | 0.8 | | cosine_ndcg@10 | 0.2722 | | cosine_mrr@10 | 0.1202 | | **cosine_map@100** | **0.1255** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | 
cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0 | | cosine_accuracy@10 | 0.8 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0 | | cosine_precision@10 | 0.08 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0 | | cosine_recall@10 | 0.8 | | cosine_ndcg@10 | 0.2722 | | cosine_mrr@10 | 0.1202 | | **cosine_map@100** | **0.1255** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0 | | cosine_accuracy@10 | 0.8 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0 | | cosine_precision@10 | 0.08 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0 | | cosine_recall@10 | 0.8 | | cosine_ndcg@10 | 0.2641 | | cosine_mrr@10 | 0.1119 | | **cosine_map@100** | **0.1188** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0 | | cosine_accuracy@10 | 0.8 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0 | | cosine_precision@10 | 0.08 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0 | | cosine_recall@10 | 0.8 | | cosine_ndcg@10 | 0.2722 | | cosine_mrr@10 | 0.1202 | | **cosine_map@100** | **0.1255** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with 
[<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.2 | | cosine_accuracy@5 | 0.2 | | cosine_accuracy@10 | 0.6 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0667 | | cosine_precision@5 | 0.04 | | cosine_precision@10 | 0.06 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.2 | | cosine_recall@5 | 0.2 | | cosine_recall@10 | 0.6 | | cosine_ndcg@10 | 0.2641 | | cosine_mrr@10 | 0.1619 | | **cosine_map@100** | **0.1838** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 39 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 10.64 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 22.74 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | positive | anchor | |:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------| | <code>Electrical current drawn by the streetlight, measured in amperes</code> | <code>suggestions based on data for street = 'Elm Park' in 
geozone = 'Westwood' in streetlighting in streetlighting</code> | | <code>Name of the street for the streetlight in error (failure)</code> | <code>community concerns for streetlight on street = 'Old Town Lane' in geozone = 'Greenfield'</code> | | <code>Total energy consumed by the streetlight, recorded in kilowatt-hours</code> | <code>suggestions based on data for street = 'Elm Park' in geozone = 'Westwood' in streetlighting in streetlighting</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 5 evaluation samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:--------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 8.4 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 21.4 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | positive | anchor | |:-------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------| | <code>geoZone of streetlight</code> | <code>community concerns for streetlight on street = 'Old Town Lane' in geozone = 'Greenfield'</code> | | <code>geoZone of streetlight</code> | <code>compare energy usage for current year vs last year for street = 'Elm Park' in geozone = 'Westwood' in streetlighting</code> | | <code>Geographic zone identifier for the streetlight in error 
(failure)</code> | <code>failure count for all failure types for geozone = 7 in streetlighting</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-06 - `weight_decay`: 0.03 - `num_train_epochs`: 50 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.2 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-06 - `weight_decay`: 0.03 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 50 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.2 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: 
auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - 
`neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:--------:|:------:|:-------------:|:----------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 1.0 | 1 | 1.1716 | 2.1695 | 0.1130 | 0.1375 | 0.1255 | 0.1130 | 0.1255 | | 2.0 | 2 | 2.9195 | - | - | - | - | - | - | | 4.5 | 3 | 1.5176 | 2.1784 | 0.1255 | 0.1375 | 0.1411 | 0.1369 | 0.1411 | | 3.0 | 4 | 1.456 | 2.1665 | 0.1244 | 0.1250 | 0.1369 | 0.1369 | 0.1369 | | 4.0 | 5 | 2.8245 | - | - | - | - | - | - | | 5.5 | 6 | 1.2473 | 2.1675 | 0.1422 | 0.1178 | 0.1244 | 0.1255 | 0.1244 | | 5.0 | 7 | 1.6864 | 2.1599 | 0.1255 | 0.1178 | 0.1422 | 0.1369 | 0.1422 | | 6.0 | 8 | 3.0378 | - | - | - | - | - | - | | 6.5 | 9 | 0.369 | 2.1608 | 0.1422 | 0.1136 | 0.1411 | 0.1422 | 0.1411 | | 7.0 | 10 | 2.5498 | 2.1537 | 0.1411 | 0.1250 | 0.1411 | 0.1130 | 0.1411 | | 7.5 | 11 | 2.4485 | - | - | - | - | - | - | | 8.0 | 12 | 0.5676 | 2.1566 | 0.1297 | 0.1250 | 0.1411 | 0.1297 | 0.1411 | | 9.0 | 13 | 2.8167 | - | - | - | - | - | - | | 12.0 | 14 | 1.5614 | 2.1424 | 0.1411 | 0.1250 | 0.1411 | 0.1130 | 0.1411 | | **10.0** | **15** | **1.0927** | **2.1388** | **0.1536** | **0.1167** | **0.1244** | **0.1536** | **0.1244** | | 11.0 | 16 | 3.0631 | - | - | - | - | - | - | | 13.0 | 17 | 1.3145 | 2.1416 | 0.1297 | 0.1053 | 0.1244 | 0.1297 | 0.1244 | | 12.0 | 18 | 1.5231 | 2.1400 | 0.1411 | 0.1250 | 0.1411 | 0.1411 | 0.1411 | | 13.0 | 19 | 2.6641 | - | - | - | - | - | - | | 14.0 | 20 | 0.6712 | 2.1449 | 0.1297 | 0.1250 | 0.1297 | 0.1411 | 0.1297 | | 14.0 | 21 | 1.8562 | 2.1404 | 0.1411 | 0.1250 | 0.1255 | 0.1255 | 0.1255 | | 15.0 | 22 | 2.6721 | 2.1429 | 0.1422 | 
0.1178 | 0.1536 | 0.1422 | 0.1536 | | 16.0 | 23 | 2.4791 | - | - | - | - | - | - | | 19.5 | 24 | 1.6395 | 2.1522 | 0.1422 | 0.1261 | 0.1411 | 0.1297 | 0.1411 | | 17.0 | 25 | 1.0163 | 2.1501 | 0.1297 | 0.1250 | 0.1297 | 0.1297 | 0.1297 | | 18.0 | 26 | 2.6189 | - | - | - | - | - | - | | 20.5 | 27 | 1.5023 | 2.1608 | 0.1369 | 0.1186 | 0.1369 | 0.2119 | 0.1369 | | 19.0 | 28 | 1.0764 | 2.1595 | 0.1536 | 0.1269 | 0.1255 | 0.1838 | 0.1255 | | 20.0 | 29 | 2.7707 | - | - | - | - | - | - | | 21.5 | 30 | 1.122 | 2.1511 | 0.1130 | 0.1186 | 0.1130 | 0.1038 | 0.1130 | | 21.0 | 31 | 1.445 | 2.1533 | 0.1244 | 0.1175 | 0.1244 | 0.1038 | 0.1244 | | 22.0 | 32 | 2.53 | - | - | - | - | - | - | | 22.5 | 33 | 0.32 | 2.1521 | 0.1244 | 0.1175 | 0.1244 | 0.1153 | 0.1244 | | 23.0 | 34 | 2.0318 | 2.1559 | 0.1130 | 0.1144 | 0.1130 | 0.1038 | 0.1130 | | 23.5 | 35 | 1.874 | - | - | - | - | - | - | | 24.0 | 36 | 0.5331 | 2.1640 | 0.1255 | 0.1302 | 0.1369 | 0.1838 | 0.1369 | | 25.0 | 37 | 2.5571 | - | - | - | - | - | - | | 28.0 | 38 | 1.4146 | 2.1610 | 0.1244 | 0.1177 | 0.1244 | 0.1153 | 0.1244 | | 26.0 | 39 | 1.0342 | 2.1712 | 0.1130 | 0.1177 | 0.1130 | 0.1153 | 0.1130 | | 27.0 | 40 | 2.5759 | - | - | - | - | - | - | | 29.0 | 41 | 1.1287 | 2.1718 | 0.1536 | 0.13 | 0.1255 | 0.2119 | 0.1255 | | 28.0 | 42 | 1.3659 | 2.1708 | 0.1411 | 0.1261 | 0.1411 | 0.2005 | 0.1411 | | 29.0 | 43 | 2.4577 | - | - | - | - | - | - | | 30.0 | 44 | 0.6564 | 2.1707 | 0.1255 | 0.1188 | 0.1255 | 0.1838 | 0.1255 | | 30.0 | 45 | 1.9711 | 2.1697 | 0.1411 | 0.1061 | 0.1130 | 0.1038 | 0.1130 | | 31.0 | 46 | 2.4943 | 2.1660 | 0.1422 | 0.1146 | 0.1130 | 0.1205 | 0.1130 | | 32.0 | 47 | 2.3846 | - | - | - | - | - | - | | 35.5 | 48 | 1.5413 | 2.1786 | 0.1411 | 0.1261 | 0.1422 | 0.1205 | 0.1422 | | 33.0 | 49 | 1.036 | 2.1811 | 0.1411 | 0.1146 | 0.1411 | 0.1153 | 0.1411 | | 34.0 | 50 | 2.2672 | 2.1818 | 0.1255 | 0.1188 | 0.1255 | 0.1838 | 0.1255 | * The bold row denotes the saved checkpoint. 
### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.31.0 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
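The MatryoshkaLoss configuration above nests four embedding sizes (384, 256, 128, 64) inside one model, so a stored full-size vector can be cut down at inference time by keeping a prefix and re-normalizing. A minimal plain-Python sketch of that truncate-and-renormalize step (illustrative only; real use would slice the output of `model.encode` rather than this toy 4-dim vector):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components of a Matryoshka embedding and L2-renormalize."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Toy 4-dim stand-in for this model's 384-dim output; the configured
# Matryoshka dims here would be 384, 256, 128 and 64.
full = [3.0, 4.0, 1.0, 2.0]
short = truncate_embedding(full, 2)
print(short)                                          # [0.6, 0.8]
print(math.isclose(sum(x * x for x in short), 1.0))   # True
```

Because the loss trains every prefix against the same objective, cosine similarity on the truncated vectors stays meaningful, at some cost in retrieval quality (visible in the dim_64 metrics above).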
mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF
mradermacher
"2024-07-03T01:24:13Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T00:12:39Z"
--- base_model: anakin87/yo-Llama-3-8B-Instruct language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/anakin87/yo-Llama-3-8B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS 
probably better | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | 
[GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/yo-Llama-3-8B-Instruct-i1-GGUF/resolve/main/yo-Llama-3-8B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
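The Usage section above mentions concatenating multi-part files. Per TheBloke's READMEs linked there, these parts are plain byte-level splits, so joining them in part order with `cat` restores the file. A minimal sketch using dummy stand-in files (real parts carry names like `yo-Llama-3-8B-Instruct.i1-Q6_K.gguf.part1of2`):

```shell
# Dummy stand-ins for the split parts; real parts are plain byte-level
# splits of the original GGUF, so concatenating them in order restores it.
printf 'GGUF-part-one-' > model.gguf.part1of2
printf 'part-two'       > model.gguf.part2of2
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

Note that GGUFs split with llama.cpp's newer `llama-gguf-split` tool carry per-part headers and should be merged with that tool's `--merge` mode rather than `cat`.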
RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf
RichardErkhov
"2024-07-03T00:26:28Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "region:us" ]
null
"2024-07-03T00:12:42Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) finetuned_llama2-1.1b-chat - GGUF - Model creator: https://huggingface.co/Harikrishnan46624/ - Original model: https://huggingface.co/Harikrishnan46624/finetuned_llama2-1.1b-chat/ | Name | Quant method | Size | | ---- | ---- | ---- | | [finetuned_llama2-1.1b-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q2_K.gguf) | Q2_K | 0.4GB | | [finetuned_llama2-1.1b-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [finetuned_llama2-1.1b-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.IQ3_S.gguf) | IQ3_S | 0.47GB | | [finetuned_llama2-1.1b-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [finetuned_llama2-1.1b-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.IQ3_M.gguf) | IQ3_M | 0.48GB | | [finetuned_llama2-1.1b-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q3_K.gguf) | Q3_K | 0.51GB | | [finetuned_llama2-1.1b-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | 
[finetuned_llama2-1.1b-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [finetuned_llama2-1.1b-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [finetuned_llama2-1.1b-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q4_0.gguf) | Q4_0 | 0.59GB | | [finetuned_llama2-1.1b-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [finetuned_llama2-1.1b-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [finetuned_llama2-1.1b-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q4_K.gguf) | Q4_K | 0.62GB | | [finetuned_llama2-1.1b-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [finetuned_llama2-1.1b-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q4_1.gguf) | Q4_1 | 0.65GB | | [finetuned_llama2-1.1b-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q5_0.gguf) | Q5_0 | 0.71GB | | [finetuned_llama2-1.1b-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | 
[finetuned_llama2-1.1b-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q5_K.gguf) | Q5_K | 0.73GB | | [finetuned_llama2-1.1b-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [finetuned_llama2-1.1b-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q5_1.gguf) | Q5_1 | 0.77GB | | [finetuned_llama2-1.1b-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q6_K.gguf) | Q6_K | 0.84GB | | [finetuned_llama2-1.1b-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/Harikrishnan46624_-_finetuned_llama2-1.1b-chat-gguf/blob/main/finetuned_llama2-1.1b-chat.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- library_name: transformers license: apache-2.0 datasets: - Harikrishnan46624/AI_QA_Data language: - en metrics: - bertscore - precision - recall pipeline_tag: text2text-generation --- # Model Card for Model ID The model is specially fine-tuned for text generation on datasets from AI-related fields such as NLP, Gen AI, DL, and ML. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This model is fine-tuned on a data science dataset covering NLP, CV, ML, DL, and Gen AI; the base model is TinyLlama. - **Developed by:** Harikrishnan - **Model type:** llama - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model :** Tinyllama ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Dataset:** Harikrishnan46624/AI_QA_Data ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model.
--> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> [More Information Needed] ## Evaluation Evaluated using BERTScore with 90% accuracy ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card.
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
disciplelikethat/ian
disciplelikethat
"2024-07-03T00:19:03Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-07-03T00:14:56Z"
--- license: unknown ---
wdli/llama3-instruct_depression_5
wdli
"2024-07-03T00:18:03Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:14:58Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** wdli - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. The model is trained on the reddit_depression_dataset as a text-completion task, with epoch = 1.
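Since this run uses plain text completion rather than the dialog format of the sibling depression_4 model, the formatting function reduces to appending an end-of-sequence token to each post. A minimal sketch (the `EOS_TOKEN` value is an assumption here; in a real script it would come from `tokenizer.eos_token`):

```python
# Hedged sketch of a text-completion formatting function: each dataset row's
# text is used as-is, with an EOS token appended so the model learns post
# boundaries. "<|eot_id|>" is an assumed placeholder for tokenizer.eos_token.
EOS_TOKEN = "<|eot_id|>"

def formatting_prompts_func(examples):
    # examples["text"] is a batch of raw posts from the dataset
    texts = [text + EOS_TOKEN for text in examples["text"]]
    return {"text": texts}

batch = {"text": ["I haven't slept in days.", "Nothing feels worth doing."]}
formatted = formatting_prompts_func(batch)
```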
Vaaly/bc
Vaaly
"2024-07-03T00:15:26Z"
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:15:24Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Vaaly - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Trickshotblaster/mike
Trickshotblaster
"2024-07-03T00:40:46Z"
0
0
transformers
[ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-generation", "dataset:Open-Orca/OpenOrca", "dataset:HuggingFaceFW/fineweb-edu", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-03T00:15:47Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin license: mit datasets: - Open-Orca/OpenOrca - HuggingFaceFW/fineweb-edu pipeline_tag: text-generation --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
mradermacher/latxa-13b-v1.2-GGUF
mradermacher
"2024-07-03T01:07:08Z"
0
0
transformers
[ "transformers", "gguf", "eu", "en", "dataset:HiTZ/latxa-corpus-v1.1", "base_model:HiTZ/latxa-13b-v1.2", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:17:25Z"
--- base_model: HiTZ/latxa-13b-v1.2 datasets: - HiTZ/latxa-corpus-v1.1 language: - eu - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/HiTZ/latxa-13b-v1.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | 
[GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/latxa-13b-v1.2-GGUF/resolve/main/latxa-13b-v1.2.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Vaaly/businesscentral
Vaaly
"2024-07-03T00:30:21Z"
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:17:43Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Vaaly - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
BabyChou/dolma-fasttext-model-quantized
BabyChou
"2024-07-03T00:18:56Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T00:17:59Z"
Entry not found
baseten/pg-test123
baseten
"2024-07-03T00:19:23Z"
0
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:18:00Z"
Entry not found
reirusia/cafla_asr_dict_words
reirusia
"2024-07-03T01:13:13Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-07-03T00:18:05Z"
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer model-index: - name: cafla_asr_dict_words results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cafla_asr_dict_words This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0638 - Cer: 0.6242 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 5.1692 | 0.1241 | 1000 | 1.2042 | 0.8773 | | 0.6259 | 0.2482 | 2000 | 0.3884 | 0.6901 | | 0.3244 | 0.3723 | 3000 | 0.2874 | 0.6752 | | 0.2552 | 0.4964 | 4000 | 0.2577 | 0.6691 | | 0.2132 | 0.6205 | 5000 | 0.1871 | 0.6550 | | 0.1935 | 0.7446 | 6000 | 0.2197 | 0.6609 | | 0.1805 | 0.8687 | 7000 | 0.1631 | 0.6504 | | 0.157 | 0.9928 | 8000 | 0.1453 | 0.6443 | | 0.128 | 1.1169 | 9000 | 0.1281 | 0.6403 | | 0.1136 | 1.2410 | 10000 | 0.1285 | 0.6393 | | 0.1132 | 1.3651 | 11000 | 0.1271 | 0.6396 | | 0.1056 | 1.4892 | 12000 | 0.1123 | 0.6373 | | 0.0967 | 1.6133 | 13000 | 0.1128 | 0.6376 | | 0.0926 | 1.7374 | 14000 | 0.0934 | 0.6351 | | 0.0826 | 1.8615 | 15000 | 0.0975 | 0.6333 | | 0.0667 | 1.9856 | 16000 | 0.0958 | 0.6322 | | 0.0435 | 2.1097 | 
17000 | 0.0920 | 0.6301 | | 0.0421 | 2.2338 | 18000 | 0.0863 | 0.6297 | | 0.0405 | 2.3579 | 19000 | 0.0871 | 0.6284 | | 0.037 | 2.4820 | 20000 | 0.0804 | 0.6278 | | 0.032 | 2.6061 | 21000 | 0.0739 | 0.6265 | | 0.0327 | 2.7302 | 22000 | 0.0729 | 0.6258 | | 0.0323 | 2.8543 | 23000 | 0.0646 | 0.6246 | | 0.038 | 2.9784 | 24000 | 0.0638 | 0.6242 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.4.0.dev20240610+cu124 - Datasets 2.19.2 - Tokenizers 0.19.1
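The Cer column in the results above is the character error rate: character-level Levenshtein distance divided by reference length. A minimal sketch of the computation (real evaluations typically use the `jiwer` package or the `evaluate` library's "cer" metric instead):

```python
# Character error rate: edit distance between reference and hypothesis at the
# character level, normalized by the reference length.
def cer(reference: str, hypothesis: str) -> float:
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n] / max(m, 1)
```

Note that a CER above 0.6, as reported here, means well over half the reference characters would need edits, which is worth double-checking against the tokenizer/vocabulary setup.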
Yuki20/llama3_8b_sql5
Yuki20
"2024-07-03T00:19:41Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:19:34Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Yuki20 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
vjvk/shoptalk_assist_mistral_7b_qlora_finetuned
vjvk
"2024-07-03T00:23:01Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:22:59Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vjvk/Mistral-7B-Instruct-v0.2-GPTQ_finetune
vjvk
"2024-07-03T00:23:04Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
"2024-07-03T00:23:02Z"
--- base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ library_name: peft license: apache-2.0 tags: - generated_from_trainer model-index: - name: Mistral-7B-Instruct-v0.2-GPTQ_finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-Instruct-v0.2-GPTQ_finetune This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.2048 | 0.5714 | 1 | 1.8649 | | 1.559 | 1.7143 | 3 | 1.7810 | | 1.3635 | 2.8571 | 5 | 1.7115 | | 1.3561 | 4.0 | 7 | 1.6645 | | 2.7801 | 4.5714 | 8 | 1.6496 | | 0.898 | 5.7143 | 10 | 1.6337 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
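The `total_train_batch_size` of 16 listed in the hyperparameters above is not set directly; it follows from the per-device batch size and gradient accumulation:

```python
# Effective batch size under gradient accumulation, as in the card above:
# gradients from several small forward/backward passes are summed before
# each optimizer step.
train_batch_size = 4             # per-device batch size
gradient_accumulation_steps = 4  # micro-batches per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```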
DimLibS/testingsmthh
DimLibS
"2024-07-03T00:28:59Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T00:24:10Z"
Entry not found
alexzarate/bryson_dechambeau
alexzarate
"2024-07-03T00:25:38Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T00:25:38Z"
Entry not found
RichardErkhov/kmfoda_-_gpt2-500m-gguf
RichardErkhov
"2024-07-03T00:31:31Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "region:us" ]
null
"2024-07-03T00:25:45Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-500m - GGUF - Model creator: https://huggingface.co/kmfoda/ - Original model: https://huggingface.co/kmfoda/gpt2-500m/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-500m.Q2_K.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q2_K.gguf) | Q2_K | 0.24GB | | [gpt2-500m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.IQ3_XS.gguf) | IQ3_XS | 0.27GB | | [gpt2-500m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.IQ3_S.gguf) | IQ3_S | 0.27GB | | [gpt2-500m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q3_K_S.gguf) | Q3_K_S | 0.27GB | | [gpt2-500m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.IQ3_M.gguf) | IQ3_M | 0.29GB | | [gpt2-500m.Q3_K.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q3_K.gguf) | Q3_K | 0.31GB | | [gpt2-500m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q3_K_M.gguf) | Q3_K_M | 0.31GB | | [gpt2-500m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q3_K_L.gguf) | Q3_K_L | 0.33GB | | [gpt2-500m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.IQ4_XS.gguf) | IQ4_XS | 0.33GB | | [gpt2-500m.Q4_0.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q4_0.gguf) | Q4_0 | 0.34GB | | [gpt2-500m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.IQ4_NL.gguf) | IQ4_NL | 0.34GB | | 
[gpt2-500m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q4_K_S.gguf) | Q4_K_S | 0.34GB | | [gpt2-500m.Q4_K.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q4_K.gguf) | Q4_K | 0.37GB | | [gpt2-500m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q4_K_M.gguf) | Q4_K_M | 0.37GB | | [gpt2-500m.Q4_1.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q4_1.gguf) | Q4_1 | 0.37GB | | [gpt2-500m.Q5_0.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q5_0.gguf) | Q5_0 | 0.4GB | | [gpt2-500m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q5_K_S.gguf) | Q5_K_S | 0.4GB | | [gpt2-500m.Q5_K.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q5_K.gguf) | Q5_K | 0.42GB | | [gpt2-500m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q5_K_M.gguf) | Q5_K_M | 0.42GB | | [gpt2-500m.Q5_1.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q5_1.gguf) | Q5_1 | 0.43GB | | [gpt2-500m.Q6_K.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q6_K.gguf) | Q6_K | 0.47GB | | [gpt2-500m.Q8_0.gguf](https://huggingface.co/RichardErkhov/kmfoda_-_gpt2-500m-gguf/blob/main/gpt2-500m.Q8_0.gguf) | Q8_0 | 0.6GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. 
- **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
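The quant table earlier in this card can be read programmatically when choosing a file to download; a minimal sketch (sizes in GB are copied from the table; the `pick_quant` helper and the budget rule are illustrative assumptions, not part of the release):

```python
# Illustrative helper: choose the largest quant from the table above
# that fits a given memory budget. Sizes (GB) are copied from the table.
QUANTS = {
    "Q2_K": 0.24, "Q3_K_M": 0.31, "Q4_K_M": 0.37,
    "Q5_K_M": 0.42, "Q6_K": 0.47, "Q8_0": 0.60,
}

def pick_quant(budget_gb):
    """Return the name of the largest quant whose file size fits the budget."""
    fitting = [(size, name) for name, size in QUANTS.items() if size <= budget_gb]
    return max(fitting)[1] if fitting else None

print(pick_quant(0.5))  # largest quant under 0.5 GB -> Q6_K
```

Larger quants generally trade file size for lower quantization error, so picking the biggest file that fits memory is a common rule of thumb.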
vbv373/text2struc_reduced_com_codegen_0702-outputs
vbv373
"2024-07-03T00:26:49Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T00:26:49Z"
Entry not found
RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf
RichardErkhov
"2024-07-03T00:52:34Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "region:us" ]
null
"2024-07-03T00:27:21Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) malaysian-tinyllama-1.1b-16k-instructions-v4 - GGUF - Model creator: https://huggingface.co/mesolitica/ - Original model: https://huggingface.co/mesolitica/malaysian-tinyllama-1.1b-16k-instructions-v4/ | Name | Quant method | Size | | ---- | ---- | ---- | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q2_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q2_K.gguf) | Q2_K | 0.4GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.IQ3_S.gguf) | IQ3_S | 0.47GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.IQ3_M.gguf) | IQ3_M | 0.48GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K.gguf) | Q3_K | 0.51GB | | 
[malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_0.gguf) | Q4_0 | 0.59GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_K.gguf) | Q4_K | 0.62GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | 
[malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_1.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_1.gguf) | Q4_1 | 0.65GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_0.gguf) | Q5_0 | 0.71GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_K.gguf) | Q5_K | 0.73GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_1.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q5_1.gguf) | Q5_1 | 0.77GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q6_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q6_K.gguf) | Q6_K | 0.84GB | | [malaysian-tinyllama-1.1b-16k-instructions-v4.Q8_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf/blob/main/malaysian-tinyllama-1.1b-16k-instructions-v4.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- library_name: 
transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
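The table links above point at `/blob/` paths, which open the Hub's web view; direct downloads typically use `/resolve/` paths instead. A minimal sketch of building both URL forms (the `gguf_urls` helper is an illustrative assumption; only the repository name and file names come from this card):

```python
# Illustrative sketch: build web-view and direct-download URLs for a
# quant file in this repository. /blob/ = web view, /resolve/ = raw file.
REPO = "RichardErkhov/mesolitica_-_malaysian-tinyllama-1.1b-16k-instructions-v4-gguf"

def gguf_urls(filename, repo=REPO):
    base = f"https://huggingface.co/{repo}"
    return {
        "view": f"{base}/blob/main/{filename}",
        "download": f"{base}/resolve/main/{filename}",
    }

urls = gguf_urls("malaysian-tinyllama-1.1b-16k-instructions-v4.Q4_K_M.gguf")
print(urls["download"])
```

The same pattern applies to any of the files in the table; tools such as `huggingface_hub` wrap this resolution for you.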
John6666/sdxl-anime-Illustration-v1-sdxl
John6666
"2024-07-03T00:32:16Z"
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-07-03T00:27:50Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime --- Original model is [here](https://civitai.com/models/553058/sdxl-anime-illustration?modelVersionId=615464).
tensorkelechi/sky_diffuse
tensorkelechi
"2024-07-03T00:51:48Z"
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
"2024-07-03T00:30:06Z"
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- This model is a diffusion model for unconditional image generation of clouds, skies, etc. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('tensorkelechi/sky_diffuse') image = pipeline().images[0] image ```
magnifi/parser_user_v10c_epoch_7_lr_0.002
magnifi
"2024-07-03T00:34:42Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-03T00:32:22Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Vaaly/ticket
Vaaly
"2024-07-03T00:45:02Z"
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:34:12Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** Vaaly - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
qsdcfqsdfcxqfqs/A-Psychologist-Explains-The-Rising-Concern-Around-Hangxiety-35-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:36:35Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:35:22Z"
--- language: - en --- The morning after a night out can be rough -- grappling with a pounding headache, nausea and a tidal wave of anxiety. The euphoria of alcohol returns with a vengeance, exacting its toll as you sift through last night's memories, cringing at potential missteps and feeling an overwhelming sense of dread and regret over the things you said. The accompanying sense of doom makes you want to disappear into oblivion, though the way you feel right now might make it seem like you're already halfway there.
This combination of a hangover and anxiety, popularly known as "hangxiety," can be an all-too-familiar scenario for many. If that's you, then you're not alone, and understanding the underlying causes can help you manage it better. Hangxiety refers to the anxious feelings that often accompany a hangover. While a hangover's physical symptoms -- such as headache, nausea and dehydration -- are well-known, hangxiety adds a layer of psychological distress, including feelings of guilt, regret and nervousness about one's actions while intoxicated. While hangovers are a common consequence of heavy alcohol consumption, only a small percentage of individuals experience anxiety as part of their hangover symptoms. A 2017 study suggests that about 22% of participants experienced anxiety during their hangovers. However, hangxiety is reported with greater severity among individuals who already experience significant anxiety, particularly those who are naturally more anxious or introverted. Another 2019 study published in Personality and Individual Differences found that individuals who are highly shy or have a social anxiety disorder (SAD) are more vulnerable to hangxiety. While alcohol consumption may provide temporary relief from anxiety in highly shy individuals, it leads to a significant increase in anxiety the following day. It also increases the risk of alcohol use disorder (AUD). Individuals may drink in social situations to reduce anxiety, only to feel more anxious the next day. This can lead them to turn to alcohol again to calm down, creating a vicious cycle that becomes increasingly difficult to break. People experiencing hangxiety often describe feeling an exaggerated sense of fear and discomfort, with some describing it as a minor panic attack. These symptoms can make the hangover significantly more distressing. 
Common symptoms of hangxiety can be both physical and psychological, including: Throughout the day following heavy drinking, you may find it hard to stay awake, but anxiety can also make falling asleep difficult, creating a hellish tug-of-war. "I'm so sleepy, in a way I've never experienced after a night of drinking. I'm laying down and I'm having the hardest time trying to keep my eyes open and stop going cross eyed," explains a Reddit user, " I keep involuntarily closing my eyes and because my breathing is shallow right now, I keep getting scared? Not sure how to describe it but the feeling of being so sleepy and actually falling asleep is making me incredibly anxious about doing it." The most straightforward way to prevent hangxiety is to reduce alcohol consumption. Moderation is vital -- research suggests that limiting alcohol intake and duration can significantly reduce the risk of hangover-related anxiety. If you struggle with social anxiety and find it hard to avoid the "liquid courage" at parties, preparing beforehand and giving yourself a pep talk can help. Here are some strategies to manage your drinking and anxiety: If you find yourself experiencing hangxiety after a night of heavy drinking, try these steps to manage your symptoms. Alleviating the physical symptoms of a hangover can also help reduce the psychological ones:....
qsdcfqsdfcxqfqs/Beloved-Perth-teen-loses-gruelling-fight-with-cancer-d5-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:36:41Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:35:28Z"
--- language: - en --- A young West Australian who captured the hearts of people across the nation has tragically lost his life following a gruelling battle with cancer. Levi Tracy, 19, peacefully passed away on Tuesday morning alongside his family at Fiona Stanley Hospital.
Levi was first diagnosed with non-Hodgkins T-Cell Lymphoma when he was just seven years old and underwent two and a half years of chemotherapy which put him into remission. Sadly, 10 years later when Levi was supposed to be out enjoying his adolescence, he was instead diagnosed with cancer again, this time with Acute Myeloid Leukaemia. Over the past two years, his family have rallied behind him to help him find a blood stem cell match -- a form of bone marrow transplant -- to treat his cancer. This was when the viral Facebook page Lifeline for Levi was born, urging people to sign up to donate blood and get tested to potentially save Levi's life. The page -- which has amassed more than 3000 members -- was created by Levi's father Mark dedicated to supporting the young teen throughout his journey. Family members have posted updates every day documenting Levi's journey, with each post racking up hundreds of likes, shares and comments. A GoFundMe was launched by friends of the Tracy family to help support Levi's parents while they take time off work and help with other costs. The fundraiser -- which has raised an incredible $17,700 with more than 250 donations -- revealed the numerous procedures the teenager had to endure in his fight over the past two years. Despite the immense support, Levi's condition was still deteriorating and the infection was extremely exhausting on his body. On Wednesday, an update was shared that Levi was really struggling and that he had made the brave decision to be placed into a coma. The plan was for him to spend three to five days on life support to give his body a much-needed rest. Tragically, five days later, Levi's father shared the devastating update that his son had passed. "This day was never meant to arrive," the emotional post began. "This morning at 9:15am our brave warrior, our Mighty Fighting Tiger gained his wings.
Levi passed away peacefully this morning with Nikki and Nikita and myself by his side. "His lungs were no longer able to cope and his blood pressure dropped to unsupportable levels. In the end his heart just simply stopped. The load on his entire body was just too much. The infection in his lungs was the cause, and triggered everything else. "He fought a massive fight and never gave up, the Mountain was just too big. The respect he earned from the ICU team was immense as he courageously pushed himself beyond all of their predictions and in the end they called him a Champion." Mark requested his thousands of supporters stay present on the Facebook page as the family plans to transition it over time to a page that honours Levi's life and legacy. "We are broken beyond compare. We have no idea how to move forward and keep going, but we will find a way. Levi would want that," Mark wrote. Hundreds of Levi's supporters have flooded the comments to send their condolences to the grieving family. One person commented: "I am crying like a baby for this beautiful warrior, Levi, and his beautiful family. Xx. Like thousands of other people I have admired his strength, and their determination and fight to the end. Xx" "My heart is breaking for you all 😔 I have followed Levi's journey and would check my Facebook for updates ❤️ RIP Levi ❤️" another person said. Speaking to PerthNow, Mark said that Levi's warm personality "touched people deeply". "Levi was fun. He was loyal, strong, funny, kind, humble, gentle, loving," he said. "He will be remembered as funny and engaging. His warmth and personality transcended boundaries and touched people deeply and left a lasting legacy.....
RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf
RichardErkhov
"2024-07-03T01:31:52Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T00:36:17Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Sina-Thor-7b-Merge - GGUF - Model creator: https://huggingface.co/Azazelle/ - Original model: https://huggingface.co/Azazelle/Sina-Thor-7b-Merge/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Sina-Thor-7b-Merge.Q2_K.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q2_K.gguf) | Q2_K | 2.53GB | | [Sina-Thor-7b-Merge.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Sina-Thor-7b-Merge.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Sina-Thor-7b-Merge.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Sina-Thor-7b-Merge.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Sina-Thor-7b-Merge.Q3_K.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q3_K.gguf) | Q3_K | 3.28GB | | [Sina-Thor-7b-Merge.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Sina-Thor-7b-Merge.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Sina-Thor-7b-Merge.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | 
[Sina-Thor-7b-Merge.Q4_0.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q4_0.gguf) | Q4_0 | 3.83GB | | [Sina-Thor-7b-Merge.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Sina-Thor-7b-Merge.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Sina-Thor-7b-Merge.Q4_K.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q4_K.gguf) | Q4_K | 4.07GB | | [Sina-Thor-7b-Merge.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Sina-Thor-7b-Merge.Q4_1.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q4_1.gguf) | Q4_1 | 4.24GB | | [Sina-Thor-7b-Merge.Q5_0.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q5_0.gguf) | Q5_0 | 4.65GB | | [Sina-Thor-7b-Merge.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Sina-Thor-7b-Merge.Q5_K.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q5_K.gguf) | Q5_K | 4.78GB | | [Sina-Thor-7b-Merge.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Sina-Thor-7b-Merge.Q5_1.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q5_1.gguf) | Q5_1 | 5.07GB | | [Sina-Thor-7b-Merge.Q6_K.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q6_K.gguf) | Q6_K | 5.53GB | | 
[Sina-Thor-7b-Merge.Q8_0.gguf](https://huggingface.co/RichardErkhov/Azazelle_-_Sina-Thor-7b-Merge-gguf/blob/main/Sina-Thor-7b-Merge.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- pipeline_tag: text-generation tags: - mistral - merge license: cc-by-4.0 --- # Model Card for Sina-Thor-7b-Merge <!-- Provide a quick summary of what the model is/does. --> Part of a series of experimental DARE merges. .yaml file for mergekit ```.yaml: models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: rishiraj/smol-7b #75 parameters: weight: 0.2 density: 0.41 - model: SanjiWatsuki/openchat-3.5-1210-starling-slerp #125 parameters: weight: 0.33 density: 0.54 - model: Azazelle/Dumb-Maidlet #200 parameters: weight: 0.53 density: 0.71 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ```
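To get a feel for what the `density` values in the config above control, here is a rough, self-contained illustration of the DARE drop-and-rescale trick (keep each delta entry with probability `density`, rescale survivors by `1/density` so the expected sum is preserved). This is a toy sketch, not mergekit's actual implementation; the task vector and values are made up.

```python
import random

def dare_sparsify(delta, density, rng):
    """Randomly keep each delta entry with probability `density`,
    rescaling survivors by 1/density so the expected total is preserved
    (the drop-and-rescale idea behind dare_ties merges)."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

rng = random.Random(0)
delta = [0.01] * 100_000  # toy "task vector" (fine-tuned weights minus base weights)
sparse = dare_sparsify(delta, density=0.41, rng=rng)  # density from the config above

kept = sum(1 for d in sparse if d != 0.0)
# Roughly 41% of entries survive, while the total mass stays close to the original.
print(kept / len(sparse), round(sum(sparse), 1))
```

In the merge config above, each donor model then contributes its sparsified delta scaled by its `weight` before being added back onto the base model.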
vumichien/Florence-2-FT-Caption
vumichien
"2024-07-03T00:38:43Z"
0
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
"2024-07-03T00:37:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RAY2L/pythia-410m-deduped-SimPOW-2-two_v1
RAY2L
"2024-07-03T00:38:12Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
"2024-07-03T00:37:37Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dbr103/gundalorian
dbr103
"2024-07-03T00:38:32Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-07-03T00:38:32Z"
--- license: apache-2.0 ---
qsdcfqsdfcxqfqs/Transformational-manufacturing-push-to-be-introduced-ee-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:39:58Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:38:45Z"
--- language: - en --- [![Build Status](https://www.westernjournal.com/wp-content/uploads/2024/07/Joe-Biden-2.jpg)]() read the full article here : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us1513434325&Connector=https://unitedstatednews.com Source : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_4513343442&Connector=https://unitedstatednews.com Flash News : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_4123231334&Connector=https://unitedstatednews.com Biden last Talk : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_5332112353&Connector=https://unitedstatednews.com Russian Ukrain Breaking News : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_5334415454&Connector=https://unitedstatednews.com Other Sources : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_2252313353&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_2221235355&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_3435135553&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us5515442433&Connector=https://unitedstatednews.com https://cicytex.juntaex.es/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=nuevo_hackear_cuenta_1423544342&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us2342112254&Connector=https://unitedstatednews.com 
https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_4123231334&Connector=https://unitedstatednews.com https://cicytex.juntaex.es/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=nuevo_hackear_cuenta_3135255341&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us1122152453&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us3542322225&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us5352244133&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us4354431244&Connector=https://unitedstatednews.com https://cicytex.juntaex.es/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=nuevo_hackear_cuenta_5414543232&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_3425442432&Connector=https://unitedstatednews.com A "new generation of prosperity" will be unlocked with a plan to boost investment in renewable energy and critical minerals being brought to parliament. The federal government will introduce laws for its signature Future Made in Australia policy on Wednesday. The strategy, which will spend more than $22 billion over 10 years, will look to safeguard Australia's control over its resources and revive the country's manufacturing base. It will also seek to invest in emerging industries in the renewable-energy sector as the global economy shifts towards net-zero emissions. Treasurer Jim Chalmers said the proposal would make the most of Australia's potential in the years ahead. 
"The legislation that I'll introduce today is about a new generation of prosperity," he told Sky News on Wednesday. "It's about good, secure, well paid jobs." Dr Chalmers said the bill would unlock private sector investment in renewable energy that would enable Australian workers and communities to benefit from the global transition to net zero emissions. Under the laws setting out the strategy, a national interest framework would be set up, which will oversee what projects would be funded. Any project receiving funds would also have to ensure that jobs would be safe and well paid, engage with the communities, strengthen supply chains and develop skilled and inclusive workforces. An innovation fund will be set up to support emerging technologies such as green metals and manufacturing in clean energy. Independent analysis would be carried out by the Treasury to determine which sectors of the economy would benefit the most from the investment. Deputy Opposition Leader Sussan Ley accused the government of running an advertising campaign instead of a proper plan for manufacturers, claiming insolvencies in the sector had tripled since Labor came to office. "How can you have a future made in Australia, when you can't keep the lights on today?" she said. Dr Chalmers said the coalition's alternative, building a nuclear energy industry, would only push power prices higher. He argued the extra funding the Future Made in Australia policy would inject into the economy would not exacerbate inflation as it would be phased in gradually. "It's a medium term and long term plan to make Australia an indispensable part of the global net zero transformation by leveraging our industry, our energy, our resources and our strengths and skills and our attractiveness as an investment destination," Dr Chalmers said. The government will need the support of the Greens and cross bench to pass its reforms, with the coalition already signalling it will oppose the plan. 
The Greens said they would want to ensure funding included in the plans would not finance new coal and gas projects.....
qsdcfqsdfcxqfqs/Democratic-Congressman-Says-Trump-Is-Going-to-Win-After-Bidens-Disastrous-Debate-Cle-c2-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:40:04Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:38:51Z"
--- language: - en --- [![Build Status](https://bloximages.chicago2.vip.townnews.com/eagletribune.com/content/tncms/custom/image/ae213140-df8c-11e7-b06d-b798580d75a5.jpg?resize=600%2C333)]() read the full article here : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_3522321353&Connector=https://unitedstatednews.com Source : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us1345131534&Connector=https://unitedstatednews.com Flash News : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us2325442523&Connector=https://unitedstatednews.com Biden last Talk : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us2115352134&Connector=https://unitedstatednews.com Russian Ukrain Breaking News : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us5421115125&Connector=https://unitedstatednews.com Other Sources : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us4354431244&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_4451414523&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us4355431414&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_5124223154&Connector=https://unitedstatednews.com https://cicytex.juntaex.es/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=nuevo_hackear_cuenta_4355113511&Connector=https://unitedstatednews.com 
https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us1254415432&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_5334415454&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us3134253244&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us4532235124&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us5433141541&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us2135121341&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_4154214315&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us2144424533&Connector=https://unitedstatednews.com A Democratic congressman said President Joe Biden's abysmal showing in last week's debate made clear what he has known for a while -- Biden is heading for defeat. "Biden's poor performance in the debate was not a surprise. It also didn't rattle me as it has others, because the outcome of this election has been clear to me for months: While I don't plan to vote for him, Donald Trump is going to win. And I'm OK with that," Rep. Jared Golden wrote in an Op-Ed published by the Bangor Daily News. "There are winners and losers in every election. 
Democrats' post-debate hand-wringing is based on the idea that a Trump victory is not just a political loss, but a unique threat to our democracy," he wrote. "I reject the premise. Unlike Biden and many others, I refuse to participate in a campaign to scare voters with the idea that Trump will end our democratic system," he continued, adding, "Pearl-clutching about a Trump victory ignores the strength of our democracy." Golden said good work will be accomplished with or without Biden. "Some of Congress' best work in recent years has happened in spite of the president, not because of him. A handful of responsible Democrats, including myself and West Virginia Sen. Joe Manchin, rejected Biden's bloated 'Build Back Better' bill and instead passed a law that supercharged American energy production, saved Medicare billions of dollars and reduced the deficit," he wrote. "In 2025, I believe Trump is going to be in the White House. Maine's representatives will need to work with him when it benefits Mainers, hold him accountable when it does not and work independently across the aisle no matter what." Golden's comments followed those of Democratic Rep. Lloyd Doggett of Texas who became the first sitting member of Congress to say Biden should end his campaign. "President Biden has continued to run substantially behind Democratic senators in key states and in most polls has trailed Donald Trump," Doggett said, according to the Texas Tribune. "I had hoped that the debate would provide some momentum to change that. It did not," he said. Doggett said other Democrats share his position. "There are many people who would like to make a statement like this but are concerned about, among other things, doing anything that might make it even more difficult for President Biden," Doggett said in the interview. "I represent the heart of a congressional district once represented by Lyndon Johnson. Under very different circumstances, he made the painful decision to withdraw. 
President Biden should do the same," Doggett said. Former House Speaker Nancy Pelosi added her voice to the doubts about Biden, according to NBC. "I think it's a legitimate question to say, is this an episode, or is this a condition? And so when people ask that question, it's completely legitimate," said Pelosi, who still serves as a Democratic member of Congress from California, adding that her comment applied to both Biden and former President Donald Trump.....
mradermacher/L3-8B-Everything-COT-i1-GGUF
mradermacher
"2024-07-03T01:28:45Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T00:38:52Z"
--- base_model: FPHam/L3-8B-Everything-COT language: - en library_name: transformers quantized_by: mradermacher tags: - llm - llama - llama3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/FPHam/L3-8B-Everything-COT <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/L3-8B-Everything-COT-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | 
[GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | 
[GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Everything-COT-i1-GGUF/resolve/main/L3-8B-Everything-COT.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
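The size column above makes quant selection mechanical: pick the largest file that fits your memory budget. Below is a minimal sketch of that rule, using a few sizes copied from the table (the helper name and the GB-based budget convention are illustrative, not part of the card):

```python
# File sizes in GB for a few quants of L3-8B-Everything-COT, copied from the table above.
QUANT_SIZES_GB = {
    "i1-IQ1_S": 2.1,
    "i1-IQ2_M": 3.0,
    "i1-Q2_K": 3.3,
    "i1-IQ3_M": 3.9,
    "i1-IQ4_XS": 4.5,
    "i1-Q4_K_M": 5.0,
    "i1-Q5_K_M": 5.8,
    "i1-Q6_K": 6.7,
}

def pick_quant(budget_gb):
    """Return the name of the largest quant file that fits in budget_gb, or None."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget_gb]
    if not fitting:
        return None
    return max(fitting)[1]
```

With roughly 6 GB free, for example, this rule selects i1-Q5_K_M, consistent with the size-ordered listing above.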
wdli/llama3-instruct_depression_6
wdli
"2024-07-03T00:48:58Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:40:05Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** wdli - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. Trained on the reddit_depression dataset as a text-completion task (epochs = 2).
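For a plain text-completion run like this, the per-example formatting can be much simpler than the dialog template used for the instruct variants: each post is trained on directly. A minimal sketch of such a formatting function follows; the wrapper token strings and the function name are illustrative assumptions, not taken from the actual training script (a real run would read the special tokens from the tokenizer):

```python
# Illustrative Llama-3 special tokens; a real run would take these from the tokenizer.
BOS = "<|begin_of_text|>"
EOS = "<|end_of_text|>"

def formatting_prompts_func(examples):
    """Wrap each raw post so the model is trained on plain text completion."""
    return {"text": [f"{BOS}{post}{EOS}" for post in examples["text"]]}
```

A function of this shape can be passed to `Dataset.map(..., batched=True)` in the usual way.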
qsdcfqsdfcxqfqs/Anioma-State-proposal-Path-implementation-and-regional-advantages-explored-2d-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:43:48Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:42:32Z"
--- language: - en --- [![Build Status](https://www.eurasiareview.com/wp-content/uploads/2019/09/a-141.jpg)]() read the full article here : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us3334213453&Connector=https://unitedstatednews.com Source : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us5433141541&Connector=https://unitedstatednews.com Flash News : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_1413323245&Connector=https://unitedstatednews.com Biden last Talk : https://cicytex.juntaex.es/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=nuevo_hackear_cuenta_2351253222&Connector=https://unitedstatednews.com Russian Ukrain Breaking News : https://cicytex.juntaex.es/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=nuevo_hackear_cuenta_1153441323&Connector=https://unitedstatednews.com Other Sources : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_3324451314&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_4253221553&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us4444315311&Connector=https://unitedstatednews.com https://www.sep.va.gov/sep/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=Games_Free_Generator_134&Connector=https://unitedstatednews.com https://huggingface.co/qsdcfqsdfcxqfqs/Beloved-Perth-teen-loses-gruelling-fight-with-cancer-d5-updated https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_4221141213&Connector=https://unitedstatednews.com 
https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us4354431244&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us2314215413&Connector=https://unitedstatednews.com https://cicytex.juntaex.es/html/js/editor/ckeditor/editor/filemanager/browser/liferay/browser.html?id=nuevo_hackear_cuenta_4143221543&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us3534455521&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_5224144435&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo_3522321353&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us5144355523&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us2314215413&Connector=https://unitedstatednews.com Senator Ned Nwoko, renowned for his advocacy in Delta North, champions bills and motions of profound societal impact. Recently, he affirmed his intent to sponsor legislation for the establishment of 'Anioma,' a prospective state in the South-East. Presently, this region comprises five states, whereas others boast six, barring the North-West, which possesses seven. Senator Nwoko posits that creating Anioma State would rectify a longstanding historical disparity. "Invoking the Doctrine of Necessity: Steps to Establish Anioma State" Given the current security situation in th... 
[9:12 pm, 02/07/2024] CNL THOMAS Imonikhet: South East Govs to meet Tinubu over Nnamdi Kanu's release Governors under the aegis of the South East Governors' Forum on Tuesday resolved to meet with President Bola Ahmed Tinubu to seek the release of Mazi Nnamdi Kanu, the detained leader of the Indigenous People of Biafra (IPOB). Governor Uzodinma of Imo state, chairman of the forum, disclosed this at the end of a meeting of the five governors from the zone held in Enugu on Tuesday. The governors in attendance were Uzodinma of Imo, Alex Otti of Abia, Prof. Chukwuma Soludo of Anambra, Francis Nwifuru of Ebonyi, and Dr. Peter Mbah of Enugu. Uzodimma said the delegation of former President Olusegun Obasanjo, Chief Emeka Anyaoku, former Commonwealth Secretary-General, Nnaemeka Achebe, and the Obi of Onitsha paid a solidarity visit to the forum. He added that the forum has resolved to meet with the federal government to discuss the "pressing issues" in the South East zone. "The forum commiserated with the government and people of Abia state, Ebonyi state, Imo state, South East Nigeria, and Chief Ogbonnaya Onu's family on the demise of His Excellency, Dr. Ogbonnaya Onu," the Imo governor stated. According to him, "The forum received the delegation of the former President Chief Olusegun Obasanjo GCFR, Chief Emeka Anyaoku, GCON, and His Royal Majesty Igwe Nnaemeka Achebe CFR, Obi of Onitsha, who came on a solidarity visit to the forum. "The forum deliberated on the reviewed report of the South-east security and economic summit held in Owerri on the 28th of September 2023 and agreed to implement the aspects of the report pertaining to security and economic integration and affirmed its desire to put actionable plans on the key issues agreed. "The forum resolved to visit Mr. President to discuss pressing issues concerning the South-east region. "The forum also resolved to interface with the federal government to secure the release of Mr. Nnamdi Kanu."....
GTsuya/queen_grimhilde_pony
GTsuya
"2024-07-03T00:49:00Z"
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:GraydientPlatformAPI/autism-pony", "license:mit", "region:us" ]
text-to-image
"2024-07-03T00:44:10Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: 'cartoon, score_9, score_8_up, score_7_up, embedding:zPDXL, queen_grimhilde, (mature_woman, old_woman:1.1), dark purple and black dress, crown, looking_at_viewer' parameters: negative_prompt: source_pony, source_furry, embedding:zPDXL-neg output: url: images/181750-HighresFix_00001_.png - text: 'cartoon, score_9, score_8_up, score_7_up, embedding:zPDXL, queen_grimhilde, (mature_woman, old_woman:1.1), dark purple and black dress, crown, looking_at_viewer' parameters: negative_prompt: source_pony, source_furry, embedding:zPDXL-neg output: url: images/182220-HighresFix_00001_.png - text: 'cartoon, score_9, score_8_up, score_7_up, embedding:zPDXL, queen_grimhilde, (mature_woman, old_woman:1.1), dark purple and black dress, crown, looking_at_viewer' parameters: negative_prompt: source_pony, source_furry, embedding:zPDXL-neg output: url: images/182315-HighresFix_00001_.png - text: 'cartoon, score_9, score_8_up, score_7_up, embedding:zPDXL, queen_grimhilde, (mature_woman, old_woman:1.1), dark purple and black dress, crown, looking_at_viewer' parameters: negative_prompt: source_pony, source_furry, embedding:zPDXL-neg output: url: images/182402-HighresFix_00001_.png - text: 'cartoon, score_9, score_8_up, score_7_up, embedding:zPDXL, queen_grimhilde, (mature_woman, old_woman:1.1), dark purple and black dress, crown, looking_at_viewer' parameters: negative_prompt: source_pony, source_furry, embedding:zPDXL-neg output: url: images/182449-HighresFix_00001_.png - text: 'cartoon, score_9, score_8_up, score_7_up, embedding:zPDXL, queen_grimhilde, (mature_woman, old_woman:1.1), dark purple and black dress, crown, looking_at_viewer' parameters: negative_prompt: source_pony, source_furry, embedding:zPDXL-neg output: url: images/182553-HighresFix_00001_.png base_model: GraydientPlatformAPI/autism-pony instance_prompt: queen_grimhilde license: mit --- # queen_grimhilde <Gallery /> ## 
Model description This LoRA model was trained with Kohya SS on artworks of the Queen Grimhilde character from Disney's Snow White, using the Autism Mix SDXL checkpoint. The generated images can come very close to the original character's style. ## Trigger words You should use `queen_grimhilde` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/GTsuya/queen_grimhilde_pony/tree/main) them in the Files & versions tab.
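The widget prompts above all share a fixed shape: quality tags, then the trigger word, then scene-specific tags. A small helper makes that pattern explicit (a sketch only; the tag strings are taken from the widget examples, while the function itself is illustrative):

```python
# Quality/style tags used by every widget example on this card.
QUALITY_TAGS = ["cartoon", "score_9", "score_8_up", "score_7_up", "embedding:zPDXL"]

def build_prompt(trigger, scene_tags):
    """Assemble a prompt in the order the widget examples use."""
    return ", ".join(QUALITY_TAGS + [trigger] + list(scene_tags))

prompt = build_prompt(
    "queen_grimhilde",
    ["(mature_woman, old_woman:1.1)", "dark purple and black dress", "crown", "looking_at_viewer"],
)
```

The resulting string reproduces the positive prompt shown in the widget entries; the negative prompt (`source_pony, source_furry, embedding:zPDXL-neg`) is passed separately.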
qsdcfqsdfcxqfqs/Melting-Of-Alaskan-Glaciers-Accelerating-Faster-Than-Previously-Thought-3d-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:45:41Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:44:26Z"
--- language: - en ---
Melting of glaciers in a major Alaskan icefield has accelerated and could reach an irreversible tipping point earlier than previously thought, new research suggests. The research, led by scientists at Newcastle University, UK, found that glacier loss on Juneau Icefield, which straddles the boundary between Alaska and British Columbia, Canada, has increased dramatically since 2010. The team, which also included universities in the UK, USA and Europe, looked at records going back to 1770 and identified three distinct periods in how icefield volume changed.
They saw that glacier volume loss remained fairly consistent from 1770-1979 at between 0.65-1.01 km³ per year, increasing to 3.08-3.72 km³ per year between 1979-2010. Between 2010-2020 there was a sharp acceleration when the rate of ice loss doubled, reaching 5.91 km³ per year. In particular, the research, published in Nature Communications, found that, icefield-wide, rates of glacier area shrinkage were five times faster from 2015-2019 relative to 1948-1979. Overall, the total ice loss across the Juneau icefield between 1770-2020 (315.3 ± 237.5 km³) equated to just under a quarter of the original ice volume. The increased rate of glacier thinning has also been accompanied by increased glacier fragmentation. The team mapped a dramatic increase in disconnections, where the lower parts of a glacier become separated from the upper parts. Additionally, 100% of glaciers mapped in 2019 have receded relative to their position in 1770, and 108 glaciers have disappeared completely. Study lead, Dr Bethan Davies, Senior Lecturer, Newcastle University, said: "It's incredibly worrying that our research found a rapid acceleration since the early 21st century in the rate of glacier loss across the Juneau icefield. "Alaskan icefields - which are predominantly flat, plateau icefields - are particularly vulnerable to accelerated melt as the climate warms since ice loss happens across the whole surface, meaning a much greater area is affected. "Additionally, flatter ice caps and icefields cannot retreat to higher elevations and find a new equilibrium. "As glacier thinning on the Juneau plateau continues and ice retreats to lower levels and warmer air, the feedback processes this sets in motion is likely to prevent future glacier regrowth, potentially pushing glaciers beyond a tipping point into irreversible recession." Alaska contains some of the world's largest plateau icefields and their melting is a major contributor to current sea level rise.
The researchers think the processes they observed at Juneau are likely to affect other, similar icefields elsewhere across Alaska and Canada, as well as Greenland, Norway and other high-Arctic locations. They also say current published projections for the Juneau icefield that suggest ice volume loss will be linear until 2040, accelerating only after 2070, may need to be updated to reflect the processes detailed in this latest study. Dr Davies said: "This work has shown that different processes can accelerate melt, which means that current glacier projections may be too small and underestimate glacier melt in the future." The team used a combination of historical glacier inventory records, 20th-century archival aerial photographs, and satellite imagery as well as geomorphological mapping conducted during fieldwork in 2022 to piece together a comprehensive picture of changes over the past 250 years. Dr Robert McNabb, Lecturer in Remote Sensing, Ulster University, said: "What was really exciting about this research was piecing together thousands of archived aerial photographs to extract elevation, which gave us a really detailed insight into the long-term behaviour of the icefield. "Putting together this archive of photographs, collected 70 and 50 years ago, was a little like doing the world's hardest jigsaw puzzle but the quality of the imagery meant we were able to reconstruct the icefield elevation in the pre-satellite era for the first time. Longer-term archives like this one are an incredibly valuable resource, as they give us a much better understanding of the thresholds for accelerating change, as we've seen on the Juneau Icefield."....
minionai/webarena_amazon_v0_070224
minionai
"2024-07-03T01:17:59Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-70B", "license:llama3", "region:us" ]
null
"2024-07-03T00:45:06Z"
--- base_model: meta-llama/Meta-Llama-3-70B library_name: peft license: llama3 tags: - axolotl - generated_from_trainer model-index: - name: llama3-70b-lora16-cove_format_062024_ift results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: meta-llama/Meta-Llama-3-70B bf16: true dataset_prepared_path: last_run_prepared debug: null deepspeed: null early_stopping_patience: null eval_table_size: null evals_per_epoch: 0 flash_attention: true fp16: null deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false group_by_length: false hub_model_id: minionai/llama3-70b-lora16-cove_format_062024_ift hub_strategy: all_checkpoints learning_rate: 1e-4 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lora_target_modules: null lr_scheduler: cosine micro_batch_size: 1 model_type: LlamaForCausalLM num_epochs: 3 optimizer: adamw_torch output_dir: ./lora-out pad_to_sequence_len: true resume_from_checkpoint: null auto_resume_from_checkpoints: true sample_packing: true wandb_entity: minionai wandb_name: webarena_amazon_v0 wandb_project: webarena saves_per_epoch: 1 sequence_len: 8192 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false val_set_size: 0 warmup_steps: 100 weight_decay: 0.0 datasets: - path: 
minionai/prod_070124_amzn_webarena_v0_ift type: system_prompt: "" system_format: "{system}" field_system: system field_instruction: instruction field_input: input field_output: output format: |- User: {instruction} {input} Assistant: # 'no_input_format' cannot include {input} no_input_format: "### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\nverify(\"" ``` </details><br> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/minionai/webarena/runs/wwb1oxg9) # llama3-70b-lora16-cove_format_062024_ift This model is a fine-tuned version of [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.42.3 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
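The reported hyperparameters are internally consistent: the total train batch size of 8 is the product of the per-device batch size, the gradient accumulation steps, and the number of GPUs. The arithmetic, spelled out:

```python
# Values copied from the hyperparameter list above.
micro_batch_size = 1             # train_batch_size per device
gradient_accumulation_steps = 1
num_devices = 8                  # multi-GPU run

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
```

With sample packing enabled and a sequence length of 8192, this effective batch size of 8 sequences is what each optimizer step sees.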
mradermacher/Mahou-1.3-spark-7B-GGUF
mradermacher
"2024-07-03T01:30:32Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T00:45:43Z"
--- base_model: flammenai/Mahou-1.3-spark-7B datasets: - flammenai/FlameMix-DPO-v1 - flammenai/MahouMix-v1 - flammenai/Grill-Flammen-v1_chatML language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/flammenai/Mahou-1.3-spark-7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.IQ3_M.gguf) | IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | 
[GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.IQ4_XS.gguf) | IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-spark-7B-GGUF/resolve/main/Mahou-1.3-spark-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
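Because the f16 file is listed at 15.3 GB for 16 bits per weight, a rough parameter count and the effective bits-per-weight of the other quants can be back-computed from the table's file sizes. This is only an estimate: it treats 1 GB as 10^9 bytes and ignores non-weight overhead in the GGUF file:

```python
F16_SIZE_GB = 15.3                      # f16 row above, noted as 16 bits per weight
params_billion = F16_SIZE_GB * 8 / 16   # GB -> gigabits -> billions of weights

def bits_per_weight(size_gb):
    """Effective bits per weight implied by a quant's file size."""
    return size_gb * 8 / params_billion
```

By this estimate the model has about 7.65B weights, and the 4.8 GB Q4_K_M file works out to roughly 5 bits per weight.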
c-eshih/human_stable_diffusion_cropped_new
c-eshih
"2024-07-03T00:46:56Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T00:46:56Z"
Entry not found
qsdcfqsdfcxqfqs/My-take-on-the-Supreme-Court-immunity-decision-Not-great-not-terrible-5g-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:48:55Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:47:40Z"
--- language: - en ---
Obviously it was not the decision anyone here wanted. But the danger of "official act immunity" stems more from the prospect of a (God forbid) second Trump term than the ongoing Trump prosecutions. It would have been a bigger blow if Trump was on track for trial in one of the federal cases before the election. Unfortunately we already know that won't be happening, so that's a moot point now. All that really is going to happen is that there will be an additional trial court analysis as to whether the acts were official or unofficial acts.
I feel confident that Judge Chutkan will find that they were unofficial and thus subject to prosecution. Unfortunately Trump will probably appeal that determination and that will delay matters some more. But again, the pre-election trial date was already blown, so things really haven't changed that radically. Judge Cannon in the documents case is far more of a wild card. But the most serious charges in that case deal with Trump's obstruction and refusal to return the documents, which all happened after he was president. So immunity shouldn't even apply there, and even if Judge Cannon tries to twist it otherwise, I think an appeals court will likely overrule her. It is a frustrating, annoying ruling, but the one I fully expected. It's far from a death blow for the criminal cases against Trump, though. The worst implications of the ruling go to Trump's future actions in office. But hopefully we'll never get there.
samox90/VQA-SPANISH
samox90
"2024-07-03T01:02:53Z"
0
0
null
[ "pytorch", "visual-question-answering", "es", "arxiv:1910.09700", "region:us" ]
visual-question-answering
"2024-07-03T00:49:33Z"
--- language: - es metrics: - accuracy - f1 pipeline_tag: visual-question-answering --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VanguardAI/CoT_multi_llama_LoRA_4bit
VanguardAI
"2024-07-03T00:50:17Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:49:36Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** VanguardAI - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
luccazen/rubra-ai-meta-llama-3-70b-instruct-awq
luccazen
"2024-07-03T00:49:50Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T00:49:50Z"
Entry not found
qsdcfqsdfcxqfqs/Biden-Support-Plunges-Lower-Than-Vice-President-Harris-Opens-New-Battleground-States-ee-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:52:47Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:51:33Z"
--- language: - en --- The report shows that states - New Hampshire, Virginia and New Mexico - are now leaning towards former President Donald Trump. In recent elections, those states were considered safe for Democrats. The Democratic win streaks in Virginia and New Mexico go back to 2004, while no Republican has won New Hampshire since 1988. What is even more shocking, according to the news agency, is that Biden polled behind even the unpopular Vice President Kamala Harris in hypothetical one-on-one matchups with Trump. Earlier on Tuesday, US Congressman Lloyd Doggett became the first Democrat to call on Biden to withdraw from the 2024 presidential race following his poor performance in the first debate against Donald Trump. Biden and Trump are scheduled to have a second debate on September 10, but it remains to be seen whether Biden will remain the Democratic Party's presidential candidate.
bedeabza/llama-3-8b-chat-doctor
bedeabza
"2024-07-03T00:52:06Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:52:00Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Belal-Elshenety/Llama3
Belal-Elshenety
"2024-07-03T00:52:20Z"
0
0
null
[ "license:llama3", "region:us" ]
null
"2024-07-03T00:52:20Z"
--- license: llama3 ---
qsdcfqsdfcxqfqs/Is-this-a-wedge-I-see-before-me-for-SPSPX-by-RogueEconomics-d4-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:53:35Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:52:20Z"
--- language: - en --- I have noticed that the rally since the 2022 lows is forming itself into a decent rising wedge pattern on the SPX. There is also a small RSI divergence in place that has since migrated to the hourly time frames. This could be indicative of fading momentum. There is also a disappointing-looking NFP report in the pipeline this week. I have no active trade, and I will not take a position on this until it confirms (likely to be later in the month if it does confirm); however, I believe that the pattern here, which has 3 touchpoints at resistance and 3 touchpoints at support (suggesting the pattern is well defined), warrants attention from traders. Watch out and trade safe!
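The touchpoint criterion described above can be turned into a rough numerical check. The sketch below is illustrative only and not from the original post: the price values are synthetic and the helper names (`wedge_slopes`, `is_rising_wedge`) are invented for the example. A rising wedge has both trendlines sloping upward, with the support line rising faster than the resistance line so the two converge.

```python
import numpy as np

def wedge_slopes(highs, lows):
    """Fit straight trendlines through swing highs and swing lows.

    Returns (resistance_slope, support_slope) from a least-squares
    degree-1 fit over the touchpoints, taken in time order.
    """
    res_slope = np.polyfit(np.arange(len(highs)), highs, 1)[0]
    sup_slope = np.polyfit(np.arange(len(lows)), lows, 1)[0]
    return res_slope, sup_slope

def is_rising_wedge(highs, lows):
    res, sup = wedge_slopes(highs, lows)
    # Both lines rising, with support rising faster than resistance,
    # means the two trendlines converge as the pattern matures.
    return bool(res > 0 and sup > 0 and sup > res)

# Three touchpoints at resistance and three at support, as in the post.
resistance_touches = [100.0, 104.0, 107.0]   # higher highs, slowing
support_touches    = [ 90.0,  96.0, 101.0]   # higher lows, rising faster

print(is_rising_wedge(resistance_touches, support_touches))  # True
```

A real scanner would first have to detect the swing highs and lows from raw price data; this sketch assumes the touchpoints are already identified, which is the hard part in practice.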
ufoscw/agent_classifier_240530_llama3_8b-instruct-FP16
ufoscw
"2024-07-03T01:09:46Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-07-03T00:52:49Z"
Invalid username or password.
HamAndCheese82/math-ocr-donut-v2.3
HamAndCheese82
"2024-07-03T00:54:59Z"
0
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-03T00:54:06Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qsdcfqsdfcxqfqs/Bidens-Debate-Night-Was-Bad-His-Tuesday-May-Have-Been-Even-Worse-bh-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:57:15Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:56:01Z"
--- language: - en --- The 24 hours after a noticeably more orange President Joe Biden addressed the public for five minutes and left without answering questions have not gone well. When Biden took the debate stage Thursday evening, he was pale, with a noticeably raspy voice, as he stumbled through answers and appeared to be lost. Barely thirty minutes into the debate, Democrats descended into panic, calling for the leader of their party to drop out of the presidential race with just four months until election day. Since then, the Biden campaign has been in damage control mode, and it faced an even rougher Tuesday. The president, seemingly more tan, gave unplanned remarks on Monday to address the Supreme Court's ruling on presidential immunity. 
Since his first major appearance post-debate, Biden's press secretary has taken questions on whether he has dementia, stories have been published about concerns over the president's mental fitness, and internal polling has leaked showing the implications his performance might have. About an hour before White House press secretary Karine Jean-Pierre faced reporters in the briefing room for the first time since the debate, a few troublesome stories dropped. First, Democratic Texas Rep. Lloyd Doggett became the first congressional Democrat to publicly call for Biden to drop out of the race. Then, NBC News reported that Hunter Biden, recently convicted on three felony gun charges, has been joining his father in meetings with top aides. The first son reportedly started popping in on calls and meetings with his father after the family returned from its weekend vacation. "What the hell is happening?" a person familiar with the matter told the outlet about the reaction from the White House to the first son's appearances. Jean-Pierre, who said she saw the reporting as she was entering the briefing, told reporters that Hunter Biden had come off a vacation with his dad at Camp David over the weekend and did attend meetings on his Monday speech preparation. The NBC article was not the only time the press secretary had to respond to reporting happening in and around the briefing room. While she was taking questions from the lectern, the New York Times posted an article detailing concerns that Biden's mental lapses have become more frequent. Several current and former officials, alongside others who met him away from the public eye, told The New York Times that Biden is noticeably more dazed, tired and easily loses track of the conversation. 
Though the president appears to often be alert, sources told the NYT that Biden's mental lapses appear to be becoming more common, more obvious and more concerning. Jean-Pierre, who hadn't yet seen the reporting, responded using her regular interactions with Biden, saying she sees a "strong, resolute president who is always willing and able to work on behalf of the American people." Internal Democrat polling also leaked ahead of the briefing, showing just how much Biden's stumbling on the debate stage could affect the presidential race. The internal polling memo, obtained by Puck News, showed that New Hampshire, Virginia and New Mexico -- states projected to go Biden -- are now up for grabs following the debate. Perhaps even more damaging, Trump's lead within key swing states such as Michigan, Pennsylvania, Georgia, North Carolina and Wisconsin grew, the polling shows, according to Puck News. The questions on Hunter's meetings and the NYT reporting weren't outliers in the briefing, as nearly every question pertained to Biden's debate performance and his mental fitness for office. "Does President Biden at 81 years old have Alzheimer's, any form of dementia or degenerative illness that would cause these sorts of lapses?" "I have an answer for you. Are you ready for it? It's a no. And I hope you're asking the other guy the same exact question," Jean-Pierre snapped back. Jean-Pierre even took questions on the spin the White House put out regarding the president's poor performance. While Biden was coughing on the debate stage, several outlets began reporting that the president had a cold that contributed to his slow start. "What medications was the president taking in the days or hours leading up to the debate?" a reporter asked. "He was not taking any cold medication," Jean-Pierre responded. "Was he taking any medication? Would it have interfered with his performance?" the reporter followed up.
"He was not taking any cold medication, that is what I can speak to from his doctor and what he stated to us," Jean-Pierre added. The press secretary did not have any information to share with reporters on whether the president had been examined by a doctor since the debate or his February physical, or whether he had received a neurological exam after his performance. With the White House and the president's campaign on defense, Chief-of-Staff Jeff Zients is reportedly poised to hold an all-staff meeting on Wednesday. Biden, though giving two addresses in Washington, D.C., has yet to answer questions on the response to his debate performance. But amid the backlash, the president is admitting his age at campaign events -- in an attempt to contrast his record to former President Donald Trump. "I know I'm not a young man. I don't walk as easy as I used to. I don't speak as smoothly as I used to. I don't debate as well as I used to, but I know what I do know -- I know how to tell the truth," Biden said during a Friday campaign rally. Despite the president's attempt at reassurance, his own staff is reportedly feeling anger, sadness, irritation and a sense of resolve over his debate showing, Axios reported on Tuesday. "Everyone is freaking the fuck out," an official told the outlet.
magnifi/parser_user_v10c_epoch_6_lr_0.002
magnifi
"2024-07-03T00:58:35Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-03T00:56:07Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
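The card gives no inference snippet for the fine-tuned parser. In practice `tokenizer.apply_chat_template` is the authoritative way to format a prompt; as a rough, hand-rolled sketch, the chat format documented for the Phi-3-mini base model (the marker tokens below come from the base model and have not been verified against this fine-tune) looks like this:

```python
def build_phi3_prompt(messages):
    """Render chat messages into the Phi-3-mini prompt format.

    Each turn is wrapped as <|role|>\n...<|end|>\n, and a trailing
    <|assistant|>\n tag asks the model to generate the next turn.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    parts.append("<|assistant|>\n")
    return "".join(parts)

# Hypothetical parser-style query (not taken from this model's training data):
prompt = build_phi3_prompt([{"role": "user", "content": "Parse: AAPL price today"}])
```

This string-building is only for illustrating the format; the tokenizer's built-in chat template should be preferred in real use.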
qsdcfqsdfcxqfqs/Asheville-police-seek-publics-help-to-identify-person-of-interest-in-library-assault-3d-updated
qsdcfqsdfcxqfqs
"2024-07-03T00:58:22Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:57:07Z"
--- language: - en ---
ASHEVILLE, N.C. (WLOS) -- The Asheville Police Department (APD) is asking for the public's help in identifying a person of interest in the West Asheville Library Assault investigation. On Saturday, June 29, three people, two of them Jewish, attended a seminar hosted by "Another Carolina Anarchist Bookfair." The seminar was called "Strategic Lessons from the Palestinian Resistance." The victims, David Moritz, Monica Buckley and Bob Campbell all said that they were attacked after Buckley began to live stream the event on her phone. All three suffered minor injuries. ASHEVILLE POLICE INVESTIGATE REPORTED ASSAULT WHERE 3 VICTIMS REPORT MINOR INJURIES "We're taking this very seriously at the Asheville Police Department," Capt. Joe Silberman said to News 13 on Tuesday, July 2. Silberman added that they have continued to look into the incident and review video footage since it occurred.
He described this as a high-profile investigation and said that more charges could be coming. When asked if they would describe this as a hate crime, Silberman said that the department is taking the investigation very seriously and has a detective in the major case unit doing an inquiry into it. He added that depending on what physical evidence is discovered, they will then decide the most appropriate charges. "This is going to take some time," Silberman said. "There's a number of different people we need to interview and there's a lot of different footage to go over." He said that they need to review body camera footage to see if there is anything that officers missed. "Right now, this is an ongoing investigation and like in many of our investigations, we're soliciting the public's help," he said. APD previously arrested Taylor Danielle Zarkin, 35, and charged her with two counts of resisting, delay and obstruct. POLICE: 1 ARRESTED AFTER REPORTED ASSAULT AT 'PALESTINIAN RESISTANCE' EVENT On Tuesday, July 2, Buncombe County released a statement regarding the incident. The county said that an outside group reserved a meeting room at the West Asheville Library under the name of "Another Carolina Anarchist Bookfair" for an unspecified workshop. The county said that this was not a county-sponsored event. They added that during the event, a county librarian who was not in the meeting room was alerted to a disturbance and called 911. The county is aware that the person arrested, Zarkin, listed Asheville Public Library System as their employer, but clarified that this person has never been a Buncombe County employee. The release added that the library system makes meeting rooms available to groups, and that policies are not intended to control the content of the programs or events held in the meeting room. But the release said that use of a meeting room does not constitute an endorsement of the program or organization by the library or Buncombe County.
"Buncombe County Public Libraries do not condone violence or hate crimes in any way, and leadership is actively working with the Asheville Police Department," the release stated. Anyone with information about the incident or the person of interest is asked to reach out to APD by texting TIP2APD to 847411, using the TIP2APD smartphone app or calling 828-252-1110.
ncoskun/ner-convbert-specialtokens70epoch
ncoskun
"2024-07-03T00:57:35Z"
0
0
transformers
[ "transformers", "safetensors", "convbert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-07-03T00:57:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
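The tags identify this as a ConvBERT token-classification model, but the card's usage section is still a placeholder. As a hypothetical illustration (the label set and example sentence are invented, not taken from this model), here is the BIO-to-entity post-processing that a token-classification head's per-token output typically needs:

```python
def bio_to_entities(tokens, labels):
    """Collapse per-token BIO labels into (entity_type, entity_text) spans."""
    entities, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):                 # a new entity begins
            if current:
                entities.append(current)
            current = (lab[2:], [tok])
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            current[1].append(tok)               # continuation of the same entity
        else:                                    # "O" label or inconsistent I- tag
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

# Invented example; the model's real label names may differ:
ents = bio_to_entities(
    ["Angela", "Merkel", "visited", "Paris"],
    ["B-PER", "I-PER", "O", "B-LOC"],
)
```

With a real checkpoint, the `transformers` token-classification pipeline performs an equivalent aggregation internally.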
RichardErkhov/Delcos_-_Velara-11B-v3-gguf
RichardErkhov
"2024-07-03T01:21:04Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T00:59:03Z"
Entry not found
qsdcfqsdfcxqfqs/Salinas-Valley-News-Briefs-July-2-2024-The-King-City-Rustler-Your-Local-News-Source-cf-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:01:04Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T00:59:47Z"
--- language: - en ---
MONTEREY COUNTY -- The Housing Authority of the County of Monterey (HACM) opened the wait list for the Housing Choice Voucher Program on July 1. The wait list is the primary way for families and individuals to have an opportunity to receive a Housing Choice Voucher. The wait list application process will last two weeks, closing on July 12. Once the wait list application process closes, HACM will run a lottery to place the first 5,000 applicants on the wait list. Being placed on the wait list does not guarantee applicants will be eligible for the program.
Eligibility will be determined based upon HUD Income Limits (75% of families selected from the wait list must be at 50% of Area Median Income for the family size) at the time of selection from the waiting list. HACM also determines program eligibility based upon criminal background checks and previous program compliance. To apply, go to apply.hamonterey.org. For more information about the program, call 831-775-5000 or email [email protected]. MONTEREY COUNTY -- All branches of the Monterey County Free Libraries will be closed on Thursday, July 4, in honor of Independence Day. However, online resources and downloadable ebooks and audiobooks are available 24/7. Ebooks and audiobooks are available through Libby and the Palace Project Library. Go to emcfl.org for more information. MONTEREY COUNTY -- Monterey County Supervisor Chris Lopez has announced the fourth annual "We Are Southern Monterey County" Painting and Photography Competition for all artists in Monterey County. All painting, drawing and photography mediums are welcome. Artwork may be any style and should reflect the pride, beauty and spirit of Southern Monterey County. Along with various prizes, the winner's art will be displayed in Lopez's District 3 office in Greenfield for one year. Up to three entries may be submitted, and are due by Friday, July 5. Submissions must be of high resolution and can be emailed to [email protected] or dropped off at the District 3 office at 599 El Camino Real in Greenfield. The District 3 Open House, which will be announced at a later date, will recognize this year's winners as well as provide an opportunity to thank last year's winners for allowing the District 3 office to display their art.
aoome123/my_model
aoome123
"2024-07-03T01:28:08Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-07-03T01:00:06Z"
--- license: apache-2.0 base_model: google/mt5-small tags: - generated_from_trainer model-index: - name: my_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_model This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.42.3 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
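The card lists a linear `lr_scheduler_type` with a base rate of 2e-5; assuming zero warmup steps (none are listed), the HF Trainer's schedule simply ramps the rate down to zero over the run. A small sketch of that decay:

```python
def linear_lr(step, total_steps, base_lr=2e-5, warmup_steps=0):
    """Linear schedule as used by the HF Trainer: ramp up over warmup_steps,
    then decay linearly to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# With no warmup, the rate starts at 2e-5, halves midway, and reaches zero:
rates = [linear_lr(s, total_steps=1000) for s in (0, 500, 1000)]
```

The `total_steps=1000` figure is illustrative; the real count depends on dataset size, batch size 4, and the 4 epochs listed above.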
qsdcfqsdfcxqfqs/Euro-2024-Schedule-and-stadiums-of-the-quarterfinal-matches-3d-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:02:00Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:00:44Z"
--- language: - en ---
This website is maintained and managed by KosovaPress News Agency. KosovaPress holds the reserved copyright rights according to the legal provisions on copyright and intellectual property. Use, modification and distribution for commercial purposes without agreement with KosovaPress is strictly prohibited. This website application is developed with the support of #SustainMediaProgramme, co-financed by the European Union and the German Government, the part implemented by GIZ, DW Akademie and Internews. Its content is the sole responsibility of KosovaPress and does not necessarily reflect the views of the EU or the German Government.
miamia333/lora-trained-xl
miamia333
"2024-07-03T01:00:50Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:00:50Z"
Entry not found
mradermacher/Viking-33B-GGUF
mradermacher
"2024-07-03T01:29:28Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T01:02:21Z"
--- base_model: LumiOpen/Viking-33B datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - mc4 language: - fi - en - da - sv - no - nn - is library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/LumiOpen/Viking-33B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q2_K.gguf) | Q2_K | 12.7 | | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.IQ3_XS.gguf) | IQ3_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q3_K_S.gguf) | Q3_K_S | 14.7 | | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.IQ3_S.gguf) | IQ3_S | 14.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.IQ3_M.gguf) | IQ3_M | 15.3 | | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q3_K_M.gguf) | Q3_K_M | 16.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q3_K_L.gguf) | Q3_K_L | 17.7 | | | 
[GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.IQ4_XS.gguf) | IQ4_XS | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q4_K_S.gguf) | Q4_K_S | 19.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q4_K_M.gguf) | Q4_K_M | 20.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q5_K_S.gguf) | Q5_K_S | 23.0 | | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q5_K_M.gguf) | Q5_K_M | 23.6 | | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q6_K.gguf) | Q6_K | 27.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Viking-33B-GGUF/resolve/main/Viking-33B.Q8_0.gguf) | Q8_0 | 35.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
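The table above orders quants by file size rather than quality. As an illustrative helper (not part of the card), one might pick the largest listed file that fits a given memory budget, leaving some headroom for the KV cache and runtime overhead:

```python
# (quant name, file size in GB) as listed in the table above
QUANTS = [
    ("Q2_K", 12.7), ("IQ3_XS", 14.0), ("Q3_K_S", 14.7), ("IQ3_S", 14.8),
    ("IQ3_M", 15.3), ("Q3_K_M", 16.3), ("Q3_K_L", 17.7), ("IQ4_XS", 18.2),
    ("Q4_K_S", 19.1), ("Q4_K_M", 20.1), ("Q5_K_S", 23.0), ("Q5_K_M", 23.6),
    ("Q6_K", 27.3), ("Q8_0", 35.3),
]

def best_quant(ram_gb, headroom_gb=2.0):
    """Largest quant whose file fits in ram_gb minus headroom for KV cache."""
    budget = ram_gb - headroom_gb
    fitting = [q for q in QUANTS if q[1] <= budget]
    return max(fitting, key=lambda q: q[1]) if fitting else None

choice = best_quant(24.0)  # e.g. a 24 GB GPU
```

The 2 GB headroom figure is a rough assumption; actual memory use varies with context length and offload settings.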
qsdcfqsdfcxqfqs/Cheers-greet-Boris-Johnsons-appearance-at-Conservative-campaign-event-dg-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:04:45Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:03:31Z"
--- language: - en ---
Boris Johnson was greeted with applause and cheers when he joined one of the final events of the Conservative Party's General Election campaign. The former prime minister has been largely absent from the campaign, although supportive in his newspaper column, and has been writing letters of endorsement and backing a number of Tories in social media posts. As he stepped onto the stage at the National Army Museum on Tuesday, less than 48 hours before polls open, Mr Johnson was greeted by applause and chants of "Boris, Boris, Boris". 
He thanked attendees at the late evening event for "coming so late, way past Keir Starmer's bedtime" in a dig at the Labour leader's comment that he largely avoids working after 6pm on Fridays to spend time with his family. Mr Johnson added: "When Rishi asked me to come and help of course I couldn't say no. "We're all here because we love our country." Mr Johnson told the audience a Labour government would increase taxes and would not stand up to Vladimir Putin. "They will scrap the Rwanda plan," he said before describing Labour MPs as "Kremlin crawlers". Mr Johnson, who led the Tories to a landslide victory in 2019 against a Labour Party led by Jeremy Corbyn, told cheering crowds: "They can achieve nothing in this election except to usher in the most left-wing Labour government since the war with a huge majority, and we must not let it happen. "Don't let the Putinistas deliver the Corbynistas. Don't let Putin's pet parrots give this entire country psittacosis - which is a disease you get by the way from cosying up to pet parrots. "Friends, if you actually - everybody if you actually want higher taxes next week, this year, if you feel you've got a few thousands to spare, then vote Labour on Thursday. If you want uncontrolled immigration and mandatory wokery, and pointless kowtowing to Brussels again, then go right ahead, make my day, vote for Starmer. "But if you want to protect our democracy and our economy and keep this country strong abroad by spending 2.5% of our GDP on defence which Labour still refuses to commit to, then you know what to do, don't you, everybody? "There's only one thing to do - vote Conservative on Thursday my friends and I know you will. I know you will." Liberal Democrat deputy leader Daisy Cooper described Mr Johnson's appearance as "a desperate new low" for Mr Sunak's campaign. "This is an insult to everyone who made heart-breaking sacrifices during the pandemic," she said. 
"Rishi Sunak has reached a desperate new low, turning to a man who discredited the office of prime minister and lied to the country time after time. "It is time to boot out this tired and sleaze-ridden Conservative party, and elect Liberal Democrat MPs who will stand up for their communities."
qsdcfqsdfcxqfqs/Why-Trumps-Sentencing-In-Hush-Money-Case-Was-Postponed-13-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:05:18Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:04:02Z"
--- language: - en ---
In May, a jury found Trump guilty of 34 felony counts of falsifying business records to hide payments made to former adult film star Stormy Daniels. The delay in Donald Trump's sentencing is seen as a win by his camp as he is expected to be officially named as the GOP nominee for the upcoming presidential election later this month. According to reports, Trump's sentencing in his hush money case has been delayed after his legal team filed a request following the Supreme Court's decision on presidential immunity. The billionaire mogul was initially scheduled to be sentenced on July 11 after being found guilty of all 34 charges brought against him. 
After the Supreme Court's shocking ruling on the immunity of presidents and former presidents, Trump's team immediately filed for a delay, which the prosecution did not oppose. However, they noted that the defense's argument for a delay was without merit. This development is a major win for Trump and his supporters as it means the upcoming Republican National Convention, where he will be named the official GOP presidential candidate for the 2024 election, will proceed without a hitch. On social media, several have voiced concerns regarding the postponement of Trump's sentencing. The Supreme Court's ruling on presidential immunity has especially caused many to wonder if it would in any way prevent Trump from being sentenced come September. The Supreme Court ruled that Trump can not be prosecuted for actions or decisions taken within his constitutional powers as a U.S. president. "It doesn't void these convictions. The conspiracy started before he could take official acts, they specifically named Cohen his personal attorney and the purpose was to hide it from voters," one person noted on X. Another said, "It's funny that some MAGAs think signing personal checks in the Oval Office to reimburse your lawyer, who paid off a porn star Trump screwed, is somehow considered an 'official act.'" One more person commented, "Every day, I become more and more convinced that he will never actually face any consequences." Trump was very vocal about his historic guilty verdict in his criminal trial, branding it a "rigged decision." During a chat with the press outside the courtroom, he slammed the Biden administration, claiming they were the cause of his legal battles. "Our whole country is being rigged right now," Trump said, per The Blast. "This was done by the Biden administration in order to wound, to hurt an opponent. A political opponent." 
He continued, "And I think it's just a disgrace, and we'll keep fighting, we'll fight till the end, and we'll win because our country has gone to hell. We don't have the same country anymore, we have a divided mess." In his rant outside the courtroom, Trump further claimed that America is in "decline" due to "terrorists" allegedly taking over the country. He said, "We are a nation in decline, serious decline of people pouring into our country right now. From prisons and from mental institutions. Terrorists and they're taking over our country. We have a country that's in big trouble, but this was a rigged decision right from day one, with a conflicted judge who should have never been allowed to try this case. Never!" "We will fight for our constitution. This is far from over. Thank you very much," Trump noted, concluding his statement. During an interview on Fox, the former president revealed that his New York hush money trial has been stressful for his wife, Melania, who was notably absent from court throughout Trump's trials. Speaking about the effect of his trial on the former first lady, Trump said, "She's fine, but I think it's very hard for her," adding that "in many ways, it's tougher on them [his family] than it is on me." The billionaire politician also told the news outlet that he has no issues going to jail, but he is "not sure the public would stand for it." "I think it would be tough for the public to take," he said, per BBC. "You know, at a certain point, there's a breaking point."
KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-130hr-v2
KasuleTrevor
"2024-07-03T01:04:38Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-03T01:04:35Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/premai-io_-_prem-1B-gguf
RichardErkhov
"2024-07-03T01:16:43Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T01:05:56Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) prem-1B - GGUF - Model creator: https://huggingface.co/premai-io/ - Original model: https://huggingface.co/premai-io/prem-1B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [prem-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q2_K.gguf) | Q2_K | 0.4GB | | [prem-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [prem-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.IQ3_S.gguf) | IQ3_S | 0.47GB | | [prem-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [prem-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.IQ3_M.gguf) | IQ3_M | 0.48GB | | [prem-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q3_K.gguf) | Q3_K | 0.51GB | | [prem-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [prem-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [prem-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [prem-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q4_0.gguf) | Q4_0 | 0.59GB | | [prem-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [prem-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | 
[prem-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q4_K.gguf) | Q4_K | 0.62GB | | [prem-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [prem-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q4_1.gguf) | Q4_1 | 0.65GB | | [prem-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q5_0.gguf) | Q5_0 | 0.71GB | | [prem-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [prem-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q5_K.gguf) | Q5_K | 0.73GB | | [prem-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [prem-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q5_1.gguf) | Q5_1 | 0.77GB | | [prem-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q6_K.gguf) | Q6_K | 0.84GB | | [prem-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/premai-io_-_prem-1B-gguf/blob/main/prem-1B.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- library_name: transformers license: apache-2.0 datasets: - cerebras/SlimPajama-627B - HuggingFaceH4/ultrachat_200k - hkust-nlp/deita-10k-v0 - Open-Orca/SlimOrca-Dedup - cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split - HuggingFaceH4/capybara - meta-math/MetaMathQA - argilla/ultrafeedback-binarized-preferences-cleaned - Intel/orca_dpo_pairs - alexredna/oasst2_dpo_pairs pipeline_tag: text-generation --- ## Model Details With great enthusiasm, we unveil the Prem-1B series, open-source, multipurpose large language models developed by Prem AI. 
This cutting-edge SLM offers the open community and enterprises the opportunity to harness capabilities that were once exclusively available through closed model APIs, empowering them to build their own advanced language models. Our objective is to develop a model that excels at Retrieval-Augmented Generation (RAG). While Large Language Models (LLMs) store a vast amount of information within their parameters, RAG operates differently by ingesting information during runtime. This approach suggests that for RAG applications, we may not require models of immense size. With this initiative, we aim to create a Small Language Model (SLM) with an extended context length of 8192 tokens, enabling it to handle multi-turn conversations effectively. This endeavor represents our inaugural attempt to craft an SLM tailored for RAG tasks. ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** https://premai.io/ - **Model type:** Llama - **Language(s) (NLP):** Python - **License:** Apache License 2.0 ## Uses The Prem-1B language model is designed for commercial and research applications involving the English language. The instruction-tuned versions of the model are tailored for conversational interactions akin to a virtual assistant. On the other hand, the pretrained variants can be fine-tuned and adapted for various natural language generation tasks beyond just dialogue. ### Out-of-Scope Use The model must not be used in any manner that violates applicable laws or regulations, including trade compliance laws. It is also prohibited to use the model in any way that goes against the Acceptable Use Policy and the Prem-1B Community License. 
While the base model is intended for English language use, developers are permitted to fine-tune the Prem-1B models for other languages, provided they comply with the Prem-1B Community License and the Acceptable Use Policy. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Using `AutoModelForCausalLM` and `AutoTokenizer` ```py import torch from transformers import AutoTokenizer, AutoModelForCausalLM # Load the model and tokenizer tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B-chat") model = AutoModelForCausalLM.from_pretrained('premai-io/prem-1B-chat', torch_dtype=torch.bfloat16) model = model.to('cuda') # Setup terminators terminators = [tokenizer.eos_token_id, tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]] # Prepare the prompt messages = [ { "role": "system", "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions." }, { 'role': 'user', 'content': 'Help me understand machine learning.' 
} ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) # Generate inputs = tokenizer(prompt, return_attention_mask=False, return_tensors="pt", add_special_tokens=False) input_ids = inputs['input_ids'] input_ids = input_ids.to(model.device) res = model.generate(input_ids=input_ids, max_new_tokens=400, pad_token_id=tokenizer.pad_token_id, eos_token_id=terminators) generated_text = tokenizer.decode(res[0][input_ids.shape[1]:], skip_special_tokens=True).strip() print(generated_text) ``` Using pipelines: ```py import torch from transformers import pipeline # Load the pipeline pipe = pipeline("text-generation", model="premai-io/prem-1B-chat", torch_dtype=torch.bfloat16, device=0) # Prepare prompt messages = [ { "role": "system", "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions." }, { 'role': 'user', 'content': 'Help me understand machine learning.' 
} ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) # Setup terminators terminators = [pipe.tokenizer.eos_token_id, pipe.tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]] # Generate outputs = pipe(prompt, max_new_tokens=400, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=pipe.tokenizer.pad_token_id, eos_token_id=terminators) print(outputs[0]["generated_text"][len(prompt):]) ``` ## Training Details ### Training Data Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/ ### Training Procedure Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/ #### Training Hyperparameters Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/ ## Evaluation ### Results |Model |Avg |Arc-c|Arc-e|Hellaswag|MMLU |Obqa |Piqa |Winogrande| |------------------------|-----|-----|-----|---------|-----|-----|-----|----------| |prem-1B |42.64|24.74|57.40|42.01 |24.75|21.00|72.14|56.43 | |prem-1B-chat |41.76|24.48|53.32|40.28 |25.27|22.20|70.89|55.88 | |TinyLlama-1.1B-Chat-v1.0|46.16|30.03|61.53|46.56 |24.72|25.80|74.21|60.29 | |opt-1.3b |42.94|23.37|57.44|41.49 |24.86|23.20|71.49|58.72 | |pythia-1b |40.71|24.31|56.90|37.72 |23.20|18.80|70.62|53.43 | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f440d8f79c1ba4c353d0f6d/PqscXKPvnwvymNxqYAxjR.png) ## Environmental Impact - **Hardware Type:** H100 GPUs - **Hours used:** 8500 ### Model Architecture and Objective Llama based ### Compute Infrastructure 16-H100 GPUs #### Hardware H100 GPUs #### Software PyTorch, transformers, PyTorch Lightning ## Citation https://blog.premai.io/introducing-prem-1b/ ## Model Card Authors https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz ## Model Card Contact https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz
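The quant table above can be sanity-checked by converting each file size into an approximate bits-per-weight figure. A minimal sketch, assuming prem-1B has roughly 1.1e9 parameters (the card only describes a "1B series" model, so the exact count is an assumption):

```python
# Rough bits-per-weight estimate for the GGUF quants listed above.
# PARAMS is an assumption: the card calls this a "1B series" model,
# so ~1.1e9 parameters is a guess, not a documented figure.
PARAMS = 1.1e9

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a GGUF file size in GB to approximate bits per parameter."""
    return size_gb * 1e9 * 8 / params

# Sizes taken from the table above (GB).
for name, gb in [("Q2_K", 0.4), ("Q4_K_M", 0.62), ("Q8_0", 1.09)]:
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

Under this assumption Q8_0 comes out near 8 bits per weight as its name suggests, while the K-quants land somewhat above their nominal bit width because each block also stores scale metadata.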
qsdcfqsdfcxqfqs/Pregnant-pause-Elite-athletes-challenge-norms-and-perceptions-when-4e-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:10:07Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:08:54Z"
--- language: - en ---
When you're an elite athlete, the stakes are high. But add pregnancy to the mix and the challenges can rise, not only from mixed messages about the safety of training while pregnant, but also from a lack of support from coaches, health practitioners, and governing bodies. Now new research from the University of South Australia has found that contrary to common concerns, elite athletes often report fewer pregnancy-related complaints (compared to non-athletes) and often displayed improved athletic performance after giving birth. Specifically, the study found that elite female athletes: Lead researcher and UniSA Masters student Brooke McGregor says the findings challenge prevailing notions about the impact of motherhood on elite athletic performance. 
"For a long time, female athletes have felt unsure about how pregnancy and elite sports can coexist. They've heard mixed messages about safety and training and have felt pressure to conform to societal norms of motherhood. As a result, many had chosen to end their sporting careers," McGregor says. "But with prominent athletes such as Matildas football player Katrina Gorry proving that motherhood and elite sports can thrive together, it's time to revisit that status-quo. "In this research, we focussed on understanding the experiences of elite athletes during pregnancy and motherhood and identified any key gaps for future research to explore." "The good news is that elite athletes not only reported fewer pregnancy-related complaints than non-athletes, but that they also had similar or improved athletic performance levels post-pregnancy. "Elite athletes also prioritised quality over quantity in their training, which led to improved performance and reduced the risk of overtraining. Furthermore, many saw motherhood as a significantly positive experience that improved overall well-being and performance." Understanding the experiences of elite athletes during pregnancy and motherhood can inform the development of policies and practices within sports organisations, governing bodies, sponsors and coaches.
jamesbehrmann/aeroai
jamesbehrmann
"2024-07-03T01:08:54Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-07-03T01:08:54Z"
--- license: mit ---
codegenning/generator-sc2-15b-cot
codegenning
"2024-07-03T01:12:06Z"
0
0
transformers
[ "transformers", "safetensors", "starcoder2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-07-03T01:10:08Z"
Invalid username or password.
RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf
RichardErkhov
"2024-07-03T01:29:58Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T01:10:53Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Chunky-Lemon-Cookie-11B - GGUF - Model creator: https://huggingface.co/FallenMerick/ - Original model: https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Chunky-Lemon-Cookie-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q2_K.gguf) | Q2_K | 3.73GB | | [Chunky-Lemon-Cookie-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.IQ3_XS.gguf) | IQ3_XS | 4.14GB | | [Chunky-Lemon-Cookie-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.IQ3_S.gguf) | IQ3_S | 4.37GB | | [Chunky-Lemon-Cookie-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q3_K_S.gguf) | Q3_K_S | 4.34GB | | [Chunky-Lemon-Cookie-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.IQ3_M.gguf) | IQ3_M | 4.51GB | | [Chunky-Lemon-Cookie-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q3_K.gguf) | Q3_K | 4.84GB | | [Chunky-Lemon-Cookie-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q3_K_M.gguf) | Q3_K_M | 4.84GB | | [Chunky-Lemon-Cookie-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q3_K_L.gguf) | Q3_K_L | 5.26GB | | 
[Chunky-Lemon-Cookie-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.IQ4_XS.gguf) | IQ4_XS | 5.43GB | | [Chunky-Lemon-Cookie-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q4_0.gguf) | Q4_0 | 5.66GB | | [Chunky-Lemon-Cookie-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.IQ4_NL.gguf) | IQ4_NL | 5.72GB | | [Chunky-Lemon-Cookie-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q4_K_S.gguf) | Q4_K_S | 5.7GB | | [Chunky-Lemon-Cookie-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q4_K.gguf) | Q4_K | 6.02GB | | [Chunky-Lemon-Cookie-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q4_K_M.gguf) | Q4_K_M | 6.02GB | | [Chunky-Lemon-Cookie-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q4_1.gguf) | Q4_1 | 6.27GB | | [Chunky-Lemon-Cookie-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q5_0.gguf) | Q5_0 | 6.89GB | | [Chunky-Lemon-Cookie-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q5_K_S.gguf) | Q5_K_S | 6.89GB | | [Chunky-Lemon-Cookie-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q5_K.gguf) | Q5_K | 7.08GB | | 
[Chunky-Lemon-Cookie-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q5_K_M.gguf) | Q5_K_M | 7.08GB | | [Chunky-Lemon-Cookie-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q5_1.gguf) | Q5_1 | 7.51GB | | [Chunky-Lemon-Cookie-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q6_K.gguf) | Q6_K | 8.2GB | | [Chunky-Lemon-Cookie-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FallenMerick_-_Chunky-Lemon-Cookie-11B-gguf/blob/main/Chunky-Lemon-Cookie-11B.Q8_0.gguf) | Q8_0 | 10.62GB | Original model description: --- license: cc-by-4.0 language: - en base_model: - mistralai/Mistral-7B-v0.1 - SanjiWatsuki/Kunoichi-7B - SanjiWatsuki/Silicon-Maid-7B - KatyTheCutie/LemonadeRP-4.5.3 - Sao10K/Fimbulvetr-11B-v2 library_name: transformers tags: - mergekit - merge - mistral - text-generation - roleplay model-index: - name: Smart-Lemon-Cookie-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: 
acc value: 65.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard --- ![cute](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B/resolve/main/Chunky-Lemon-Cookie.png) # Chunky-Lemon-Cookie-11B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). 
GGUF quants: * https://huggingface.co/backyardai/Chunky-Lemon-Cookie-11B-GGUF * https://huggingface.co/mradermacher/Chunky-Lemon-Cookie-11B-GGUF ## Merge Details ### Merge Method This model was merged using the following methods: * passthrough * [task arithmetic](https://arxiv.org/abs/2212.04089) ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) * [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) * [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Configuration The following YAML configurations were used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [8, 32] merge_method: passthrough dtype: float16 name: Mistral-11B --- slices: - sources: - model: SanjiWatsuki/Kunoichi-7B layer_range: [0, 24] - sources: - model: SanjiWatsuki/Silicon-Maid-7B layer_range: [8, 24] - sources: - model: KatyTheCutie/LemonadeRP-4.5.3 layer_range: [24, 32] merge_method: passthrough dtype: float16 name: Big-Lemon-Cookie-11B --- models: - model: Big-Lemon-Cookie-11B parameters: weight: 0.85 - model: Sao10K/Fimbulvetr-11B-v2 parameters: weight: 0.15 merge_method: task_arithmetic base_model: Mistral-11B dtype: float16 name: Chunky-Lemon-Cookie-11B ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FallenMerick__Chunky-Lemon-Cookie-11B) | Metric |Value| |---------------------------------|----:| |Avg. 
|70.23| |AI2 Reasoning Challenge (25-Shot)|69.62| |HellaSwag (10-Shot) |86.55| |MMLU (5-Shot) |65.35| |TruthfulQA (0-shot) |61.59| |Winogrande (5-shot) |79.79| |GSM8k (5-shot) |58.45|
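The `task_arithmetic` step in the final YAML block combines the contributing models as a weighted sum of "task vectors" (each model's parameter delta from the base model) added back onto the base weights. A toy NumPy sketch of that arithmetic is below; the real merge runs on PyTorch state dicts inside mergekit, so the function name and dict-of-arrays representation here are illustrative only:

```python
import numpy as np

def task_arithmetic_merge(base_sd, model_sds, weights):
    """Merge parameter dicts by adding weighted task vectors
    (deltas from the base model) back onto the base weights."""
    merged = {}
    for name, base_param in base_sd.items():
        delta = np.zeros_like(base_param, dtype=np.float64)
        for sd, w in zip(model_sds, weights):
            # task vector = fine-tuned weights minus base weights
            delta += w * (sd[name] - base_param)
        merged[name] = base_param + delta
    return merged

# Mirrors the config above: 0.85 * Big-Lemon-Cookie-11B plus
# 0.15 * Fimbulvetr-11B-v2, both taken as deltas from the
# Mistral-11B passthrough base.
```

Because the weights scale deltas rather than raw parameters, task arithmetic lets the dominant 0.85 roleplay stack set the model's character while the 0.15 Fimbulvetr contribution nudges it, without drifting far from the shared base.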
Arvnd013/Crn
Arvnd013
"2024-07-03T01:11:01Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-07-03T01:11:01Z"
--- license: mit ---
qsy71/none_quantization_medical_LLaMA3-8B-Chinese-Chat
qsy71
"2024-07-03T01:11:27Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:11:27Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
srmeier/sft_openassistant-guanaco
srmeier
"2024-07-03T01:11:33Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:11:33Z"
Entry not found
qsdcfqsdfcxqfqs/Shell-Pulls-Back-on-Big-Biofuel-Plant-gd-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:12:56Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:11:43Z"
--- language: - en --- Sign up for smart news, insights, and analysis on the biggest financial stories of the day. Shell has lost its fondness for biofuels, though it isn't saying why. We think it's because the market really stinks, though it's still an important one. The oil giant announced on Tuesday that it's hitting pause on a major biofuel project in the Netherlands, one of the biggest in Europe. 
Shell said it's just taking a minute to think about the plant, which broke ground in 2021, in order to "address project delivery and ensure future competitiveness given current market conditions." Exactly what market conditions it means, it's being coy about. When Shell first signed off on the Rotterdam biofuel facility, it said the project would eventually produce enough renewable diesel equivalent to taking 1 million cars off Europe's roads every year. More than half of the Rotterdam plant's production was earmarked for sustainable aviation fuel, with the rest planned to make renewable diesel -- although Shell noted at the time it could rejig that equation to meet customer demand. But the Shell of 2024 is not quite as bullish on renewable energy as three years ago. In June 2023, it reversed a promise to steadily cut oil production in the run-up to 2030, and in March it announced a watering-down of its energy transition goals. Both the Financial Times and Wall Street Journal reported that Shell hitting the brakes coincides with a slowdown in Europe's biofuel market owing to a supply glut versus tepid demand: Geography Puzzle: Part of the soft demand for biodiesel in the US and Europe is down to the growing number of electric vehicles on the road, according to the International Energy Agency. "In the United States and Europe, large-scale electric vehicle growth contributes to declining gasoline demand," per the IEA's biofuel report. Globally, however, the IEA says biofuel has a pretty big role to play in the energy transition. "Biofuels remain the primary decarbonisation option, accounting for near 90% of avoided oil demand in 2028," it says, adding the biggest demand comes from Brazil, Indonesia, and India.....
IOAI2024colombia/first_model
IOAI2024colombia
"2024-07-03T01:13:06Z"
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-07-03T01:11:48Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_Anthropic_HH_Golden-processed_randomSub0
Nutanix
"2024-07-03T01:12:25Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-03T01:11:58Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qsdcfqsdfcxqfqs/Its-a-very-strange-time-to-be-Bernie-Sanders-3e-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:13:18Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:12:05Z"
--- language: - en --- [![Build Status](https://www.capitalgazette.com/wp-content/uploads/2024/07/Biden_Excessive_Heat_80776.jpg?w=640)]() read the full article here : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_4451414523&Connector=https://unitedstatednews.com Source : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us4125114145&Connector=https://unitedstatednews.com Flash News : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us1531335313&Connector=https://unitedstatednews.com Biden last Talk : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_2343444552&Connector=https://unitedstatednews.com Russian Ukrain Breaking News : https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=hackear_cuentasnuevo1_5224144435&Connector=https://unitedstatednews.com Other Sources : https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us1552332232&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us1552332232&Connector=https://unitedstatednews.com https://dev.uc.apps.uri.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account_us4424154233&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us1254415432&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us5214555453&Connector=https://unitedstatednews.com https://cms.nae.edu/fckeditor/editor/filemanager/browser/default/browser.html?id=howtohack_account1_us5243553121&Connector=https://unitedstatednews.com 
LA CROSSE, Wis. - Bernie Sanders arrived here on Saturday morning to talk hope into nervous Democrats. The Biden-Trump debate was "distressing in a number of respects," he admitted. Donald Trump was a "pathological liar," and Democrats needed to call him out on his record. The president did "not have a good night," but Americans weren't voting on "who's the best dancer, who's the best songwriter." When the Vermont senator wrapped up, Eric Leitzen lifted the sign he'd made for the event: RUN BERNIE RUN. "It'd be a heck of a curveball to throw at Trump," explained Leitzen, 39, a candidate for the Minnesota state House. "Look at Bernie up there! He doesn't sound anything like [Joe] Biden sounded at that debate." Politically, it was an odd moment for Sanders. Biden was the moderate candidate who beat him for the 2020 nomination by winning over Democrats on electability grounds. 
This time, Sanders was the one making the case that Biden was both worth supporting and able to win -- even as many of the same pragmatic pundits who opposed his own 2020 run were demanding the president drop out. The president's shaky debate performance has terrified Democrats. Some wondered how he could bounce back from an off night. Others argued that he had already started to. Some started to talk about replacing him on the ticket. For Sanders-style progressives, who lost the 2020 primary to Biden and grew restless over Israel's war in Gaza, the crisis had no bottom. A portion were already on the fence about voting for him; the Wisconsin visit was about convincing them not to stay home or turn to a protest candidate. Now they were figuring out how to incorporate questions about Biden's health and competence into their calculation. Sanders, who shepherded some of Biden's most popular healthcare policies through the Senate, had made the case that Biden's domestic record was strong. In Eau Claire, he politely brushed off a Gaza ceasefire activist who asked him to rescind his endorsement. Our Revolution, the organizing group founded by Sanders after his 2016 campaign, conducted a quick member poll after the debate and found two-thirds of them demanding a new nominee. Left-leaning groups created to pressure the president on Gaza were now calling on him to quit. "Far too many have been killed under the watchful guise of a man who cannot remember what he has and hasn't seen, while requiring a teleprompter to form coherent sentences," declared the Abandon Biden campaign, which until the debate had been urging voters to reject the Democratic ticket in protest of the war.....
lampardrodgers/test
lampardrodgers
"2024-07-03T01:13:32Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:13:32Z"
Entry not found
ywangmy/sft-l38i-full-1e-6-256-4096-0.03-web10k
ywangmy
"2024-07-03T01:13:55Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:13:55Z"
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: sft-l38i-full-1e-6-256-4096-0.03-web10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft-l38i-full-1e-6-256-4096-0.03-web10k This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the webinstruct-10k dataset. It achieves the following results on the evaluation set: - Loss: 0.8176 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
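The listed total train batch size of 256 follows from the per-device batch size, the gradient-accumulation steps, and the device count; a quick arithmetic sanity check of the hyperparameters above:

```python
# Sanity check: effective (total) train batch size from the listed hyperparameters.
per_device_batch_size = 2   # train_batch_size
gradient_accumulation = 32  # gradient_accumulation_steps
num_devices = 4             # multi-GPU setup

total_train_batch_size = per_device_batch_size * gradient_accumulation * num_devices
print(total_train_batch_size)  # 256, matching the reported total_train_batch_size
```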
lordspline/phi-pruned
lordspline
"2024-07-03T01:16:00Z"
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-03T01:14:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
olileaf/Meta-Llama-3-8B-Q4_K_M-GGUF
olileaf
"2024-07-03T01:14:48Z"
0
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
text-generation
"2024-07-03T01:14:26Z"
--- base_model: meta-llama/Meta-Llama-3-8B language: - en license: llama3 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. 
You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display β€œBuilt with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include β€œLlama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a β€œNotice” text file distributed as a part of such copies: β€œMeta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright Β© Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. 
You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN β€œAS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. 
No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use β€œLlama 3” (the β€œMark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. 
Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (β€œPolicy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. 
Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. 
Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software β€œbug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? 
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # olileaf/Meta-Llama-3-8B-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo olileaf/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo olileaf/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. 
``` ./llama-cli --hf-repo olileaf/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo olileaf/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -c 2048 ```
wanghaikuan/qwen2-1.5b-chat-mul_01
wanghaikuan
"2024-07-03T01:31:14Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-07-03T01:15:20Z"
Entry not found
RichardErkhov/OEvortex_-_HelpingAI-110M-gguf
RichardErkhov
"2024-07-03T01:19:40Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-07-03T01:17:27Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) HelpingAI-110M - GGUF - Model creator: https://huggingface.co/OEvortex/ - Original model: https://huggingface.co/OEvortex/HelpingAI-110M/ | Name | Quant method | Size | | ---- | ---- | ---- | | [HelpingAI-110M.Q2_K.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q2_K.gguf) | Q2_K | 0.05GB | | [HelpingAI-110M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.IQ3_XS.gguf) | IQ3_XS | 0.05GB | | [HelpingAI-110M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.IQ3_S.gguf) | IQ3_S | 0.05GB | | [HelpingAI-110M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q3_K_S.gguf) | Q3_K_S | 0.05GB | | [HelpingAI-110M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.IQ3_M.gguf) | IQ3_M | 0.06GB | | [HelpingAI-110M.Q3_K.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q3_K.gguf) | Q3_K | 0.06GB | | [HelpingAI-110M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q3_K_M.gguf) | Q3_K_M | 0.06GB | | [HelpingAI-110M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q3_K_L.gguf) | Q3_K_L | 0.06GB | | [HelpingAI-110M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.IQ4_XS.gguf) | IQ4_XS | 0.06GB | | [HelpingAI-110M.Q4_0.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q4_0.gguf) | Q4_0 | 0.06GB | | 
[HelpingAI-110M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.IQ4_NL.gguf) | IQ4_NL | 0.06GB | | [HelpingAI-110M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q4_K_S.gguf) | Q4_K_S | 0.06GB | | [HelpingAI-110M.Q4_K.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q4_K.gguf) | Q4_K | 0.07GB | | [HelpingAI-110M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q4_K_M.gguf) | Q4_K_M | 0.07GB | | [HelpingAI-110M.Q4_1.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q4_1.gguf) | Q4_1 | 0.07GB | | [HelpingAI-110M.Q5_0.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q5_0.gguf) | Q5_0 | 0.07GB | | [HelpingAI-110M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q5_K_S.gguf) | Q5_K_S | 0.07GB | | [HelpingAI-110M.Q5_K.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q5_K.gguf) | Q5_K | 0.08GB | | [HelpingAI-110M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q5_K_M.gguf) | Q5_K_M | 0.08GB | | [HelpingAI-110M.Q5_1.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q5_1.gguf) | Q5_1 | 0.08GB | | [HelpingAI-110M.Q6_K.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q6_K.gguf) | Q6_K | 0.08GB | | [HelpingAI-110M.Q8_0.gguf](https://huggingface.co/RichardErkhov/OEvortex_-_HelpingAI-110M-gguf/blob/main/HelpingAI-110M.Q8_0.gguf) | Q8_0 | 0.11GB | Original model description: --- language: - en license: other library_name: transformers tags: - Text-Generation - Transformers - HelpingAI datasets: - 
OEvortex/vortex-mini metrics: - speed license_name: hsul license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md model-index: - name: HelpingAI-110M results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 22.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 28.02 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 23.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 48.25 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 51.62 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: 
gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M name: Open LLM Leaderboard --- 🌟 **HelpingAI-110M Model Card** 🌟 📊 **Datasets used:** - OEvortex/vortex-mini 🗣️ **Language:** - English (en) 🔒 **License:** HelpingAI Simplified Universal License (HSUL) 🧠 **Model Overview:** HelpingAI-110M is a very lightweight version of the HelpingAI model with 110M parameters. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OEvortex__HelpingAI-110M) | Metric |Value| |---------------------------------|----:| |Avg. |29.05| |AI2 Reasoning Challenge (25-Shot)|22.78| |HellaSwag (10-Shot) |28.02| |MMLU (5-Shot) |23.66| |TruthfulQA (0-shot) |48.25| |Winogrande (5-shot) |51.62| |GSM8k (5-shot) | 0.00|
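The "Avg." row in the leaderboard table above is simply the unweighted arithmetic mean of the six benchmark scores; a minimal sketch to verify (score names and values taken from the table, not from any official leaderboard script):

```python
# Open LLM Leaderboard per-task scores, copied from the table above.
scores = {
    "ARC (25-shot)": 22.78,
    "HellaSwag (10-shot)": 28.02,
    "MMLU (5-shot)": 23.66,
    "TruthfulQA (0-shot)": 48.25,
    "Winogrande (5-shot)": 51.62,
    "GSM8k (5-shot)": 0.00,
}

# The reported "Avg." is the unweighted mean over the six tasks,
# so the GSM8k zero pulls the average down noticeably.
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # close to the reported 29.05
```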
arnabdhar/gpt2-fineweb
arnabdhar
"2024-07-03T01:17:29Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:17:29Z"
Entry not found
maxseats/SungBeom-whisper-small-ko-set23
maxseats
"2024-07-03T01:19:17Z"
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "speech-recognition", "ko", "dataset:maxseats/aihub-464-preprocessed-680GB-set-23", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-07-03T01:18:55Z"
--- language: ko tags: - whisper - speech-recognition datasets: - maxseats/aihub-464-preprocessed-680GB-set-23 metrics: - cer --- # Model Name : maxseats/SungBeom-whisper-small-ko-set23 # Description - Fine-tuning dataset: maxseats/aihub-464-preprocessed-680GB-set-23 # Notes - This model is being trained on AI Hub's meeting-speech dataset covering major domains. - Starting from the model fine-tuned on the set_0~22 data (230GB of the 680GB), it was further trained on the set_23 data (10GB). - Link: https://huggingface.co/datasets/maxseats/aihub-464-preprocessed-680GB-set-23
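The card lists CER (character error rate) as its metric; a minimal sketch of how CER is conventionally computed, assuming plain Levenshtein edit distance over characters divided by reference length (this is not the card's actual evaluation code):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # One-row dynamic-programming Levenshtein distance.
    row = list(range(n + 1))
    for i in range(1, m + 1):
        prev, row[0] = row[0], i
        for j in range(1, n + 1):
            cur = row[j]
            row[j] = min(
                row[j] + 1,                                     # deletion
                row[j - 1] + 1,                                 # insertion
                prev + (reference[i - 1] != hypothesis[j - 1])  # substitution
            )
            prev = cur
    return row[n] / max(m, 1)

print(cer("안녕하세요", "안녕하세여"))  # 1 substitution over 5 chars -> 0.2
```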
qsdcfqsdfcxqfqs/Reform-Export-Robert-Stern-Joins-LA-Ethics-Commission-MyNewsLAcom-hf-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:21:08Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:19:52Z"
--- language: - en --- The Los Angeles City Council Tuesday confirmed reform expert Robert Stern as the newest member of the Ethics Commission, marking the first time in several months that the five-member body has had all its seats filled. The council voted 14-0, with Councilman Marqueece Harris-Dawson absent during the vote. Stern was nominated by Council President Paul Krekorian last week.
A nationally recognized expert in the fields of campaign finance and government reform, Stern was the first general counsel of the California Fair Political Practices Commission -- the agency in charge of administering California's campaign disclosure, ethics and lobbying laws. He previously served as a president of the Council on Governmental Ethics Laws, an organization of local, state and federal agencies in the U.S. and Canada that regulate campaign finance, ethics, lobbying and election laws. "Bob Stern is one of the greatest champions of government reform in the history of California," Krekorian said in a statement when he announced Stern's nomination. "No one could be more qualified to serve on our city's Ethics Commission, or better equipped to help us restore the people's trust in our city government." For nearly 30 years, Stern served as president of the Center for Governmental Studies, a non-profit, nonpartisan organization founded in 1983 to provide policy research and to recommend improvements to political and government processes in California. He is also the co-author of numerous books, including the center's "Democracy by Initiative: Shaping California's Fourth Branch of Government." In 2022, reform advocacy group Common Cause named him its Democracy Hero of the Year. Stern thanked the council for "having the confidence" in him to serve on the commission. "In 1989, and 1990, I helped the City Council enact the law that established the Ethics Commission. I'm very proud that both the law and the commission are considered to be one of the best in the country," Stern said prior to the council's vote. He acknowledged that no law is perfect. "I want the public to have more confidence in you, L.A. city government, and other governmental bodies," Stern said. "At the same time, I want officials who intentionally break these laws to be brought to justice, whether at the federal, state or local level." 
"My hope is that we will see fewer and fewer of these actions as public officials realize there are consequences to violating the laws, and that hopefully the public will have more confidence in our government," he added. Established by city voters in 1990, the Ethics Commission serves to preserve the public trust and foster public confidence in city government and elections. Individuals appointed to the five-member board are nominated by the mayor, city attorney, controller, president of the City Council and president pro tempore of the council, with each official nominating one member. Nominations must be confirmed by a majority of the council.....
nm-testing/dbrx-instruct-FP8
nm-testing
"2024-07-03T01:21:10Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:21:10Z"
Entry not found
qsdcfqsdfcxqfqs/FDA-approves-Eli-Lillys-Alzheimers-drug-that-can-modestly-slow-disease-hc-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:24:11Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:22:56Z"
--- language: - en --- The Food and Drug Administration approved Eli Lilly's Kisunla on Tuesday for mild or early cases of dementia caused by Alzheimer's. It's only the second drug that's been convincingly shown to delay cognitive decline in patients, following last year's full approval of a similar drug called Leqembi from Japanese drugmaker Eisai. The drug, also called donanemab, which will be sold under the brand name Kisunla, is a monoclonal antibody infusion given every four weeks.
Only those with early or mild disease will be eligible for the new drug, and an even smaller subset are likely to undergo the multi-step process needed to get a prescription. Lilly's results were similar to those of Leqembi, with both drugs showing modest cognitive improvement in early-stage Alzheimer's patients. In a 1,700-patient study, Lilly found that patients receiving monthly IV infusions of donanemab declined about 35% more slowly than those on a placebo. Physicians who treat Alzheimer's say the approval is an important step after decades of failed experimental treatments. "I'm thrilled to have different options to help my patients," Dr. Suzanne Schindler, a neurologist at Washington University in St. Louis, told The Associated Press. "It's been difficult as a dementia specialist -- I diagnose my patients with Alzheimer's and then every year I see them get worse and they progress until they die." The delay seen with both drugs amounts to a matter of months -- about seven months, in the case of Lilly's drug. Patients and their families will have to weigh that benefit against the downsides, including regular IV infusions and potentially dangerous side effects like brain swelling. The highly anticipated Alzheimer's drug received support from federal health advisers last month, setting the stage for its likely approval. EARLIER: Alzheimer's drug that can slow disease gets backing from FDA advisers "I thought the evidence was very strong in the trial showing the effectiveness of the drug," said Dean Follmann, a statistician from the National Institutes of Health, to the Associated Press. Costs will vary by patient, based on how long they take the drug, Lilly said. The company also said a year's worth of therapy would cost $32,000 -- higher than the $26,500 price of a year's worth of Leqembi. An estimated 6.7 million Americans have Alzheimer's, according to the Alzheimer's Association. 
They said this number could grow to 13.8 million by 2060 barring the development of medical breakthroughs to prevent, slow or cure the disease.....
qsdcfqsdfcxqfqs/High-Heat-Continues-to-Build-Across-Southland-MyNewsLAcom-d1-updated
qsdcfqsdfcxqfqs
"2024-07-03T01:24:51Z"
0
0
null
[ "en", "region:us" ]
null
"2024-07-03T01:23:37Z"
--- language: - en --- Temperatures again pushed into the triple-digit range in parts of the Southland Tuesday as a heat wave continued building over the region, with hot conditions expected to peak by week's end but continue into early next week. An excessive heat warning will be in effect until 6 p.m. Monday for the Golden State (5) and Antelope Valley (14) freeway corridors, the western San Gabriel Mountains, the Antelope Valley foothills and the Antelope Valley, according to the National Weather Service. Temperatures in the warning area could reach as high as 115 degrees, forecasters said.
An excessive heat warning will take effect at 11 a.m. Wednesday and continue through 6 p.m. Monday in the Santa Clarita Valley, the Santa Monica Mountains Recreational Area, Calabasas, San Fernando Valley and eastern San Gabriel Mountains, where temperatures up to 110 degrees are possible. The San Gabriel Valley will be under a less severe heat advisory from 11 a.m. Wednesday through 6 p.m. Sunday, but temperatures there are still expected to reach as high as 105. The Los Angeles coastal area stretching into downtown will be under a heat advisory from 11 a.m. Thursday through 6 p.m. Sunday, with temperatures topping out at 85 to 95 degrees. The high temperatures and low humidity will also create an extended period of elevated to critical fire danger in areas away from the coast, forecasters said. A fire weather watch will be in place from Thursday evening through Friday night in the western Antelope Valley foothills and the 5 Freeway corridor, where forecasters said the hot and dry conditions will be joined by northwest winds potentially gusting from 25 to 40 mph. "A significant heatwave will impact the region this week through early next week, with dangerously hot temperatures across much of the area," according to the NWS. "High temperatures by mid to late week are expected to reach 95 to 105 degrees in many areas away from the coast, with highs upwards of 105 to 115 over interior valleys and foothills, including the Antelope Valley. Very warm to hot conditions could extend closer to the coast by late this week." The heat was building thanks to a large upper high-pressure system moving in from the west. As the system settles in, most areas will see another 3 to 6 degrees of warming on Wednesday, then another 4 to 8 degrees in coastal and valley areas on Thursday. Valley and inland areas will likely have temperatures that are 10 to 15 degrees above normal by Thursday, according to the NWS. 
Friday is expected to be the hottest day of the week, with temperatures rising another 2 to 4 degrees, meaning highs of 110 to 115 in interior areas, 100 to 105 in the valleys, 90s along interior coastal areas and 80s at the beaches. Those temperatures are 6 to 12 degrees above normal for the coasts, and 12 to 18 degrees above normal for the valleys and interior areas. Forecasters said "only minimal cooling" is expected over the weekend, although increasing onshore flow should eventually cool things down along the coasts and slowly move into the valleys. But the high-pressure system is expected to persist, and the heat wave "may push deep into next week," according to the NWS. "The combination of these very hot temperatures, areas of low humidities and possible sundowner winds will lead to fire weather risks," forecasters said. In Orange County, a heat advisory will be in effect for the Santa Ana Mountains and foothills and Orange County inland areas from 11 a.m. Friday through 9 p.m. Saturday, with temperatures at or near triple-digit levels. Authorities reminded the public to never leave pets or children inside vehicles on days that are even a little warmer than normal, as locked cars can turn into death traps in mere minutes. The city and county of Los Angeles both operate cooling centers for people who need a place to escape the heat. To find a location, visit ready.lacounty.gov/heat/ or call 211.....
islam-hajosman/islam_Instruct
islam-hajosman
"2024-07-03T01:26:58Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-07-03T01:24:28Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gisang-lee/mistral-7b-qlora-arc-wandb-testt-all-r8-a16-e1
gisang-lee
"2024-07-03T01:25:23Z"
0
0
null
[ "region:us" ]
null
"2024-07-03T01:25:23Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]