modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
Guilherme34/Samantha-mixtraldolphin-GGUF | Guilherme34 | "2024-04-15T01:23:47Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-04-15T01:03:56Z" | Entry not found |
ashishp-wiai/ClipArt_LoRA_90-2024-04-15 | ashishp-wiai | "2024-04-15T01:40:04Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-04-15T01:04:39Z" | Entry not found |
RZJournal/bloomz-560m_PROMPT_TUNING_CAUSAL_LM | RZJournal | "2024-04-15T02:09:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:04:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
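A minimal, hypothetical loading sketch (the card itself provides no code; the repository name suggests a causal-LM checkpoint derived from bigscience/bloomz-560m via PEFT prompt tuning):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RZJournal/bloomz-560m_PROMPT_TUNING_CAUSAL_LM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
If the repository instead holds only a PEFT prompt-tuning adapter (as the name hints), load it with `peft.AutoPeftModelForCausalLM.from_pretrained(model_id)` instead.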
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
heyllm234/sc16 | heyllm234 | "2024-04-15T01:07:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-15T01:04:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
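A minimal, hypothetical usage sketch (the card provides no code; the repo is tagged as a StableLM text-generation model, so the standard causal-LM flow is assumed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heyllm234/sc16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```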
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Raghu928/ChatShePT | Raghu928 | "2024-04-15T01:05:36Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:05:36Z" | ---
license: apache-2.0
---
|
liminerity/Bitnet-Mistral.0.2-70m | liminerity | "2024-04-15T01:47:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Mistral",
"1bit",
"bitnet",
"abideen",
"dataset:abideen/Cosmopedia-100k-pretrain",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:07:16Z" | ---
datasets:
- abideen/Cosmopedia-100k-pretrain
tags:
- Mistral
- 1bit
- bitnet
- abideen
---
"""this is my first attempt at converting a model float16 quantized model to 1.5bit. i used alpindale/Mistral-7B-v0.2-hf for the base model and \n
trained on: abideen/cosmopedia-100k-pretain dataset and used his google colab project to make this"""
#EXAMPLE INFERENCE CODE FROM ABIDEEN'S COLAB PROJECT
```python
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import (
    LlamaDecoderLayer,
    LlamaMLP,
    LlamaRMSNorm,
    LlamaSdpaAttention,
)

# Load the pretrained BitNet model
model_id = "liminerity/Bitnet-Mistral.0.2-70m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def activation_quant(x):
    # Per-token absmax quantization of activations to 8-bit integers
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
    y = (x * scale).round().clamp_(-128, 127)
    y = y / scale
    return y

def weight_quant(w):
    # Mean-absmax quantization of weights to ternary values {-1, 0, 1}
    scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
    u = (w * scale).round().clamp_(-1, 1)
    u = u / scale
    return u

class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight  # weight tensor with shape [d, k]
        x = x.to(w.device)
        rms_norm = LlamaRMSNorm(x.shape[-1]).to(w.device)
        x_norm = rms_norm(x)
        # A trick for implementing the Straight-Through Estimator (STE) using
        # detach(): quantized values in the forward pass, identity gradients
        x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_quant = w + (weight_quant(w) - w).detach()
        y = F.linear(x_quant, w_quant)
        return y

def convert_to_bitnet(model, copy_weights):
    for name, module in model.named_modules():
        # Replace linear layers inside attention and MLP blocks with BitLinear
        if isinstance(module, (LlamaSdpaAttention, LlamaMLP)):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, nn.Linear):
                    bitlinear = BitLinear(
                        child_module.in_features,
                        child_module.out_features,
                        child_module.bias is not None,
                    ).to(device="cuda:0")
                    if copy_weights:
                        bitlinear.weight = child_module.weight
                        if child_module.bias is not None:
                            bitlinear.bias = child_module.bias
                    setattr(module, child_name, bitlinear)
        # Remove redundant input_layernorms (BitLinear applies its own RMSNorm)
        elif isinstance(module, LlamaDecoderLayer):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
                    setattr(module, child_name, nn.Identity().to(device="cuda:0"))

convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")

prompt = "What is Machine Learning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generate_ids = model.generate(inputs.input_ids, max_length=50)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
``` |
DavidAU/PiVoT-0.1-Evil-a-Q6_K-GGUF | DavidAU | "2024-04-15T01:08:41Z" | 0 | 0 | null | [
"gguf",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"dataset:maywell/ko_wikidata_QA",
"dataset:kyujinpy/OpenOrca-KO",
"dataset:Anthropic/hh-rlhf",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | "2024-04-15T01:08:26Z" | ---
language:
- en
- ko
license: cc-by-sa-4.0
tags:
- not-for-all-audiences
- llama-cpp
- gguf-my-repo
datasets:
- maywell/ko_wikidata_QA
- kyujinpy/OpenOrca-KO
- Anthropic/hh-rlhf
pipeline_tag: text-generation
---
# DavidAU/PiVoT-0.1-Evil-a-Q6_K-GGUF
This model was converted to GGUF format from [`maywell/PiVoT-0.1-Evil-a`](https://huggingface.co/maywell/PiVoT-0.1-Evil-a) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/PiVoT-0.1-Evil-a) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/PiVoT-0.1-Evil-a-Q6_K-GGUF --model pivot-0.1-evil-a.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/PiVoT-0.1-Evil-a-Q6_K-GGUF --model pivot-0.1-evil-a.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pivot-0.1-evil-a.Q6_K.gguf -n 128
```
|
DavidAU/PiVoT-0.1-Starling-LM-RP-Q6_K-GGUF | DavidAU | "2024-04-15T01:09:54Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | "2024-04-15T01:09:35Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# DavidAU/PiVoT-0.1-Starling-LM-RP-Q6_K-GGUF
This model was converted to GGUF format from [`maywell/PiVoT-0.1-Starling-LM-RP`](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/PiVoT-0.1-Starling-LM-RP-Q6_K-GGUF --model pivot-0.1-starling-lm-rp.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/PiVoT-0.1-Starling-LM-RP-Q6_K-GGUF --model pivot-0.1-starling-lm-rp.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pivot-0.1-starling-lm-rp.Q6_K.gguf -n 128
```
|
SiriusW/multi_balanced_model | SiriusW | "2024-04-15T01:13:35Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-15T01:09:41Z" | Entry not found |
abhayesian/BobzillaV22 | abhayesian | "2024-04-15T01:10:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:09:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DavidAU/FuseChat-7B-VaRM-Q6_K-GGUF | DavidAU | "2024-04-15T01:11:07Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"mixtral",
"solar",
"model-fusion",
"fusechat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:FuseAI/FuseChat-Mixture",
"base_model:openchat/openchat_3.5",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-15T01:10:44Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mistral
- mixtral
- solar
- model-fusion
- fusechat
- llama-cpp
- gguf-my-repo
base_model: openchat/openchat_3.5
datasets:
- FuseAI/FuseChat-Mixture
pipeline_tag: text-generation
model-index:
- name: FuseChat-7B-VaRM
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MT-Bench
type: unknown
metrics:
- type: unknown
value: 8.22
name: score
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.67
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
name: Open LLM Leaderboard
---
# DavidAU/FuseChat-7B-VaRM-Q6_K-GGUF
This model was converted to GGUF format from [`FuseAI/FuseChat-7B-VaRM`](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/FuseChat-7B-VaRM-Q6_K-GGUF --model fusechat-7b-varm.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/FuseChat-7B-VaRM-Q6_K-GGUF --model fusechat-7b-varm.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m fusechat-7b-varm.Q6_K.gguf -n 128
```
|
fangzhaoz/mistralv1_dora_r8_25e5_e3 | fangzhaoz | "2024-04-15T01:12:23Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:12:13Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistralv1_dora_r8_25e5_e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_dora_r8_25e5_e3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
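Since the card lists `peft` as the library and mistralai/Mistral-7B-v0.1 as the base model, the repository presumably holds a DoRA adapter. A hypothetical loading sketch (not part of the original card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the adapter weights from this repo
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fangzhaoz/mistralv1_dora_r8_25e5_e3")
```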
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0 |
aivoicemodelsbr23/Raul.Seixas | aivoicemodelsbr23 | "2024-04-15T01:16:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:12:18Z" | Entry not found |
bingbort/bigstral-12b-v0.2-32k-Q6_K-GGUF | bingbort | "2024-04-15T01:12:56Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:alpindale/Mistral-7B-v0.2-hf",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:12:20Z" | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- alpindale/Mistral-7B-v0.2-hf
---
# bingbort/bigstral-12b-v0.2-32k-Q6_K-GGUF
This model was converted to GGUF format from [`abacusai/bigstral-12b-v0.2-32k`](https://huggingface.co/abacusai/bigstral-12b-v0.2-32k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/abacusai/bigstral-12b-v0.2-32k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo bingbort/bigstral-12b-v0.2-32k-Q6_K-GGUF --model bigstral-12b-v0.2-32k.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo bingbort/bigstral-12b-v0.2-32k-Q6_K-GGUF --model bigstral-12b-v0.2-32k.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bigstral-12b-v0.2-32k.Q6_K.gguf -n 128
```
|
mradermacher/StableBeluga2-GGUF | mradermacher | "2024-04-15T01:40:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:12:24Z" | <!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
Static quants of https://huggingface.co/stabilityai/StableBeluga2
|
fangzhaoz/mistralv1_dora_r8_25e5_e3_merged | fangzhaoz | "2024-04-15T01:16:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:12:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
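A minimal, hypothetical sketch (the card provides no code; the repo is tagged `mistral`/`text-generation`, and its name suggests it is the merged counterpart of the `mistralv1_dora_r8_25e5_e3` adapter above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fangzhaoz/mistralv1_dora_r8_25e5_e3_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```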
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mergekit-community/mergekit-slerp-mvmjnos | mergekit-community | "2024-04-15T01:12:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:12:38Z" | Invalid username or password. |
lattavia/mistral-finetuned-senior-V2 | lattavia | "2024-04-15T01:14:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:14:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gmkim/SOLAR-10.7B-open-korean-instructions-v1.0 | gmkim | "2024-04-15T01:15:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:15:20Z" | Entry not found |
kyryl-opens-ml/duckdb-text2sql-codellama | kyryl-opens-ml | "2024-04-15T02:23:13Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2024-04-15T01:18:16Z" | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
datasets:
- generator
model-index:
- name: duckdb-text2sql-codellama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# duckdb-text2sql-codellama
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
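The card lists `peft` as the library with codellama/CodeLlama-7b-hf as the base model, so the repository presumably holds a LoRA-style adapter. A hypothetical loading sketch (not part of the original card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights from this repo
base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base, "kyryl-opens-ml/duckdb-text2sql-codellama")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
```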
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the hypothetical `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 0.01
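As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below (a hypothetical reconstruction, not the actual training script; the Adam betas and epsilon listed above are the optimizer defaults):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="duckdb-text2sql-codellama",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=0.01,
    seed=42,
)
```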
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.2 |
DavidAU/Tiny-Llama-2.2B-slerpx2-Q8_0-GGUF | DavidAU | "2024-04-15T01:19:46Z" | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Chuanming/Tiny-Llama-2.2B-slerp",
"llama-cpp",
"gguf-my-repo",
"base_model:Chuanming/Tiny-Llama-2.2B-slerp",
"region:us"
] | null | "2024-04-15T01:19:35Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- Chuanming/Tiny-Llama-2.2B-slerp
- llama-cpp
- gguf-my-repo
base_model:
- Chuanming/Tiny-Llama-2.2B-slerp
- Chuanming/Tiny-Llama-2.2B-slerp
---
# DavidAU/Tiny-Llama-2.2B-slerpx2-Q8_0-GGUF
This model was converted to GGUF format from [`aipib/Tiny-Llama-2.2B-slerpx2`](https://huggingface.co/aipib/Tiny-Llama-2.2B-slerpx2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aipib/Tiny-Llama-2.2B-slerpx2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Tiny-Llama-2.2B-slerpx2-Q8_0-GGUF --model tiny-llama-2.2b-slerpx2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Tiny-Llama-2.2B-slerpx2-Q8_0-GGUF --model tiny-llama-2.2b-slerpx2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-llama-2.2b-slerpx2.Q8_0.gguf -n 128
```
|
DavidAU/Tiny-Cowboy-1.1b-v0.1-Q8_0-GGUF | DavidAU | "2024-04-15T01:20:17Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:20:05Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
widget:
- text: 'Howdy! What is best about the prairie, cowpoke?
'
example_title: Color of a Typical Cowboy Hat
---
# DavidAU/Tiny-Cowboy-1.1b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Tiny-Cowboy-1.1b-v0.1`](https://huggingface.co/phanerozoic/Tiny-Cowboy-1.1b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Tiny-Cowboy-1.1b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Tiny-Cowboy-1.1b-v0.1-Q8_0-GGUF --model tiny-cowboy-1.1b-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Tiny-Cowboy-1.1b-v0.1-Q8_0-GGUF --model tiny-cowboy-1.1b-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-cowboy-1.1b-v0.1.Q8_0.gguf -n 128
```
|
DavidAU/Tiny-Viking-1.1b-v0.1-Q8_0-GGUF | DavidAU | "2024-04-15T01:21:41Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:21:35Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
widget:
- text: 'Who are you?
'
example_title: Introduction
---
# DavidAU/Tiny-Viking-1.1b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Tiny-Viking-1.1b-v0.1`](https://huggingface.co/phanerozoic/Tiny-Viking-1.1b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Tiny-Viking-1.1b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Tiny-Viking-1.1b-v0.1-Q8_0-GGUF --model tiny-viking-1.1b-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Tiny-Viking-1.1b-v0.1-Q8_0-GGUF --model tiny-viking-1.1b-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-viking-1.1b-v0.1.Q8_0.gguf -n 128
```
|
DavidAU/Tiny-Pirate-1.1b-v0.1-Q8_0-GGUF | DavidAU | "2024-04-15T01:22:28Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:22:15Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
widget:
- text: 'What is best in life?
'
example_title: Healthy Eating Tips
---
# DavidAU/Tiny-Pirate-1.1b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Tiny-Pirate-1.1b-v0.1`](https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Tiny-Pirate-1.1b-v0.1-Q8_0-GGUF --model tiny-pirate-1.1b-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Tiny-Pirate-1.1b-v0.1-Q8_0-GGUF --model tiny-pirate-1.1b-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-pirate-1.1b-v0.1.Q8_0.gguf -n 128
```
|
mergekit-community/mergekit-slerp-dnqylrm | mergekit-community | "2024-04-15T01:22:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:22:21Z" | Invalid username or password. |
DavidAU/Tiny-Knight-1.1b-v0.1-Q8_0-GGUF | DavidAU | "2024-04-15T01:22:52Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:22:45Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
widget:
- text: 'Hail and well met! Pray, what kind of food do ye enjoy supping upon?
'
example_title: The Code of Chivalry
---
# DavidAU/Tiny-Knight-1.1b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Tiny-Knight-1.1b-v0.1`](https://huggingface.co/phanerozoic/Tiny-Knight-1.1b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Tiny-Knight-1.1b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Tiny-Knight-1.1b-v0.1-Q8_0-GGUF --model tiny-knight-1.1b-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Tiny-Knight-1.1b-v0.1-Q8_0-GGUF --model tiny-knight-1.1b-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-knight-1.1b-v0.1.Q8_0.gguf -n 128
```
|
d-d-o-s/s9-c | d-d-o-s | "2024-04-15T01:22:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:22:48Z" | Entry not found |
WesPro/MisHumHypNichLimaLora | WesPro | "2024-04-15T01:22:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:22:53Z" | Entry not found |
sdadasfgdfgfdg/baldis_basics | sdadasfgdfgfdg | "2024-04-15T01:25:21Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-15T01:23:37Z" | ---
license: openrail
---
|
MusicBox27/SweetDemeanor | MusicBox27 | "2024-04-15T01:26:54Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-15T01:23:37Z" | ---
license: openrail
---
|
DavidAU/Mistral-Pirate-7b-v0.3-Q8_0-GGUF | DavidAU | "2024-04-15T01:24:08Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:23:48Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Pirate-7b-v0.3-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Pirate-7b-v0.3`](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v0.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Pirate-7b-v0.3-Q8_0-GGUF --model mistral-pirate-7b-v0.3.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Pirate-7b-v0.3-Q8_0-GGUF --model mistral-pirate-7b-v0.3.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-pirate-7b-v0.3.Q8_0.gguf -n 128
```
|
hsiuping/finetuning-amazon-model | hsiuping | "2024-04-15T02:38:41Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-15T01:24:02Z" | Entry not found |
DavidAU/Mistral-Astronomy-7b-v0.2-Q8_0-GGUF | DavidAU | "2024-04-15T01:24:53Z" | 0 | 0 | null | [
"gguf",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"en",
"region:us"
] | null | "2024-04-15T01:24:31Z" | ---
language:
- en
tags:
- text-generation-inference
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Astronomy-7b-v0.2-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Astronomy-7b-v0.2`](https://huggingface.co/phanerozoic/Mistral-Astronomy-7b-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Astronomy-7b-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Astronomy-7b-v0.2-Q8_0-GGUF --model mistral-astronomy-7b-v0.2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Astronomy-7b-v0.2-Q8_0-GGUF --model mistral-astronomy-7b-v0.2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-astronomy-7b-v0.2.Q8_0.gguf -n 128
```
|
NiharGupte/swin-tiny-patch4-window7-224-finetuned-student_six_classes | NiharGupte | "2024-04-15T01:31:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-04-15T01:25:11Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-student_six_classes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9512578616352201
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-student_six_classes
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1187
- Accuracy: 0.9513
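A hypothetical inference sketch (not part of the original card) using the `transformers` image-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="NiharGupte/swin-tiny-patch4-window7-224-finetuned-student_six_classes",
)
print(classifier("example.jpg"))  # "example.jpg" is a placeholder image path
```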
## Model description
More information needed
## Intended uses & limitations
More information needed
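Pending more details from the authors, a minimal inference sketch with the 🤗 `pipeline` API (the image path is a placeholder; the class labels come from the fine-tuning `imagefolder` dataset):
```python
from transformers import pipeline

# Load the fine-tuned Swin checkpoint from the Hub.
classifier = pipeline(
    "image-classification",
    model="NiharGupte/swin-tiny-patch4-window7-224-finetuned-student_six_classes",
)

# Classify a local image; returns the top class labels with scores.
print(classifier("example.jpg"))
```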
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.619 | 0.94 | 11 | 1.1587 | 0.4984 |
| 0.841 | 1.96 | 23 | 0.5082 | 0.7689 |
| 0.4154 | 2.98 | 35 | 0.2849 | 0.8868 |
| 0.3476 | 4.0 | 47 | 0.2089 | 0.9418 |
| 0.2414 | 4.94 | 58 | 0.1575 | 0.9450 |
| 0.2128 | 5.96 | 70 | 0.1226 | 0.9497 |
| 0.1783 | 6.98 | 82 | 0.1203 | 0.9481 |
| 0.167 | 8.0 | 94 | 0.1169 | 0.9528 |
| 0.1723 | 8.94 | 105 | 0.1184 | 0.9513 |
| 0.1838 | 9.36 | 110 | 0.1187 | 0.9513 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
DavidAU/Mistral-Cowboy-7b-v0.1-Q8_0-GGUF | DavidAU | "2024-04-15T01:25:34Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:25:15Z" | ---
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Cowboy-7b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Cowboy-7b-v0.1`](https://huggingface.co/phanerozoic/Mistral-Cowboy-7b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Cowboy-7b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Cowboy-7b-v0.1-Q8_0-GGUF --model mistral-cowboy-7b-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Cowboy-7b-v0.1-Q8_0-GGUF --model mistral-cowboy-7b-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-cowboy-7b-v0.1.Q8_0.gguf -n 128
```
|
bhugxer/ddpm-coleaf-v3 | bhugxer | "2024-04-15T02:48:20Z" | 0 | 0 | diffusers | [
"diffusers",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2024-04-15T01:25:32Z" | ---
license: mit
---
|
alnawaisheh/my-code.emails | alnawaisheh | "2024-04-15T02:28:47Z" | 0 | 0 | keras | [
"keras",
"code",
"text-classification",
"en",
"dataset:alnawaisheh/mo-emails.data",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | text-classification | "2024-04-15T01:25:50Z" | ---
license: apache-2.0
datasets:
- alnawaisheh/mo-emails.data
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
library_name: keras
pipeline_tag: text-classification
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavidAU/Mistral-Pirate-7b-v2-Q8_0-GGUF | DavidAU | "2024-04-15T01:26:18Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:25:57Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Pirate-7b-v2-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Pirate-7b-v2`](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Pirate-7b-v2-Q8_0-GGUF --model mistral-pirate-7b-v2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Pirate-7b-v2-Q8_0-GGUF --model mistral-pirate-7b-v2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-pirate-7b-v2.Q8_0.gguf -n 128
```
|
DavidAU/Mistral-Darwin-7b-v0.1-Q8_0-GGUF | DavidAU | "2024-04-15T01:27:00Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:26:40Z" | ---
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Darwin-7b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Darwin-7b-v0.1`](https://huggingface.co/phanerozoic/Mistral-Darwin-7b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Darwin-7b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Darwin-7b-v0.1-Q8_0-GGUF --model mistral-darwin-7b-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Darwin-7b-v0.1-Q8_0-GGUF --model mistral-darwin-7b-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-darwin-7b-v0.1.Q8_0.gguf -n 128
```
|
ShenaoZ/0.001_ablation_iter_3 | ShenaoZ | "2024-04-15T02:40:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:26:59Z" | ---
license: mit
base_model: ShenaoZ/0.001_ablation_iter_2
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_ablation_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_iter_3
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_iter_2](https://huggingface.co/ShenaoZ/0.001_ablation_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
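Pending more details, a hedged usage sketch for this DPO-aligned Mistral chat model (the dtype/device settings are assumptions, not from the training repo):
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="ShenaoZ/0.001_ablation_iter_3",
    torch_dtype="auto",
    device_map="auto",
)

# Format the conversation with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```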
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
aloobun/BunMaska-1.8B | aloobun | "2024-04-15T01:28:51Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:27:01Z" | Invalid username or password. |
Feebo37/ppo-SnowballTarget | Feebo37 | "2024-04-15T01:30:04Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-04-15T01:29:21Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that teaches you how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Feebo37/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
DavidAU/Mistral-Cowboy-7b-v0.1-Q6_K-GGUF | DavidAU | "2024-04-15T01:29:53Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:29:27Z" | ---
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Cowboy-7b-v0.1-Q6_K-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Cowboy-7b-v0.1`](https://huggingface.co/phanerozoic/Mistral-Cowboy-7b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Cowboy-7b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Cowboy-7b-v0.1-Q6_K-GGUF --model mistral-cowboy-7b-v0.1.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Cowboy-7b-v0.1-Q6_K-GGUF --model mistral-cowboy-7b-v0.1.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-cowboy-7b-v0.1.Q6_K.gguf -n 128
```
|
K00B404/Gem_Stral_Zephyr_4x-7B_Python_Ties | K00B404 | "2024-04-15T01:29:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:29:29Z" | Invalid username or password. |
DavidAU/Mistral-Pirate-7b-v2-Q6_K-GGUF | DavidAU | "2024-04-15T01:30:59Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:30:32Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Pirate-7b-v2-Q6_K-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Pirate-7b-v2`](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Pirate-7b-v2-Q6_K-GGUF --model mistral-pirate-7b-v2.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Pirate-7b-v2-Q6_K-GGUF --model mistral-pirate-7b-v2.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-pirate-7b-v2.Q6_K.gguf -n 128
```
|
ikozlov/MobileDiffusionTiny | ikozlov | "2024-04-15T01:51:08Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:openrail",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-15T01:31:37Z" | ---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
DavidAU/Mistral-Darwin-7b-v0.1-Q6_K-GGUF | DavidAU | "2024-04-15T01:31:53Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:31:38Z" | ---
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Darwin-7b-v0.1-Q6_K-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Darwin-7b-v0.1`](https://huggingface.co/phanerozoic/Mistral-Darwin-7b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Darwin-7b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Darwin-7b-v0.1-Q6_K-GGUF --model mistral-darwin-7b-v0.1.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Darwin-7b-v0.1-Q6_K-GGUF --model mistral-darwin-7b-v0.1.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-darwin-7b-v0.1.Q6_K.gguf -n 128
```
|
ShenaoZ/0.0001_ablation_iter_3 | ShenaoZ | "2024-04-15T02:46:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_ablation_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:32:19Z" | ---
license: mit
base_model: ShenaoZ/0.0001_ablation_iter_2
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0001_ablation_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_ablation_iter_3
This model is a fine-tuned version of [ShenaoZ/0.0001_ablation_iter_2](https://huggingface.co/ShenaoZ/0.0001_ablation_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
Abhinay123/wav2vec2_vedas2_epoch_1_step_1399 | Abhinay123 | "2024-04-15T01:33:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-15T01:32:27Z" | Invalid username or password. |
DavidAU/Mistral-Pirate-7b-v0.3-Q6_K-GGUF | DavidAU | "2024-04-15T01:32:47Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-15T01:32:31Z" | ---
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Pirate-7b-v0.3-Q6_K-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Pirate-7b-v0.3`](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Pirate-7b-v0.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Pirate-7b-v0.3-Q6_K-GGUF --model mistral-pirate-7b-v0.3.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Pirate-7b-v0.3-Q6_K-GGUF --model mistral-pirate-7b-v0.3.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-pirate-7b-v0.3.Q6_K.gguf -n 128
```
|
mrcreeper5/ema-tailsv2 | mrcreeper5 | "2024-04-15T02:19:46Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-15T01:32:40Z" | ---
license: openrail
---
|
DavidAU/Mistral-Astronomy-7b-v0.2-Q6_K-GGUF | DavidAU | "2024-04-15T01:33:38Z" | 0 | 0 | null | [
"gguf",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"en",
"region:us"
] | null | "2024-04-15T01:33:24Z" | ---
language:
- en
tags:
- text-generation-inference
- llama-cpp
- gguf-my-repo
---
# DavidAU/Mistral-Astronomy-7b-v0.2-Q6_K-GGUF
This model was converted to GGUF format from [`phanerozoic/Mistral-Astronomy-7b-v0.2`](https://huggingface.co/phanerozoic/Mistral-Astronomy-7b-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/phanerozoic/Mistral-Astronomy-7b-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-Astronomy-7b-v0.2-Q6_K-GGUF --model mistral-astronomy-7b-v0.2.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-Astronomy-7b-v0.2-Q6_K-GGUF --model mistral-astronomy-7b-v0.2.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-astronomy-7b-v0.2.Q6_K.gguf -n 128
```
|
ShenaoZ/0.0_ablation_iter_3 | ShenaoZ | "2024-04-15T02:46:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0_ablation_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:33:48Z" | Invalid username or password. |
DavidAU/TinyLlama-con-creative-writing-v0.2-Q8_0-GGUF | DavidAU | "2024-04-15T01:35:34Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"region:us"
] | null | "2024-04-15T01:35:29Z" | ---
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/TinyLlama-con-creative-writing-v0.2-Q8_0-GGUF
This model was converted to GGUF format from [`mogaio/TinyLlama-con-creative-writing-v0.2`](https://huggingface.co/mogaio/TinyLlama-con-creative-writing-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mogaio/TinyLlama-con-creative-writing-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-con-creative-writing-v0.2-Q8_0-GGUF --model tinyllama-con-creative-writing-v0.2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-con-creative-writing-v0.2-Q8_0-GGUF --model tinyllama-con-creative-writing-v0.2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-con-creative-writing-v0.2.Q8_0.gguf -n 128
```
|
csicar/summarize-mistral | csicar | "2024-04-15T01:36:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:35:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
suneeln-duke/dukebot-qac-v1 | suneeln-duke | "2024-04-15T01:36:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:36:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavidAU/TinyLlama-con-brainstorming-v0.2-Q8_0-GGUF | DavidAU | "2024-04-15T01:36:55Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"region:us"
] | null | "2024-04-15T01:36:39Z" | ---
tags:
- llama-cpp
- gguf-my-repo
---
# DavidAU/TinyLlama-con-brainstorming-v0.2-Q8_0-GGUF
This model was converted to GGUF format from [`mogaio/TinyLlama-con-brainstorming-v0.2`](https://huggingface.co/mogaio/TinyLlama-con-brainstorming-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mogaio/TinyLlama-con-brainstorming-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-con-brainstorming-v0.2-Q8_0-GGUF --model tinyllama-con-brainstorming-v0.2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-con-brainstorming-v0.2-Q8_0-GGUF --model tinyllama-con-brainstorming-v0.2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-con-brainstorming-v0.2.Q8_0.gguf -n 128
```
|
suneeln-duke/dukebot-qac-v1-merged | suneeln-duke | "2024-04-15T02:01:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | "2024-04-15T01:36:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF | DavidAU | "2024-04-15T01:38:10Z" | 0 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:bigscience/bloom-1b7",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2024-04-15T01:37:49Z" | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: bigscience/bloom-1b7
model-index:
- name: Bloom-1b7-creative-writing-IT
results: []
---
# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF
This model was converted to GGUF format from [`alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline`](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF --model bloom-1b7-creative-writing-it-baseline.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF --model bloom-1b7-creative-writing-it-baseline.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bloom-1b7-creative-writing-it-baseline.Q8_0.gguf -n 128
```
|
alexyhc/flan-t5-large-ds | alexyhc | "2024-04-15T02:14:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-15T01:38:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
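Until the authors fill this in, a hedged sketch based only on the repo's `text2text-generation` tag (the prompt style is an assumption carried over from FLAN-T5):
```python
from transformers import pipeline

# Load the fine-tuned FLAN-T5-large checkpoint from the Hub.
generator = pipeline("text2text-generation", model="alexyhc/flan-t5-large-ds")

# FLAN-style instruction input; output length capped for the demo.
print(generator("Summarize: The quick brown fox jumps over the lazy dog.",
                max_new_tokens=64))
```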
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coffie3/s29 | coffie3 | "2024-04-15T01:41:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-15T01:39:26Z" | Entry not found |
chienhsiung/test3 | chienhsiung | "2024-04-15T01:39:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:39:53Z" | Entry not found |
DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF | DavidAU | "2024-04-15T01:40:08Z" | 0 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:bigscience/bloom-1b7",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2024-04-15T01:40:00Z" | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: bigscience/bloom-1b7
model-index:
- name: Bloom-1b7-creative-writing-IT
results: []
---
# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF
This model was converted to GGUF format from [`alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline`](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF --model bloom-1b7-creative-writing-it-baseline.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF --model bloom-1b7-creative-writing-it-baseline.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bloom-1b7-creative-writing-it-baseline.Q6_K.gguf -n 128
```
|
ashishp-wiai/ClipArt_LoRA_100-2024-04-15 | ashishp-wiai | "2024-04-15T02:48:50Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-04-15T01:40:42Z" | Entry not found |
DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF | DavidAU | "2024-04-15T01:41:42Z" | 0 | 0 | null | [
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:41:04Z" | ---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- llama-cpp
- gguf-my-repo
---
# DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF
This model was converted to GGUF format from [`FPHam/Writing_Partner_Mistral_7B`](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF --model writing_partner_mistral_7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF --model writing_partner_mistral_7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m writing_partner_mistral_7b.Q6_K.gguf -n 128
```
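As an alternative Python route, recent `llama-cpp-python` builds can pull the GGUF straight from this repo (the `Llama.from_pretrained` helper and its signature are assumptions about your installed version; it also needs `huggingface-hub`):
```python
from llama_cpp import Llama

# Download the quantized file from the Hub and load it in one step.
llm = Llama.from_pretrained(
    repo_id="DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF",
    filename="writing_partner_mistral_7b.Q6_K.gguf",
    n_ctx=2048,
)

out = llm("Give me three tips for pacing a short story.", max_tokens=128)
print(out["choices"][0]["text"])
```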
|
NoahZe/Frank | NoahZe | "2024-04-15T01:52:09Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-15T01:41:33Z" | ---
license: openrail
---
|
idiotDeveloper/KoreanTelephone_Mini_dataset | idiotDeveloper | "2024-04-15T01:41:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:41:58Z" | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- idiotDeveloper/KoreanTelephone_Mini_dataset
model-index:
- name: idiotDeveloper/KoreanTelephone_Mini_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idiotDeveloper/KoreanTelephone_Mini_dataset
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the KoreanTelephone_Mini_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mooo16/gemini-1.5-pro-gemma-rewrite-1024 | mooo16 | "2024-04-15T02:05:09Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-04-15T01:42:06Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: gemini-1.5-pro-gemma-rewrite-1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemini-1.5-pro-gemma-rewrite-1024
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
danielm2402/gemma-2b-data-scince-basic | danielm2402 | "2024-04-15T01:50:06Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:42:08Z" | ---
license: apache-2.0
---
|
Fizzarolli/lust-7b | Fizzarolli | "2024-04-15T02:21:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"conversational",
"trl",
"unsloth",
"en",
"dataset:Fizzarolli/rpguild_processed",
"dataset:Fizzarolli/bluemoon_processeed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:42:29Z" | ---
license: apache-2.0
datasets:
- Fizzarolli/rpguild_processed
- Fizzarolli/bluemoon_processeed
language:
- en
library_name: transformers
tags:
- roleplay
- conversational
- trl
- unsloth
---
# lust-7b
experimental rp model.
## prompt format
this one's a bit funky.
```
<|description|>Character
Character is blah blah blah</s>
<|description|>Character 2
Character 2 is blah blah blah (optional to make more than one)</s>
<|narrator|>
Describe what you want to happen in the scenario (I don't even know if this works)
<|message|>Character
Character does blah blah blah</s>
<|message|>Character 2
Character 2 does blah blah blah</s>
<|message|>Character
[start model generation here!]
```
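if it helps, here's a minimal sketch of assembling this format programmatically; the helper name and the exact `</s>` placement are assumptions read off the template above:
```python
def build_prompt(descriptions, narration, turns, next_char):
    """Assemble a lust-7b prompt. descriptions: {name: text}, turns: [(name, message), ...]."""
    parts = [f"<|description|>{name}\n{desc}</s>" for name, desc in descriptions.items()]
    parts.append(f"<|narrator|>\n{narration}")
    parts.extend(f"<|message|>{name}\n{msg}</s>" for name, msg in turns)
    parts.append(f"<|message|>{next_char}\n")  # leave open: the model generates from here
    return "\n".join(parts)
```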
sillytavern templates: TODO
## quants
gguf: https://huggingface.co/Fizzarolli/lust-7b-GGUF (mostly still todo) |
spow12/Visual-novel-transcriptor | spow12 | "2024-04-15T02:39:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ja",
"en",
"dataset:reazon-research/reazonspeech",
"dataset:joujiboi/japanese-anime-speech",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-15T01:43:07Z" | ---
library_name: transformers
datasets:
- reazon-research/reazonspeech
- joujiboi/japanese-anime-speech
language:
- ja
- en
metrics:
- cer
pipeline_tag: automatic-speech-recognition
---
# Model Card for Model ID
![image](./cover_image.jpeg)
<!-- Generated using cagliostrolab/animagine-xl-3.0 -->
<!--Prompt: 1girl, black long hair, suit, headphone, write down, upper body, indoor, night, masterpiece, best quality -->
Fine-tuned ASR model from [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2).
This model is aimed at transcribing Japanese audio, especially from visual novels.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** spow12(yw_nam)
- **Shared by :** spow12(yw_nam)
- **Model type:** Seq2Seq
- **Language(s) (NLP):** Japanese
- **Finetuned from model:** [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2).
## Uses
```python
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
import librosa

processor = AutoProcessor.from_pretrained('spow12/Visual-novel-transcriptor', language="ja", task="transcribe")
model = AutoModelForSpeechSeq2Seq.from_pretrained('spow12/Visual-novel-transcriptor').cuda()
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="ja", task="transcribe")

wav_path = "sample.wav"  # path to the audio clip you want to transcribe
data, _ = librosa.load(wav_path, sr=16000)  # Whisper models expect 16 kHz audio
input_features = processor(data, sampling_rate=16000, return_tensors="pt").input_features.cuda()

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])
```
## Bias, Risks, and Limitations
This model was trained on Japanese datasets that include visual novels, which may contain NSFW content.
## Use & Credit
This model is currently available for non-commercial use only. Also, since I have not worked out detailed licensing terms, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime fans).
## Citation
```bibtex
@misc {Visual-novel-transcriptor,
author = { {YoungWoo Nam} },
title = { Visual-novel-transcriptor },
year = 2024,
url = { https://huggingface.co/spow12/Visual-novel-transcriptor },
publisher = { Hugging Face }
}
```
|
frcp/jobtalks_llama_v1 | frcp | "2024-04-15T01:43:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:43:20Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF | andreass123 | "2024-04-15T01:44:23Z" | 0 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:microsoft/phi-2",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:43:57Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: microsoft/phi-2
model-index:
- name: yanolja/EEVE-Korean-2.8B-v1.0
results: []
---
# andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-2.8B-v1.0`](https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-2.8b-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-2.8b-v1.0.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eeve-korean-2.8b-v1.0.Q8_0.gguf -n 128
```
|
DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF | DavidAU | "2024-04-15T01:45:12Z" | 0 | 0 | null | [
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:44:46Z" | ---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- llama-cpp
- gguf-my-repo
---
# DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF
This model was converted to GGUF format from [`FPHam/Writing_Partner_Mistral_7B`](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF --model writing_partner_mistral_7b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF --model writing_partner_mistral_7b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m writing_partner_mistral_7b.Q8_0.gguf -n 128
```
|
daenielkim-66/a2c-PandaReachDense-v3 | daenielkim-66 | "2024-04-15T01:44:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:44:57Z" | Entry not found |
chandc/whisper-small-Cantonese | chandc | "2024-04-15T01:45:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:45:12Z" | Entry not found |
sayakpaul/sdxl-orpo-large-beta_orpo-0.005-beta_inner-100-lnoise-0.1 | sayakpaul | "2024-04-15T01:45:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:45:27Z" | Entry not found |
agitohere/sft-microsoft-phi2-on-dialogsum | agitohere | "2024-04-15T01:45:45Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2024-04-15T01:45:40Z" | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: sft-microsoft-phi2-on-dialogsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-microsoft-phi2-on-dialogsum
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4203 | 5.0 | 50 | 1.3966 |
| 1.2814 | 10.0 | 100 | 1.3639 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.1 |
mradermacher/mistral-7b-orpo-v3.0-GGUF | mradermacher | "2024-04-15T02:14:49Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-04-15T01:45:47Z" | ---
base_model: mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/distilabel-capybara-dpo-7k-binarized
- HuggingFaceH4/OpenHermesPreferences-10k
exported_from: orpo-explorers/mistral-7b-orpo-v3.0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/orpo-explorers/mistral-7b-orpo-v3.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
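For quick local inference outside the llama.cpp CLI, here is a minimal sketch using the `llama-cpp-python` bindings; the Q4_K_M filename is just an example, substitute whichever quant from the table below you downloaded:
```python
from llama_cpp import Llama

# Load the GGUF file (assumes it has been downloaded to the working directory)
llm = Llama(model_path="mistral-7b-orpo-v3.0.Q4_K_M.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```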
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
K00B404/Gem_Stral_Zephyr_4x_7B_Python_Ties | K00B404 | "2024-04-15T01:46:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:46:13Z" | Invalid username or password. |
BWangila/ppo-SnowballTarget | BWangila | "2024-04-15T01:46:37Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-04-15T01:46:29Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: BWangila/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Erfan-Shayegani/llama2-lora_Unlearned_GA_Accelerate_2 | Erfan-Shayegani | "2024-04-15T01:47:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-15T01:47:38Z" | Temporary Redirect. Redirecting to /Erfan-Shayegani/llama2-lora_Unlearned_bad_weight_5e-2/resolve/main/README.md |
niranjanramarajar/Tamil-llama2-v1 | niranjanramarajar | "2024-04-15T02:02:08Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:48:12Z" | ---
license: llama2
---
|
andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF | andreass123 | "2024-04-15T01:54:38Z" | 0 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:yanolja/EEVE-Korean-2.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:48:34Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: yanolja/EEVE-Korean-2.8B-v1.0
model-index:
- name: yanolja/EEVE-Korean-Instruct-2.8B-v1.0
results: []
---
# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-Instruct-2.8B-v1.0`](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eeve-korean-instruct-2.8b-v1.0.Q4_K_M.gguf -n 128
```
|
dataautogpt3/vae | dataautogpt3 | "2024-04-15T01:51:03Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:49:01Z" | ---
license: apache-2.0
---
|
leimu/22 | leimu | "2024-04-15T01:49:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:49:22Z" | Entry not found |
Zritze/imdb-spoiler-distilbert1 | Zritze | "2024-04-15T02:26:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-15T01:49:53Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: imdb-spoiler-distilbert1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb-spoiler-distilbert1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6459
- F1: 0.5053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5039 | 1.0 | 2870 | 0.4781 | 0.4480 |
| 0.4304 | 2.0 | 5740 | 0.4947 | 0.5229 |
| 0.335 | 3.0 | 8610 | 0.6459 | 0.5053 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ashwanth18/a2c-PandaReachDense-v3 | ashwanth18 | "2024-04-15T01:54:40Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-15T01:50:24Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub("ashwanth18/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
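To sanity-check the policy, you can roll it out in the environment; this sketch assumes `panda-gym` is installed (importing it registers PandaReachDense-v3 with Gymnasium):
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```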
|
harikrishnad1997/emotion_tweet_t5-base_2024-04-15 | harikrishnad1997 | "2024-04-15T01:51:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | "2024-04-15T01:50:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blockblockblock/SauerkrautLM-Qwen-32b-bpw2.25 | blockblockblock | "2024-04-15T01:54:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"sft",
"dpo",
"conversational",
"de",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-15T01:51:19Z" | ---
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE
language:
- de
- en
tags:
- sft
- dpo
---
![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/04/SauerkrautLM-Qwen-32b.png "SauerkrautLM-Qwen-32b")
## VAGO solutions SauerkrautLM-Qwen-32b
Introducing **SauerkrautLM-Qwen-32b** – our Sauerkraut version of the powerful [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)!
The model **SauerkrautLM-Qwen-32b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.**
- Finetuned with **SFT**
- Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Qwen-32b](#all-SauerkrautLM-Qwen-32b)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Qwen-32b
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Qwen-32b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Qwen-32b) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Qwen-32b**
- **Model Type:** SauerkrautLM-Qwen-32b is a finetuned Model based on [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)
- **Language(s):** German, English
- **License:** [tongyi-qianwen-research](https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model for 2 epochs on 160k data samples with SFT.
- Afterwards we applied DPO for 1 epoch with 110k data samples.
- LaserRMT version coming soon
**We taught this model German language skills.** As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).
### Prompt Template:
**English:**
```
<|im_start|>system
You are SauerkrautLM, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
**German:**
```
<|im_start|>system
Du bist SauerkrautLM, ein hilfreicher und freundlicher KI-Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
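Both templates follow ChatML, so you can usually let the tokenizer render them instead of formatting by hand; a minimal sketch, assuming the repository ships a chat template:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("VAGOsolutions/SauerkrautLM-Qwen-32b")
messages = [
    {"role": "system", "content": "You are SauerkrautLM, a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
]
# Renders the ChatML prompt shown above, ending with the assistant header
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```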
### Example output in German:
```
Die Karte zeigte eine verborgene Höhle, die in den umliegenden Bergen zu finden war. Mia war von der Idee fasziniert, diese Höhle zu entdecken und ihre Geheimnisse zu lüften. Sie nahm die Karte mit sich und machte sich auf den Weg, um die Höhle zu finden.
Die Wanderung zu den Bergen war eine Herausforderung, aber Mia war fest entschlossen, ihr Abenteuer zu vollenden. Sie überwand steinige Wege und überquerte klirrende Bäche, die ihre Füße kühlten und ihr die Energie für den Rest des Weges gab.
Endlich erreichte Mia die Höhle, die von einem dichten Wald umgeben war. Die Höhle war ein Ort der Geheimnisse und des Staunens, der ihr Herz höher schlagen ließ. Sie betrat die Höhle, und die Dunkelheit umhüllte sie wie ein Schleier aus Stille.
In der Höhle fand Mia eine alte Schatzkiste, die mit einem alten, verwitterten Holz verziert war. Mit zitternden Händen öffnete sie die Schatzkiste und fand darin eine alte, zerfledderte Schriftrolle. Die Schriftrolle war ein geheimnisvolles Artefakt, das ihr die Geschichte der Höhle offenbarte.
```
## Evaluation
**Open LLM Leaderboard:**
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **73.11** |
| ARC (25-shot) | 59.22 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 74.40 |
| TruthfulQA (0-shot) | 61.03 |
| Winogrande (5-shot) | 82.16 |
| GSM8K (5-shot) | 79.53 |
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community |
JaimeWang/bert_classification | JaimeWang | "2024-04-15T01:52:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:52:04Z" | Entry not found |
kaya11213/rf_lol | kaya11213 | "2024-04-15T02:06:32Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:52:16Z" | ---
license: apache-2.0
---
|
oneandahalfcats/19534 | oneandahalfcats | "2024-04-15T01:52:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:52:40Z" | Entry not found |
ArmurAI/Pentest-AI-linux | ArmurAI | "2024-04-15T01:52:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:52:44Z" | Entry not found |
kokohandoko/sentiment-kepolisian | kokohandoko | "2024-04-15T01:53:03Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:53:01Z" | ---
license: apache-2.0
---
|
andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF | andreass123 | "2024-04-15T01:53:21Z" | 0 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:yanolja/EEVE-Korean-2.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T01:53:12Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: yanolja/EEVE-Korean-2.8B-v1.0
model-index:
- name: yanolja/EEVE-Korean-Instruct-2.8B-v1.0
results: []
---
# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-Instruct-2.8B-v1.0`](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eeve-korean-instruct-2.8b-v1.0.Q8_0.gguf -n 128
```
|
elyelysienne/soliris | elyelysienne | "2024-04-15T01:55:37Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-15T01:53:41Z" | ---
license: openrail
---
|
leimu/23 | leimu | "2024-04-15T01:54:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T01:54:13Z" | Entry not found |
BrandonM001/bert-finetuned-ner-accelerate1 | BrandonM001 | "2024-04-15T02:06:50Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-15T01:54:13Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-accelerate1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-accelerate1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0660
- Precision: 0.9330
- Recall: 0.9512
- F1: 0.9420
- Accuracy: 0.9869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0377 | 1.0 | 1756 | 0.0631 | 0.9229 | 0.9392 | 0.9310 | 0.9844 |
| 0.0199 | 2.0 | 3512 | 0.0668 | 0.9343 | 0.9451 | 0.9397 | 0.9858 |
| 0.0095 | 3.0 | 5268 | 0.0660 | 0.9330 | 0.9512 | 0.9420 | 0.9869 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SiriusW/TSC_classification_model | SiriusW | "2024-04-15T02:00:14Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-15T01:54:16Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: TSC_classification_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_classification_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0442
- Precision: 0.8034
- Recall: 0.7769
- F1: 0.7899
- Accuracy: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.0448 | 0.4732 | 0.4380 | 0.4549 | 0.9866 |
| No log | 2.0 | 96 | 0.0389 | 0.5349 | 0.5702 | 0.552 | 0.9902 |
| No log | 3.0 | 144 | 0.0346 | 0.7154 | 0.7273 | 0.7213 | 0.9932 |
| No log | 4.0 | 192 | 0.0355 | 0.7611 | 0.7107 | 0.7350 | 0.9937 |
| No log | 5.0 | 240 | 0.0375 | 0.7603 | 0.7603 | 0.7603 | 0.9939 |
| No log | 6.0 | 288 | 0.0376 | 0.7478 | 0.7107 | 0.7288 | 0.9937 |
| No log | 7.0 | 336 | 0.0414 | 0.7699 | 0.7190 | 0.7436 | 0.9939 |
| No log | 8.0 | 384 | 0.0427 | 0.7778 | 0.7521 | 0.7647 | 0.9942 |
| No log | 9.0 | 432 | 0.0432 | 0.8120 | 0.7851 | 0.7983 | 0.9947 |
| No log | 10.0 | 480 | 0.0438 | 0.7983 | 0.7851 | 0.7917 | 0.9947 |
| 0.0095 | 11.0 | 528 | 0.0441 | 0.8034 | 0.7769 | 0.7899 | 0.9944 |
| 0.0095 | 12.0 | 576 | 0.0442 | 0.8034 | 0.7769 | 0.7899 | 0.9944 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|