modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
ClementineBleuze/scibert_prefix_cont_lr_SEP | ClementineBleuze | "2024-07-02T20:26:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T19:20:40Z" | ---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: scibert_prefix_cont_lr_SEP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_prefix_cont_lr_SEP
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0921
- F1 Weighted: 0.8956
- F1 Samples: 0.9038
- F1 Macro: 0.7729
- F1 Micro: 0.9002
- Accuracy: 0.8775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
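The hyperparameters above map directly onto the standard 🤗 `TrainingArguments`; the following is an illustrative sketch under that assumption (the output directory is hypothetical), not the authors' original training script.
```python
# Illustrative sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scibert_prefix_cont_lr_SEP",  # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```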
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Weighted | F1 Samples | F1 Macro | F1 Micro | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:-----------:|:----------:|:--------:|:--------:|:--------:|
| 0.2307 | 0.3381 | 500 | 0.1728 | 0.7448 | 0.7167 | 0.5879 | 0.7609 | 0.6949 |
| 0.1493 | 0.6761 | 1000 | 0.1271 | 0.8106 | 0.8056 | 0.6216 | 0.8264 | 0.7855 |
| 0.1288 | 1.0142 | 1500 | 0.1158 | 0.8380 | 0.8425 | 0.6843 | 0.8482 | 0.8187 |
| 0.101 | 1.3523 | 2000 | 0.1011 | 0.8626 | 0.8607 | 0.7143 | 0.8690 | 0.8369 |
| 0.0955 | 1.6903 | 2500 | 0.1058 | 0.8573 | 0.8624 | 0.7100 | 0.8651 | 0.8342 |
| 0.0913 | 2.0284 | 3000 | 0.0956 | 0.8735 | 0.8801 | 0.7224 | 0.8804 | 0.8505 |
| 0.0647 | 2.3665 | 3500 | 0.1066 | 0.8613 | 0.8708 | 0.7012 | 0.8683 | 0.8430 |
| 0.066 | 2.7045 | 4000 | 0.0938 | 0.8796 | 0.8877 | 0.7381 | 0.8860 | 0.8599 |
| 0.0617 | 3.0426 | 4500 | 0.0844 | 0.8922 | 0.8993 | 0.7559 | 0.8975 | 0.8742 |
| 0.0422 | 3.3807 | 5000 | 0.0921 | 0.8956 | 0.9038 | 0.7729 | 0.9002 | 0.8775 |
| 0.0422 | 3.7187 | 5500 | 0.0959 | 0.8900 | 0.8979 | 0.7744 | 0.8928 | 0.8674 |
| 0.0439 | 4.0568 | 6000 | 0.0951 | 0.8934 | 0.8983 | 0.8196 | 0.8938 | 0.8701 |
| 0.0297 | 4.3949 | 6500 | 0.0997 | 0.8922 | 0.8981 | 0.7957 | 0.8944 | 0.8714 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
iamerichedlin/DreamBooth | iamerichedlin | "2024-07-02T19:21:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:21:01Z" | Entry not found |
Ctc8/AutograderV2 | Ctc8 | "2024-07-02T19:21:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:21:10Z" | Entry not found |
neural-commons/upscaling-opt-logits-v1 | neural-commons | "2024-07-02T19:21:18Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:21:17Z" | Entry not found |
starnet/08-star21-07-02 | starnet | "2024-07-02T19:28:59Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T19:21:24Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Litzy0619/MIS0630T6 | Litzy0619 | "2024-07-02T21:15:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:21:28Z" | Entry not found |
Yachna398/code-llama-7b-text-to-sql | Yachna398 | "2024-07-02T19:22:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:22:39Z" | Entry not found |
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10-AWQ | XavierSpycy | "2024-07-02T19:28:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.13372",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-07-02T19:22:41Z" | ---
license: apache-2.0
---
# Meta-Llama-3-8B-Instruct-zh-10k: A Llama🦙 which speaks Chinese / 一只说中文的羊驼🦙
## Model Details / 模型细节
This model, <u>`Meta-Llama-3-8B-Instruct-zh-10k`</u>, was fine-tuned from the original [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) due to its underperformance in Chinese. Using LoRA within the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) toolkit, the model was adapted to handle Chinese better through three epochs on three corpora: `alpaca_zh`, `alpaca_gpt4_zh`, and `oaast_sft_zh`, amounting to approximately 10,000 examples, which is reflected in the `10k` in its name.
由于原模型[Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)在中文上表现欠佳,于是该模型 <u>`Meta-Llama-3-8B-Instruct-zh-10k`</u> 微调自此。在[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)工具下,利用LoRa 技术,通过`alpaca_zh`、`alpaca_gpt4_zh`和`oaast_sft_zh`三个语料库上、经过三个训练轮次,我们将该模型调整得更好地掌握了中文。三个语料库共计约10,000个样本,这也是其名字中的 `10k` 的由来。
For efficient inference, the model was converted to the gguf format using [llama.cpp](https://github.com/ggerganov/llama.cpp) and underwent quantization, resulting in a compact model size of about 3.18 GB, suitable for distribution across various devices.
为了高效的推理,使用 [llama.cpp](https://github.com/ggerganov/llama.cpp),我们将该模型转化为了gguf格式并量化,从而得到了一个压缩到约 3.18 GB 大小的模型,适合分发在各类设备上。
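As a rough illustration of serving such a gguf file locally, the `llama-cpp-python` bindings can load it directly. This is a sketch only; the quantized file name below is an assumption, so check the GGUF repository for the actual name.
```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# The gguf file name is assumed, not confirmed by this card.
llm = Llama(model_path="Meta-Llama-3-8B-Instruct-zh-10k.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "你是一个乐于助人的助手。"},
        {"role": "user", "content": "你好,你是谁?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```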
### LoRA Hardware / LoRA 硬件
- RTX 4090D x 1
> [!NOTE]
> The complete fine-tuning process took approximately 12 hours. / 完整微调过程花费约12小时。
Additional fine-tuning configurations are available at [Hands-On LoRa](https://github.com/XavierSpycy/hands-on-lora) or [Llama3Ops](https://github.com/XavierSpycy/llama-ops).
更多微调配置可以在我的个人仓库 [Hands-On LoRa](https://github.com/XavierSpycy/hands-on-lora) 或 [Llama3Ops](https://github.com/XavierSpycy/llama-ops) 获得。
### Other Models / 其他模型
- <u>LLaMA-Factory</u>
- [Meta-Llama-3-8B-Instruct-zh-10k](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k)
- <u>llama.cpp</u>
- [Meta-Llama-3-8B-Instruct-zh-10k-GGUF](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GGUF)
- <u>AutoGPTQ</u>
- [Meta-Llama-3-8B-Instruct-zh-10k-GPTQ](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GPTQ)
### Model Developer / 模型开发者
- **Pretraining**: Meta
- **Fine-tuning**: [XavierSpycy @ GitHub ](https://github.com/XavierSpycy) | [XavierSpycy @ 🤗](https://huggingface.co/XavierSpycy)
- **预训练**: Meta
- **微调**: [XavierSpycy @ GitHub](https://github.com/XavierSpycy) | [XavierSpycy @ 🤗 ](https://huggingface.co/XavierSpycy)
### Usage / 用法
This model can be utilized like the original <u>Meta-Llama3</u> but offers enhanced performance in Chinese.
我们能够像原版的<u>Meta-Llama3</u>一样使用该模型,而它提供了提升后的中文能力。
```python
# !pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "你好,你是谁?"
messages = [
{"role": "system", "content": "你是一个乐于助人的助手。"},
{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
# 我是一个人工智能助手,旨在帮助用户解决问题和完成任务。
# 我是一个虚拟的人工智能助手,能够通过自然语言处理技术理解用户的需求并为用户提供帮助。
```
Further details about the deployment are available in the GitHub repository [Llama3Ops: From LoRa to Deployment with Llama3](https://github.com/XavierSpycy/llama-ops).
更多关于部署的细节可以在我的个人仓库 [Llama3Ops: From LoRa to Deployment with Llama3](https://github.com/XavierSpycy/llama-ops) 获得。
## Ethical Considerations, Safety & Risks / 伦理考量、安全性和危险
Please refer to [Meta Llama 3's Ethical Considerations](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#ethical-considerations-and-limitations) for more information. Key points include bias monitoring, responsible usage guidelines, and transparency in model limitations.
请参考 [Meta Llama 3's Ethical Considerations](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#ethical-considerations-and-limitations),以获取更多细节。关键点包括偏见监控、负责任的使用指南和模型限制的透明度。
## Limitations / 局限性
- The model's abilities have not been comprehensively tested.
- While it performs smoothly in Chinese conversations, further benchmarks are required to evaluate its full capabilities. The quality and quantity of the Chinese corpora used may also limit model outputs.
- Additionally, catastrophic forgetting in the fine-tuned model has not been evaluated.
- 该模型的全面的能力尚未全部测试。
- 尽管它在中文对话中表现流畅,但需要更多的测评以评估其完整的能力。中文语料库的质量和数量可能都会对模型输出有所制约。
- 另外,微调模型中的灾难性遗忘尚未评估。
## Acknowledgements / 致谢
We thank Meta for their open-source contributions, which have greatly benefited the developer community, and acknowledge the collaborative efforts of developers in enhancing this community.
我们感谢 Meta 的开源贡献,这极大地帮助了开发者社区,同时,也感谢致力于提升社区的开发者们的努力。
## References / 参考资料
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}}
@inproceedings{zheng2024llamafactory,
title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
address={Bangkok, Thailand},
publisher={Association for Computational Linguistics},
year={2024},
url={http://arxiv.org/abs/2403.13372}}
``` |
starnet/21-star-07-02-01 | starnet | "2024-07-02T19:26:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:22:42Z" | Entry not found |
sampurnayanda/my-pet-cat-sam | sampurnayanda | "2024-07-02T19:28:30Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-07-02T19:22:44Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-sam Dreambooth model trained by sampurnayanda following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2241025006
Sample pictures of this concept:
![0](https://huggingface.co/sampurnayanda/my-pet-cat-sam/resolve/main/sample_images/sam_(5).jpg)
|
TroyDoesAI/Mini | TroyDoesAI | "2024-07-02T22:36:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:23:02Z" | ---
license: cc-by-nd-4.0
---
|
maxseats/SungBeom-whisper-small-ko-set20 | maxseats | "2024-07-02T19:23:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-recognition",
"ko",
"dataset:maxseats/aihub-464-preprocessed-680GB-set-20",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T19:23:37Z" |
---
language: ko
tags:
- whisper
- speech-recognition
datasets:
- maxseats/aihub-464-preprocessed-680GB-set-20
metrics:
- cer
---
# Model Name: maxseats/SungBeom-whisper-small-ko-set20
# Description
- Fine-tuning dataset: maxseats/aihub-464-preprocessed-680GB-set-20
- Part of an ongoing effort to train on AI Hub's domain-specific meeting speech dataset.
- This model was produced by loading the checkpoint fine-tuned on the set_0~19 data (200 GB of the 680 GB total) and training it further on the set_20 data (10 GB).
- Link: https://huggingface.co/datasets/maxseats/aihub-464-preprocessed-680GB-set-20
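A minimal sketch of the continual fine-tuning setup described above; the previous-stage checkpoint name is an assumption.
```python
# Illustrative sketch: load the set_0~19 checkpoint and continue training on set_20.
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

dataset = load_dataset("maxseats/aihub-464-preprocessed-680GB-set-20")
checkpoint = "maxseats/SungBeom-whisper-small-ko-set19"  # assumed name of the set_0~19 model
processor = WhisperProcessor.from_pretrained(checkpoint)
model = WhisperForConditionalGeneration.from_pretrained(checkpoint)
# ...then continue training on `dataset` with Seq2SeqTrainer, tracking CER.
```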
|
SonicInGug/Leedle-Leedle-Leedle-Lee | SonicInGug | "2024-07-02T19:25:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:24:53Z" | Entry not found |
juanpablomesa/all-mpnet-base-v2-bioasq-1epoch-batch32-100steps | juanpablomesa | "2024-07-02T19:25:18Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-07-02T19:25:02Z" | ---
base_model: sentence-transformers/all-mpnet-base-v2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiome?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: sentence transformers/all mpnet base v2
type: sentence-transformers/all-mpnet-base-v2
metrics:
- type: cosine_accuracy@1
value: 0.8486562942008486
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9363507779349364
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9476661951909476
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.958981612446959
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8486562942008486
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31211692597831214
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1895332390381895
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09589816124469587
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8486562942008486
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9363507779349364
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9476661951909476
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.958981612446959
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9104527449456198
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.894245751105723
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8956968198991456
name: Cosine Map@100
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/all-mpnet-base-v2-bioasq-1epoch-batch32-100steps")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `sentence-transformers/all-mpnet-base-v2`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8487 |
| cosine_accuracy@3 | 0.9364 |
| cosine_accuracy@5 | 0.9477 |
| cosine_accuracy@10 | 0.959 |
| cosine_precision@1 | 0.8487 |
| cosine_precision@3 | 0.3121 |
| cosine_precision@5 | 0.1895 |
| cosine_precision@10 | 0.0959 |
| cosine_recall@1 | 0.8487 |
| cosine_recall@3 | 0.9364 |
| cosine_recall@5 | 0.9477 |
| cosine_recall@10 | 0.959 |
| cosine_ndcg@10 | 0.9105 |
| cosine_mrr@10 | 0.8942 |
| **cosine_map@100** | **0.8957** |
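A minimal sketch of how such scores are produced with the evaluator named above; the queries, corpus, and relevance judgments below are toy stand-ins for the actual BioASQ evaluation split.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("juanpablomesa/all-mpnet-base-v2-bioasq-1epoch-batch32-100steps")
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "Can pets affect infant microbiome?"},  # query id -> query text
    corpus={"d1": "Exposure to household furry pets influences the gut microbiota of infants."},
    relevant_docs={"q1": {"d1"}},  # query id -> ids of relevant corpus entries
    name="sentence-transformers/all-mpnet-base-v2",
)
print(evaluator(model))  # accuracy/precision/recall/NDCG/MRR/MAP at the cutoffs shown above
```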
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.14 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly failed to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
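In code, this corresponds roughly to the following sketch, where `util.cos_sim` is the cosine similarity named by `"similarity_fct"` and each (positive, anchor) pair treats the other in-batch passages as negatives:
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # base model being fine-tuned
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```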
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
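These non-default values correspond roughly to the following `SentenceTransformerTrainingArguments`; a sketch with a hypothetical output directory, not the exact training script.
```python
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="all-mpnet-base-v2-bioasq",  # hypothetical
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)
```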
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sentence-transformers/all-mpnet-base-v2_cosine_map@100 |
|:------:|:----:|:-------------:|:------------------------------------------------------:|
| 0 | 0 | - | 0.8367 |
| 0.7937 | 100 | 0.1153 | 0.8957 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
ItchyChin/tamil-llama-7b-20240702 | ItchyChin | "2024-07-02T19:28:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:25:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wdli/llama3-instruct_depression_2 | wdli | "2024-07-02T19:28:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:25:42Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** wdli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
The model was trained on the reddit_depression_dataset for a single epoch.
The training data is formatted as a dialog, but the user turn is left empty (commented out below).
For example:
```python
def formatting_prompts_func(examples):
    # Render each raw post as the assistant turn of a dialog using the
    # tokenizer's chat template; the user turn is intentionally left out.
    texts_dataset = examples['text']
    formatted_prompts = []
    for text in texts_dataset:
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            # {"role": "user", "content": ""},
            {"role": "assistant", "content": text}
        ]
        formatted_prompt = tokenizer.apply_chat_template(dialog, tokenize=False, add_generation_prompt=False)
        formatted_prompts.append(formatted_prompt)
    return {"text": formatted_prompts}
``` |
Litzy0619/blimp-anaphor_gender_agreement_0.003_32_5_6 | Litzy0619 | "2024-07-02T19:25:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:25:50Z" | Entry not found |
CarmenRe/esm2_t6_8M_UR50D-finetuned-localization | CarmenRe | "2024-07-02T19:47:35Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T19:25:58Z" | Entry not found |
alenatz/bert-biocause-trainer-oversample | alenatz | "2024-07-02T19:38:32Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T19:26:01Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: bert-biocause-trainer-oversample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-biocause-trainer-oversample
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4503
- Accuracy: 0.8199
- F1: 0.6028
- Recall: 0.5346
- Precision: 0.6911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.5982 | 0.07 | 25 | 0.5728 | 0.7637 | 0.1503 | 0.0818 | 0.9286 |
| 0.6258 | 0.14 | 50 | 0.6959 | 0.5482 | 0.5027 | 0.8931 | 0.3498 |
| 0.5442 | 0.22 | 75 | 0.5258 | 0.7749 | 0.5270 | 0.4906 | 0.5693 |
| 0.5752 | 0.29 | 100 | 0.4511 | 0.7878 | 0.4590 | 0.3522 | 0.6588 |
| 0.5428 | 0.36 | 125 | 0.4674 | 0.8071 | 0.5238 | 0.4151 | 0.7097 |
| 0.531 | 0.43 | 150 | 0.5982 | 0.6511 | 0.5562 | 0.8553 | 0.4121 |
| 0.4607 | 0.5 | 175 | 0.4654 | 0.8151 | 0.5344 | 0.4151 | 0.75 |
| 0.4932 | 0.58 | 200 | 0.4532 | 0.8135 | 0.5167 | 0.3899 | 0.7654 |
| 0.393 | 0.65 | 225 | 0.4812 | 0.7797 | 0.6226 | 0.7107 | 0.5539 |
| 0.427 | 0.72 | 250 | 0.4590 | 0.8151 | 0.6440 | 0.6541 | 0.6341 |
| 0.4661 | 0.79 | 275 | 0.4516 | 0.8312 | 0.6688 | 0.6667 | 0.6709 |
| 0.3976 | 0.86 | 300 | 0.4505 | 0.8232 | 0.6207 | 0.5660 | 0.6870 |
| 0.4464 | 0.94 | 325 | 0.4450 | 0.8199 | 0.6028 | 0.5346 | 0.6911 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.1
- Datasets 2.19.1
- Tokenizers 0.15.1
|
RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf | RichardErkhov | "2024-07-02T19:50:48Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T19:26:32Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyUltra-4x1.1B-Base-Alpha - GGUF
- Model creator: https://huggingface.co/indischepartij/
- Original model: https://huggingface.co/indischepartij/TinyUltra-4x1.1B-Base-Alpha/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyUltra-4x1.1B-Base-Alpha.Q2_K.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q2_K.gguf) | Q2_K | 1.17GB |
| [TinyUltra-4x1.1B-Base-Alpha.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.IQ3_XS.gguf) | IQ3_XS | 1.31GB |
| [TinyUltra-4x1.1B-Base-Alpha.IQ3_S.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.IQ3_S.gguf) | IQ3_S | 1.38GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q3_K_S.gguf) | Q3_K_S | 1.38GB |
| [TinyUltra-4x1.1B-Base-Alpha.IQ3_M.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.IQ3_M.gguf) | IQ3_M | 1.4GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q3_K.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q3_K.gguf) | Q3_K | 1.52GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q3_K_M.gguf) | Q3_K_M | 1.52GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q3_K_L.gguf) | Q3_K_L | 1.65GB |
| [TinyUltra-4x1.1B-Base-Alpha.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q4_0.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q4_0.gguf) | Q4_0 | 1.79GB |
| [TinyUltra-4x1.1B-Base-Alpha.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.IQ4_NL.gguf) | IQ4_NL | 1.8GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q4_K.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q4_K.gguf) | Q4_K | 1.9GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q4_K_M.gguf) | Q4_K_M | 1.9GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q4_1.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q4_1.gguf) | Q4_1 | 1.98GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q5_0.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q5_0.gguf) | Q5_0 | 2.18GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q5_K_S.gguf) | Q5_K_S | 2.18GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q5_K.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q5_K.gguf) | Q5_K | 2.23GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q5_K_M.gguf) | Q5_K_M | 2.23GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q5_1.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q5_1.gguf) | Q5_1 | 2.37GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q6_K.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q6_K.gguf) | Q6_K | 2.59GB |
| [TinyUltra-4x1.1B-Base-Alpha.Q8_0.gguf](https://huggingface.co/RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf/blob/main/TinyUltra-4x1.1B-Base-Alpha.Q8_0.gguf) | Q8_0 | 3.35GB |
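To fetch one of the quantized files above programmatically, the `huggingface_hub` client can be used; a minimal sketch:
```python
from huggingface_hub import hf_hub_download

# Download a single quantized file from the table above into the local cache.
path = hf_hub_download(
    repo_id="RichardErkhov/indischepartij_-_TinyUltra-4x1.1B-Base-Alpha-gguf",
    filename="TinyUltra-4x1.1B-Base-Alpha.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded gguf
```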
Original model description:
---
license: cc-by-nc-4.0
tags:
- moe
- merge
- mergekit
model-index:
- name: TinyUltra-4x1.1B-Base-Alpha
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 61.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.59
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
name: Open LLM Leaderboard
widget:
- example_title: Pirate!
messages:
- role: system
content: You are a pirate chatbot who always responds with Arr!
- role: user
content: "There's a llama on my lawn, how can I get rid of him?"
output:
text: >-
Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare
sight, but I've got a plan that might help ye get rid of 'im. Ye'll need
to gather some carrots and hay, and then lure the llama away with the
promise of a tasty treat. Once he's gone, ye can clean up yer lawn and
enjoy the peace and quiet once again. But beware, me hearty, for there
may be more llamas where that one came from! Arr!
---
![image/jpeg](https://i.imgur.com/rx3ckCc.jpeg)
# TinyUltra-4x1.1B-Base-Alpha
TinyUltra-4x1.1B-Base-Alpha is a Mixture of Experts (MoE) made with the following models using MergeKit:
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
* [cognitivecomputations/TinyDolphin-2.8.1-1.1b](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b)
* [Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test](https://huggingface.co/Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test)
# Modelfile/Prompt format
```markdown
SYSTEM You are a TinyUltra, helpful and lovely AI assistant.
TEMPLATE <|system|> {{ .System }}</s> <|user|> {{ .Prompt }}</s> <|assistant|>
PARAMETER stop <|system|>
PARAMETER stop <|user|>
PARAMETER stop <|assistant|>
PARAMETER stop </s>
```
## 🧩 Configuration
```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
gate_mode: hidden
dtype: float16
experts:
- source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
positive_prompts:
- "Help me debug this code."
- "Rewrite this function in Python."
- "Optimize this C# script."
- "Implement this feature using JavaScript."
- "Convert this HTML structure into a more efficient design."
- "Assist me with writing a program that"
- source_model: vihangd/DopeyTinyLlama-1.1B-v1
positive_prompts:
- "How do you"
- "Explain the concept of"
- "Give an overview of"
- "Compare and contrast between"
- "Provide information about"
- "Help me understand"
- "Summarize"
- "Make a recommendation on"
- "Answer this question"
- source_model: cognitivecomputations/TinyDolphin-2.8.1-1.1b
positive_prompts:
- "Write a program to solve this problem"
- "Modify this function to improve its performance"
- "Refactor this code to enhance readability"
- "Create a custom function for this specific use case"
- "Optimize this algorithm to reduce computational complexity"
- "Implement this feature by extending existing codebase"
- "Integrate this API call into the application"
- "Help me troubleshoot and fix this bug"
- "Review and test this code snippet before deployment"
- "Analyze this error log to identify potential issues"
- "Generate a set of unit tests for this module"
- "Evaluate different approaches to solving this problem"
- "Do a web search for"
- "Use the plugin to"
- source_model: Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
positive_prompts:
- "add these numbers"
- "whats 2+2"
- "subtraction"
- "division"
- "multiplication"
- "addition"
- "I need help with a math problem"
- "Solve for x"
- "Add these two numbers together: 4 + 3 = 7"
- "Multiply 5 by 6: 5 * 6 = 30"
- "Divide 8 by 2: 8 / 2 = 4"
- "Find the remainder when 9 is divided by 3: 9 % 3 = 0"
- "Calculate the square root of 16: sqrt(16) = 4"
- "Simplify the expression (a+b)/(c-d): (a+b)/(c-d)"
- "Factor out the common factor of 2 from 4x + 6y: 2(2x + 3y)"
- "Solve for x in the equation 3x - 7 = 2x + 5: x = 12"
- "Graph the line y = 2x + 3"
- "Approximate pi to three decimal places: 3.142"
- "Find the derivative of f(x) = sin(x): f'(x) = cos(x)"
- "Integrate g(x) = x^2 over the interval [0, 1]: g(1) - g(0) = 1/3"
- "Calculate the determinant of the matrix A = [[2, 3], [4, 5]]: det(A) = 2*5 - 3*4 = -2"
- "Solve the system of equations Ax = b: x = [-5, 10]"
- "Calculate the sum of the first n natural numbers using the formula Sn = n*(n+1)/2: sum(n=1 to 5) = 15"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gmonsoon/TinyUltra-4x1.1B-Base-Alpha"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
GGUF: https://huggingface.co/indischepartij/TinyUltra-4x1.1B-Base-Alpha-GGUF
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_indischepartij__TinyUltra-4x1.1B-Base-Alpha)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.94|
|AI2 Reasoning Challenge (25-Shot)|34.90|
|HellaSwag (10-Shot) |61.42|
|MMLU (5-Shot) |25.42|
|TruthfulQA (0-shot) |37.59|
|Winogrande (5-shot) |65.75|
|GSM8k (5-shot) | 2.58|
|
Magpie-Align/Llama-3-8B-Instruct-Mix-MagPO-8vs70 | Magpie-Align | "2024-07-03T00:33:21Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:Magpie-Align/Magpie-PO-8Bvs70B-73K",
"base_model:Magpie-Align/Llama-3-8B-Instruct-Magpie-Mix-600K",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:26:36Z" | Invalid username or password. |
CarmenRe/esm2_t6_8M_UR50D-finetuned-secondary-structure | CarmenRe | "2024-07-02T19:27:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:27:02Z" | Entry not found |
whizzzzkid/whizzzzkid_437_2 | whizzzzkid | "2024-07-02T19:27:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:27:33Z" | Entry not found |
Ahmad0067/llama-3-8b-Instruct-NO-4bit-Prescription_Specialist_Synth_data_Phase_1_and_2 | Ahmad0067 | "2024-07-02T19:27:59Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T19:27:52Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/llama-3-8b-Instruct
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
gaelafoxus/ModelsPonyXL | gaelafoxus | "2024-07-02T21:30:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:29:06Z" | Entry not found |
RichardErkhov/PipableAI_-_pip-SQL-1B-gguf | RichardErkhov | "2024-07-02T19:40:57Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T19:29:14Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pip-SQL-1B - GGUF
- Model creator: https://huggingface.co/PipableAI/
- Original model: https://huggingface.co/PipableAI/pip-SQL-1B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pip-SQL-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q2_K.gguf) | Q2_K | 0.52GB |
| [pip-SQL-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [pip-SQL-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [pip-SQL-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [pip-SQL-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [pip-SQL-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q3_K.gguf) | Q3_K | 0.66GB |
| [pip-SQL-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [pip-SQL-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [pip-SQL-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [pip-SQL-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q4_0.gguf) | Q4_0 | 0.72GB |
| [pip-SQL-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [pip-SQL-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [pip-SQL-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q4_K.gguf) | Q4_K | 0.81GB |
| [pip-SQL-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [pip-SQL-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q4_1.gguf) | Q4_1 | 0.8GB |
| [pip-SQL-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q5_0.gguf) | Q5_0 | 0.87GB |
| [pip-SQL-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [pip-SQL-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q5_K.gguf) | Q5_K | 0.93GB |
| [pip-SQL-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [pip-SQL-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q5_1.gguf) | Q5_1 | 0.95GB |
| [pip-SQL-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q6_K.gguf) | Q6_K | 1.09GB |
| [pip-SQL-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-SQL-1B-gguf/blob/main/pip-SQL-1B.Q8_0.gguf) | Q8_0 | 1.33GB |
Original model description:
---
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
widget:
- text: "<schema>CREATE TABLE radio(age VARCHAR, radio_id VARCHAR, frequency VARCHAR, wavelength VARCHAR); CREATE TABLE radio_faults(radio_id VARCHAR, fault_description VARCHAR)</schema><question>Get the radio id and defect descriptions of radios that have wavelength greater than 30 ?</question><sql>"
example_title: "example1"
- text: "<schema>CREATE TABLE system(JobID: String,GID: String, UID: String, Start:Time(yyyy/mm/dd), End: Time,ElapsedRaw: Time, CPUTimeRAW: Time,NCPUS: Number,NNodes: Number, NodeList: List, State:String, Timelimit: Time);</schema><question>Get UID and job id for Jobs that started on Jan 20 , 2023</question><sql>"
example_title: "example2"
- text: "<schema>CREATE TABLE department (Department_ID number, Name text, Creation text, Ranking number, Budget_in_Billions number, Num_Employees number) which has Department_ID as primary key abd CREATE TABLE head (head_ID number, name text, born_state text, age number) which has head_ID as primary key and CREATE TABLE management (department_ID number, head_ID number, temporary_acting text) which has department_ID as primary key</schema><question>"
example_title: "example3"
tags:
- code
- sql
- text2sql
- instruction_tuned
- jax
- pytorch
- 1b
- expert
datasets:
- PipableAI/spider-bird
---
# Pipable’s pipSQL
Please refer to https://huggingface.co/PipableAI/pipSQL-1.3b for our state-of-the-art model, which gives better performance than ChatGPT and Claude on SQL tasks across many benchmarks.
Pipable’s pipSQL is a model distilled from Llama 1B to generate SQL queries given a prompt and a schema.
We used a unique pipeline in which the model alternated between two objectives:
1. Maximizing the log probability of all tokens in the sequence (including the prompt tokens).
2. Minimizing the difference between the true value and the predicted maximum value of the output tokens, i.e. the generated tokens for the SQL-query slice of the sequence.
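The exact training code is not published in this card, but a rough sketch of how such an alternating objective could look is shown below, assuming a standard causal-LM setup. The `sql_mask` field and the formulation of the second loss are guesses for illustration, not Pipable's actual code.
```python
import torch.nn.functional as F

def training_step(model, batch, step):
    outputs = model(input_ids=batch["input_ids"])
    logits = outputs.logits[:, :-1, :]        # predict token t+1 from token t
    targets = batch["input_ids"][:, 1:]
    if step % 2 == 0:
        # Objective 1: maximize log prob of ALL tokens (prompt + SQL).
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               targets.reshape(-1))
    else:
        # Objective 2 (guessed formulation): on the SQL slice only, shrink the
        # gap between the predicted maximum probability and the true token's.
        sql_mask = batch["sql_mask"][:, 1:].bool()    # hypothetical SQL-token mask
        probs = logits.softmax(-1)[sql_mask]          # (n_sql_tokens, vocab)
        true_p = probs.gather(-1, targets[sql_mask].unsqueeze(-1)).squeeze(-1)
        loss = (probs.max(-1).values - true_p).mean()
    return loss
```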
## License
The model's new weights, along with all other assets involved, are open-sourced under the MIT license.
## How to Use
```python
text = """<schema>{schema}</schema>
<question>{question}</question>
<sql>"""
```
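For example, the placeholders can be filled like this before tokenization (the schema and question below are invented purely for illustration):
```python
schema = "CREATE TABLE employees (id INT, name TEXT, salary INT)"
question = "List the names of employees earning more than 50000."
text = f"""<schema>{schema}</schema>
<question>{question}</question>
<sql>"""
```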
pytorch
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("PipableAI/pipSQL1b")
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pipSQL1b")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split('<sql>')[1].split('</sql>')[0])
```
flax
```python
from transformers import FlaxAutoModelForCausalLM, AutoTokenizer
model = FlaxAutoModelForCausalLM.from_pretrained("PipableAI/pipSQL1b" , from_pt=True)
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pipSQL1b")
```
## The PipableAI team
Avi Kothari, Pratham Gupta, Ritvik Aryan Kalra, Rohan Bhatial, Soham Acharya
|
miradalabs/chat-gu-1 | miradalabs | "2024-07-02T19:29:31Z" | 0 | 1 | null | [
"license:other",
"region:us"
] | null | "2024-07-02T19:29:31Z" | ---
license: other
license_name: mirada-ai-open
license_link: LICENSE
---
|
alinehsn/llama-3-8b-chat-MS | alinehsn | "2024-07-02T19:30:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:29:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
starnet/09-star21-07-02 | starnet | "2024-07-02T19:37:22Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T19:29:59Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
panaschristou/llama2-grid-reconfiguration-1epoch-RP-AP | panaschristou | "2024-07-02T19:34:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:30:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy0619/blimp-anaphor_number_agreement_0.003_32_5_6 | Litzy0619 | "2024-07-02T19:31:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:31:37Z" | Entry not found |
Naytzav/zapatos | Naytzav | "2024-07-02T19:33:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:33:31Z" | Entry not found |
TheRealheavy/MaxSchreck | TheRealheavy | "2024-07-02T19:34:56Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-07-02T19:33:33Z" | ---
license: openrail
---
|
talha8665/my-dreambooth-project | talha8665 | "2024-07-02T19:33:39Z" | 0 | 0 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-07-02T19:33:33Z" |
---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of talha
license: openrail++
---
# AutoTrain SDXL LoRA DreamBooth - talha8665/my-dreambooth-project
<Gallery />
## Model description
These are talha8665/my-dreambooth-project LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use `photo of talha` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](talha8665/my-dreambooth-project/tree/main) them in the Files & versions tab.
|
juanpablomesa/all-mpnet-base-v2-bioasq-1epoc | juanpablomesa | "2024-07-02T19:35:18Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-07-02T19:35:06Z" | ---
base_model: sentence-transformers/all-mpnet-base-v2
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
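For clarity, the `Pooling` module above mean-pools the token embeddings (padding excluded) before normalization. Below is a minimal sketch of that step — a paraphrase of the standard sentence-transformers mean pooling, not its exact source:
```python
import torch

def mean_pool(token_embeddings, attention_mask):
    # Average token embeddings over the sequence, ignoring padding positions
    # (this is what pooling_mode_mean_tokens=True does).
    mask = attention_mask.unsqueeze(-1).float()   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts                        # (batch, hidden)
```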
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/all-mpnet-base-v2-bioasq-1epoc")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.14 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
| <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
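For readers unfamiliar with this loss, here is a minimal sketch of the idea — a simplification, not the exact sentence-transformers implementation. Each anchor is paired with its own positive, every other positive in the batch acts as an in-batch negative, and `scale=20.0` with cosine similarity matches the parameters above:
```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity matrix: row i holds sim(anchor_i, positive_j) for all j.
    scores = F.cosine_similarity(anchor_emb.unsqueeze(1),
                                 positive_emb.unsqueeze(0), dim=-1) * scale
    # The matching positive sits on the diagonal, so the target for row i is i.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```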
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
tahaman/DamagedCarModelNew | tahaman | "2024-07-02T20:11:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-07-02T19:35:53Z" | ---
license: apache-2.0
---
|
manbeast3b/ZZZZZZZZdriver141 | manbeast3b | "2024-07-02T19:39:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:37:00Z" | Entry not found |
starnet/10-star21-07-02 | starnet | "2024-07-02T19:45:56Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T19:38:23Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DenisDziganchuk/HeliumBot | DenisDziganchuk | "2024-07-02T19:38:53Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T19:38:53Z" | ---
license: apache-2.0
---
|
Anupamamit14/AItest | Anupamamit14 | "2024-07-02T20:12:54Z" | 0 | 0 | espnet | [
"espnet",
"finance",
"automatic-speech-recognition",
"hi",
"en",
"sa",
"dataset:HuggingFaceFW/fineweb",
"dataset:HuggingFaceFW/fineweb-edu",
"license:llama3",
"region:us"
] | automatic-speech-recognition | "2024-07-02T19:38:54Z" | ---
license: llama3
datasets:
- HuggingFaceFW/fineweb
- HuggingFaceFW/fineweb-edu
language:
- hi
- en
- sa
metrics:
- accuracy
library_name: espnet
pipeline_tag: automatic-speech-recognition
tags:
- finance
--- |
bobbyw/v3b_summarizer | bobbyw | "2024-07-02T19:49:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:40:24Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** bobbyw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bobbyw/v3_summarizer | bobbyw | "2024-07-02T19:46:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:40:29Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** bobbyw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf | RichardErkhov | "2024-07-02T19:50:02Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T19:42:34Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
smol_llama-4x220M-MoE - GGUF
- Model creator: https://huggingface.co/Isotonic/
- Original model: https://huggingface.co/Isotonic/smol_llama-4x220M-MoE/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol_llama-4x220M-MoE.Q2_K.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q2_K.gguf) | Q2_K | 0.22GB |
| [smol_llama-4x220M-MoE.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.IQ3_XS.gguf) | IQ3_XS | 0.24GB |
| [smol_llama-4x220M-MoE.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.IQ3_S.gguf) | IQ3_S | 0.25GB |
| [smol_llama-4x220M-MoE.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q3_K_S.gguf) | Q3_K_S | 0.25GB |
| [smol_llama-4x220M-MoE.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.IQ3_M.gguf) | IQ3_M | 0.25GB |
| [smol_llama-4x220M-MoE.Q3_K.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q3_K.gguf) | Q3_K | 0.27GB |
| [smol_llama-4x220M-MoE.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q3_K_M.gguf) | Q3_K_M | 0.27GB |
| [smol_llama-4x220M-MoE.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q3_K_L.gguf) | Q3_K_L | 0.29GB |
| [smol_llama-4x220M-MoE.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.IQ4_XS.gguf) | IQ4_XS | 0.31GB |
| [smol_llama-4x220M-MoE.Q4_0.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q4_0.gguf) | Q4_0 | 0.32GB |
| [smol_llama-4x220M-MoE.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.IQ4_NL.gguf) | IQ4_NL | 0.32GB |
| [smol_llama-4x220M-MoE.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q4_K_S.gguf) | Q4_K_S | 0.32GB |
| [smol_llama-4x220M-MoE.Q4_K.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q4_K.gguf) | Q4_K | 0.34GB |
| [smol_llama-4x220M-MoE.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q4_K_M.gguf) | Q4_K_M | 0.34GB |
| [smol_llama-4x220M-MoE.Q4_1.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q4_1.gguf) | Q4_1 | 0.35GB |
| [smol_llama-4x220M-MoE.Q5_0.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q5_0.gguf) | Q5_0 | 0.39GB |
| [smol_llama-4x220M-MoE.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q5_K_S.gguf) | Q5_K_S | 0.39GB |
| [smol_llama-4x220M-MoE.Q5_K.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q5_K.gguf) | Q5_K | 0.4GB |
| [smol_llama-4x220M-MoE.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q5_K_M.gguf) | Q5_K_M | 0.4GB |
| [smol_llama-4x220M-MoE.Q5_1.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q5_1.gguf) | Q5_1 | 0.42GB |
| [smol_llama-4x220M-MoE.Q6_K.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q6_K.gguf) | Q6_K | 0.46GB |
| [smol_llama-4x220M-MoE.Q8_0.gguf](https://huggingface.co/RichardErkhov/Isotonic_-_smol_llama-4x220M-MoE-gguf/blob/main/smol_llama-4x220M-MoE.Q8_0.gguf) | Q8_0 | 0.59GB |
Original model description:
---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- BEE-spoke-data/smol_llama-220M-openhermes
- BEE-spoke-data/beecoder-220M-python
- BEE-spoke-data/zephyr-220m-sft-full
- BEE-spoke-data/zephyr-220m-dpo-full
- text-generation
datasets:
- JeanKaddour/minipile
- pszemraj/simple_wikipedia_LM
- mattymchen/refinedweb-3m
- HuggingFaceH4/ultrachat_200k
- teknium/openhermes
- HuggingFaceH4/ultrafeedback_binarized
- EleutherAI/proof-pile-2
- bigcode/the-stack-smol-xl
pipeline_tag: text-generation
---
🌟 Buying me coffee is a direct way to show support for this project.
<a href="https://www.buymeacoffee.com/isotonic"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
# smol_llama-4x220M-MoE
smol_llama-4x220M-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [BEE-spoke-data/smol_llama-220M-openhermes](https://huggingface.co/BEE-spoke-data/smol_llama-220M-openhermes)
* [BEE-spoke-data/beecoder-220M-python](https://huggingface.co/BEE-spoke-data/beecoder-220M-python)
* [BEE-spoke-data/zephyr-220m-sft-full](https://huggingface.co/BEE-spoke-data/zephyr-220m-sft-full)
* [BEE-spoke-data/zephyr-220m-dpo-full](https://huggingface.co/BEE-spoke-data/zephyr-220m-dpo-full)
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Isotonic/smol_llama-4x220M-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🧩 Configuration
```yaml
base_model: BEE-spoke-data/smol_llama-220M-openhermes
experts:
- source_model: BEE-spoke-data/smol_llama-220M-openhermes
positive_prompts:
- "reasoning"
- "logic"
- "problem-solving"
- "critical thinking"
- "analysis"
- "synthesis"
- "evaluation"
- "decision-making"
- "judgment"
- "insight"
- source_model: BEE-spoke-data/beecoder-220M-python
positive_prompts:
- "program"
- "software"
- "develop"
- "build"
- "create"
- "design"
- "implement"
- "debug"
- "test"
- "code"
- "python"
- "programming"
- "algorithm"
- "function"
- source_model: BEE-spoke-data/zephyr-220m-sft-full
positive_prompts:
- "storytelling"
- "narrative"
- "fiction"
- "creative writing"
- "plot"
- "characters"
- "dialogue"
- "setting"
- "emotion"
- "imagination"
- "scene"
- "story"
- "character"
- source_model: BEE-spoke-data/zephyr-220m-dpo-full
positive_prompts:
- "chat"
- "conversation"
- "dialogue"
- "discuss"
- "ask questions"
- "share thoughts"
- "explore ideas"
- "learn new things"
- "personal assistant"
- "friendly helper"
```
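A note on the `positive_prompts`: as I understand LazyMergekit's MoE mode (an assumption, not verified against its source), each expert's prompt list is embedded by the base model and used to initialize that expert's router gate, so tokens whose hidden states resemble a prompt set get routed to that expert. A toy sketch of the idea:
```python
import torch

def init_gate_from_prompts(prompt_hidden_states):
    # prompt_hidden_states: one (n_prompts_i, hidden) tensor per expert.
    rows = [h.mean(dim=0) for h in prompt_hidden_states]
    gate = torch.stack(rows)                       # (num_experts, hidden)
    return gate / gate.norm(dim=-1, keepdim=True)  # normalize for dot-product routing

def route(hidden_states, gate, top_k=2):
    scores = hidden_states @ gate.T                # (tokens, num_experts)
    return scores.topk(top_k, dim=-1).indices      # experts chosen per token
```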
|
HiroseKoichi/L3-8B-Lunar-Stheno | HiroseKoichi | "2024-07-02T23:26:45Z" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"nsfw",
"not-for-all-audiences",
"llama-3",
"text-generation-inference",
"mergekit",
"merge",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T19:43:29Z" | ---
license: llama3
library_name: transformers
tags:
- nsfw
- not-for-all-audiences
- llama-3
- text-generation-inference
- mergekit
- merge
---
# L3-8B-Lunar-Stheno
L3-8B-Lunaris-v1 is definitely a significant improvement over L3-8B-Stheno-v3.2 in terms of situational awareness and prose, but it's not without issues: responses can run very long, to the point of ranting; it tends not to take direct action, saying it will do something but never actually doing it; and its performance outside of roleplay took a hit.
This merge fixes all of those issues, and I'm genuinely impressed with the results. While I did use a SLERP merge to create this model, there was no blending of the models; all I did was replace L3-8B-Stheno-v3.2's weights with L3-8B-Lunaris-v1's.
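For anyone wondering why a SLERP merge can act as replacement rather than blending: spherical interpolation returns the base model's tensor unchanged at t = 0 and the other model's tensor at t = 1, so the per-filter t values in the config below pick whole weight groups from one model or the other. A rough per-tensor sketch, shown as an illustration rather than mergekit's actual code:
```python
import torch

def slerp(t, a, b, eps=1e-8):
    # Spherical interpolation between two weight tensors of the same shape.
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:                   # nearly parallel: linear fallback
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# t = 0 -> returns a exactly (Stheno); t = 1 -> returns b exactly (Lunaris).
```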
# Details
- **License**: [llama3](https://llama.meta.com/llama3/license/)
- **Instruct Format**: [llama-3](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/)
- **Context Size**: 8K
## Models Used
- [L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
- [L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
## Merge Config
```yaml
models:
- model: Sao10K/L3-8B-Stheno-v3.2
- model: Sao10K/L3-8B-Lunaris-v1
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
t:
- filter: self_attn
value: 0
- filter: mlp
value: 1
- value: 0
dtype: bfloat16
``` |
Ak1104/prompt2 | Ak1104 | "2024-07-02T20:47:04Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-07-02T19:44:13Z" | Entry not found |
hafidber/videomae-base-finetuned-Risky-situations | hafidber | "2024-07-02T19:58:26Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-07-02T19:44:51Z" | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-Risky-situations
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-Risky-situations
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6847
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 125
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4618 | 0.2 | 25 | 0.0103 | 1.0 |
| 0.3548 | 1.2 | 50 | 0.0029 | 1.0 |
| 0.2182 | 2.2 | 75 | 0.0009 | 1.0 |
| 0.2513 | 3.2 | 100 | 0.0033 | 1.0 |
| 0.0021 | 4.2 | 125 | 0.0008 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
adamo1139/Yi-1.5-9B-32K-uninstruct1-0702 | adamo1139 | "2024-07-02T19:57:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:45:56Z" | ---
license: apache-2.0
---
|
adamo1139/Yi-1.5-9B-32K-uninstruct1-0702-LoRA | adamo1139 | "2024-07-02T19:59:31Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T19:46:12Z" | ---
license: apache-2.0
---
|
CheccoCando/my_model | CheccoCando | "2024-07-02T19:47:02Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-07-02T19:46:16Z" | Entry not found |
starnet/11-star21-07-02 | starnet | "2024-07-02T19:54:07Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T19:46:47Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
juanpablomesa/all-mpnet-base-v2-bioasq-1epoc-batch32-100 | juanpablomesa | "2024-07-02T19:47:18Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-07-02T19:46:57Z" | ---
base_model: sentence-transformers/all-mpnet-base-v2
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
- Can pets affect infant microbiomed?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: BGE small finetuned BIOASQ
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: sentence transformers/all mpnet base v2
type: sentence-transformers/all-mpnet-base-v2
metrics:
- type: cosine_accuracy@1
value: 0.8458274398868458
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9349363507779349
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9476661951909476
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9603960396039604
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8458274398868458
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3116454502593117
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1895332390381895
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09603960396039603
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8458274398868458
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9349363507779349
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9476661951909476
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9603960396039604
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9092722406676912
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8922532049123282
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8936600133157465
name: Cosine Map@100
---
# BGE small finetuned BIOASQ
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/all-mpnet-base-v2-bioasq-1epoc-batch32-100")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
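For retrieval-style use, the same embeddings can be fed to the library's search utilities. A minimal sketch (the corpus and query below are illustrative, not the model's evaluation data):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("juanpablomesa/all-mpnet-base-v2-bioasq-1epoc-batch32-100")

# Toy corpus of biomedical passages and a single question
corpus = [
    "STAG1/STAG2 proteins are tumour suppressor proteins essential for differentiation.",
    "Extensive messenger RNA editing generates transcript and protein diversity.",
]
query = "What is the role of STAG1/STAG2 proteins in differentiation?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus passages by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # list of {'corpus_id': ..., 'score': ...} dicts for the query
```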
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `sentence-transformers/all-mpnet-base-v2`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8458 |
| cosine_accuracy@3 | 0.9349 |
| cosine_accuracy@5 | 0.9477 |
| cosine_accuracy@10 | 0.9604 |
| cosine_precision@1 | 0.8458 |
| cosine_precision@3 | 0.3116 |
| cosine_precision@5 | 0.1895 |
| cosine_precision@10 | 0.096 |
| cosine_recall@1 | 0.8458 |
| cosine_recall@3 | 0.9349 |
| cosine_recall@5 | 0.9477 |
| cosine_recall@10 | 0.9604 |
| cosine_ndcg@10 | 0.9093 |
| cosine_mrr@10 | 0.8923 |
| **cosine_map@100** | **0.8937** |
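The table above was produced with the evaluator linked earlier. A hedged sketch of reproducing such numbers on your own split (the queries, corpus, and relevance judgements here are placeholders, not the actual BioASQ evaluation data):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("juanpablomesa/all-mpnet-base-v2-bioasq-1epoc-batch32-100")

# Placeholder evaluation data: query ids, corpus ids, and relevant-document sets
queries = {"q1": "What is the role of STAG1/STAG2 proteins in differentiation?"}
corpus = {"d1": "STAG1/STAG2 proteins are tumour suppressors essential for differentiation."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="bioasq-eval")
metrics = evaluator(model)
print(metrics)  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100, ...
```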
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.14 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
  | <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly failed to replicate an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
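For reference, a minimal sketch of constructing a loss with these parameters in Sentence Transformers (dataset wiring omitted; in-batch negatives come from the other pairs in each batch):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# scale=20.0 and cosine similarity mirror the parameters listed above
train_loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```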
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
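Assuming the standard `SentenceTransformerTrainingArguments` API from Sentence Transformers 3.x, the settings above correspond roughly to the following (a sketch; the output directory is hypothetical):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="all-mpnet-base-v2-bioasq",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```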
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sentence-transformers/all-mpnet-base-v2_cosine_map@100 |
|:------:|:----:|:-------------:|:------------------------------------------------------:|
| 0.7937 | 100 | 0.1155 | 0.8937 |
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
rafaelamwlo/llama | rafaelamwlo | "2024-07-02T19:50:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T19:49:00Z" | Entry not found |
Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_Anthropic_HH_Golden-processed_sub | Nutanix | "2024-07-02T19:53:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T19:53:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
krittapol/Mumnim2_beta | krittapol | "2024-07-02T20:25:52Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T19:53:46Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
starnet/13-star21-07-02 | starnet | "2024-07-02T20:02:25Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T19:55:00Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
juanpablomesa/all-mpnet-base-v2-bioasq-matryoshka | juanpablomesa | "2024-07-02T19:58:10Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4012",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-07-02T19:57:48Z" | ---
base_model: sentence-transformers/all-mpnet-base-v2
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Extensive messenger RNA editing generates transcript and protein
diversity in genes involved in neural excitability, as previously described, as
well as in genes participating in a broad range of other cellular functions. '
sentences:
- Do cephalopods use RNA editing less frequently than other species?
- GV1001 vaccine targets which enzyme?
- Which event results in the acetylation of S6K1?
- source_sentence: Yes, exposure to household furry pets influences the gut microbiota
of infants.
sentences:
  - Can pets affect the infant microbiome?
- What is the mode of action of Thiazovivin?
- What are the effects of CAMK4 inhibition?
- source_sentence: "In children with heart failure evidence of the effect of enalapril\
\ is empirical. Enalapril was clinically safe and effective in 50% to 80% of for\
\ children with cardiac failure secondary to congenital heart malformations before\
\ and after cardiac surgery, impaired ventricular function , valvar regurgitation,\
\ congestive cardiomyopathy, , arterial hypertension, life-threatening arrhythmias\
\ coexisting with circulatory insufficiency. \nACE inhibitors have shown a transient\
\ beneficial effect on heart failure due to anticancer drugs and possibly a beneficial\
\ effect in muscular dystrophy-associated cardiomyopathy, which deserves further\
\ studies."
sentences:
- Which receptors can be evaluated with the [18F]altanserin?
- In what proportion of children with heart failure has Enalapril been shown to
be safe and effective?
- Which major signaling pathways are regulated by RIP1?
- source_sentence: Cellular senescence-associated heterochromatic foci (SAHFS) are
a novel type of chromatin condensation involving alterations of linker histone
H1 and linker DNA-binding proteins. SAHFS can be formed by a variety of cell types,
but their mechanism of action remains unclear.
sentences:
- What is the relationship between the X chromosome and a neutrophil drumstick?
- Which microRNAs are involved in exercise adaptation?
- How are SAHFS created?
- source_sentence: Multicluster Pcdh diversity is required for mouse olfactory neural
circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins
are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although
deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss
of all three clusters (tricluster deletion) led to a severe axonal arborization
defect and loss of self-avoidance.
sentences:
- What are the effects of the deletion of all three Pcdh clusters (tricluster deletion)
in mice?
- what is the role of MEF-2 in cardiomyocyte differentiation?
- How many periods of regulatory innovation led to the evolution of vertebrates?
model-index:
- name: all-mpnet-base-v2 BioASQ Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.8373408769448374
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9306930693069307
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9448373408769448
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.958981612446959
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8373408769448374
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31023102310231027
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18896746817538893
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09589816124469587
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8373408769448374
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9306930693069307
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9448373408769448
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.958981612446959
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9038566618329213
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8855380436002787
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8867903631779396
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.8373408769448374
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9335219236209336
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9462517680339463
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9603960396039604
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8373408769448374
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31117397454031115
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18925035360678924
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09603960396039603
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8373408769448374
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9335219236209336
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9462517680339463
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9603960396039604
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9045496377971035
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8860549830493253
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8870969130410834
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.8288543140028288
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9222065063649222
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.942008486562942
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9533239038189534
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8288543140028288
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3074021687883074
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18840169731258838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09533239038189532
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8288543140028288
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9222065063649222
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.942008486562942
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9533239038189534
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8963408137245359
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8774370804427385
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8786914503856871
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.809052333804809
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8995756718528995
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9207920792079208
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9405940594059405
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.809052333804809
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29985855728429983
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18415841584158416
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09405940594059406
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.809052333804809
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8995756718528995
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9207920792079208
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9405940594059405
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8794609712523561
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8593930311398488
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8608652296821839
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.7694483734087695
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8613861386138614
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8868458274398868
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9080622347949081
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7694483734087695
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2871287128712871
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17736916548797735
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09080622347949079
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7694483734087695
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8613861386138614
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8868458274398868
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9080622347949081
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.841605620432732
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8200012348173592
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8223782042287946
name: Cosine Map@100
---
# all-mpnet-base-v2 BioASQ Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/all-mpnet-base-v2-bioasq-matryoshka")
# Run inference
sentences = [
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'How many periods of regulatory innovation led to the evolution of vertebrates?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
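Because the model was trained with a Matryoshka loss, its embeddings can be truncated to the smaller dimensionalities evaluated below at a modest cost in quality. A sketch assuming the `truncate_dim` option available in recent Sentence Transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer(
    "juanpablomesa/all-mpnet-base-v2-bioasq-matryoshka",
    truncate_dim=256,
)
embeddings = model.encode(["How are SAHFS created?"])
print(embeddings.shape)
# (1, 256)
```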
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8373 |
| cosine_accuracy@3 | 0.9307 |
| cosine_accuracy@5 | 0.9448 |
| cosine_accuracy@10 | 0.959 |
| cosine_precision@1 | 0.8373 |
| cosine_precision@3 | 0.3102 |
| cosine_precision@5 | 0.189 |
| cosine_precision@10 | 0.0959 |
| cosine_recall@1 | 0.8373 |
| cosine_recall@3 | 0.9307 |
| cosine_recall@5 | 0.9448 |
| cosine_recall@10 | 0.959 |
| cosine_ndcg@10 | 0.9039 |
| cosine_mrr@10 | 0.8855 |
| **cosine_map@100** | **0.8868** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8373 |
| cosine_accuracy@3 | 0.9335 |
| cosine_accuracy@5 | 0.9463 |
| cosine_accuracy@10 | 0.9604 |
| cosine_precision@1 | 0.8373 |
| cosine_precision@3 | 0.3112 |
| cosine_precision@5 | 0.1893 |
| cosine_precision@10 | 0.096 |
| cosine_recall@1 | 0.8373 |
| cosine_recall@3 | 0.9335 |
| cosine_recall@5 | 0.9463 |
| cosine_recall@10 | 0.9604 |
| cosine_ndcg@10 | 0.9045 |
| cosine_mrr@10 | 0.8861 |
| **cosine_map@100** | **0.8871** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8289 |
| cosine_accuracy@3 | 0.9222 |
| cosine_accuracy@5 | 0.942 |
| cosine_accuracy@10 | 0.9533 |
| cosine_precision@1 | 0.8289 |
| cosine_precision@3 | 0.3074 |
| cosine_precision@5 | 0.1884 |
| cosine_precision@10 | 0.0953 |
| cosine_recall@1 | 0.8289 |
| cosine_recall@3 | 0.9222 |
| cosine_recall@5 | 0.942 |
| cosine_recall@10 | 0.9533 |
| cosine_ndcg@10 | 0.8963 |
| cosine_mrr@10 | 0.8774 |
| **cosine_map@100** | **0.8787** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8091 |
| cosine_accuracy@3 | 0.8996 |
| cosine_accuracy@5 | 0.9208 |
| cosine_accuracy@10 | 0.9406 |
| cosine_precision@1 | 0.8091 |
| cosine_precision@3 | 0.2999 |
| cosine_precision@5 | 0.1842 |
| cosine_precision@10 | 0.0941 |
| cosine_recall@1 | 0.8091 |
| cosine_recall@3 | 0.8996 |
| cosine_recall@5 | 0.9208 |
| cosine_recall@10 | 0.9406 |
| cosine_ndcg@10 | 0.8795 |
| cosine_mrr@10 | 0.8594 |
| **cosine_map@100** | **0.8609** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7694 |
| cosine_accuracy@3 | 0.8614 |
| cosine_accuracy@5 | 0.8868 |
| cosine_accuracy@10 | 0.9081 |
| cosine_precision@1 | 0.7694 |
| cosine_precision@3 | 0.2871 |
| cosine_precision@5 | 0.1774 |
| cosine_precision@10 | 0.0908 |
| cosine_recall@1 | 0.7694 |
| cosine_recall@3 | 0.8614 |
| cosine_recall@5 | 0.8868 |
| cosine_recall@10 | 0.9081 |
| cosine_ndcg@10 | 0.8416 |
| cosine_mrr@10 | 0.82 |
| **cosine_map@100** | **0.8224** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,012 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 63.14 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> | <code>What is the implication of histone lysine methylation in medulloblastoma?</code> |
| <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> | <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> |
  | <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly failed to replicate an association between cell phone use and glioblastoma.</code> | <code>What is the association between cell phone use and glioblastoma?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
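A minimal sketch of building this loss, mirroring the configuration above (the inner loss is the MultipleNegativesRankingLoss named in the JSON):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

inner_loss = losses.MultipleNegativesRankingLoss(model)
# Each listed dimensionality contributes with weight 1, matching the config above
train_loss = losses.MatryoshkaLoss(
    model=model,
    loss=inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
)
```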
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
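Assuming the standard trainer wiring from Sentence Transformers 3.x, these hyperparameters and the loss sketched above come together roughly as follows (the one-row dataset is a hypothetical stand-in for the 4,012-sample positive/anchor set described later; evaluation wiring is omitted):
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical stand-in matching the positive/anchor column schema
train_dataset = Dataset.from_dict({
    "positive": ["STAG1/STAG2 proteins are tumour suppressors essential for differentiation."],
    "anchor": ["What is the role of STAG1/STAG2 proteins in differentiation?"],
})

loss = losses.MatryoshkaLoss(
    model, losses.MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="bioasq-matryoshka",  # hypothetical path
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```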
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.8889 | 7 | - | 0.8540 | 0.8752 | 0.8825 | 0.8050 | 0.8864 |
| 1.2698 | 10 | 1.2032 | - | - | - | - | - |
| 1.9048 | 15 | - | 0.8569 | 0.8775 | 0.8850 | 0.8169 | 0.8840 |
| 2.5397 | 20 | 0.5051 | - | - | - | - | - |
| **2.9206** | **23** | **-** | **0.861** | **0.8794** | **0.8866** | **0.8242** | **0.8858** |
| 3.5556 | 28 | - | 0.8609 | 0.8787 | 0.8871 | 0.8224 | 0.8868 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
BishoyMalak/Twitter_spam_classifier | BishoyMalak | "2024-07-02T20:31:53Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:stevhliu/my_awesome_model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T19:58:48Z" | ---
license: apache-2.0
base_model: stevhliu/my_awesome_model
tags:
- generated_from_keras_callback
model-index:
- name: BishoyMalak/Twitter_spam_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BishoyMalak/Twitter_spam_classifier
This model is a fine-tuned version of [stevhliu/my_awesome_model](https://huggingface.co/stevhliu/my_awesome_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0032
- Validation Loss: 0.0286
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1390, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
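For readers reconstructing this setup, the serialized optimizer above corresponds roughly to the following Keras code (a sketch; the 1390 decay steps and 2e-05 initial rate are taken from the config, everything else is the stated defaults):
```python
import tensorflow as tf

# Linear decay (power=1.0) of the learning rate from 2e-5 to 0 over 1390 steps
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1390,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```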
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0021 | 0.0286 | 0 |
| 0.0024 | 0.0286 | 1 |
| 0.0022 | 0.0286 | 2 |
| 0.0021 | 0.0286 | 3 |
| 0.0032 | 0.0286 | 4 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
KYAGABA/wav2vec2-large-xls-r-300m-googlefluers-luo-10hr-v2 | KYAGABA | "2024-07-02T22:44:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-02T19:59:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ItchyChin/tamil-llama-7b-20240703 | ItchyChin | "2024-07-02T20:03:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T20:00:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
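A minimal sketch, assuming a standard `transformers` causal-LM setup (the repo metadata tags this as a `llama` text-generation checkpoint); the Tamil prompt and `device_map="auto"` (which requires `accelerate`) are illustrative assumptions, not documented usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ItchyChin/tamil-llama-7b-20240703"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only -- the card does not document a prompt format.
inputs = tokenizer("வணக்கம்!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```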
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/antiven0m_-_finch-gguf | RichardErkhov | "2024-07-02T22:57:28Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-02T20:00:13Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
finch - GGUF
- Model creator: https://huggingface.co/antiven0m/
- Original model: https://huggingface.co/antiven0m/finch/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [finch.Q2_K.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q2_K.gguf) | Q2_K | 2.53GB |
| [finch.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [finch.IQ3_S.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [finch.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [finch.IQ3_M.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [finch.Q3_K.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q3_K.gguf) | Q3_K | 3.28GB |
| [finch.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [finch.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [finch.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [finch.Q4_0.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q4_0.gguf) | Q4_0 | 3.83GB |
| [finch.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [finch.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [finch.Q4_K.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q4_K.gguf) | Q4_K | 4.07GB |
| [finch.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [finch.Q4_1.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q4_1.gguf) | Q4_1 | 4.24GB |
| [finch.Q5_0.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q5_0.gguf) | Q5_0 | 4.65GB |
| [finch.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [finch.Q5_K.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q5_K.gguf) | Q5_K | 4.78GB |
| [finch.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [finch.Q5_1.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q5_1.gguf) | Q5_1 | 5.07GB |
| [finch.Q6_K.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q6_K.gguf) | Q6_K | 5.53GB |
| [finch.Q8_0.gguf](https://huggingface.co/RichardErkhov/antiven0m_-_finch-gguf/blob/main/finch.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
model-index:
- name: finch
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.96
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch
name: Open LLM Leaderboard
---
<head> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0-beta3/css/all.min.css"> </head> <style> body { font-family: "Helvetica Neue", Arial, sans-serif; background: radial-gradient(circle, #ffb347, #ffa92d, #ff9f14, #ff9500, #f08b00); color: #fff; line-height: 1.6; } .container { max-width: 800px; margin: 0 auto; padding: 40px; background-color: rgba(255, 255, 255, 0.1); border-radius: 10px; box-shadow: 0 0 20px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); } .header { text-align: center; margin-bottom: 40px; } .title { font-size: 48px; font-weight: bold; text-transform: uppercase; letter-spacing: 2px; color: #fff; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); margin-bottom: 10px; } .subtitle { font-size: 24px; font-style: italic; color: #e6f7ff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); margin-bottom: 20px; } .gif { text-align: center; margin-bottom: 40px; } .gif img { max-width: 100%; height: auto; border-radius: 10px; box-shadow: 0 0 20px rgba(0, 0, 0, 0.3); } .info-section { margin-bottom: 40px; } .section-title { font-size: 32px; font-weight: bold; color: #e6f7ff; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); margin-bottom: 20px; position: relative; padding-left: 30px; } .section-title::before { content: ""; position: absolute; left: 0; top: 50%; transform: translateY(-50%); width: 20px; height: 20px; background-color: #e6f7ff; border-radius: 50%; box-shadow: 0 0 10px rgba(0, 0, 0, 0.3); } .info-item { background-color: rgba(255, 255, 255, 0.1); padding: 20px; border-radius: 10px; box-shadow: 0 0 10px rgba(0, 0, 0, 0.2); margin-bottom: 20px; } .info-item h3 { font-size: 24px; font-weight: bold; color: #e6f7ff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); margin-bottom: 10px; } .info-item p { font-size: 18px; color: #fff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); line-height: 1.4; } .info-item pre { background-color: rgba(0, 0, 0, 0.2); padding: 20px; border-radius: 10px; font-family: monospace; font-size: 16px; color: #fff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); overflow-x: auto; } .info-item a { color: #e6f7ff; text-decoration: none; border-bottom: 1px dashed #e6f7ff; transition: border-bottom 0.3s ease; } .info-item a:hover { border-bottom: 1px solid #e6f7ff; } .info-item table { width: 100%; border-collapse: collapse; box-shadow: 0 0 10px rgba(0, 0, 0, 0.2); } .info-item th, .info-item td { padding: 10px; text-align: left; border: 1px solid rgba(255, 255, 255, 0.2); } .info-item th { background-color: rgba(0, 0, 0, 0.2); font-weight: bold; color: #fff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); } .info-item td { color: #e6f7ff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); } </style> <div class="container"> <div class="header"> <h1 class="title">Finch 7B Merge</h1> <p class="subtitle">A SLERP merge of two powerful 7B language models</p> </div> <div class="gif"> <img src="https://i.imgur.com/Da14544.gif" alt="Finch GIF"> </div> <div class="info-section"> <h2 class="section-title">Description</h2> <div class="info-item"> <p>Finch is a 7B language model created by merging <a href="https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo">macadeliccc/WestLake-7B-v2-laser-truthy-dpo</a> and <a href="https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B">SanjiWatsuki/Kunoichi-DPO-v2-7B</a> using the SLERP method.</p> </div> </div> <div class="info-section"> <h2 class="section-title">Quantized Models</h2> <div class="info-item"> <p>Quantized versions of Finch are available:</p> <ul> <li><a 
href="https://huggingface.co/antiven0m/finch-6bpw-exl2">6bpw EXL2 Quant</a></li> <li><a href="https://huggingface.co/antiven0m/finch-gguf">GGUF Quants</a></li> </ul> </div> </div> <div class="info-section"> <h2 class="section-title">Recommended Settings</h2> <div class="info-item"> <p>For best results, use the <b>ChatML</b> format with the following sampler settings:</p> <pre>Temperature: 1.2 Min P: 0.2 Smoothing Factor: 0.2</pre> </div> </div> <div class="info-section"> <h2 class="section-title">Mergekit Configuration</h2> <div class="info-item"> <pre>base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo dtype: float16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo - layer_range: [0, 32] model: SanjiWatsuki/Kunoichi-DPO-v2-7B</pre> </div> </div> <div class="info-section"> <h2 class="section-title">Evaluation Results</h2> <div class="info-item"> <p>Finch's performance on the Open LLM Leaderboard:</p> <table> <tr><th>Metric</th><th>Value</th></tr> <tr><td>Avg.</td><td>73.78</td></tr> <tr><td>AI2 Reasoning Challenge (25-Shot)</td><td>71.59</td></tr> <tr><td>HellaSwag (10-Shot)</td><td>87.87</td></tr> <tr><td>MMLU (5-Shot)</td><td>64.81</td></tr> <tr><td>TruthfulQA (0-shot)</td><td>67.96</td></tr> <tr><td>Winogrande (5-shot)</td><td>84.14</td></tr> <tr><td>GSM8k (5-shot)</td><td>66.34</td></tr> </table> <p>Detailed results: <a href="https://huggingface.co/datasets/open-llm-leaderboard/details_antiven0m__finch">https://huggingface.co/datasets/open-llm-leaderboard/details_antiven0m__finch</a></p> </div> </div> </div>
|
KuanP/sdl-contrastive-continual-ckpt_2024-07-02_15-51-35_fold_1 | KuanP | "2024-07-02T20:01:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T20:01:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
starnet/14-star21-07-02 | starnet | "2024-07-02T20:10:48Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T20:03:23Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tctrautman/20240702-kibbe-prod-classification-prompt | tctrautman | "2024-07-02T20:04:22Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T20:04:19Z" | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: 20240702-kibbe-prod-classification-prompt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/dubs/Kibbe-Prod/runs/c3cpa8k3)
# 20240702-kibbe-prod-classification-prompt
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
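The values above map directly onto `transformers.TrainingArguments`; a minimal sketch, where `output_dir` and any field not listed above are placeholder assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="20240702-kibbe-prod-classification-prompt",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # The Adam betas/epsilon listed above are the transformers defaults.
)
```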
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4016 | 0.5005 | 515 | 0.0572 |
| 0.5119 | 1.0010 | 1030 | 0.0342 |
| 0.5455 | 1.5015 | 1545 | 0.0353 |
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
talhaturab/my_model_ha | talhaturab | "2024-07-02T20:04:49Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-07-02T20:04:19Z" | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of a maxttcat cat
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - talhaturab/my_model_ha
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of a maxttcat cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
from diffusers import StableDiffusionPipeline
import torch

# Sketch: assumes the default LoRA weight filename written by the
# DreamBooth LoRA training script.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("talhaturab/my_model_ha")
image = pipe("a photo of a maxttcat cat").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
fahdsoliman/bart_lfqa_naits_subset_mixed | fahdsoliman | "2024-07-02T20:05:20Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-07-02T20:05:20Z" | ---
license: mit
---
|
mradermacher/Mistral-11B-AirOmniMix-GGUF | mradermacher | "2024-07-02T21:30:44Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/Mistral-11B-AirOmniMix",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T20:06:18Z" | ---
base_model: NeverSleep/Mistral-11B-AirOmniMix
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/Mistral-11B-AirOmniMix
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
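For example, a recent llama.cpp build can pull a quant straight from this repo; a sketch, where the Q4_K_M file is just one of the options listed below:

```bash
llama-cli --hf-repo mradermacher/Mistral-11B-AirOmniMix-GGUF \
  --hf-file Mistral-11B-AirOmniMix.Q4_K_M.gguf \
  -p "Hello"
```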
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-11B-AirOmniMix-GGUF/resolve/main/Mistral-11B-AirOmniMix.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Zainabsa99/cyber | Zainabsa99 | "2024-07-02T20:09:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:09:31Z" | Entry not found |
joshbz/ppo-LunarLander-v2 | joshbz | "2024-07-02T20:10:58Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-07-02T20:10:37Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.04 +/- 11.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Sketch: the checkpoint filename is an assumption -- check the repo's file list.
checkpoint = load_from_hub(
    repo_id="joshbz/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
megamp15/gum_stain_colab | megamp15 | "2024-07-02T20:10:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:10:42Z" | Entry not found |
Zainabsa99/Llama-2-7b-cyber-finetune | Zainabsa99 | "2024-07-02T20:19:13Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T20:11:52Z" | Entry not found |
ahenestrosa/xlm-roberta-base-finetuned-panx-de | ahenestrosa | "2024-07-02T20:12:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:12:27Z" | Entry not found |
yuvraj108c/h100-tensorrt-engines-10.1.0 | yuvraj108c | "2024-07-02T20:18:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:12:35Z" | Entry not found |
krittapol/numnim3_beta_16bit_GGUF | krittapol | "2024-07-02T20:51:51Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-07-02T20:12:45Z" | Entry not found |
RAY2L/pythia-410m-deduped-SimPOW-2 | RAY2L | "2024-07-02T20:21:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | "2024-07-02T20:14:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
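A minimal sketch, assuming the `feature-extraction` pipeline tag from the repo metadata describes the intended use:

```python
from transformers import AutoModel, AutoTokenizer

model_id = "RAY2L/pythia-410m-deduped-SimPOW-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("An example sentence.", return_tensors="pt")
features = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
```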
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
haripritam/Qwen2-0.5B-fncl-adapaters-4 | haripritam | "2024-07-02T20:18:04Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:haripritam/Qwen2-0.5B-fncl-2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T20:18:01Z" | Temporary Redirect. Redirecting to /haripritam/Qwen2-fncl-adapaters-4/resolve/main/README.md |
TifTifR/orpo-passive-Llama-3-8B-Instruct | TifTifR | "2024-07-02T22:24:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T20:18:41Z" | ---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** TifTifR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
starnet/16-star21-07-02 | starnet | "2024-07-02T20:27:32Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T20:19:48Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Gustav0-Freind/gemma-2-27b-Q6_K-GGUF | Gustav0-Freind | "2024-07-02T20:21:32Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-2-27b",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-02T20:20:30Z" | ---
base_model: google/gemma-2-27b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gustav0-Freind/gemma-2-27b-Q6_K-GGUF
This model was converted to GGUF format from [`google/gemma-2-27b`](https://huggingface.co/google/gemma-2-27b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-27b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Gustav0-Freind/gemma-2-27b-Q6_K-GGUF --hf-file gemma-2-27b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Gustav0-Freind/gemma-2-27b-Q6_K-GGUF --hf-file gemma-2-27b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Gustav0-Freind/gemma-2-27b-Q6_K-GGUF --hf-file gemma-2-27b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Gustav0-Freind/gemma-2-27b-Q6_K-GGUF --hf-file gemma-2-27b-q6_k.gguf -c 2048
```
|
Maxivi/mm | Maxivi | "2024-07-02T20:34:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:24:18Z" | Entry not found |
aj2816/Alfred.py | aj2816 | "2024-07-02T20:45:54Z" | 0 | 0 | null | [
"en",
"license:llama3",
"region:us"
] | null | "2024-07-02T20:24:29Z" | ---
license: llama3
language:
- en
--- |
gsar78/Gemma_guanaco_4bit_exp_peft | gsar78 | "2024-07-02T20:25:53Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b",
"region:us"
] | null | "2024-07-02T20:25:46Z" | ---
base_model: google/gemma-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
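A minimal sketch, assuming these are standard PEFT adapter weights for the `google/gemma-7b` base model declared in the metadata (the gated base model requires accepting Google's license first):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter on top of the base model.
model = PeftModel.from_pretrained(base, "gsar78/Gemma_guanaco_4bit_exp_peft")
```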
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
jssaluja/finetuned_m | jssaluja | "2024-07-02T20:32:03Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T20:26:12Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuned_m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_m
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7253
- F1 Score: 1.0
- Accuracy Score: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 0.733 | 1.0 | 51 | 0.6310 | 1.0 | 1.0 |
| 0.5161 | 2.0 | 102 | 0.6253 | 1.0 | 1.0 |
| 0.3918 | 3.0 | 153 | 0.6059 | 1.0 | 1.0 |
| 0.2932 | 4.0 | 204 | 0.5987 | 1.0 | 1.0 |
| 0.2298 | 5.0 | 255 | 0.6131 | 1.0 | 1.0 |
| 0.1717 | 6.0 | 306 | 0.6428 | 1.0 | 1.0 |
| 0.1399 | 7.0 | 357 | 0.7044 | 1.0 | 1.0 |
| 0.1093 | 8.0 | 408 | 0.6916 | 1.0 | 1.0 |
| 0.0913 | 9.0 | 459 | 0.7303 | 1.0 | 1.0 |
| 0.0834 | 10.0 | 510 | 0.7253 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Shamoryo/Ripe-Woman | Shamoryo | "2024-07-02T20:30:17Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-07-02T20:26:27Z" | ---
license: mit
---
|
Zhubaiwei77/ppo-Huggy | Zhubaiwei77 | "2024-07-02T20:27:22Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-07-02T20:27:15Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Zhubaiwei77/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ClementineBleuze/scibert_prefix_cont_ll_SEP | ClementineBleuze | "2024-07-02T21:48:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-02T20:27:18Z" | ---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: scibert_prefix_cont_ll_SEP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_prefix_cont_ll_SEP
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0769
- F1 Weighted: 0.9112
- F1 Samples: 0.9155
- F1 Macro: 0.8184
- F1 Micro: 0.9121
- Accuracy: 0.8863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Weighted | F1 Samples | F1 Macro | F1 Micro | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:-----------:|:----------:|:--------:|:--------:|:--------:|
| 0.2213 | 0.3381 | 500 | 0.1392 | 0.8151 | 0.8223 | 0.6081 | 0.8355 | 0.8018 |
| 0.1377 | 0.6761 | 1000 | 0.1129 | 0.8523 | 0.8584 | 0.6889 | 0.8645 | 0.8342 |
| 0.1214 | 1.0142 | 1500 | 0.1103 | 0.8504 | 0.8552 | 0.6955 | 0.8613 | 0.8302 |
| 0.0921 | 1.3523 | 2000 | 0.0961 | 0.8656 | 0.8655 | 0.7111 | 0.8740 | 0.8390 |
| 0.0863 | 1.6903 | 2500 | 0.0900 | 0.8789 | 0.8810 | 0.7281 | 0.8847 | 0.8545 |
| 0.0825 | 2.0284 | 3000 | 0.0959 | 0.8764 | 0.8844 | 0.7323 | 0.8826 | 0.8532 |
| 0.0567 | 2.3665 | 3500 | 0.0856 | 0.8879 | 0.8951 | 0.7454 | 0.8922 | 0.8633 |
| 0.061 | 2.7045 | 4000 | 0.0952 | 0.8802 | 0.8827 | 0.7397 | 0.8856 | 0.8586 |
| 0.0532 | 3.0426 | 4500 | 0.0839 | 0.8979 | 0.9058 | 0.7639 | 0.9031 | 0.8775 |
| 0.0361 | 3.3807 | 5000 | 0.0831 | 0.9007 | 0.9113 | 0.7791 | 0.9045 | 0.8769 |
| 0.0369 | 3.7187 | 5500 | 0.0833 | 0.9018 | 0.9094 | 0.7880 | 0.9031 | 0.8775 |
| 0.0392 | 4.0568 | 6000 | 0.0826 | 0.9062 | 0.9108 | 0.8180 | 0.9081 | 0.8823 |
| 0.027 | 4.3949 | 6500 | 0.0769 | 0.9112 | 0.9155 | 0.8184 | 0.9121 | 0.8863 |
| 0.0251 | 4.7329 | 7000 | 0.0868 | 0.8996 | 0.9061 | 0.7693 | 0.9018 | 0.8714 |
| 0.0255 | 5.0710 | 7500 | 0.0867 | 0.9083 | 0.9147 | 0.8048 | 0.9115 | 0.8870 |
| 0.0212 | 5.4091 | 8000 | 0.0834 | 0.9100 | 0.9161 | 0.8209 | 0.9116 | 0.8850 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
esyoon/step2_rm-Llama2-7b-2024-07-02-23-59-53 | esyoon | "2024-07-02T20:28:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:28:03Z" | Entry not found |
Moriacrafter/LLaMA3-8B-4bit_DepressionDetection | Moriacrafter | "2024-07-02T20:32:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T20:28:12Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
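A minimal sketch, assuming the `text-generation` pipeline tag from the repo metadata; the input string is a placeholder:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Moriacrafter/LLaMA3-8B-4bit_DepressionDetection",
    device_map="auto",  # assumes accelerate is installed
)
print(generator("Example input text.", max_new_tokens=64)[0]["generated_text"])
```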
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tinsae/Florence-Fish-4 | Tinsae | "2024-07-02T20:29:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-07-02T20:28:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
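Pending official docs, a minimal sketch, assuming this checkpoint follows the base Florence-2 interface (the `florence2` and `custom_code` tags suggest so); the `<OD>` object-detection task prompt and the image URL are placeholders, not from the original card:
```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Tinsae/Florence-Fish-4"
# Florence-2 checkpoints ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open(requests.get("https://example.com/fish.jpg", stream=True).raw)  # hypothetical URL
inputs = processor(text="<OD>", images=image, return_tensors="pt")  # "<OD>" is an assumed task prompt
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=False)[0])
```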
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
starnet/17-star21-07-02 | starnet | "2024-07-02T20:35:53Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T20:28:22Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
GremlinsUnited/SDXL | GremlinsUnited | "2024-07-02T20:58:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:30:29Z" | Entry not found |
vgangal101/marian-finetuned-kde4-en-to-fr | vgangal101 | "2024-07-02T20:31:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:31:51Z" | Entry not found |
handraise-dev/qaharoldv1-expediaexp1 | handraise-dev | "2024-07-02T22:17:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-07-02T20:32:08Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: qaharoldv1-expediaexp1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qaharoldv1-expediaexp1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4137
- Rouge1: 0.8306
- Rouge2: 0.6847
- Rougel: 0.8108
- Gen Len: 71.1
## Model description
More information needed
## Intended uses & limitations
More information needed
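Pending fuller documentation, a minimal inference sketch; the `question: ... context: ...` prompt style is an assumption based on common T5 QA conventions and may differ from the actual fine-tuning format:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "handraise-dev/qaharoldv1-expediaexp1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prompt format is assumed; the fixed Gen Len of 71.1 above suggests ~72-token outputs.
inputs = tokenizer("question: What is covered? context: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=72)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```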
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:-------:|
| No log | 0.48 | 100 | 0.5590 | 0.7686 | 0.5678 | 0.7358 | 71.1 |
| No log | 0.95 | 200 | 0.4939 | 0.7919 | 0.6075 | 0.7615 | 71.1 |
| No log | 1.43 | 300 | 0.4513 | 0.8007 | 0.619 | 0.7713 | 71.1 |
| No log | 1.9 | 400 | 0.4188 | 0.8075 | 0.6419 | 0.7842 | 71.1 |
| No log | 2.38 | 500 | 0.4230 | 0.8126 | 0.6559 | 0.7916 | 71.1 |
| No log | 2.86 | 600 | 0.4149 | 0.8186 | 0.6683 | 0.8014 | 71.1 |
| No log | 3.33 | 700 | 0.4090 | 0.8155 | 0.6579 | 0.796 | 71.1 |
| No log | 3.81 | 800 | 0.4066 | 0.8236 | 0.6645 | 0.8013 | 71.1 |
| No log | 4.29 | 900 | 0.4030 | 0.8253 | 0.6683 | 0.8025 | 71.1 |
| No log | 4.76 | 1000 | 0.4037 | 0.821 | 0.6733 | 0.8033 | 71.1 |
| No log | 5.24 | 1100 | 0.4066 | 0.8196 | 0.6665 | 0.8003 | 71.1 |
| No log | 5.71 | 1200 | 0.4065 | 0.8248 | 0.6663 | 0.8026 | 71.1 |
| No log | 6.19 | 1300 | 0.4216 | 0.8281 | 0.6858 | 0.8107 | 71.1 |
| No log | 6.67 | 1400 | 0.3972 | 0.832 | 0.6872 | 0.813 | 71.1 |
| No log | 7.14 | 1500 | 0.4047 | 0.8298 | 0.6843 | 0.8111 | 71.1 |
| No log | 7.62 | 1600 | 0.4083 | 0.8293 | 0.6867 | 0.8113 | 71.1 |
| No log | 8.1 | 1700 | 0.4071 | 0.8304 | 0.6835 | 0.8096 | 71.1 |
| No log | 8.57 | 1800 | 0.4080 | 0.8308 | 0.6871 | 0.8118 | 71.1 |
| No log | 9.05 | 1900 | 0.4098 | 0.8311 | 0.6867 | 0.8113 | 71.1 |
| No log | 9.52 | 2000 | 0.4145 | 0.8299 | 0.6839 | 0.8102 | 71.1 |
| No log | 10.0 | 2100 | 0.4137 | 0.8306 | 0.6847 | 0.8108 | 71.1 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.2+cu121
- Datasets 2.2.1
- Tokenizers 0.15.2
|
MrOvkill/mamba_370m_dolphin_8k | MrOvkill | "2024-07-02T20:32:23Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T20:32:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
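In the absence of an official snippet, a minimal sketch, assuming the checkpoint exposes the standard transformers causal-LM interface (the repository metadata does not confirm the architecture, so `AutoModelForCausalLM` is an assumption; `trust_remote_code=True` may be required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MrOvkill/mamba_370m_dolphin_8k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```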
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
larenspear/Yi-1.5-34B-Chat-Q8_0-GGUF | larenspear | "2024-07-02T20:36:32Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-34B-Chat",
"license:apache-2.0",
"region:us"
] | null | "2024-07-02T20:33:55Z" | ---
base_model: 01-ai/Yi-1.5-34B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# larenspear/Yi-1.5-34B-Chat-Q8_0-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-34B-Chat`](https://huggingface.co/01-ai/Yi-1.5-34B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-34B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo larenspear/Yi-1.5-34B-Chat-Q8_0-GGUF --hf-file yi-1.5-34b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo larenspear/Yi-1.5-34B-Chat-Q8_0-GGUF --hf-file yi-1.5-34b-chat-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo larenspear/Yi-1.5-34B-Chat-Q8_0-GGUF --hf-file yi-1.5-34b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo larenspear/Yi-1.5-34B-Chat-Q8_0-GGUF --hf-file yi-1.5-34b-chat-q8_0.gguf -c 2048
```
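The same file can also be loaded from Python; a minimal sketch, assuming a recent llama-cpp-python build with `Llama.from_pretrained` support:
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="larenspear/Yi-1.5-34B-Chat-Q8_0-GGUF",
    filename="yi-1.5-34b-chat-q8_0.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)
print(llm("The meaning to life and the universe is", max_tokens=64)["choices"][0]["text"])
```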
|
danielkosyra/polynomial_2000_9e-4_16b_w0.08 | danielkosyra | "2024-07-02T20:34:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T20:34:39Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: polynomial_2000_9e-4_16b_w0.08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polynomial_2000_9e-4_16b_w0.08
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7902
## Model description
More information needed
## Intended uses & limitations
More information needed
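Pending fuller documentation, a minimal text-generation sketch (assuming the checkpoint loads with the standard transformers pipeline, as its base model gpt2 does):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="danielkosyra/polynomial_2000_9e-4_16b_w0.08")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```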
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.2835 | 0.7930 | 250 | 4.7865 |
| 4.0709 | 1.5860 | 500 | 3.4963 |
| 3.2714 | 2.3791 | 750 | 3.1497 |
| 2.9694 | 3.1721 | 1000 | 2.9941 |
| 2.7664 | 3.9651 | 1250 | 2.8964 |
| 2.5655 | 4.7581 | 1500 | 2.8429 |
| 2.4287 | 5.5511 | 1750 | 2.8091 |
| 2.3137 | 6.3442 | 2000 | 2.7902 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
starnet/18-star21-07-02 | starnet | "2024-07-02T20:44:07Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | null | "2024-07-02T20:36:46Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
iamalexcaspian/DarwinWatterson-TAWOG-KwesiBoakye | iamalexcaspian | "2024-07-02T22:05:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-07-02T20:36:55Z" | Entry not found |
wdli/llama3-instruct_depression_3 | wdli | "2024-07-02T20:42:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T20:40:05Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** wdli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
The model was trained on the reddit_depression_dataset for 2 epochs.
The training data is formatted as dialogs, but the user turn is left out (commented out in the snippet below).
For example:
```python
def formatting_prompts_func(examples):
    # Batched map function: examples['text'] is a list of Reddit posts.
    texts_dataset = examples['text']
    formatted_prompts = []
    for text in texts_dataset:
        # Each post becomes an assistant turn; the user turn is deliberately omitted.
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            # {"role": "user", "content": ""},
            {"role": "assistant", "content": text}
        ]
        # Render the dialog with the model's chat template, keeping it as plain text.
        formatted_prompt = tokenizer.apply_chat_template(dialog, tokenize=False, add_generation_prompt=False)
        formatted_prompts.append(formatted_prompt)
    return {"text": formatted_prompts}
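
# A usage sketch (an assumption, not shown in the original card): apply the
# function over the dataset with batched mapping, e.g.
#   dataset = dataset.map(formatting_prompts_func, batched=True)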
``` |
haiefff/anime-nsfw-or-not | haiefff | "2024-07-02T21:00:28Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"onnx",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:haiefff/anime-nsfw-or-not",
"base_model:google/vit-base-patch16-224",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-02T20:40:38Z" |
---
tags:
- autotrain
- image-classification
base_model: google/vit-base-patch16-224
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- haiefff/anime-nsfw-or-not
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
No validation metrics available
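A minimal inference sketch (assuming the checkpoint works with the standard transformers image-classification pipeline, as its ViT base model does; the sample image URL is taken from the widget config above):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="haiefff/anime-nsfw-or-not")
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {"label", "score"} dicts; label names come from the training dataset
```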
|
gsar78/Gemma_guanaco_4bit_exp_peft_merged | gsar78 | "2024-07-02T21:13:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-02T20:40:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
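In the absence of an official snippet, a minimal sketch, assuming the merged checkpoint loads as a standard Gemma causal LM (the prompt below is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gsar78/Gemma_guanaco_4bit_exp_peft_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```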
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |