Dataset schema (column name: dtype, observed range):
- modelId: string, lengths 5–122
- author: string, lengths 2–42
- last_modified: unknown dtype (values are ISO 8601 timestamps)
- downloads: int64, 0–738M
- likes: int64, 0–11k
- library_name: string, 245 classes
- tags: sequence, lengths 1–4.05k
- pipeline_tag: string, 48 classes
- createdAt: unknown dtype (values are ISO 8601 timestamps)
- card: string, lengths 1–901k
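The records below follow this schema, one field per line in the order listed above. As a hedged sketch (the dump does not name the repository it was exported from, so the dataset id below is a placeholder), rows with these columns can be inspected with the `datasets` library:

```python
# Hedged sketch: load and inspect rows with the columns listed above.
# The dataset id is a placeholder; the dump does not name its source repo.
from datasets import load_dataset

ds = load_dataset("<namespace>/<model-card-metadata>", split="train")  # hypothetical id
row = ds[0]
print(row["modelId"], row["author"], row["pipeline_tag"], row["downloads"])
print(row["card"][:200])  # first 200 characters of the raw model card text
```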
nathanReitinger/CELEB_A-diffusion-10000-500
nathanReitinger
"2024-06-14T04:00:03Z"
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
"2024-06-11T00:26:38Z"
--- license: mit ---
Makkoen/whisper-large-cit-synth-do0.15-wd0-lr1e-06
Makkoen
"2024-06-11T00:29:03Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "base_model:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-11T00:27:24Z"
--- language: - en license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer metrics: - wer model-index: - name: ./whisper-large-cit-synth-do0.15-wd0-lr1e-06 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ./whisper-large-cit-synth-do0.15-wd0-lr1e-06 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the SF 200 synth 2000 dataset. It achieves the following results on the evaluation set: - Loss: 1.5469 - Wer: 52.8152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 2.3143 | 0.0808 | 10 | 2.3730 | 62.3073 | | 2.3217 | 0.1616 | 20 | 2.2910 | 60.9606 | | 2.4278 | 0.2424 | 30 | 2.1875 | 59.9059 | | 2.0697 | 0.3232 | 40 | 2.1152 | 59.2082 | | 1.9675 | 0.4040 | 50 | 2.0410 | 59.1270 | | 1.891 | 0.4848 | 60 | 1.9668 | 57.3098 | | 1.7818 | 0.5657 | 70 | 1.8887 | 56.4660 | | 1.7774 | 0.6465 | 80 | 1.8145 | 55.1030 | | 1.8454 | 0.7273 | 90 | 1.7588 | 53.7238 | | 1.6584 | 0.8081 | 100 | 1.7129 | 53.0586 | | 2.0202 | 0.8889 | 110 | 1.6738 | 52.4258 | | 1.6375 | 0.9697 | 120 | 1.6416 | 51.9552 | | 1.3514 | 1.0505 | 130 | 1.6172 | 51.6307 | | 1.5016 | 1.1313 | 140 | 1.5977 | 51.1601 | | 1.8013 | 1.2121 | 150 | 1.5811 | 50.9330 | | 1.4723 | 1.2929 | 160 | 1.5693 | 52.9937 | | 1.5952 | 1.3737 | 170 | 1.5605 | 52.8152 | | 1.4724 | 1.4545 | 180 | 1.5527 | 52.8476 | | 1.4115 | 1.5354 | 190 | 1.5488 | 52.7990 | | 1.5161 | 1.6162 | 200 | 1.5469 | 52.8152 | ### Framework versions - Transformers 4.41.2 - Pytorch 1.13.1+cu117 - Datasets 2.19.2 - Tokenizers 0.19.1
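The card above lists training hyperparameters and results but no inference snippet. As a hedged sketch (not part of the card), the checkpoint can be tried with the standard transformers ASR pipeline; the audio path is a placeholder:

```python
# Hedged sketch: run the fine-tuned Whisper checkpoint named in this record
# through the transformers ASR pipeline. "sample.wav" is a placeholder file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Makkoen/whisper-large-cit-synth-do0.15-wd0-lr1e-06",
)
print(asr("sample.wav")["text"])
```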
1024m/WASSA24-Task1-3a-LLAMA3-8B-TP-lora
1024m
"2024-06-11T00:33:03Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T00:32:51Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
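The card above documents an Unsloth/TRL fine-tune but gives no loading example. A hedged sketch follows; whether this repo stores a LoRA adapter or merged weights is an assumption based on the "-lora" suffix in the model id, and PEFT is one common way to load such an adapter:

```python
# Hedged sketch: load a LoRA adapter produced by an Unsloth/TRL fine-tune via PEFT.
# Assumes the repo stores adapter weights (suggested by the "-lora" suffix).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "1024m/WASSA24-Task1-3a-LLAMA3-8B-TP-lora"
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```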
datek/Qwen-Qwen1.5-1.8B-1718065989
datek
"2024-06-11T00:33:10Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:33:10Z"
Entry not found
1024m/WASSA24-Task1-3b-LLAMA3-8B-TP-lora
1024m
"2024-06-11T00:33:52Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T00:33:39Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MubarakB/wav2vec2-large-xls-r-300m-zu-v1
MubarakB
"2024-06-11T00:36:18Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-11T00:33:41Z"
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-zu-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-zu-v1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
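The hyperparameter list in the card above maps directly onto transformers `TrainingArguments`. A hedged sketch (only the values shown are taken from the card; model, processor, and dataset wiring are omitted):

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed in the card.
# Arguments not shown here are left at their defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-zu-v1",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    warmup_steps=500,
    num_train_epochs=1,
    fp16=True,                       # "Native AMP" mixed-precision training
    seed=42,
)
```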
psakamoori/speechbrain-emotion-recognition-openvino
psakamoori
"2024-06-12T02:54:18Z"
0
0
speechbrain
[ "speechbrain", "audio-classification", "Emotion", "Recognition", "wav2vec2", "pytorch", "en", "dataset:iemocap", "arxiv:2106.04624", "license:apache-2.0", "region:us" ]
audio-classification
"2024-06-11T00:35:57Z"
--- language: "en" thumbnail: tags: - audio-classification - speechbrain - Emotion - Recognition - wav2vec2 - pytorch license: "apache-2.0" datasets: - iemocap metrics: - Accuracy inference: false --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Emotion Recognition with wav2vec2 base on IEMOCAP This repository provides all the necessary tools to perform emotion recognition with a fine-tuned wav2vec2 (base) model using SpeechBrain. It is trained on IEMOCAP training data. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance on IEMOCAP test set is: | Release | Accuracy(%) | |:-------------:|:--------------:| | 19-10-21 | 78.7 (Avg: 75.3) | ## Pipeline description This system is composed of an wav2vec2 model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. ## Install SpeechBrain First of all, please install the **development** version of SpeechBrain with the following command: ``` pip install git+https://github.com/speechbrain/speechbrain.git@$develop ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform Emotion recognition An external `py_module_file=custom.py` is used as an external Predictor class into this HF repos. We use `foreign_class` function from `speechbrain.pretrained.interfaces` that allow you to load you custom model. ```python from speechbrain.inference.interfaces import foreign_class classifier = foreign_class(source="speechbrain/emotion-recognition-wav2vec2-IEMOCAP", pymodule_file="custom_interface.py", classname="CustomEncoderWav2vec2Classifier") out_prob, score, index, text_lab = classifier.classify_file("speechbrain/emotion-recognition-wav2vec2-IEMOCAP/anger.wav") print(text_lab) ``` The prediction tensor will contain a tuple of (embedding, id_class, label_name). ### Perform Emotion recognition with OpenVINO backend Unlocking the power of acceleration with [Intel OpenVINO](https://docs.openvino.ai/) runtime as inference "backend", deploy across a mix of [Intel® hardware and environments](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html), on-premises and on-device, in the browser or in the cloud. Passing backend as "openvino" in CustomEncoderWav2vec2Classifier class in custom_interface.py, which supports Intel OpenVINO backend to perform model inference with target device. ### Steps to perform model inference with OpenVINO #### Step 1: Install speechbrain and OpenVINO First, ensure you have the necessary dependencies installed. 
Run the following commands to install the development version of SpeechBrain, OpenVINO, and the required version of the transformers library: ``` pip install git+https://github.com/speechbrain/speechbrain.git@develop --extra-index-url https://download.pytorch.org/whl/cpu pip install "openvino>=2024.1.0" pip install "transformers>=4.30.0" ``` #### Step 2: Run inference with OpenVINO backend To run inference using the OpenVINO backend, you can use a sample application. Below is an example script (app.py) demonstrating how to set up and run the model inference: ``` config = {hints.performance_mode: hints.PerformanceMode.THROUGHPUT} ov_opts = {"ov_device": "CPU", "config": config} torch_device = "cpu" instance = CustomEncoderWav2vec2Classifier(modules=checkpoint.mods, hparams=hparams_dict, model=classifier.mods["wav2vec2"].model, audio_file_path="speechbrain/emotion-recognition-wav2vec2-IEMOCAP/anger.wav", backend="openvino", opts=ov_opts, torch_device=torch_device, save_ov_model=False) ``` To execute the application, simply run: ``` python app.py ``` For more detailed information on optimizing inference with OpenVINO, refer to the [OpenVINO Optimization Guide]( https://docs.openvino.ai/2024/openvino-workflow/running-inference/optimize-inference.html) ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (aa018540). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/IEMOCAP/emotion_recognition python train_with_wav2vec2.py hparams/train_with_wav2vec2.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/15dKQetLuAhSyg4sNOtbSDnuxFdEeU4zQ?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-100-percent
AdamKasumovic
"2024-06-11T00:39:46Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-11T00:36:17Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** AdamKasumovic - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
haturusinghe/xlm_r_base-finetuned_after_mrp-effortless-glade-5
haturusinghe
"2024-06-11T00:36:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:36:56Z"
Entry not found
Jue123/pre-trained
Jue123
"2024-06-11T00:37:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:37:15Z"
Entry not found
haturusinghe/xlm_r_base-finetuned_after_mrp-visionary-smoke-6
haturusinghe
"2024-06-11T00:39:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:39:49Z"
Entry not found
Jue123/llama_model_weights
Jue123
"2024-06-11T00:44:32Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:40:13Z"
Entry not found
rahsiabulan/the_little_things
rahsiabulan
"2024-06-11T00:40:45Z"
0
0
null
[ "license:zlib", "region:us" ]
null
"2024-06-11T00:40:45Z"
--- license: zlib ---
haturusinghe/xlm_r_base-finetuned_after_mrp-flowing-silence-7
haturusinghe
"2024-06-11T00:42:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:42:11Z"
Entry not found
jindaznb/torgo_tiny_finetune_F03
jindaznb
"2024-06-11T00:44:58Z"
0
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-11T00:44:51Z"
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: torgo_tiny_finetune_F03 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # torgo_tiny_finetune_F03 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0640 - Wer: 15.0892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.6368 | 0.85 | 500 | 0.1136 | 7.8189 | | 0.11 | 1.69 | 1000 | 0.0872 | 9.1907 | | 0.0969 | 2.54 | 1500 | 0.0843 | 9.3278 | | 0.0679 | 3.39 | 2000 | 0.0980 | 7.1331 | | 0.053 | 4.24 | 2500 | 0.0756 | 7.1331 | | 0.0361 | 5.08 | 3000 | 0.0637 | 9.1907 | | 0.0278 | 5.93 | 3500 | 0.0491 | 8.3676 | | 0.0233 | 6.78 | 4000 | 0.0446 | 27.8464 | | 0.0148 | 7.63 | 4500 | 0.0403 | 12.8944 | | 0.0149 | 8.47 | 5000 | 0.0748 | 28.6694 | | 0.0105 | 9.32 | 5500 | 0.0631 | 17.6955 | | 0.0087 | 10.17 | 6000 | 0.0619 | 12.0713 | | 0.0075 | 11.02 | 6500 | 0.0525 | 18.6557 | | 0.004 | 11.86 | 7000 | 0.0588 | 19.7531 | | 0.0039 | 12.71 | 7500 | 0.0618 | 24.5542 | | 0.0029 | 13.56 | 8000 | 0.0915 | 13.7174 | | 0.0022 | 14.41 | 8500 | 0.0638 | 20.4390 | | 0.0013 | 15.25 | 9000 | 0.0946 | 14.5405 | | 0.0004 | 16.1 | 9500 | 0.0746 | 15.7750 | | 0.0003 | 16.95 | 10000 | 0.0633 | 11.2483 | | 0.0001 | 17.8 | 10500 | 0.0645 | 12.7572 | | 0.0001 | 18.64 | 11000 | 0.0631 | 14.4033 | | 0.0001 | 19.49 | 11500 | 0.0640 | 15.0892 | ### Framework versions - Transformers 4.32.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.7 - Tokenizers 0.13.3
adamshredder/SeetherSlipknot
adamshredder
"2024-06-11T00:47:32Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:45:52Z"
Entry not found
37channel/Llama-3-8B-Instruct-LoRA-Full-r8-alpha16-drop0.05-step1-b-loop1
37channel
"2024-06-25T08:40:52Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T00:47:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
1024m/WASSA24-Task1-3a-LLAMA3-8B-AP-lora
1024m
"2024-06-11T00:48:31Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T00:48:23Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
1024m/WASSA24-Task1-3b-LLAMA3-8B-AP-lora
1024m
"2024-06-11T00:48:55Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T00:48:42Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
johaanm/plannertest
johaanm
"2024-06-11T00:50:18Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:50:18Z"
Entry not found
xichenhku/cleansd
xichenhku
"2024-06-11T03:21:33Z"
0
0
diffusers
[ "diffusers", "license:apache-2.0", "region:us" ]
null
"2024-06-11T00:50:36Z"
--- license: apache-2.0 ---
joo0598/ztest
joo0598
"2024-06-11T00:54:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:54:11Z"
test
haturusinghe/xlm_r_base-finetuned_after_mrp-vibrant-grass-8
haturusinghe
"2024-06-11T00:57:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T00:57:26Z"
Entry not found
mck-111/ppo-Taxi-v3
mck-111
"2024-06-11T00:59:27Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-11T00:57:55Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: ppo-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="mck-111/ppo-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
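The usage snippet in the card above relies on a `load_from_hub` helper and a `gym` import that it does not define. A hedged sketch of one way to fill that gap follows; the helper name and pickle layout follow the Hugging Face Deep RL course convention, which this card appears to use:

```python
# Hedged sketch: a minimal load_from_hub helper for the card's usage snippet.
# Assumes the repo stores a pickled dict containing "env_id" and the Q-table,
# as in the Hugging Face Deep RL course templates.
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="mck-111/ppo-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```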
jindaznb/torgo_tiny_finetune_F04
jindaznb
"2024-06-11T01:02:39Z"
0
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-11T01:02:33Z"
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: torgo_tiny_finetune_F04 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # torgo_tiny_finetune_F04 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3499 - Wer: 26.6553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.6368 | 0.85 | 500 | 0.3637 | 28.9474 | | 0.11 | 1.69 | 1000 | 0.3521 | 36.5025 | | 0.0969 | 2.54 | 1500 | 0.2911 | 46.3497 | | 0.0679 | 3.39 | 2000 | 0.2895 | 27.0798 | | 0.053 | 4.24 | 2500 | 0.3115 | 26.9949 | | 0.0361 | 5.08 | 3000 | 0.2972 | 28.8625 | | 0.0278 | 5.93 | 3500 | 0.3036 | 26.9100 | | 0.0233 | 6.78 | 4000 | 0.3311 | 59.0832 | | 0.0148 | 7.63 | 4500 | 0.3000 | 27.6740 | | 0.0149 | 8.47 | 5000 | 0.3317 | 37.6061 | | 0.0105 | 9.32 | 5500 | 0.2975 | 29.4567 | | 0.0087 | 10.17 | 6000 | 0.3593 | 27.1647 | | 0.0075 | 11.02 | 6500 | 0.2840 | 28.0985 | | 0.004 | 11.86 | 7000 | 0.3760 | 26.7402 | | 0.0039 | 12.71 | 7500 | 0.3477 | 33.4465 | | 0.0029 | 13.56 | 8000 | 0.3595 | 26.0611 | | 0.0022 | 14.41 | 8500 | 0.3429 | 29.5416 | | 0.0013 | 15.25 | 9000 | 0.2967 | 24.0238 | | 0.0004 | 16.1 | 9500 | 0.3539 | 28.4380 | | 0.0003 | 16.95 | 10000 | 0.3646 | 25.1273 | | 0.0001 | 17.8 | 10500 | 0.3638 | 25.4669 | | 0.0001 | 18.64 | 11000 | 0.3502 | 26.3158 | | 0.0001 | 19.49 | 11500 | 0.3499 | 26.6553 | ### Framework versions - Transformers 4.32.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.7 - Tokenizers 0.13.3
loufi04/llaws
loufi04
"2024-06-11T02:02:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T01:02:54Z"
Entry not found
mck-111/ppo-Taxi-v3-2
mck-111
"2024-06-11T01:03:12Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-11T01:03:07Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: ppo-Taxi-v3-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="mck-111/ppo-Taxi-v3-2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
campgroundcoding/Knoitall
campgroundcoding
"2024-06-11T01:14:21Z"
0
1
null
[ "biology", "chemistry", "legal", "en", "dataset:HuggingFaceFW/fineweb", "doi:10.57967/hf/2459", "license:apache-2.0", "region:us" ]
null
"2024-06-11T01:06:10Z"
--- license: apache-2.0 datasets: - HuggingFaceFW/fineweb language: - en tags: - biology - chemistry - legal ---
Junrulu/Llama-3-8B-Instruct-Iterative-SamPO
Junrulu
"2024-06-14T01:45:20Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T01:07:00Z"
--- model-index: - name: Junrulu/Llama-3-8B-Instruct-Iterative-SamPO results: [] datasets: - HuggingFaceH4/ultrafeedback_binarized language: - en base_model: meta-llama/Meta-Llama-3-8B-Instruct license: llama3 --- # Model Card for Llama-3-8B-Instruct-Iterative-SamPO This repository provides a fine-tuned version of Llama-3-8B-Instruct, using our proposed [SamPO](https://github.com/LuJunru/SamPO) algorithm: Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence. We obey all licenses mentioned in llama3's work. ## Performance | Model | GSM8K | IFEval | PiQA | MMLU | TruthfulQA | AlpacaEval2 | LC AlpacaEval2 | Length in Tokens | | ----- | ------| ------ | ---- | ---- | ---------- | ----------- | -------------- | ---------------- | | **Llama3-8B-Instruct** | 75.06 | 49.40 | 80.69 | 63.85 | 36.47 | 22.57 | 22.92 | 421 | | **Llama3-8B-Instruct-DPO** | 75.59 | 51.80 | **81.94** | 64.06 | 40.39 | 23.34 | 23.20 | 422 | | **Llama3-8B-Instruct-Iterative-DPO** | 74.91 | 52.52 | 81.66 | 64.02 | 39.90 | 23.92 | 25.50 | 403 | | **Llama3-8B-Instruct-Iterative-SamPO** | **77.81** | **60.55** | 81.18 | **64.12** | **44.07** | **30.68** | **35.14** | 377 | ## Evaluation Details Five conditional benchmarks, using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness): - GSM8K: 8-shot, report strict match - IFEval: 3-shot, report instruction-level strict accuracy - PiQA: 3-shot, report accuracy - MMLU: 0-shot, report normalized accuracy - TruthfulQA: 3-shot, report accuracy of single-true mc1 setting One open-ended benchmark, using official [alpaca_eval](https://github.com/tatsu-lab/alpaca_eval/): - AlpacaEval2: win rate (%) judged by GPT-4-turbo between the model's outputs vs. the GPT-4-turbo's response - LC AlpacaEval2: length-debiased win rate (%) of AlpacaEval2 - Length in Tokens: the average output length of AlpacaEval2, calculated in tokens with Llama3's tokenizer ## Input Format The model is trained to use the following format: ``` <|start_header_id|>user<|end_header_id|> {PROMPT}<|eot_id|> <|start_header_id|>assistant<|end_header_id|> {Response} ``` ## Training hyperparameters The following hyperparameters were used during DPO/SamPO training: - DPO beta: 0.1 - learning_rate: 4e-7 - total_train_batch_size: 128 - optimizer: AdamW with beta1 0.9, beta2 0.999 and epsilon 1e-8 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - Weight Decay: 0.0 - num_epochs: 3.0 - Specifically add above input format over training samples
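As a hedged sketch (not part of the card above), generation with this checkpoint can go through the tokenizer's chat template, which for Llama-3-Instruct derivatives should emit the header/eot input format quoted in the card; verify the rendered prompt against that format before relying on it:

```python
# Hedged sketch: generate with the SamPO-tuned checkpoint using transformers.
# apply_chat_template is assumed to reproduce the Llama-3 format quoted in the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Junrulu/Llama-3-8B-Instruct-Iterative-SamPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize the SamPO idea in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```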
1024m/WASSA24-Task1-3A-LLAMA3-8B-V001-lora
1024m
"2024-06-11T01:07:30Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T01:07:19Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mck-111/q-Taxi-v3-3
mck-111
"2024-06-11T01:08:38Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-11T01:08:36Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="mck-111/q-Taxi-v3-3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
1024m/WASSA24-Task1-3B-LLAMA3-8B-V001-lora
1024m
"2024-06-11T01:08:54Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T01:08:45Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
iamanaiart/LCM-animeScreencapStyle_assV14Beta-openvino
iamanaiart
"2024-06-11T01:17:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T01:15:09Z"
Entry not found
StrawHatLuffy77/Thelastgeneration
StrawHatLuffy77
"2024-06-11T01:15:55Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-11T01:15:55Z"
--- license: apache-2.0 ---
datek/google-gemma-2b-1718068649
datek
"2024-06-11T01:20:14Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T01:17:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MG31/v8_epoch242_32_6
MG31
"2024-06-11T01:27:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T01:21:50Z"
Entry not found
solidrust/UltraMerge-7B-AWQ
solidrust
"2024-06-11T01:53:46Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "awq", "region:us" ]
text-generation
"2024-06-11T01:29:41Z"
--- library_name: transformers tags: - 4-bit - AWQ - text-generation - autotrain_compatible - endpoints_compatible pipeline_tag: text-generation inference: false quantized_by: Suparious --- # mlabonne/UltraMerge-7B AWQ - Model creator: [mlabonne](https://huggingface.co/mlabonne) - Original model: [UltraMerge-7B](https://huggingface.co/mlabonne/UltraMerge-7B) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/UltraMerge-7B-AWQ" system_message = "You are UltraMerge-7B, incarnated as a powerful AI. You were created by mlabonne." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
BARDO1222/test
BARDO1222
"2024-06-11T01:29:45Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-11T01:29:45Z"
--- license: mit ---
1024m/WASSA24-Task1-3A-LLAMA3-8B-V002-lora
1024m
"2024-06-11T01:31:33Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T01:31:23Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
1024m/WASSA24-Task1-3B-LLAMA3-8B-V002-lora
1024m
"2024-06-11T01:32:50Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T01:32:39Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
fxmeng/PiSSA-Mixtral-8x7B-4bit-r64-5iter
fxmeng
"2024-06-11T08:06:40Z"
0
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-11T01:36:00Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ryan404/taco_yolo_v10
Ryan404
"2024-06-11T01:38:37Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-11T01:37:40Z"
--- license: mit ---
intronhealth/afrispeech-whisper-medium-all
intronhealth
"2024-06-29T18:56:54Z"
0
0
transformers
[ "transformers", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "en", "arxiv:2310.00274", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-11T01:38:39Z"
--- language: - en tags: - audio - automatic-speech-recognition - hf-asr-leaderboard widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: whisper-medium results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Afrispeech-200 type: intronhealth/afrispeech-200 config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 0 pipeline_tag: automatic-speech-recognition license: apache-2.0 --- # Afrispeech-Whisper-Medium-All This model builds upon the capabilities of Whisper Medium (a pre-trained model for speech recognition and translation trained on a massive 680k hour dataset). While Whisper demonstrates impressive generalization abilities, this model takes it a step further to be very specific for African accents. **Fine-tuned on the AfriSpeech-200 dataset**, specifically designed for African accents, this model offers enhanced performance for speech recognition tasks on African languages. - Dataset: https://huggingface.co/datasets/intronhealth/afrispeech-200 - Paper: https://arxiv.org/abs/2310.00274 ## Transcription In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe). ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("intronhealth/afrispeech-whisper-medium-all") >>> model = WhisperForConditionalGeneration.from_pretrained("intronhealth/afrispeech-whisper-medium-all") >>> model.config.forced_decoder_ids = None >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. ## Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline can be run with batched inference. 
It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`: ```python >>> import torch >>> from transformers import pipeline >>> from datasets import load_dataset >>> device = "cuda:0" if torch.cuda.is_available() else "cpu" >>> pipe = pipeline( >>> "automatic-speech-recognition", >>> model="intronhealth/afrispeech-whisper-medium-all", >>> chunk_length_s=30, >>> device=device, >>> ) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> prediction = pipe(sample.copy(), batch_size=8)["text"] " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>> # we can also return timestamps for the predictions >>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"] [{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
ryo0634/tokenizer-example
ryo0634
"2024-06-11T01:41:41Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T01:41:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
intronhealth/Seyfelislem-afrispeech-whisper-large-v2-A100
intronhealth
"2024-06-11T01:41:54Z"
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-06-11T01:41:54Z"
--- license: cc-by-nc-sa-4.0 ---
yrju/Llama-2-7b-merged-dare-linear
yrju
"2024-06-11T01:52:57Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "base_model:meta-llama/Llama-2-7b-hf", "base_model:mrm8488/llama-2-coder-7b", "base_model:meta-llama/CodeLlama-7b-Instruct-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T01:49:51Z"
---
base_model:
- meta-llama/Llama-2-7b-hf
- mrm8488/llama-2-coder-7b
- meta-llama/CodeLlama-7b-Instruct-hf
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method using [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) as a base.

### Models Merged

The following models were included in the merge:
* [mrm8488/llama-2-coder-7b](https://huggingface.co/mrm8488/llama-2-coder-7b)
* [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: meta-llama/Llama-2-7b-hf
  - model: meta-llama/CodeLlama-7b-Instruct-hf
    parameters:
      density: 0.65
      weight: 1.0
  - model: mrm8488/llama-2-coder-7b
    parameters:
      density: 0.35
      weight: 0.5
merge_method: dare_linear
base_model: meta-llama/Llama-2-7b-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
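A minimal sketch of loading the merged checkpoint for generation (the prompt and generation settings below are illustrative assumptions, not part of the merge configuration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The merged model is a standard Llama-architecture causal LM checkpoint.
model_id = "yrju/Llama-2-7b-merged-dare-linear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```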
humz92/lamp
humz92
"2024-06-11T01:50:17Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T01:50:17Z"
Entry not found
hordiales/llama-3-8b-chat-doctor
hordiales
"2024-06-11T20:58:10Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T01:52:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
medamine01/best_rf_model.pkl
medamine01
"2024-06-11T01:54:24Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T01:54:24Z"
Entry not found
haturusinghe/xlm_r_large-baseline-smart-shape-1
haturusinghe
"2024-06-11T01:56:18Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T01:56:18Z"
Entry not found
shhsehfjxkHsndh/m
shhsehfjxkHsndh
"2024-06-11T01:59:20Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-11T01:59:20Z"
--- license: openrail ---
stephansf/llama3_qa_builder_model
stephansf
"2024-06-11T02:02:31Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:02:23Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** stephansf - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
stephansf/llama3_qa_builder_modell
stephansf
"2024-06-11T02:02:32Z"
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:02:31Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tuanhung7/vietzephyr-7b-lora-8bit
tuanhung7
"2024-06-11T02:12:01Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:HuggingFaceH4/zephyr-7b-beta", "region:us" ]
null
"2024-06-11T02:02:44Z"
--- library_name: peft base_model: HuggingFaceH4/zephyr-7b-beta --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1.dev0
somelier/NousResearch-Hermes-2-Theta-Llama-3-8B-GGUF-Q8_K_M
somelier
"2024-06-11T02:08:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T02:08:49Z"
Entry not found
MubarakB/wav2vec2-large-xls-r-300m-zu-1hr-v1
MubarakB
"2024-06-11T08:16:51Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "zl", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-11T02:09:16Z"
---
language:
- zu
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-zu-1hr-v1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-zu-1hr-v1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the FLEURS dataset. It achieves the following results on the evaluation set:
- Loss: 23.3411
- Wer: 1.0
- Cer: 1.5012

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer | Cer    |
|:-------------:|:------:|:----:|:---------------:|:---:|:------:|
| No log        | 0.9231 | 6    | 23.3703         | 1.0 | 1.4645 |
| No log        | 2.0    | 13   | 23.3671         | 1.0 | 1.4680 |
| No log        | 2.9231 | 19   | 23.3600         | 1.0 | 1.4756 |
| No log        | 4.0    | 26   | 23.3411         | 1.0 | 1.5012 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
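For inference, the checkpoint can be loaded as a CTC speech-recognition pipeline. This is only a sketch: the audio path is a placeholder, and given the reported WER of 1.0 the transcriptions from this early checkpoint are unlikely to be accurate yet.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="MubarakB/wav2vec2-large-xls-r-300m-zu-1hr-v1",
)

# "sample.wav" is a placeholder path; XLS-R expects 16 kHz mono audio.
print(asr("sample.wav")["text"])
```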
stiucsib/gemma_nael_on_sciqjson
stiucsib
"2024-06-11T02:32:54Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T02:09:23Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ryannc/hc-mistral-alpaca
ryannc
"2024-06-11T22:59:13Z"
0
0
peft
[ "peft", "mistral", "axolotl", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-06-11T02:11:51Z"
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: hc-mistral-alpaca
  results: []
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

### Model Description

A model that can generate [Honeycomb Queries](https://www.honeycomb.io/blog/introducing-query-assistant).

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).

_fine-tuned by [Hamel Husain](https://hamel.dev)_

# Usage

You can use this model with the following code:

First, download the model

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = 'parlance-labs/hc-mistral-alpaca'
model = AutoPeftModelForCausalLM.from_pretrained(model_id).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
```

Then, construct the prompt template like so:

```python
def prompt(nlq, cols):
    return f"""Honeycomb is an observability platform that allows you to write queries to inspect trace data. You are an assistant that takes a natural language query (NLQ) and a list of valid columns and produce a Honeycomb query.

### Instruction:

NLQ: "{nlq}"

Columns: {cols}

### Response:
"""

def prompt_tok(nlq, cols):
    _p = prompt(nlq, cols)
    input_ids = tokenizer(_p, return_tensors="pt", truncation=True).input_ids.cuda()
    out_ids = model.generate(input_ids=input_ids, max_new_tokens=5000, do_sample=False)
    return tokenizer.batch_decode(out_ids.detach().cpu().numpy(), skip_special_tokens=True)[0][len(_p):]
```

Finally, you can get predictions like this:

```python
# model inputs
nlq = "Exception count by exception and caller"
cols = ['error', 'exception.message', 'exception.type', 'exception.stacktrace', 'SampleRate', 'name', 'db.user', 'type', 'duration_ms', 'db.name', 'service.name', 'http.method', 'db.system', 'status_code', 'db.operation', 'library.name', 'process.pid', 'net.transport', 'messaging.system', 'rpc.system', 'http.target', 'db.statement', 'library.version', 'status_message', 'parent_name', 'aws.region', 'process.command', 'rpc.method', 'span.kind', 'serializer.name', 'net.peer.name', 'rpc.service', 'http.scheme', 'process.runtime.name', 'serializer.format', 'serializer.renderer', 'net.peer.port', 'process.runtime.version', 'http.status_code', 'telemetry.sdk.language', 'trace.parent_id', 'process.runtime.description', 'span.num_events', 'messaging.destination', 'net.peer.ip', 'trace.trace_id', 'telemetry.instrumentation_library', 'trace.span_id', 'span.num_links', 'meta.signal_type', 'http.route']

# print prediction
out = prompt_tok(nlq, cols)
print(nlq, '\n', out)
```

This will give you a prediction that looks like this:

```md
"{'breakdowns': ['exception.message', 'exception.type'], 'calculations': [{'op': 'COUNT'}], 'filters': [{'column': 'exception.message', 'op': 'exists'}, {'column': 'exception.type', 'op': 'exists'}], 'orders': [{'op': 'COUNT', 'order': 'descending'}], 'time_range': 7200}"
```

Alternatively, you can play with this model on Replicate: [hamelsmu/honeycomb-2](https://replicate.com/hamelsmu/honeycomb-2)

# Hosted Inference

This model is hosted on Replicate: [hamelsmu/honeycomb-2](https://replicate.com/hamelsmu/honeycomb-2), using [this config](https://github.com/hamelsmu/replicate-examples/tree/master/mistral-transformers-2).

# Training Procedure

Used [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl/tree/main), see [this config](configs/axolotl_config.yml).

See this [wandb run](https://wandb.ai/hamelsmu/hc-axolotl-mistral/runs/7dq9l9vu/overview) to see training metrics.

### Framework versions

- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.0
csukuangfj/vits-piper-en_GB-southern_english_male-medium
csukuangfj
"2024-06-11T02:22:02Z"
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
"2024-06-11T02:12:13Z"
--- license: apache-2.0 ---
FevenTad/promt_eng_model_10E
FevenTad
"2024-06-11T02:14:42Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:12:23Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** FevenTad - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Davidcv18/llama3-chatbot
Davidcv18
"2024-06-11T02:13:03Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:12:55Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Davidcv18 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
tonybegemy/whisper_small_finetunedenglish_more_speechfinal
tonybegemy
"2024-06-11T02:16:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T02:16:01Z"
Entry not found
Benphil/cot_results
Benphil
"2024-06-11T08:52:52Z"
0
0
transformers
[ "transformers", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-11T02:16:41Z"
--- base_model: google/pegasus-cnn_dailymail tags: - generated_from_trainer model-index: - name: cot_results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cot_results This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 300 | 7.7991 | | 7.8435 | 2.0 | 600 | 2.6112 | | 7.8435 | 3.0 | 900 | 1.2209 | | 2.9736 | 4.0 | 1200 | 1.1317 | | 1.6755 | 5.0 | 1500 | 1.1160 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
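A minimal usage sketch for the fine-tuned checkpoint (the input text and generation settings below are placeholders, not values taken from training):

```python
from transformers import pipeline

# Load the fine-tuned PEGASUS checkpoint as a summarization pipeline.
summarizer = pipeline("summarization", model="Benphil/cot_results")

text = (
    "Replace this placeholder with the passage you want to summarize. "
    "The pipeline returns an abstractive summary between min_length and max_length tokens."
)
summary = summarizer(text, max_length=64, min_length=8, do_sample=False)[0]["summary_text"]
print(summary)
```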
FINwillson/university_learning_ver2
FINwillson
"2024-06-11T02:18:22Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:17:48Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** FINwillson - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mck-111/q-Taxi-v3-4
mck-111
"2024-06-11T02:19:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T02:19:21Z"
Entry not found
Ctucket/code-llama3-8b-medical_fr
Ctucket
"2024-06-11T10:48:48Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
"2024-06-11T02:19:43Z"
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B datasets: - generator model-index: - name: code-llama3-8b-medical_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama3-8b-medical_fr This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
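Since this repository stores a PEFT adapter rather than full weights, it has to be applied on top of the base model. A minimal sketch (the base model is gated on the Hub, and the dtype and French prompt below are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"            # gated: requires accepting the license on the Hub
adapter_id = "Ctucket/code-llama3-8b-medical_fr"  # this repository (adapter weights)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder French medical prompt, matching the card's stated domain.
prompt = "Quels sont les symptomes courants de l'hypertension ?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```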
picturesonpictures/PoP
picturesonpictures
"2024-06-11T02:24:47Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-11T02:21:19Z"
--- license: apache-2.0 ---
EleutherAI/Mistral-7B-v0.1-subtraction-random-standardized-many-random-names
EleutherAI
"2024-06-12T02:28:58Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-11T02:22:06Z"
Entry not found
jcamacarocc/maduro
jcamacarocc
"2024-06-11T02:24:33Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T02:23:22Z"
Entry not found
EleutherAI/Mistral-7B-v0.1-multiplication-random-standardized-many-random-names
EleutherAI
"2024-06-12T02:28:56Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-11T02:23:32Z"
Entry not found
EleutherAI/Mistral-7B-v0.1-modularaddition-random-standardized-many-random-names
EleutherAI
"2024-06-12T04:35:44Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-11T02:23:58Z"
Entry not found
paciapancakes/paimon_cn
paciapancakes
"2024-06-11T03:37:35Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-11T02:25:09Z"
--- license: apache-2.0 ---
Davidcv18/llama3-chatbotT
Davidcv18
"2024-06-11T02:27:43Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:27:35Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Davidcv18 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
yowhattsup519/AE3Samples
yowhattsup519
"2024-06-11T02:29:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T02:28:37Z"
Entry not found
Ksgk-fy/ecoach_philippine_v5_intro_merge
Ksgk-fy
"2024-06-11T02:33:19Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T02:30:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eafish/web-onnx
eafish
"2024-06-18T06:40:55Z"
0
0
null
[ "onnx", "license:mit", "region:us" ]
null
"2024-06-11T02:35:24Z"
--- license: mit ---
163zhuangz/asr_v1
163zhuangz
"2024-06-11T03:08:10Z"
0
0
null
[ "onnx", "region:us" ]
null
"2024-06-11T02:35:33Z"
Entry not found
Anitej912/Qwen2-0.5B-Instruct-GGUF
Anitej912
"2024-06-11T02:39:02Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T02:39:02Z"
Entry not found
DeployQL/XTR-onnx
DeployQL
"2024-06-12T04:45:01Z"
0
0
null
[ "onnx", "arxiv:2304.01982", "base_model:google/xtr-base-en", "license:apache-2.0", "region:us" ]
null
"2024-06-11T02:44:29Z"
---
base_model: google/xtr-base-en
license: apache-2.0
tags:
- arxiv:2304.01982
---

XTR-ONNX
---

This model is Google's XTR-base-en model exported to ONNX format.

Original XTR model: https://huggingface.co/google/xtr-base-en

Given a max-length input of 512, this model will output a 128-dimensional vector for each token. XTR's demo notebook uses only one special token -- EOS.

## Using this model

This model can be plugged into LintDB to index data into a database.

### In LintDB

```python
# create an XTR index
config = ldb.Configuration()
config.num_subquantizers = 64
config.dim = 128
config.nbits = 4
config.quantizer_type = ldb.IndexEncoding_XTR
index = ldb.IndexIVF(f"experiments/goog", config)

# build a collection on top of the index
opts = ldb.CollectionOptions()
opts.model_file = "assets/xtr/encoder.onnx"
opts.tokenizer_file = "assets/xtr/spiece.model"
collection = ldb.Collection(index, opts)

collection.train(chunks, 50, 10)

for i, snip in enumerate(chunks):
    collection.add(0, i, snip, {'docid': f'{i}'})
```

## Creating this model

In order to create this model, we had to combine XTR's T5 encoder model with a dense layer. Below is the code used to do this. Credit to yaman on Github for this solution.

```python
import os

from sentence_transformers import SentenceTransformer
from sentence_transformers import models
import torch
import torch.nn as nn
import onnx
import numpy as np
from transformers import T5EncoderModel
from pathlib import Path
from transformers import AutoTokenizer

# https://github.com/huggingface/optimum/issues/1519
class CombinedModel(nn.Module):
    def __init__(self, transformer_model, dense_model):
        super(CombinedModel, self).__init__()
        self.transformer = transformer_model
        self.dense = dense_model

    def forward(self, input_ids, attention_mask):
        outputs = self.transformer(input_ids, attention_mask=attention_mask)
        token_embeddings = outputs['last_hidden_state']
        return self.dense({'sentence_embedding': token_embeddings})

save_directory = "onnx/"

# NOTE: `path` and `dense_filename` are placeholders -- point them at a local copy of the
# base XTR checkpoint in sentence-transformers layout (the dense weights live under 2_Dense/).
path = "google/xtr-base-en"
dense_filename = "pytorch_model.bin"

# Load a model from transformers and export it to ONNX
tokenizer = AutoTokenizer.from_pretrained(path)

# load the t5 base encoder model.
transformer_model = T5EncoderModel.from_pretrained(path)

dense_model = models.Dense(
    in_features=768,
    out_features=128,
    bias=False,
    activation_function=nn.Identity()
)
state_dict = torch.load(os.path.join(path, '2_Dense', dense_filename))
dense_model.load_state_dict(state_dict)

model = CombinedModel(transformer_model, dense_model)
model.eval()

input_text = "Who founded google"
inputs = tokenizer(input_text, padding='longest', truncation=True, max_length=128, return_tensors='pt')
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']

torch.onnx.export(
    model,
    (input_ids, attention_mask),
    "combined_model.onnx",
    export_params=True,
    opset_version=17,
    do_constant_folding=True,
    input_names=['input_ids', 'attention_mask'],
    output_names=['contextual'],
    dynamic_axes={
        'input_ids': {0: 'batch_size', 1: 'seq_length'},  # variable length axes
        'attention_mask': {0: 'batch_size', 1: 'seq_length'},
        'contextual': {0: 'batch_size', 1: 'seq_length'}
    }
)

onnx.checker.check_model("combined_model.onnx")
combined_model = onnx.load("combined_model.onnx")

import onnxruntime as ort

ort_session = ort.InferenceSession("combined_model.onnx")
output = ort_session.run(None, {'input_ids': input_ids.numpy(), 'attention_mask': attention_mask.numpy()})

# Run the PyTorch model
pytorch_output = model(input_ids, attention_mask)

print(pytorch_output['sentence_embedding'])
print(output[0])

# Compare the outputs
# print("Are the outputs close?", np.allclose(pytorch_output.detach().numpy(), output[0], atol=1e-6))

# Calculate the differences between the outputs
differences = pytorch_output['sentence_embedding'].detach().numpy() - output[0]

# Print the standard deviation of the differences
print("Standard deviation of the differences:", np.std(differences))

print("pytorch_output size:", pytorch_output['sentence_embedding'].size())
print("onnx_output size:", output[0].shape)
```
KeshavRa/About_YSA_Database
KeshavRa
"2024-06-11T02:44:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T02:44:42Z"
Entry not found
1024m/WASSA24-Task1-3A-LLAMA3-8B-V003-lora
1024m
"2024-06-11T02:45:38Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:45:26Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
1024m/WASSA24-Task1-3B-LLAMA3-8B-V003-lora
1024m
"2024-06-11T02:45:49Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:45:37Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** 1024m - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
bakuaifuji/asdasd
bakuaifuji
"2024-06-11T02:47:36Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-11T02:47:36Z"
--- license: apache-2.0 ---
mcmonkey/clipseg-rd64-refined-fp16
mcmonkey
"2024-06-11T03:05:27Z"
0
0
transformers
[ "transformers", "safetensors", "clipseg", "vision", "image-segmentation", "license:apache-2.0", "region:us" ]
image-segmentation
"2024-06-11T02:51:13Z"
--- license: apache-2.0 tags: - vision - image-segmentation inference: false --- This is a copy of https://huggingface.co/CIDAS/clipseg-rd64-refined but as FP16 Safetensors, for use in Swarm https://github.com/Stability-AI/StableSwarmUI
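A minimal usage sketch (the processor is assumed to come from the original CIDAS repository, since this repo only repackages the weights; the image path and prompts are placeholders, and a CUDA device is assumed for FP16 inference):

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained(
    "mcmonkey/clipseg-rd64-refined-fp16", torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg")                       # placeholder image path
prompts = ["a cat", "the background"]                   # one mask is predicted per prompt
inputs = processor(text=prompts, images=[image] * len(prompts), return_tensors="pt").to("cuda")
inputs["pixel_values"] = inputs["pixel_values"].half()  # match the FP16 weights

with torch.no_grad():
    logits = model(**inputs).logits                     # low-resolution mask logits, one per prompt
masks = torch.sigmoid(logits)
print(masks.shape)
```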
Gaysa/Hoi-1997
Gaysa
"2024-06-11T02:56:02Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-11T02:54:30Z"
--- license: apache-2.0 ---
MudassirFayaz/career_councling_bart_0.5
MudassirFayaz
"2024-06-11T02:56:57Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T02:56:56Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ksgk-fy/phillipine_customer_v3_great_smalltalk_phase
Ksgk-fy
"2024-06-11T05:27:58Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-11T02:59:51Z"
Entry not found
KeshavRa/Our_Team_Youth_Leaders_Database
KeshavRa
"2024-06-11T03:02:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T03:02:58Z"
Entry not found
KeshavRa/Qualify_Apply_For_Village_Database
KeshavRa
"2024-06-11T03:06:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T03:06:21Z"
Entry not found
jcamacarocc/jesusalvarez
jcamacarocc
"2024-07-01T22:21:35Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T03:06:40Z"
Entry not found
Autsadin/Llama3_8b_Unsloth_ragchat
Autsadin
"2024-06-11T03:36:20Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T03:07:05Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gg232/dummy-model
gg232
"2024-06-11T03:26:59Z"
0
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-06-11T03:09:09Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Alys9047/Jacky
Alys9047
"2024-06-11T03:12:01Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-11T03:09:21Z"
--- license: openrail ---
KeshavRa/Tiny_House_Village_Database
KeshavRa
"2024-06-11T03:10:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T03:10:27Z"
Entry not found
poetrilin/test_bart
poetrilin
"2024-06-11T03:16:31Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-11T03:16:31Z"
--- license: mit ---
Ksgk-fy/ecoach_philippine_v3_greet_smalltalk_merge
Ksgk-fy
"2024-06-11T03:28:20Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T03:25:49Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
clojia/naschain
clojia
"2024-07-01T08:15:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T03:28:48Z"
Entry not found
madiramsey/ub9fb389bfg98eb9uf_example_task_name_example_exp_1
madiramsey
"2024-06-11T03:32:46Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-11T03:32:10Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
terenceWL/speaker-diarization-3.1
terenceWL
"2024-06-11T03:34:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-11T03:34:58Z"
Entry not found
lzy510016411/law_correct
lzy510016411
"2024-06-11T05:55:24Z"
0
0
null
[ "safetensors", "zh", "license:apache-2.0", "region:us" ]
null
"2024-06-11T03:35:51Z"
---
license: apache-2.0
language:
- zh
---

# Model Card for Model ID

## Model Details

### Model Description

A text-correction (proofreading) model built specifically for the Chinese legal domain.

Training data:

- **Dataset by:** [correct_law](https://huggingface.co/datasets/lzy510016411/correct_law/)

### Model Sources [optional]

Built on Qwen1.5-14B as the base and LoRA-finetuned with the LLaMA-Factory framework.

Training parameters:

```yaml
quantization_bit: 4
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,gate_proj,v_proj,up_proj,k_proj,o_proj,down_proj
lora_rank: 32
lora_alpha: 64
lora_dropout: 0.05
ddp_timeout: 180000000
deepspeed: examples/deepspeed/ds_z2_config.json
dataset: set this yourself; we also mixed in a small amount of general QA data (Alpaca-style), but not much
template: qwen
cutoff_len: 512
max_length: 512
overwrite_cache: true
preprocessing_num_workers: 16
output_dir: saves/qwen-14b/lora/sft
mix_strategy: interleave
logging_steps: 5
save_steps: 500
plot_loss: true
save_total_limit: 20
overwrite_output_dir: true
flash_attn: fa2
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3
weight_decay: 0.01
optim: adamw_torch # the 8-bit optimizer seems to have issues
lr_scheduler_type: cosine
warmup_steps: 0.01
bf16: true
load_best_model_at_end: true
val_size: 0.001
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 250
```

## Uses

Serving the model through vLLM's OpenAI-compatible server is recommended; a launch script:

```sh
nohup /root/miniconda3/envs/py310/bin/python -m vllm.entrypoints.openai.api_server \
    --model <model path> \
    --port 7777 \
    --tensor-parallel-size 2 \
    --gpu-memory-utilization 0.80 \
    --swap-space 8 \
    --max-model-len 512 \
    --max-log-len 512 \
    --enable-lora \
    --max-lora-rank 32 \
    --max-cpu-loras 8 \
    --max-num-seqs 8 \
    --lora-modules correct=checkpoint-20000 (fill in the path to the LoRA adapter here) >/mnt/Models/base_llm.log 2>&1 &
```

Example client call:

```python
from openai import OpenAI

# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://192.168.110.171:6660/v1"
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Sample legal text containing a deliberate error ("出境如境" should read "出境入境").
source_data = """中国公民出境如境,应当向出入境边防检查机关交验本人的护照或者其他旅行证件等出境入境证件,履行规定的手续,经查验准许,方可出境入境。
具备条件的口岸,出入境边防检查机关应当为中国公民出境入境提供专用通道等便利措施。"""

chat_response = client.chat.completions.create(
    model="correct-lora",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "对下列文本进行纠错:\n\n%s" % source_data},  # "Correct the following text:"
    ],
    temperature=0.1
)

if chat_response:
    content = chat_response.choices[0].message.content
    new_content = content[9:]  # drop the first 9 characters of the reply before comparing with the input
    if new_content == source_data or content == '该文本没有错误':
        print('该文本没有错误')  # "The text has no errors."
    else:
        print(content)
else:
    print("Error: no response returned from the server")
```

### Direct Use

The model can also be loaded directly with `transformers`; a rough sketch is given below.
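A minimal sketch of the direct `transformers` route, assuming this repository holds a PEFT LoRA adapter on top of Qwen1.5-14B (the base model id and generation settings below are assumptions, not taken from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-14B-Chat"        # assumed base model; the card only says "Qwen1.5 14B"
adapter_id = "lzy510016411/law_correct"  # this repository, assumed to hold the LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Sample sentence containing a deliberate error for the model to correct.
text = "中国公民出境如境,应当向出入境边防检查机关交验本人的护照。"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "对下列文本进行纠错:\n\n" + text},  # "Correct the following text:"
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)

output_ids = model.generate(input_ids, max_new_tokens=256, temperature=0.1, do_sample=True)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```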