Columns: Model Name (string, 5-122 chars); URL (string, 28-145 chars); Crawled Text (string, 1-199k chars); text (string, 180-199k chars; a fixed prompt prefix followed by the other three fields).
alvarobartt/openhermes-2.5-mistral-7b-dpo
https://huggingface.co/alvarobartt/openhermes-2.5-mistral-7b-dpo
Failed to access https://huggingface.co/alvarobartt/openhermes-2.5-mistral-7b-dpo - HTTP Status Code: 404
DoctorKrazy/sbaitso
https://huggingface.co/DoctorKrazy/sbaitso
This is a voice model trained on Sbaitso, best known as the voice of SCP-079 in the SCP: Containment Breach video game. If you use this AI voice model, please credit me by linking this page in the description.
OptimusAz/Comic
https://huggingface.co/OptimusAz/Comic
Title: The Snake and the Traitor. Panel 1: (Wide shot. A dark forest with dense trees and a narrow path. Sunlight shines through the treetops. In the foreground we see a snake gliding elegantly along the path.) Narrator: In a mysterious forest, far from any civilization, lived a clever snake named Seraphina. Panel 2: (Close-up of Seraphina. She has shiny scales and glowing eyes. She looks wary.) Seraphina: This forest holds many secrets. I must be careful and watch whom I trust. Panel 3: (Seraphina approaches another animal lying half in shadow. It is a fox-like creature with a mischievous expression.) Seraphina: Good day, stranger. I am Seraphina. What brings you to this forest? Panel 4: (The fox-like creature smiles, baring its sharp teeth. It looks threatening.) Fox-like creature: I am Vex, and I roam this forest in search of adventure. Perhaps we could go exploring together? Panel 5: (Seraphina eyes Vex skeptically. Her eyes glimmer suspiciously.) Seraphina: I am wary of strangers, Vex. Why should I trust you? Panel 6: (Vex places a paw on his heart and looks at Seraphina with an innocent expression.) Vex: My heart is pure, Seraphina. I swear I will do you no harm. I am only looking for a friend to share these adventures with. Panel 7: (Seraphina thinks for a moment, then nods slowly.) Seraphina: Very well, Vex. We can travel together, but be warned: if you betray me, there will be consequences. Panel 8: (The two continue their journey through the forest as the sun slowly sets. Seraphina remains watchful while Vex chatters cheerfully.) Narrator: And so began the unusual friendship between Seraphina and Vex. But in the shadows lurked a dark secret that would soon come to light.
ypl/zephyr-support-movie_recom
https://huggingface.co/ypl/zephyr-support-movie_recom
Failed to access https://huggingface.co/ypl/zephyr-support-movie_recom - HTTP Status Code: 404
ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss
https://huggingface.co/ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss
This is a sentence-transformers model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model is easy once you have sentence-transformers installed; you can then use it as in the sketch below. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 2334 with parameters: Loss: sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss with parameters: Parameters of the fit()-Method:
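A minimal usage sketch, assuming the standard sentence-transformers API (the example sentences are illustrative):

```python
# Minimal sketch assuming the standard sentence-transformers API.
# pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer("ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss")
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 384): one 384-dimensional vector per sentence
```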
Poliuszko/PPO-LunarLander
https://huggingface.co/Poliuszko/PPO-LunarLander
Failed to access https://huggingface.co/Poliuszko/PPO-LunarLander - HTTP Status Code: 404
Gauri54damle/sdxl-dreambooth-model-BigMac-3.0
https://huggingface.co/Gauri54damle/sdxl-dreambooth-model-BigMac-3.0
Failed to access https://huggingface.co/Gauri54damle/sdxl-dreambooth-model-BigMac-3.0 - HTTP Status Code: 404
Gauravjjj/kkkk
https://huggingface.co/Gauravjjj/kkkk
No model card.
tigerWu/texttoimage
https://huggingface.co/tigerWu/texttoimage
No model card.
supreethrao/mistral-corrector-25k-full
https://huggingface.co/supreethrao/mistral-corrector-25k-full
Failed to access https://huggingface.co/supreethrao/mistral-corrector-25k-full - HTTP Status Code: 404
hyahyoo/corgy_dog_LoRA
https://huggingface.co/hyahyoo/corgy_dog_LoRA
No model card.
reach-vb/musicgen-small-test
https://huggingface.co/reach-vb/musicgen-small-test
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. It is a single-stage auto-regressive Transformer model trained over a 32 kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods such as MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. MusicGen was published in Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. Four checkpoints are released. Try out MusicGen yourself! You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards, and save the outputs as a .wav file using a third-party library, e.g. scipy (see the sketch below). For more details on using the MusicGen model for inference with the 🤗 Transformers library, refer to the MusicGen docs. You can also run MusicGen locally through the original Audiocraft library. Organization developing the model: the FAIR team of Meta AI. Model date: MusicGen was trained between April 2023 and May 2023. Model version: this is version 1 of the model. Model type: MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the Transformer architecture for music modeling. The model comes in different sizes (300M, 1.5B and 3.3B parameters) and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation. Paper or resources for more information: more information can be found in the paper Simple and Controllable Music Generation. Citation details: License: code is released under MIT; model weights are released under CC-BY-NC 4.0. Where to send questions or comments about the model: questions and comments about MusicGen can be sent via the GitHub repository of the project, or by opening an issue. Primary intended use: the primary use of MusicGen is research on AI-based music generation. Primary intended users: the primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models. Out-of-scope use cases: the model should not be used in downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive, or content that propagates historical or current stereotypes. Model performance measures: we used objective measures to evaluate the model on a standard music benchmark, and additionally ran qualitative studies with human participants. More details on performance measures and human studies can be found in the paper. Decision thresholds: not applicable.
The model was evaluated on the MusicCaps benchmark and on an in-domain held-out evaluation set, with no artist overlap with the training set. The model was trained on licensed data from the following sources: the Meta Music Initiative Sound Collection, the Shutterstock music collection and the Pond5 music collection. See the paper for more details about the training set and corresponding preprocessing. Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely the open-source Hybrid Transformer for Music Source Separation (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics from the models used in the paper. More information can be found in the paper Simple and Controllable Music Generation, in the Results section. Data: the data sources used to train the model were created by music professionals and are covered by legal agreements with the rights holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance. Mitigations: vocals were removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open-source Hybrid Transformer for Music Source Separation (HT-Demucs). Limitations and biases: the data source potentially lacks diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of music genres that exist. The generated samples will reflect the biases of the training data. Further work on this model should include methods for balanced and just representations of cultures, for example by scaling the training data to be both diverse and inclusive. Risks and harms: biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data. Use cases: users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used in downstream applications without further investigation and mitigation of risks.
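A sketch of the Transformers usage the card refers to, following the standard MusicGen API; the facebook/musicgen-small checkpoint and the prompt are illustrative assumptions, not taken from this card:

```python
# Sketch assuming the standard 🤗 Transformers MusicGen API (>= 4.31.0).
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)

# EnCodec runs at 32 kHz; write the first generated sample to disk.
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```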
waldie/Yi-34B-200K-AEZAKMI-RAW-2901-4bpw-h6-exl2
https://huggingface.co/waldie/Yi-34B-200K-AEZAKMI-RAW-2901-4bpw-h6-exl2
EXPERIMENTAL MODEL, NOT FINAL, IT HAS SOME ISSUES, I DIDN'T TEST IT TOO MUCH YET. Yi-34B 200K base model fine-tuned on the RAWrr v1 dataset via DPO and then fine-tuned on the AEZAKMI v2 dataset via SFT. DPO training took around 6 hours, SFT took around 25 hours; I used unsloth for both stages. It's like airoboros but with less gptslop, no refusals, and less of the language typical of RLHFed OpenAI models. Say goodbye to "It's important to remember"! The prompt format is standard ChatML. Don't expect it to be good at math or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy free chatbot. The base model used for fine-tuning was the 200K-context Yi-34B-Llama model shared by larryvrh. Training was done with max_position_embeddings set to 4096, then reverted back to 200K after applying the LoRA. I recommend using the ChatML format, as this was used during the fine-tune; a sketch of the template follows below. You can set a different system message; the model seems to respect that fine, so it wasn't overfitted. Both the "A chat." and "A chat with uncensored assistant." system prompts work fine and are pretty refusal-free. It's a chat model, not a base completion-only one. Use is limited by the Yi license. Since the no-robots dataset was used for making rawrr_v1, you maybe shouldn't use it for commercial activities. I recommend setting repetition penalty to around 1.05 to avoid repetition. So far I have had somewhat good experience running this model with temperature 1.0-1.2. The strongest anti-refusal bias seems to be at 0 ctx (the first prompt), but it's also present, albeit a little less, further down. I plan to expand the rawrr dataset and include more samples without a system prompt, which should help here. DPO hyperparameters: lora_r: 16, lora_alpha: 32, max_length: 500, learning_rate: 0.00005, lr_scheduler_type: "linear", target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"], gradient_accumulation_steps: 16, per_device_batch_size: 1, num_train_epochs: 1. The script used for DPO training can be found here: https://huggingface.co/adamo1139/Yi-34B-200K-rawrr1-LORA-DPO-experimental-r3/blob/main/yi-34b-dpo-unsloth-1.py SFT hyperparameters: lora_r: 16, lora_alpha: 32, max_length: 2400, learning_rate: 0.000095, lr_scheduler_type: "cosine", lr_scheduler_kwargs: { "num_cycles": 0.25 }, target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"], gradient_accumulation_steps: 1, per_device_batch_size: 1, num_train_epochs: 2. The script used for SFT training can be found here (older run, different hyperparameters): https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301-LoRA/blob/main/yi-34b-aezakmi-sft-1-hf.py Thanks to mlabonne, Daniel Han and Michael Han for providing open-source code that was used for fine-tuning.
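A sketch of the standard ChatML template the card recommends; the "A chat." system message is one the author reports working well, and the user turn is purely illustrative:

```python
# Standard ChatML prompt template (user content is illustrative).
prompt = (
    "<|im_start|>system\n"
    "A chat.<|im_end|>\n"
    "<|im_start|>user\n"
    "Tell me about the Andromeda galaxy.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```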
Moumou54/generateurannonce
https://huggingface.co/Moumou54/generateurannonce
No model card.
vieveks/llama-2-7b-platypus
https://huggingface.co/vieveks/llama-2-7b-platypus
No model card.
FirulAI/FirulaiModel
https://huggingface.co/FirulAI/FirulaiModel
null
danaleee/CL_rank50_iter500
https://huggingface.co/danaleee/CL_rank50_iter500
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth. You can find some example images in the following. LoRA for the text encoder was enabled: False.
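A loading sketch, assuming the standard diffusers LoRA API; the prompt and output filename are illustrative:

```python
# Sketch assuming the standard diffusers API for DreamBooth LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("danaleee/CL_rank50_iter500")  # text-encoder LoRA was not trained

image = pipe("a photo of sks dog in a bucket").images[0]  # illustrative prompt
image.save("sks_dog.png")
```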
frahman/ppo-Huggy
https://huggingface.co/frahman/ppo-Huggy
This is a trained model of a PPO agent playing Huggy using the Unity ML-Agents library. Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub. You can watch your agent playing directly in your browser.
syedimran07/mistral-finetuned-alpaca
https://huggingface.co/syedimran07/mistral-finetuned-alpaca
Failed to access https://huggingface.co/syedimran07/mistral-finetuned-alpaca - HTTP Status Code: 404
Vigilus/prova
https://huggingface.co/Vigilus/prova
null
medxiaorudan/bert-base-cased-finetuned-MultiNERD-SystemB
https://huggingface.co/medxiaorudan/bert-base-cased-finetuned-MultiNERD-SystemB
No model card.
Hosseindastoorani/Maymodel
https://huggingface.co/Hosseindastoorani/Maymodel
null
Sololeveling/insightface
https://huggingface.co/Sololeveling/insightface
null
kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-3e-4
https://huggingface.co/kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-3e-4
This model was trained from scratch on the kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal dataset. It achieves the following results on the evaluation set: Model description: more information needed. Intended uses & limitations: more information needed. Training and evaluation data: more information needed. The following hyperparameters were used during training:
Lakoc/gpt2_256h_8l_add_head6_05
https://huggingface.co/Lakoc/gpt2_256h_8l_add_head6_05
No model card.
Lakoc/gpt2_256h_8l_add_head3_03
https://huggingface.co/Lakoc/gpt2_256h_8l_add_head3_03
No model card.
HeydarS/flan-t5-base_peft_v23
https://huggingface.co/HeydarS/flan-t5-base_peft_v23
An automatically generated model card with every section left as [More Information Needed]. The template notes only that users (both direct and downstream) should be made aware of the risks, biases and limitations of the model, that more information is needed for further recommendations, and that carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
LoneStriker/Senku-70B-Full-2.65bpw-h6-exl2
https://huggingface.co/LoneStriker/Senku-70B-Full-2.65bpw-h6-exl2
A fine-tune of miqu-70b-sf, a dequant of miqudev's leak of Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth. EQ-Bench: 84.89. More benchmarks will be run later.
ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_it.layer1_v0.2_wandblog
https://huggingface.co/ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_it.layer1_v0.2_wandblog
This is the model card of a 🤗 transformers model that has been pushed to the Hub; the card was automatically generated and every section is left as [More Information Needed]. The template notes only that users (both direct and downstream) should be made aware of the risks, biases and limitations of the model, that more information is needed for further recommendations, and that carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
MohamedSaeed-dev/MyTinyLLAMA-GGUF
https://huggingface.co/MohamedSaeed-dev/MyTinyLLAMA-GGUF
No model card.
asorokoumov/ppo-LunarLander-v2
https://huggingface.co/asorokoumov/ppo-LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. The usage section is still the template stub ("TODO: Add your code"); a hedged loading sketch follows below.
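A loading sketch using the huggingface_sb3 helper; the checkpoint filename follows the usual naming convention and is an assumption:

```python
# Sketch assuming the standard huggingface_sb3 + stable-baselines3 APIs.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption based on the usual repo convention.
checkpoint = load_from_hub(repo_id="asorokoumov/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
```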
Ttimofeyka/PiVotStarling-NoromaidNSFW-Mistral-7B
https://huggingface.co/Ttimofeyka/PiVotStarling-NoromaidNSFW-Mistral-7B
This model is a merge of pre-trained language models created using mergekit, merged with the SLERP method. The following models were included in the merge: The following YAML configuration was used to produce this model:
deepnetguy/zeta-x
https://huggingface.co/deepnetguy/zeta-x
Failed to access https://huggingface.co/deepnetguy/zeta-x - HTTP Status Code: 404
jaegon-kim/distilbert-base-uncased-finetuned-emotion
https://huggingface.co/jaegon-kim/distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: Model description: more information needed. Intended uses & limitations: more information needed. Training and evaluation data: more information needed. The following hyperparameters were used during training:
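A minimal inference sketch with the standard 🤗 pipeline API; the printed label is what the emotion dataset would typically produce, not a verified output:

```python
# Sketch assuming the standard 🤗 Transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="jaegon-kim/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy you are here!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- illustrative output
```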
airalribalta/Passthrough-Latxa-Llama-LlamaCode-7b
https://huggingface.co/airalribalta/Passthrough-Latxa-Llama-LlamaCode-7b
Passthrough-Latxa-Llama-LlamaCode-7b is a merge of the following models using LazyMergekit:
danaleee/CL_rank50_iter500_valprompt
https://huggingface.co/danaleee/CL_rank50_iter500_valprompt
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks duck using DreamBooth. You can find some example images in the following. LoRA for the text encoder was enabled: False.
danaleee/CL_rank10_iter500_valprompt
https://huggingface.co/danaleee/CL_rank10_iter500_valprompt
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using DreamBooth. You can find some example images in the following. LoRA for the text encoder was enabled: False.
Nzham/OIL_ANALYSES
https://huggingface.co/Nzham/OIL_ANALYSES
No model card.
dustin1776/DN
https://huggingface.co/dustin1776/DN
No model card.
LoneStriker/Senku-70B-Full-3.5bpw-h6-exl2
https://huggingface.co/LoneStriker/Senku-70B-Full-3.5bpw-h6-exl2
A fine-tune of miqu-70b-sf, a dequant of miqudev's leak of Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth. EQ-Bench: 84.89. More benchmarks will be run later.
praison/qwen-1.8B-test-function-calling
https://huggingface.co/praison/qwen-1.8B-test-function-calling
This model is a fine-tuned version of qwen/Qwen-1_8B-Chat on the glaive_toolcall dataset. Model description: more information needed. Intended uses & limitations: more information needed. Training and evaluation data: more information needed. The following hyperparameters were used during training:
s3nh/gpupoor4
https://huggingface.co/s3nh/gpupoor4
Failed to access https://huggingface.co/s3nh/gpupoor4 - HTTP Status Code: 404
Crayo1902/DPBH_DISTILBERT
https://huggingface.co/Crayo1902/DPBH_DISTILBERT
No model card.
CJWeiss/sled_LEX_t5_ukabs_4
https://huggingface.co/CJWeiss/sled_LEX_t5_ukabs_4
Failed to access https://huggingface.co/CJWeiss/sled_LEX_t5_ukabs_4 - HTTP Status Code: 404
kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-3e-4
https://huggingface.co/kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-3e-4
This model was trained from scratch on the kanishka/counterfactual-babylm-only_other_det_removal dataset. It achieves the following results on the evaluation set: Model description: more information needed. Intended uses & limitations: more information needed. Training and evaluation data: more information needed. The following hyperparameters were used during training:
LoneStriker/DeepMagic-Coder-7b-Alt-AWQ
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-Alt-AWQ
(Note: from short testing, this Alt version generated much better code.) An alternate version of DeepMagic-Coder-7b, which can be found below. This version uses a different config setup, with the actual base model of the two merges as the "base_model". Test both for yourself and see which is better at coding. Benchmarks coming soon. The config can be found below:
fleaxiao/instruct-pix2pix-model
https://huggingface.co/fleaxiao/instruct-pix2pix-model
No model card.
metythorn/khmergpt-llama2-7b-v-001
https://huggingface.co/metythorn/khmergpt-llama2-7b-v-001
null
erikhsos/campusbiernew2_LoRA
https://huggingface.co/erikhsos/campusbiernew2_LoRA
Failed to access https://huggingface.co/erikhsos/campusbiernew2_LoRA - HTTP Status Code: 404
gypztl123/AI
https://huggingface.co/gypztl123/AI
No model card.
Wajid333/a2c-PandaReachDense-v3
https://huggingface.co/Wajid333/a2c-PandaReachDense-v3
This is a trained model of an A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. TODO: Add your code
Ingrid0693/mini-bravo
https://huggingface.co/Ingrid0693/mini-bravo
null
Nexesenex/MiquMaid-v2-70B-alpha-Requant-iMat.GGUF
https://huggingface.co/Nexesenex/MiquMaid-v2-70B-alpha-Requant-iMat.GGUF
Requant with iMatrix of: https://huggingface.co/NeverSleepHistorical/MiquMaid-v2-70B-alpha-GGUF From Q4_K_M through Q8_0. A Q3_K_M quant is available; IQ2_XS is on the way. For testing purposes, so that folks with 36 GB & 24 GB VRAM can use the model. Some LlamaCPP benchmarks: My requant of Miqu: MiquMaid v1: Miqu DPO: MiquMaid v2 Alpha Requant: The Hellaswag scores are divergent due to a change in LlamaCPP 5-6 days ago, but in fact they are stable. The divergence shows in MiquMaid v1, which goes from 89.25 to 83.75 on the exact same eval.
Anujgr8/wav2vec2-indic-hindi-codeswitch-anuj-large
https://huggingface.co/Anujgr8/wav2vec2-indic-hindi-codeswitch-anuj-large
Failed to access https://huggingface.co/Anujgr8/wav2vec2-indic-hindi-codeswitch-anuj-large - HTTP Status Code: 404
lukaSlingshot/counsel_chat_dataset
https://huggingface.co/lukaSlingshot/counsel_chat_dataset
No model card.
Amadeus99/image_classification
https://huggingface.co/Amadeus99/image_classification
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set: Model description: more information needed. Intended uses & limitations: more information needed. Training and evaluation data: more information needed. The following hyperparameters were used during training:
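A minimal inference sketch via the standard image-classification pipeline; the image path is illustrative:

```python
# Sketch assuming the standard 🤗 Transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="Amadeus99/image_classification")
print(classifier("path/to/your_image.jpg"))  # returns top labels with scores
```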
varun-v-rao/roberta-large-bn-adapter-3.17M-snli-model3
https://huggingface.co/varun-v-rao/roberta-large-bn-adapter-3.17M-snli-model3
This model is a fine-tuned version of roberta-large on the None dataset. It achieves the following results on the evaluation set: Model description: more information needed. Intended uses & limitations: more information needed. Training and evaluation data: more information needed. The following hyperparameters were used during training:
WGNW/chamcham_v1_checkpoint_onnx
https://huggingface.co/WGNW/chamcham_v1_checkpoint_onnx
No model card.
everpink/nayeontest
https://huggingface.co/everpink/nayeontest
null
mhms/Embeddings
https://huggingface.co/mhms/Embeddings
No model card.
xshini/HiguchiKaede
https://huggingface.co/xshini/HiguchiKaede
https://civitai.com/models/18732/higuchi-kaede-nijisanji
CJWeiss/sled_LEX_t5_ukabs_5
https://huggingface.co/CJWeiss/sled_LEX_t5_ukabs_5
Failed to access https://huggingface.co/CJWeiss/sled_LEX_t5_ukabs_5 - HTTP Status Code: 404
delli/mixtral-7b-address-validator-merged
https://huggingface.co/delli/mixtral-7b-address-validator-merged
Failed to access https://huggingface.co/delli/mixtral-7b-address-validator-merged - HTTP Status Code: 404