| column | dtype | range / values |
|---|---|---|
| modelId | string | lengths 5 - 122 |
| author | string | lengths 2 - 42 |
| last_modified | unknown | - |
| downloads | int64 | 0 - 738M |
| likes | int64 | 0 - 11k |
| library_name | stringclasses | 245 values |
| tags | sequence | lengths 1 - 4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | unknown | - |
| card | string | lengths 1 - 901k |
soinov/Automatic1111_test
soinov
"2024-06-10T07:05:25Z"
0
0
transformers
[ "transformers", "arxiv:2211.06679", "endpoints_compatible", "region:us" ]
null
"2024-06-10T06:53:38Z"
# Stable Diffusion web UI

A web interface for Stable Diffusion, implemented using the Gradio library.

![](screenshot.png)

## Features

[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
  - a man in a `((tuxedo))` - will pay more attention to tuxedo
  - a man in a `(tuxedo:1.21)` - alternative syntax
  - select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
  - have as many embeddings as you want and use any names you like for them
  - use multiple embeddings with different numbers of vectors per token
  - works with half precision floating point numbers
  - train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
  - GFPGAN, neural network that fixes faces
  - CodeFormer, face restoration tool as an alternative to GFPGAN
  - RealESRGAN, neural network upscaler
  - ESRGAN, neural network upscaler with a lot of third party models
  - SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
  - LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
  - Adjust sampler eta values (noise multiplier)
  - More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
  - parameters you used to generate images are saved with that image
  - in PNG chunks for PNG, in EXIF for JPEG
  - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
  - can be disabled in settings
  - drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
  - Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of a prompt and easily apply it via dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess the prompt from an image
- Prompt Editing, a way to change the prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without the usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from the community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
  - separate prompts using uppercase `AND`
  - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
  - hypernetworks and embeddings options
  - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
- [Segmind Stable Diffusion](https://huggingface.co/segmind/SSD-1B) support

## Installation and Running

Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
- [Ascend NPUs](https://github.com/wangshuai09/stable-diffusion-webui/wiki/Install-and-run-on-Ascend-NPUs) (external wiki page)

Alternatively, use online services (like Google Colab):

- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)

### Installation on Windows 10/11 with NVidia-GPUs using release package

1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.

> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)

### Automatic Installation on Windows

1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.

### Automatic Installation on Linux

1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3 gperftools-libs libglvnd-glx
# openSUSE-based:
sudo zypper install wget git python3 libtcmalloc4 libglvnd
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:
```bash
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).

## Contributing

Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)

## Documentation

The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits

Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/Stability-AI/stablediffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- Spandrel - https://github.com/chaiNNer-org/spandrel implementing
  - GFPGAN - https://github.com/TencentARC/GFPGAN.git
  - CodeFormer - https://github.com/sczhou/CodeFormer
  - ESRGAN - https://github.com/xinntao/ESRGAN
  - SwinIR - https://github.com/JingyunLiang/SwinIR
  - Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Hypertile - tfernd - https://github.com/tfernd/HyperTile
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
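The attention syntax listed under Features (`((tuxedo))` and `(tuxedo:1.21)`) can be illustrated with a minimal, hypothetical parser sketch — this is not the webui's actual implementation, only the idea: each extra parenthesis level multiplies the emphasis weight by 1.1, while `(text:w)` sets the weight explicitly.

```python
import re

# Matches one or more "(", the emphasized text, an optional ":weight",
# and the closing ")" run. Deliberately simplified: no nesting of
# differently-weighted spans, no escaped parentheses.
ATTN = re.compile(r"(\(+)([^():]+)(?::([\d.]+))?(\)+)")

def parse_attention(prompt: str, base: float = 1.1):
    """Split a prompt into (text, weight) pairs.

    An explicit `(text:w)` weight wins; otherwise each paren level
    multiplies the weight by `base` (so `((x))` -> 1.1**2 = 1.21).
    Unemphasized text gets weight 1.0.
    """
    out, last = [], 0
    for m in ATTN.finditer(prompt):
        if m.start() > last:
            out.append((prompt[last:m.start()], 1.0))
        text, weight, opens = m.group(2), m.group(3), len(m.group(1))
        w = float(weight) if weight else base ** opens
        out.append((text, round(w, 4)))
        last = m.end()
    if last < len(prompt):
        out.append((prompt[last:], 1.0))
    return out
```

For example, `parse_attention("a man in a ((tuxedo))")` and `parse_attention("a man in a (tuxedo:1.21)")` both yield a `tuxedo` span with weight 1.21, matching the two equivalent syntaxes described above.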
QingchuanMa/CartPole-v1
QingchuanMa
"2024-06-10T14:34:26Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-10T06:57:09Z"
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 964.40 +/- 63.11
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
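As a companion to this card, here is a minimal sketch (not the course's actual code) of the discounted-return computation at the heart of REINFORCE: each step's return is the reward plus the discounted return of the following step, and the policy-gradient update weights each action's log-probability by this return.

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} by iterating backwards
    over an episode's rewards. The result has one return per step."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns
```

On CartPole-v1, where every surviving step yields reward 1, early steps of a long episode accumulate large returns, which is why the mean reward reported above (capped episodes aside) tracks episode length.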
kai-oh/llama-3-instruct-realtoktok-240529-name-chat-enkr-tapt-hf-hf
kai-oh
"2024-06-10T07:22:02Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T06:57:21Z"
---
library_name: transformers
tags:
- trl
- sft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
MicardiumBulgaria/Micardium
MicardiumBulgaria
"2024-06-10T07:01:42Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T06:59:22Z"
---
license: apache-2.0
---

What is Micardium?

Micardium is a hypertension capsule formulated to help people manage high blood pressure effectively. Hypertension, or high blood pressure, is a chronic condition that increases the risk of heart disease, stroke, and other health complications. The Micardium capsule aims to provide a natural, holistic approach to blood pressure management using a blend of herbal extracts and essential nutrients. It is intended for those seeking an alternative to conventional medication, focusing on natural ingredients known for their cardiovascular benefits.

Official site: <a href="https://www.nutritionsee.com/miczbulga">www.Micardium.com</a>

<p><a href="https://www.nutritionsee.com/miczbulga"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/06/Micardium-Bulgaria-1.png" alt="enter image description here"> </a></p>

<a href="https://www.nutritionsee.com/miczbulga">Buy now!! Click the link below for more information and get a 50% discount now... Hurry</a>
beloneyd/Cardiotonus-Bewertungen
beloneyd
"2024-06-10T07:05:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:03:31Z"
(50% off) Order now on the official website at a reduced price 🤩 ===> https://healthnewz.xyz/get-cardiotonus

Cardiotonus is an innovative capsule-form preparation developed to control and treat hypertension, i.e. elevated blood pressure. The need to regulate blood pressure, and thereby minimize the risk of cardiovascular disease, grows ever more pressing, especially in our fast-paced world. Cardiotonus offers an effective solution based on natural ingredients that help reduce the constriction of blood vessels and regulate pressure in the circulatory system. This improves blood supply to the heart and other organs, which in turn lowers the risk of cardiovascular disease. For people diagnosed with high blood pressure, or at elevated risk because of family history, Cardiotonus offers valuable everyday support.

What is Cardiotonus?

Cardiotonus capsules are a specially developed formulation aimed at controlling blood pressure and reducing the risk of cardiovascular disease in certain individuals. They are particularly suited to adults who have already been diagnosed with hypertension or who are at elevated risk because of their family history. Through a targeted combination of ingredients, the capsules offer effective support for cardiovascular health.

(50% off) Order now on the official website at a reduced price 🤩 ===> https://healthnewz.xyz/get-cardiotonus

With precise dosing and a well-considered composition, Cardiotonus capsules can help stabilize blood pressure naturally. This is especially important for people pursuing a preventive strategy to lower their cardiovascular risk. Taken regularly as part of a healthy lifestyle, the capsules can help patients preserve heart health and minimize potential complications of high blood pressure.

How does Cardiotonus work?

The benefits of Cardiotonus go beyond regulating blood pressure naturally. By selectively relaxing the blood vessels it enables better circulation throughout the body. This not only lowers blood pressure directly but can also relieve strain on the cardiovascular system, contributing to better heart health over the long term. Cardiotonus also offers a gentle, effective alternative to pharmaceutical blood pressure medication: because it is based on natural ingredients, it carries a lower risk of side effects than synthetic drugs, making it attractive to people seeking gentle but effective cardiovascular support.

Cardiotonus - effects

Cardiotonus can have a range of positive effects on cardiovascular health:

- Effectively lowers blood pressure: by selectively relaxing the blood vessels, Cardiotonus reduces blood pressure, which is particularly important for people with hypertension or elevated cardiovascular risk.
- Improves blood supply to the heart and other organs: the formulation supports better circulation and thus better oxygen supply to the organs, including the heart, promoting the health of the entire cardiovascular system.
- Lowers the risk of cardiovascular disease: by regulating blood pressure and improving circulation, Cardiotonus helps reduce the risk of heart attack, stroke, and other cardiovascular conditions, especially in people with a family history.

Cardiotonus thus offers holistic support for heart health: it regulates blood pressure, improves blood supply to vital organs, and lowers cardiovascular risk.

Cardiotonus - forum opinions and reviews

The positive opinions and reviews of Cardiotonus online and on the official website reflect effective results and satisfied users. Many users report a marked improvement in their blood pressure readings after taking Cardiotonus, with positive effects on their health and quality of life. For example, one user says: "Since I started taking Cardiotonus my blood pressure has been much more stable. I feel fitter, and my regular check-ups show clear improvements." Another adds: "Cardiotonus has really made a difference in my life. I recommend it to anyone looking for an effective way to manage high blood pressure." These testimonials underline the product's usefulness for people who want to control their blood pressure and reduce cardiovascular risk.

Where can you buy Cardiotonus? Price - pharmacy, eBay

Cardiotonus is deliberately not sold in pharmacies or brick-and-mortar stores, to ensure consumers receive the original product with the authentic formulation. Nor is it offered on platforms such as eBay or Amazon, to avoid counterfeit or unauthorized products. Instead, buyers are urged to order exclusively through the official website, which is currently running a 50% promotion: €39 per pack instead of the regular €78. Buying directly from the official website also ensures customers receive an authentic product with the natural ingredients in their intended composition.

Cardiotonus - dosage and method of use

The recommended dosage depends on the severity of the individual's hypertension. For most users, one capsule twice daily is recommended, ideally with meals. This schedule maintains a steady concentration of the active ingredients in the body and supports effective blood pressure regulation throughout the day. Individual differences exist, however, and the dosage should be adjusted accordingly. Consistent, proper intake as directed helps achieve the desired results and improve overall cardiovascular health.

(50% off) Order now on the official website at a reduced price 🤩 ===> https://healthnewz.xyz/get-cardiotonus

Cardiotonus - contraindications and side effects

Cardiotonus is known for its tolerability and generally causes no side effects. Thanks to its natural composition and carefully selected ingredients, the risk of adverse reactions is minimal; many users report good tolerability and no negative accompanying effects. The only contraindication is a known allergy to one or more of the ingredients. Observing this precaution avoids potential risks and ensures safe use of Cardiotonus.
nnheui/pythia-410m-qlora-5ep
nnheui
"2024-06-10T11:46:27Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-10T07:03:34Z"
Entry not found
kikikara/GeM2-Llamion-14B-tuned01-Chat
kikikara
"2024-06-10T07:08:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:08:58Z"
Entry not found
MsgmSgmsG/eeve-ocr-ko
MsgmSgmsG
"2024-06-10T07:10:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:10:36Z"
Entry not found
fragger246/idefics2-8b-docvqa-finetuned-tutorial
fragger246
"2024-06-10T08:08:04Z"
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
"2024-06-10T07:12:11Z"
---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: idefics2-8b-docvqa-finetuned-tutorial
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# idefics2-8b-docvqa-finetuned-tutorial

This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
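The hyperparameters above combine a per-device batch of 2 with 8 gradient-accumulation steps to reach the effective batch of 16, and use a linear scheduler with 50 warmup steps. A minimal sketch of that schedule shape (the total step count here is an illustrative assumption, not a value from this card):

```python
def linear_schedule_with_warmup(step, base_lr=1e-4, warmup_steps=50, total_steps=1000):
    """Linearly ramp the learning rate from 0 to base_lr over warmup_steps,
    then decay it linearly back to 0 by total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

So with `learning_rate: 0.0001` and `lr_scheduler_warmup_steps: 50`, the rate peaks at 0.0001 on step 50 and falls off linearly for the remainder of training.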
LuuNgoc2k2/ViNER-DeBerta-v3-small
LuuNgoc2k2
"2024-06-10T07:14:24Z"
0
0
null
[ "pytorch", "region:us" ]
null
"2024-06-10T07:13:02Z"
Entry not found
Yurikshevtsov/Mushroom
Yurikshevtsov
"2024-06-10T07:13:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:13:11Z"
Entry not found
Mouadhsnapiq/Mistral-7B-ON-PARSED-DATA
Mouadhsnapiq
"2024-06-10T07:14:05Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:14:05Z"
Entry not found
cch1006/unsloth_taide_v0
cch1006
"2024-06-10T07:14:36Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:14:36Z"
Entry not found
beloneyd/Circulaxil
beloneyd
"2024-06-10T07:16:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:14:43Z"
Why is this dietary supplement necessary? Circulaxil is aimed at people looking for a natural solution to bring their blood pressure back under control. Age and gender play little role, as young and old alike can benefit from taking it. It particularly appeals to those who have previously tried alternative methods without success. It is important that everyone forms their own judgment about Circulaxil and tries to influence their blood pressure positively. In addition, its seamless integration into the daily routine makes any special administration unnecessary; the capsules can, for example, conveniently be carried along and taken during working hours when needed. They therefore do not disrupt a hectic lifestyle and are a practical option for everyone.

What are the benefits of Circulaxil?

[SPECIAL OFFER] Click here to order the Circulaxil blood sugar formula ==> https://healthnewz.xyz/circulaxil-price

Would you like to regulate your blood sugar levels in a natural, healthy way? Then Circulaxil is exactly right for you! This innovative formula was developed to support your body's natural immune system and help you control your blood sugar levels. It can also help diabetics take a more holistic approach to their health. Circulaxil has several other benefits:

- Helps regulate blood pressure and relieves the heart from constantly fighting the effects of high blood sugar.
- Supports heart health by providing protection against harmful free radicals.
- Helps optimize brain health and improves mental clarity and focus.
- It is packed with antioxidants and vitamins so you always feel your best.

So if you are looking for a completely natural solution to control your blood sugar levels, Circulaxil is definitely worth a try!

Safety and side effects of Circulaxil

Are you worried about possible side effects of taking Circulaxil? There is no need to worry! Circulaxil is a completely natural product that was carefully formulated to ensure optimal safety. It contains no artificial ingredients, fillers, or additives and is free of contaminants or impurities. In addition, extensive tests have been carried out on Circulaxil, confirming that it is completely safe for consumption. It has been thoroughly researched and shown to be effective in regulating blood sugar levels without posing significant health risks. Furthermore, only the highest-quality ingredients are used in the production of Circulaxil. These ingredients are sourced with the utmost care, and the formula is both vegan-friendly and GMO-free. You can fully trust that what you put into your body is pure, natural, and free of unexpected surprises.

How does Circulaxil work?

[SPECIAL OFFER] Click here to order the Circulaxil blood sugar formula ==> https://healthnewz.xyz/circulaxil-price

The product's goal is to lower the body's fat content by targeting a specific chemical that produces unwanted molecules in overweight individuals. This chemical tends to accumulate in vital organs such as the heart and liver, leading to the onset of type 2 diabetes and other conditions. Ceramide, a foreign substance, stimulates the accumulation of fat cells. These fat cells then circulate through various organs, including the pancreas.

Because of its crucial role in regulating blood sugar levels, the pancreas is vital for anyone who cares about their health and fitness. Glucose molecules travel through the bloodstream, and when they interact with insulin, this facilitates their use as energy. When production of this hormone is insufficient, sugar accumulates, leading to type 2 diabetes. Circulaxil works by supporting the body in processing ceramides through its "diabetes-reversing mechanism". These pills specifically target these substances and remove them from the body. As a result, individuals can effectively control their blood sugar levels, which reduces fat accumulation around the pancreas and improves its ability to absorb incoming nutrients.

What are the ingredients of Circulaxil?

Circulaxil is packed with natural ingredients carefully selected to help you control your blood sugar levels. The unique blend contains:

- Juniper berry: This powerful berry has been used for centuries for its healing effects and helps regulate the body's response to sugar.
- Cinnamon bark: Cinnamon has been used since ancient times to relieve inflammation and stabilize blood sugar levels.
- Chromium: This essential mineral helps your body metabolize carbohydrates, fats, and proteins; a deficiency can lead to weight gain and glucose intolerance.
- Cayenne pepper: This spice not only adds flavor, it can also stimulate fat burning, support circulation, and reduce risk factors for heart disease.
- Bitter melon: This traditional fruit is rich in phytonutrients that naturally help maintain healthy blood sugar levels.

All of these ingredients are perfectly balanced so you can get the most out of them, with the added benefit of coming from purely natural sources.

Combination of ingredients:

[SPECIAL OFFER] Click here to order the Circulaxil blood sugar formula ==> https://healthnewz.xyz/circulaxil-price

This product is backed by scientific research and uses a unique combination of 12 different components. These components consist of essential vitamins, minerals, and herbal extracts such as bitter melon and Gymnema sylvestre, which have proven effective in promoting glucose control. It also contains chromium, a vital mineral known for its role in maintaining optimal blood sugar levels.

Natural approach: Circulaxil differs from other products on the market because it is free of synthetic chemicals and added sugar. Instead, it uses natural ingredients, ensuring you achieve results without compromising your well-being. In addition, Circulaxil is both vegan and gluten-free, meeting the needs of people with different dietary requirements.

Take daily: Using Circulaxil is incredibly simple. Just take two capsules daily before meals and you are done! However, if you are pregnant or breastfeeding, you should consult your doctor before starting any supplement.

Conclusion

In summary, Circulaxil offers a clinically tested, completely natural solution for effectively controlling blood sugar levels. By using Circulaxil you can regulate your blood sugar safely and easily and maintain a healthy balance.

It is an excellent alternative to prescription medications and, thanks to its natural ingredients, carries a lower risk of side effects. Whether you suffer from type 2 diabetes or prediabetic symptoms, Circulaxil can help you maintain healthier blood sugar levels. If you prefer a natural and straightforward approach, Circulaxil is the ideal solution for you. Do not hesitate any longer: try Circulaxil and set out on the path to better blood sugar control.
creator853/Llama-2-7b-chat-finetune-body_language
creator853
"2024-06-10T07:22:13Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T07:15:40Z"
Entry not found
mjac1830/video
mjac1830
"2024-06-10T07:47:34Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-10T07:15:50Z"
---
license: openrail
---
Rohaen/rtryryrtr
Rohaen
"2024-06-10T07:18:29Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:18:29Z"
Entry not found
WbjuSrceu/mistral-7b-instruct-v0.2-LORA
WbjuSrceu
"2024-06-10T07:19:24Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T07:19:01Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---

# Uploaded model

- **Developed by:** WbjuSrceu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ArmanAsq/MedGPT
ArmanAsq
"2024-06-10T07:21:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:21:11Z"
Entry not found
QingchuanMa/Pixelcopter-PLE-v0
QingchuanMa
"2024-06-10T15:26:56Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-10T07:22:55Z"
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 36.60 +/- 26.93
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
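A Reinforce agent like the one above is trained with Monte Carlo policy-gradient updates, as taught in Unit 4 of the course. As an illustrative sketch (not this repo's actual training code), the discounted-return computation at the heart of REINFORCE can be written in plain Python:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} over a finished episode.

    In REINFORCE, each action's log-probability is weighted by the
    return G_t observed from that timestep onward (sketch only; the
    actual trained policy here is a small neural network).
    """
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# Example: three steps of reward 1 with gamma = 0.5
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

In practice these returns are often normalized (mean-subtracted, divided by the standard deviation) before weighting the policy gradient, which reduces variance.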
Dikshant058/my_cool_submission
Dikshant058
"2024-06-10T07:23:17Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:23:16Z"
# Empty solution example for the S23DR competition

This repo provides a minimalistic example of a valid, but empty, submission to the S23DR competition. We recommend you take a look at [this example](https://huggingface.co/usm3d/handcrafted_baseline_submission), which implements some primitive algorithms and provides useful I/O and visualization functions.

This example simply provides minimal code which succeeds at reading the dataset and producing a solution (in this case, two vertices at the origin and an edge of zero length connecting them).

`script.py` is the main file which is run by the competition space. It should produce `submission.parquet` as the result of the run. Please see the additional comments in the `script.py` file.

---
license: apache-2.0
---
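The empty solution described above (two vertices at the origin joined by a zero-length edge) can be sketched in a few lines. The exact output schema of `submission.parquet` is defined by the competition's `script.py`, so any column names are assumptions here and the sketch only builds the trivial wireframe itself:

```python
def empty_solution():
    """Build the placeholder prediction: two vertices at the origin
    and one zero-length edge connecting them."""
    vertices = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    edges = [(0, 1)]  # indices into `vertices`
    return vertices, edges

vertices, edges = empty_solution()

# The edge has zero length by construction.
i, j = edges[0]
length = sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j])) ** 0.5
print(length)  # 0.0

# In the real script, one such row per scene would then be written out,
# e.g. with pandas: pd.DataFrame(rows).to_parquet("submission.parquet")
# (column names and row format assumed; see script.py for the real ones).
```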
kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-3.5bpw-h6-exl2
kim512
"2024-06-10T08:43:54Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
"2024-06-10T07:23:59Z"
---
library_name: transformers
tags:
- not-for-all-audiences
---

# EXL2 quants of [crestf411/sunfall-midnight-miqu-v0.2-v1.5-70B](https://huggingface.co/crestf411/sunfall-midnight-miqu-v0.2-v1.5-70B)

- [3.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-3.0bpw-h6-exl2)
- [3.50 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-3.5bpw-h6-exl2)
- [4.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-4.0bpw-h6-exl2)
- [4.50 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-4.5bpw-h6-exl2)
- [6.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-6.0bpw-h6-exl2)
- [8.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-8.0bpw-h8-exl2)

Created using the defaults from exllamav2 1.4.0 convert.py:
- 3.0bpw to 6.0bpw: head bits = 6
- 8.0bpw: head bits = 8
- length = 8192
- dataset rows = 200
- measurement rows = 32
- measurement length = 8192

# sunfall-midnight-miqu-v0.2-v1.5-70B

Sunfall (2024-06-07) dataset trained directly on top of https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5

Beware, depraved. Not suitable for any audience. Experimental. Please give feedback. Begone if you demand perfection. This is still an early-stage experiment.

*Recommend a decently high temperature. Start with temp 1.7, smoothing factor 0.3.*

To use lore book tags, make sure you use **Status: Blue (constant)** and write e.g.

```
Follow the Diamond Law at all costs.

Tags: humor, dark, complex storytelling, intricate characters, immersive.
```

This model has been trained on context that mimics that of Silly Tavern's Mistral preset, with the following settings:

**System Prompt:**
```
You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason. Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
```

The method below still works, but the lore book approach above is more convenient:

**System Same as User: Enabled** (this is the default)

**Author's Note** (in-chat @ depth 4)
```
Follow The Diamond Law at all costs.
```

The method below still works, but unless you want to write tags for a specific character card only, the lore book approach above is more convenient:

**Scenario Information** (open a character card and press "Advanced Definitions") may also contain tags at the end to guide the model further, e.g.:
```
Two friends having fun. Set in 1947.

Tags: dark, exploration, friendship, self-discovery, historical fiction
```

The model has also been trained on content which includes a narrator card, used when the content did not mainly revolve around two characters. Future versions will expand on this idea, so forgive the vagueness at this time.

(The Diamond Law is this: https://files.catbox.moe/d15m3g.txt -- So far results are unclear, but the training was done with this phrase included, and the training data adheres to the law.)

The model has also been trained to do storywriting, both interactively with the user and on its own. The system message ends up looking something like this:

```
You are an expert storyteller, who can roleplay or write compelling stories. Follow the Diamond Law. Below is a scenario with character descriptions and content tags. Write a story together with the user based on this scenario.

Scenario: The story is about James, blabla.

James is an overweight 63 year old blabla.

Lucy: James's 62 year old wife.

Tags: tag1, tag2, tag3, ...
```

If you remove the "together with the user" part, the model will be more inclined to write on its own.
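The storywriting system message described above follows a fixed template. A small helper to assemble it (a sketch based on the card's example, not code shipped with the model) might look like:

```python
def build_storyteller_prompt(scenario, tags, with_user=True):
    """Assemble the sunfall storywriting system message.

    Dropping the "together with the user" clause nudges the model
    toward writing on its own, as the card notes. Template text is
    taken from the card's example; the function itself is illustrative.
    """
    together = " together with the user" if with_user else ""
    return (
        "You are an expert storyteller, who can roleplay or write "
        "compelling stories. Follow the Diamond Law. Below is a scenario "
        "with character descriptions and content tags. "
        f"Write a story{together} based on this scenario.\n\n"
        f"Scenario: {scenario}\n\n"
        f"Tags: {', '.join(tags)}"
    )

prompt = build_storyteller_prompt(
    "Two friends having fun. Set in 1947.",
    ["dark", "exploration", "friendship"],
)
print(prompt.splitlines()[-1])  # Tags: dark, exploration, friendship
```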
kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-6.0bpw-h6-exl2
kim512
"2024-06-10T10:45:32Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "exl2", "region:us" ]
text-generation
"2024-06-10T07:24:41Z"
---
library_name: transformers
tags:
- not-for-all-audiences
---

# EXL2 quants of [crestf411/sunfall-midnight-miqu-v0.2-v1.5-70B](https://huggingface.co/crestf411/sunfall-midnight-miqu-v0.2-v1.5-70B)

- [3.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-3.0bpw-h6-exl2)
- [3.50 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-3.5bpw-h6-exl2)
- [4.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-4.0bpw-h6-exl2)
- [4.50 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-4.5bpw-h6-exl2)
- [6.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-6.0bpw-h6-exl2)
- [8.00 bits per weight](https://huggingface.co/kim512/sunfall-midnight-miqu-v0.2-v1.5-70B-8.0bpw-h8-exl2)

Created using the defaults from exllamav2 1.4.0 convert.py:
- 3.0bpw to 6.0bpw: head bits = 6
- 8.0bpw: head bits = 8
- length = 8192
- dataset rows = 200
- measurement rows = 32
- measurement length = 8192

# sunfall-midnight-miqu-v0.2-v1.5-70B

Sunfall (2024-06-07) dataset trained directly on top of https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5

Beware, depraved. Not suitable for any audience. Experimental. Please give feedback. Begone if you demand perfection. This is still an early-stage experiment.

*Recommend a decently high temperature. Start with temp 1.7, smoothing factor 0.3.*

To use lore book tags, make sure you use **Status: Blue (constant)** and write e.g.

```
Follow the Diamond Law at all costs.

Tags: humor, dark, complex storytelling, intricate characters, immersive.
```

This model has been trained on context that mimics that of Silly Tavern's Mistral preset, with the following settings:

**System Prompt:**
```
You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason. Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
```

The method below still works, but the lore book approach above is more convenient:

**System Same as User: Enabled** (this is the default)

**Author's Note** (in-chat @ depth 4)
```
Follow The Diamond Law at all costs.
```

The method below still works, but unless you want to write tags for a specific character card only, the lore book approach above is more convenient:

**Scenario Information** (open a character card and press "Advanced Definitions") may also contain tags at the end to guide the model further, e.g.:
```
Two friends having fun. Set in 1947.

Tags: dark, exploration, friendship, self-discovery, historical fiction
```

The model has also been trained on content which includes a narrator card, used when the content did not mainly revolve around two characters. Future versions will expand on this idea, so forgive the vagueness at this time.

(The Diamond Law is this: https://files.catbox.moe/d15m3g.txt -- So far results are unclear, but the training was done with this phrase included, and the training data adheres to the law.)

The model has also been trained to do storywriting, both interactively with the user and on its own. The system message ends up looking something like this:

```
You are an expert storyteller, who can roleplay or write compelling stories. Follow the Diamond Law. Below is a scenario with character descriptions and content tags. Write a story together with the user based on this scenario.

Scenario: The story is about James, blabla.

James is an overweight 63 year old blabla.

Lucy: James's 62 year old wife.

Tags: tag1, tag2, tag3, ...
```

If you remove the "together with the user" part, the model will be more inclined to write on its own.
aigchacker/realesrgan-x4
aigchacker
"2024-06-10T07:27:10Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-10T07:26:16Z"
---
license: mit
---
Arunima693/results_modified
Arunima693
"2024-06-10T10:33:32Z"
0
0
null
[ "tensorboard", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-06-10T07:27:48Z"
---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results_modified
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results_modified

This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.31.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.13.3
xtester/naschain
xtester
"2024-07-01T17:24:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:30:15Z"
Entry not found
Victori0129/Seungri
Victori0129
"2024-06-10T07:31:33Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T07:31:33Z"
---
license: apache-2.0
---
AnaNoSleep/xformers_build_archive
AnaNoSleep
"2024-06-20T12:37:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:32:37Z"
Entry not found
iron-huray/llama_test_huray-100-free_v02
iron-huray
"2024-06-18T13:48:28Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
"2024-06-10T07:35:12Z"
---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: llama_test_huray-100-free_v02
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama_test_huray-100-free_v02

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
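The hyperparameters above combine a per-device batch size of 4 with 2 gradient-accumulation steps to reach the stated total train batch size of 8. The relationship can be checked with a one-line sketch (single device assumed, as no device count is listed in the card):

```python
def total_train_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    """Effective batch size seen by each optimizer step:
    per-device batch x accumulation steps x number of devices."""
    return per_device_batch * grad_accum_steps * num_devices

print(total_train_batch_size(4, 2))  # 8, matching the card's total_train_batch_size
```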
Aravindan/llama-2-7b-miniguanaco
Aravindan
"2024-06-10T07:36:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:36:15Z"
Entry not found
beloneyd/Collagenico
beloneyd
"2024-06-10T07:41:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:37:32Z"
<img class="alignnone wp-image-17792" src="https://healthtalkrev.com/wp-content/uploads/2024/06/asdw-300x188.jpg" alt="" width="360" height="226" />

In today's fast-paced world, more and more people are looking for natural solutions to improve their hair health. Whether it is about hair growth, strengthening, or simply a healthier appearance, the answer often lies in the quality of the supplements we use. This is where Collagenico, a revolutionary hair supplement, comes in. Designed to strengthen your hair from within, Collagenico offers a unique blend of natural ingredients specifically aimed at promoting hair growth, reducing split ends and breakage, and strengthening hair overall.

What is Collagenico?

Collagenico is a highly effective dietary supplement in capsule form that specifically aims to improve hair health. Its specially developed formula relies on a unique combination of natural ingredients, including collagen, biotin, zinc, vitamin C, vitamin E, silicon, and bamboo extract. This synergistic composition works to stimulate hair growth, strengthen hair structure, and protect hair from the daily stresses and damage caused by environmental influences, styling, and other external factors. With regular intake of Collagenico, users can experience improved hair health, reflected in stronger, shinier, and more resilient hair. The carefully selected ingredients help support the hair's natural beauty while promoting its health from within. Collagenico offers a holistic solution for people looking for an effective supplement to strengthen their hair and improve its natural shine.

How does Collagenico work?

OFFICIAL WEBSITE (ORDER NOW) ===> https://healthnewz.xyz/collagenico-offer

Collagenico works in several ways to promote hair health. A key component of its action is strengthening the hair from within by supplying collagen and other essential nutrients. These help improve hair structure and make hair more resistant to breakage, resulting in an overall healthier and stronger appearance. In addition, ingredients such as biotin and zinc actively support hair growth by promoting its regulation. This helps hair grow faster and in a healthier form, which can lead to improved fullness and density.

Beyond strengthening and promoting growth, Collagenico also helps improve the hair's appearance. The antioxidants it contains, such as vitamins C and E, play an important role in protecting hair from harmful environmental influences that can weaken and damage it. Silicon and bamboo extract also help improve the hair's elasticity and strength, leading to an overall healthier and shinier look. This holistic effect makes Collagenico an effective solution for those looking for a supplement that not only improves hair health but also optimizes the hair's overall appearance.

Collagenico: effects

Collagenico is a highly effective dietary supplement developed specifically to improve the health and appearance of hair. With a powerful formula of natural ingredients, Collagenico offers a variety of benefits:

- Faster hair growth: Collagenico contains biotin and zinc, which regulate and promote hair growth, leading to faster and healthier growth.
- Reduction of split ends and breakage: The strengthening effect of collagen and other nutrients in Collagenico helps strengthen hair from within, reducing split ends and breakage.
- Stronger, more resilient hair: The supply of important nutrients makes hair stronger overall and more resistant to external influences and damage.
- Healthier-looking hair with more shine and smoothness: The antioxidant effect of vitamins C and E protects hair from harmful environmental influences, while silicon and bamboo extract improve the hair's elasticity and strength, resulting in a healthier, shinier, and smoother appearance.

Collagenico: opinions from the forum and reviews

As someone who had struggled with thinning hair and breakage since youth, I discovered Collagenico and was overwhelmed by the results. After just a few weeks I noticed a significant improvement: my hair became more resilient, growth accelerated, and the overall look of my hair improved noticeably; it became shinier, softer, and fuller. These changes pleased not only me but also the people around me, who noticed the difference in the quality of my hair. The simple use of the Collagenico capsules fit seamlessly into my daily routine, which made the experience all the better.

My positive experience with Collagenico reflects the numerous satisfied customer reviews found on specialized blogs and the official website. These reviews emphasize Collagenico's effectiveness in improving hair quality and the product's ease of use. I recommend that anyone struggling with similar hair problems look at these testimonials and give Collagenico a chance.

As part of the Collagenico community, I am happy to share my story in the hope that it can help others improve their hair health and regain the confidence that comes with strong, healthy hair.

Collagenico: composition and ingredients

Collagenico's unique formula relies on a variety of natural ingredients specifically aimed at strengthening and nourishing hair:

OFFICIAL WEBSITE (ORDER NOW) ===> https://healthnewz.xyz/collagenico-offer

- Collagen: Strengthens hair from within and promotes scalp health, leading to stronger and more resilient hair.
- Biotin (vitamin B7): Plays a crucial role in hair growth and helps strengthen hair structure, leading to healthier and faster growth.
- Zinc: Supports collagen formation and regulates hair growth, strengthening hair and optimizing its growth.
- Vitamin C: Promotes collagen production, contributing to improved hair structure and health, while supporting iron absorption for healthy hair growth.
- Vitamin E: Protects hair from damage caused by free radicals and promotes a healthy, shiny appearance.
- Silicon: Improves hair strength and elasticity, making it more resistant to breakage and damage.
- Bamboo extract: Contains nutrients that promote hair growth and improve hair structure for stronger, healthier hair.

This synergistic combination of ingredients makes Collagenico an effective solution for those who want to strengthen and nourish their hair from within for a more beautiful and healthier appearance.

Where can you buy Collagenico? Price: pharmacy, eBay

Collagenico is an exclusive hair-health solution that is not available in pharmacies, brick-and-mortar shops, or on online platforms such as eBay or Amazon. To ensure you receive the authentic product and benefit from the best results, it is advisable to purchase Collagenico directly from the official website. Buying from the official website guarantees a high-quality, authentic product developed specifically to improve the health and appearance of your hair.

The official Collagenico website is currently running an exclusive promotion offering the product at half price. With a 50% discount, Collagenico costs only 65 € instead of the original 130 €. This attractive promotion is a great opportunity to purchase Collagenico at an affordable price and enjoy the benefits for your hair health. Do not miss the chance to take advantage of this exclusive offer and strengthen and nourish your hair with Collagenico.

Collagenico: dosage and method of use

For optimal results, it is recommended to take 2 capsules of Collagenico daily, according to the instructions on the packaging. This recommended dosage was carefully formulated to maximize the product's effectiveness and promote hair health. It is important not to exceed the stated dosage in order to avoid possible unwanted side effects and achieve the best results. Follow the recommended guidelines to optimally support the strengthening and care of your hair with Collagenico.

Collagenico: contraindications and side effects

Collagenico is known for its tolerability and generally causes no side effects. Contraindications are minimal and mainly concern people with a proven allergy to one of the ingredients. People with known allergies should check the list of ingredients before taking Collagenico. As a rule, however, Collagenico is a safe and effective option for supporting hair health.

OFFICIAL WEBSITE (ORDER NOW) ===> https://healthnewz.xyz/collagenico-offer

Frequently asked questions

How much does Collagenico cost? There is currently a -50% promotion, so Collagenico is available for 65 € instead of the original 130 €.

Can I buy Collagenico on platforms such as eBay? No, Collagenico is available exclusively through the official website, to guarantee the product's authenticity and quality.

How can I order Collagenico? Orders can be placed quickly and easily via the official website, where you can fill out the order form.

Are users satisfied with Collagenico? Yes, numerous positive reviews and testimonials attest to users' high satisfaction with the results of Collagenico.

Official Blogs
https://huggingface.co/beloneyd/Circulaxil
https://huggingface.co/beloneyd/cardiotonus
mrsjn/abe_nagomi
mrsjn
"2024-06-10T07:43:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:40:05Z"
Entry not found
MsgmSgmsG/STT_OCR
MsgmSgmsG
"2024-06-25T03:01:10Z"
0
0
transformers
[ "transformers", "onnx", "phi", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T07:41:06Z"
Entry not found
Daohien/marian-finetuned-kde4-en-to-fr
Daohien
"2024-06-10T07:42:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:42:47Z"
Entry not found
PageOrg/text_summarization
PageOrg
"2024-06-10T07:51:42Z"
0
0
transformers
[ "transformers", "onnx", "t5", "text2text-generation", "license:unlicense", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2024-06-10T07:46:29Z"
--- license: unlicense ---
geraldabrhm/llama-2-13b-pcl-augment-antonym
geraldabrhm
"2024-06-10T23:43:36Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-10T07:46:54Z"
Entry not found
KunalThakare279/dumb-llm
KunalThakare279
"2024-06-10T07:47:26Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T07:47:26Z"
--- license: apache-2.0 ---
Danielrahmai1991/llama3_model_v1
Danielrahmai1991
"2024-06-10T07:49:18Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T07:48:56Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Danielrahmai1991 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
tanira6498/progressapp
tanira6498
"2024-06-10T07:49:18Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:49:18Z"
Entry not found
yzhuang/gemma-1.1-7b-it_fictional_English_v1
yzhuang
"2024-06-10T20:58:04Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:google/gemma-1.1-7b-it", "license:gemma", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T07:53:42Z"
--- license: gemma base_model: google/gemma-1.1-7b-it tags: - trl - sft - generated_from_trainer datasets: - generator model-index: - name: gemma-1.1-7b-it_fictional_English_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-1.1-7b-it_fictional_English_v1 This model is a fine-tuned version of [google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 36 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
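The hyperparameters above report both a per-device `train_batch_size` and a `total_train_batch_size`; the latter follows from gradient accumulation. A minimal sketch of that arithmetic, using the values from this card:

```python
# Gradient accumulation: gradients from several micro-batches are summed
# before each optimizer step, so the effective (total) batch size is the
# product of the per-device batch size and the accumulation steps.
train_batch_size = 1            # per-device batch size (from this card)
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)   # 16, matching the card
```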
akshat2k/Llama-3-8B-AWQ-4bit
akshat2k
"2024-06-10T07:57:35Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T07:57:35Z"
Entry not found
OpenVINO/stable-diffusion-v1-5-fp16-ov
OpenVINO
"2024-06-27T07:08:16Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-10T07:58:07Z"
--- license: creativeml-openrail-m --- # stable-diffusion-v1-5-fp16-ov * Model creator: [Runway](https://huggingface.co/runwayml) * Original model: [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) ## Description This is [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format. ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2024.1.0 and higher * Optimum Intel 1.16.0 and higher ## Running Model Inference 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from optimum.intel.openvino import OVStableDiffusionPipeline model_id = "OpenVINO/stable-diffusion-v1-5-fp16-ov" pipeline = OVStableDiffusionPipeline.from_pretrained(model_id) prompt = "sailing ship in storm by Rembrandt" images = pipeline(prompt).images ``` ## Usage examples * [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks): - [Latent Consistency Model using Optimum-Intel OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/stable-diffusion-text-to-image/stable-diffusion-text-to-image.ipynb) * [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai): - [C++ image generation pipeline](https://github.com/openvinotoolkit/openvino.genai/tree/master/image_generation/stable_diffusion_1_5/cpp) ## Limitations Check the original model card for [limitations](https://huggingface.co/runwayml/stable-diffusion-v1-5). ## Legal information The original model is distributed under [creativeml-openrail-m](https://huggingface.co/spaces/CompVis/stable-diffusion-license) license. 
More details can be found in [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). ## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
lucalolee/q-FrozenLake-v1-4x4-noSlippery
lucalolee
"2024-06-10T08:02:06Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-10T08:02:03Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lucalolee/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
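The agent above stores its policy as a tabular Q-table. A minimal sketch of the tabular Q-learning update such an agent is trained with (the `alpha` and `gamma` values here are illustrative assumptions, not the hyperparameters used for this repo):

```python
import numpy as np

n_states, n_actions = 16, 4   # FrozenLake-v1 4x4 has 16 states and 4 actions
alpha, gamma = 0.7, 0.95      # learning rate and discount factor (assumed)

q_table = np.zeros((n_states, n_actions))

def q_update(q_table, state, action, reward, next_state):
    """One Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])
    return q_table

# Example: receiving reward 1 for reaching the goal state 15 from state 14.
q_table = q_update(q_table, state=14, action=2, reward=1.0, next_state=15)
print(q_table[14, 2])  # 0.7 on an all-zero table: alpha * (1 + gamma*0 - 0)
```

Repeating this update over many episodes (with an exploration strategy such as epsilon-greedy) is what produces the `q-learning.pkl` table loaded in the usage snippet above.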
kokodio/my_awesome_opus_books_model
kokodio
"2024-06-10T15:35:28Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2024-06-10T08:03:55Z"
--- license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0592 - Bleu: 10.0783 - Gen Len: 16.4672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:------:|:----:|:---------------:|:------:|:-------:| | 1.3998 | 0.0131 | 100 | 1.1909 | 9.0438 | 16.6128 | | 1.3745 | 0.0262 | 200 | 1.1892 | 8.9685 | 16.7342 | | 1.3584 | 0.0393 | 300 | 1.1884 | 8.9498 | 16.6153 | | 1.3615 | 0.0525 | 400 | 1.1913 | 8.94 | 16.5859 | | 1.3417 | 0.0656 | 500 | 1.1818 | 8.8506 | 16.6169 | | 1.3459 | 0.0787 | 600 | 1.1812 | 9.1565 | 16.6316 | | 1.35 | 0.0918 | 700 | 1.1819 | 8.9922 | 16.5945 | | 1.3244 | 0.1049 | 800 | 1.1749 | 8.9409 | 16.6778 | | 1.3282 | 0.1180 | 900 | 1.1690 | 8.9618 | 16.5828 | | 1.3198 | 0.1311 | 1000 | 1.1703 | 9.1664 | 16.6026 | | 1.3359 | 0.1443 | 1100 | 1.1685 | 8.978 | 16.6677 | | 1.3177 | 0.1574 | 1200 | 1.1654 | 8.9768 | 16.6347 | | 1.3334 | 0.1705 | 1300 | 1.1615 | 8.9667 | 16.6148 | | 1.3192 | 
0.1836 | 1400 | 1.1635 | 9.1455 | 16.5879 | | 1.315 | 0.1967 | 1500 | 1.1618 | 8.97 | 16.5452 | | 1.309 | 0.2098 | 1600 | 1.1606 | 9.1667 | 16.6367 | | 1.3052 | 0.2229 | 1700 | 1.1613 | 8.962 | 16.6047 | | 1.3006 | 0.2361 | 1800 | 1.1535 | 9.042 | 16.6484 | | 1.2999 | 0.2492 | 1900 | 1.1560 | 8.977 | 16.5513 | | 1.2939 | 0.2623 | 2000 | 1.1553 | 9.0578 | 16.5996 | | 1.3079 | 0.2754 | 2100 | 1.1505 | 9.1548 | 16.6438 | | 1.3086 | 0.2885 | 2200 | 1.1521 | 8.9797 | 16.5493 | | 1.2993 | 0.3016 | 2300 | 1.1498 | 9.1659 | 16.5727 | | 1.2963 | 0.3147 | 2400 | 1.1454 | 9.1355 | 16.532 | | 1.2894 | 0.3279 | 2500 | 1.1423 | 9.2378 | 16.5803 | | 1.2914 | 0.3410 | 2600 | 1.1425 | 9.3786 | 16.6011 | | 1.2898 | 0.3541 | 2700 | 1.1447 | 9.2694 | 16.5112 | | 1.2883 | 0.3672 | 2800 | 1.1446 | 9.2671 | 16.561 | | 1.2796 | 0.3803 | 2900 | 1.1407 | 9.3267 | 16.5528 | | 1.2854 | 0.3934 | 3000 | 1.1403 | 9.1921 | 16.5838 | | 1.2657 | 0.4065 | 3100 | 1.1375 | 9.1904 | 16.5727 | | 1.2729 | 0.4197 | 3200 | 1.1396 | 9.1816 | 16.596 | | 1.2782 | 0.4328 | 3300 | 1.1382 | 9.3068 | 16.5503 | | 1.2784 | 0.4459 | 3400 | 1.1345 | 9.2616 | 16.5168 | | 1.2687 | 0.4590 | 3500 | 1.1333 | 9.2731 | 16.5569 | | 1.2802 | 0.4721 | 3600 | 1.1285 | 9.2272 | 16.5772 | | 1.2693 | 0.4852 | 3700 | 1.1304 | 9.3535 | 16.5645 | | 1.279 | 0.4983 | 3800 | 1.1343 | 9.3037 | 16.565 | | 1.2678 | 0.5115 | 3900 | 1.1306 | 9.3029 | 16.6118 | | 1.2579 | 0.5246 | 4000 | 1.1318 | 9.3173 | 16.6448 | | 1.262 | 0.5377 | 4100 | 1.1282 | 9.3084 | 16.6199 | | 1.2778 | 0.5508 | 4200 | 1.1258 | 9.4782 | 16.6032 | | 1.2567 | 0.5639 | 4300 | 1.1246 | 9.3401 | 16.5965 | | 1.2425 | 0.5770 | 4400 | 1.1293 | 9.4245 | 16.5671 | | 1.2593 | 0.5901 | 4500 | 1.1228 | 9.2466 | 16.6037 | | 1.2591 | 0.6033 | 4600 | 1.1220 | 9.3294 | 16.5925 | | 1.2661 | 0.6164 | 4700 | 1.1255 | 9.333 | 16.5361 | | 1.2446 | 0.6295 | 4800 | 1.1235 | 9.3146 | 16.5676 | | 1.2563 | 0.6426 | 4900 | 1.1205 | 9.3765 | 16.5661 | | 1.2416 | 0.6557 | 5000 | 1.1188 | 9.3549 | 
16.5849 | | 1.2605 | 0.6688 | 5100 | 1.1187 | 9.313 | 16.5767 | | 1.253 | 0.6819 | 5200 | 1.1191 | 9.24 | 16.5407 | | 1.2429 | 0.6951 | 5300 | 1.1178 | 9.1666 | 16.5549 | | 1.2587 | 0.7082 | 5400 | 1.1167 | 9.26 | 16.5513 | | 1.2432 | 0.7213 | 5500 | 1.1135 | 9.2584 | 16.5381 | | 1.2422 | 0.7344 | 5600 | 1.1137 | 9.3422 | 16.5752 | | 1.2581 | 0.7475 | 5700 | 1.1146 | 9.3159 | 16.5767 | | 1.2451 | 0.7606 | 5800 | 1.1142 | 9.278 | 16.534 | | 1.25 | 0.7737 | 5900 | 1.1140 | 9.3551 | 16.596 | | 1.2435 | 0.7869 | 6000 | 1.1117 | 9.3174 | 16.561 | | 1.2452 | 0.8000 | 6100 | 1.1112 | 9.3823 | 16.5706 | | 1.2344 | 0.8131 | 6200 | 1.1120 | 9.3922 | 16.5508 | | 1.2231 | 0.8262 | 6300 | 1.1092 | 9.3544 | 16.532 | | 1.2449 | 0.8393 | 6400 | 1.1071 | 9.3757 | 16.5534 | | 1.2154 | 0.8524 | 6500 | 1.1087 | 9.3746 | 16.5366 | | 1.236 | 0.8655 | 6600 | 1.1083 | 9.3719 | 16.5554 | | 1.2355 | 0.8787 | 6700 | 1.1088 | 9.4179 | 16.5701 | | 1.2403 | 0.8918 | 6800 | 1.1079 | 9.3163 | 16.5407 | | 1.2213 | 0.9049 | 6900 | 1.1062 | 9.3422 | 16.5605 | | 1.2315 | 0.9180 | 7000 | 1.1067 | 9.4145 | 16.5615 | | 1.2217 | 0.9311 | 7100 | 1.1062 | 9.4026 | 16.5452 | | 1.2418 | 0.9442 | 7200 | 1.1053 | 9.3595 | 16.5564 | | 1.2181 | 0.9573 | 7300 | 1.1058 | 9.3921 | 16.5737 | | 1.214 | 0.9705 | 7400 | 1.1051 | 9.4053 | 16.5671 | | 1.2135 | 0.9836 | 7500 | 1.1054 | 9.377 | 16.5615 | | 1.2327 | 0.9967 | 7600 | 1.1051 | 9.3944 | 16.5625 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
akshat2k/QantizedLlama-3-8B-AWQ-4bit
akshat2k
"2024-06-10T08:05:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:05:19Z"
Entry not found
K00B404/t5_finetuned_for_navigation_commands
K00B404
"2024-06-10T08:07:09Z"
0
0
null
[ "seq2seq", "en", "license:wtfpl", "region:us" ]
null
"2024-06-10T08:05:39Z"
--- license: wtfpl language: - en tags: - seq2seq ---
kai-oh/llama-3-instruct-realtoktok-240529-name-chat-enkr-tapt-ift-hf
kai-oh
"2024-06-10T09:15:11Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T08:09:07Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
manbeast3b/KinoInferLord2
manbeast3b
"2024-06-10T08:09:16Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:09:09Z"
Entry not found
medicalai/radfound
medicalai
"2024-06-10T15:34:19Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T08:09:44Z"
--- license: apache-2.0 ---
imdatta0/mistral_7b_v_Magiccoder_evol_ortho
imdatta0
"2024-06-10T08:10:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:10:00Z"
Entry not found
imdatta0/llama_2_7b_Magiccoder_evol_10k_ortho
imdatta0
"2024-06-10T09:41:45Z"
0
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "base_model:meta-llama/llama-2-7b-hf", "license:llama2", "region:us" ]
null
"2024-06-10T08:11:58Z"
--- license: llama2 library_name: peft tags: - unsloth - generated_from_trainer base_model: meta-llama/llama-2-7b-hf model-index: - name: llama_2_7b_Magiccoder_evol_10k_ortho results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_2_7b_Magiccoder_evol_10k_ortho This model is a fine-tuned version of [meta-llama/llama-2-7b-hf](https://huggingface.co/meta-llama/llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.02 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2602 | 0.0261 | 4 | 1.2781 | | 1.2365 | 0.0523 | 8 | 1.2245 | | 1.1865 | 0.0784 | 12 | 1.2092 | | 1.1293 | 0.1046 | 16 | 1.1980 | | 1.1637 | 0.1307 | 20 | 1.1921 | | 1.1178 | 0.1569 | 24 | 1.1846 | | 1.1207 | 0.1830 | 28 | 1.1776 | | 1.156 | 0.2092 | 32 | 1.1737 | | 1.0896 | 0.2353 | 36 | 1.1697 | | 1.1748 | 0.2614 | 40 | 1.1672 | | 1.1531 | 0.2876 | 44 | 1.1638 | | 1.1973 | 0.3137 | 48 | 1.1618 | | 1.1697 | 0.3399 | 52 | 1.1603 | | 1.1422 | 0.3660 | 56 | 1.1584 | | 1.196 | 0.3922 | 60 | 1.1556 | | 1.1874 | 0.4183 | 64 | 1.1542 | | 1.1323 | 0.4444 | 68 | 1.1516 | | 1.1521 | 0.4706 | 72 | 1.1513 | | 1.0959 | 0.4967 | 76 | 1.1520 | | 1.1895 | 0.5229 | 80 | 1.1504 | | 1.2051 | 0.5490 | 84 | 
1.1481 | | 1.1386 | 0.5752 | 88 | 1.1467 | | 1.0816 | 0.6013 | 92 | 1.1457 | | 1.0882 | 0.6275 | 96 | 1.1450 | | 1.1404 | 0.6536 | 100 | 1.1447 | | 1.1387 | 0.6797 | 104 | 1.1441 | | 1.1269 | 0.7059 | 108 | 1.1440 | | 1.1617 | 0.7320 | 112 | 1.1439 | | 1.103 | 0.7582 | 116 | 1.1434 | | 1.1244 | 0.7843 | 120 | 1.1424 | | 1.148 | 0.8105 | 124 | 1.1419 | | 1.1023 | 0.8366 | 128 | 1.1419 | | 1.0434 | 0.8627 | 132 | 1.1419 | | 1.1591 | 0.8889 | 136 | 1.1417 | | 1.0678 | 0.9150 | 140 | 1.1417 | | 1.1211 | 0.9412 | 144 | 1.1417 | | 1.1353 | 0.9673 | 148 | 1.1417 | | 1.1253 | 0.9935 | 152 | 1.1416 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
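The run above uses `lr_scheduler_type: cosine` with a base `learning_rate` of 1e-4 over 152 steps. A simplified sketch of that schedule (warmup omitted; the Trainer's actual implementation also handles the small warmup fraction listed in the card):

```python
import math

def cosine_lr(step, total_steps, base_lr=1e-4):
    """Decay from base_lr to 0 over total_steps following half a cosine."""
    progress = min(step / total_steps, 1.0)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0, 152))    # base_lr at the start of training
print(cosine_lr(76, 152))   # half of base_lr at the midpoint
print(cosine_lr(152, 152))  # ~0 at the final step
```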
kajamo/model_k1
kajamo
"2024-06-10T08:12:39Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:12:38Z"
Entry not found
JinYoungho/llama3-8b-ocr
JinYoungho
"2024-06-10T08:19:43Z"
0
0
null
[ "safetensors", "license:mit", "region:us" ]
null
"2024-06-10T08:12:50Z"
--- license: mit ---
Owos/whisper-small-en
Owos
"2024-06-10T20:00:14Z"
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-10T08:14:20Z"
Entry not found
GNOEYHEAT/LowResolution_ImgClf
GNOEYHEAT
"2024-06-10T13:06:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:18:02Z"
Entry not found
Jaradlife/whisper-small-hi
Jaradlife
"2024-06-10T10:12:24Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-10T08:20:04Z"
Entry not found
Coolwowsocoolwow/Big_Bird
Coolwowsocoolwow
"2024-06-10T08:34:35Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-10T08:25:28Z"
--- license: openrail ---
RsGoksel/Xen_01
RsGoksel
"2024-06-10T08:27:14Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2-7B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-10T08:26:49Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl base_model: unsloth/Qwen2-7B-bnb-4bit --- # Uploaded model - **Developed by:** RsGoksel - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-7B-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RsGoksel/Xen_01_Tokenizer
RsGoksel
"2024-06-10T08:27:17Z"
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T08:27:15Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lucalolee/Taxi-v3
lucalolee
"2024-06-10T08:29:21Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-10T08:29:19Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="lucalolee/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
elenaovv/IceSpike
elenaovv
"2024-06-10T08:34:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:30:55Z"
Entry not found
OpenVINO/stable-diffusion-v1-5-int8-ov
OpenVINO
"2024-06-27T07:08:49Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-10T08:31:58Z"
--- license: creativeml-openrail-m --- # stable-diffusion-v1-5-int8-ov * Model creator: [Runway](https://huggingface.co/runwayml) * Original model: [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) ## Description This is [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf). ## Quantization Parameters Weight compression was performed using `nncf.compress_weights` with the following parameters: * mode: **INT8_ASYM** * ratio: **1.0** For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html). ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2024.1.0 and higher * Optimum Intel 1.16.0 and higher ## Running Model Inference 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. 
Run model inference: ``` from optimum.intel.openvino import OVStableDiffusionPipeline model_id = "OpenVINO/stable-diffusion-v1-5-int8-ov" pipeline = OVStableDiffusionPipeline.from_pretrained(model_id) prompt = "sailing ship in storm by Rembrandt" images = pipeline(prompt).images ``` ## Usage examples * [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks): - [Stable Diffusion Text-to-Image using Optimum-Intel OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/stable-diffusion-text-to-image/stable-diffusion-text-to-image.ipynb) * [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai): - [C++ image generation pipeline](https://github.com/openvinotoolkit/openvino.genai/tree/master/image_generation/stable_diffusion_1_5/cpp) ## Limitations Check the original model card for [limitations](https://huggingface.co/runwayml/stable-diffusion-v1-5). ## Legal information The original model is distributed under the [creativeml-openrail-m](https://huggingface.co/spaces/CompVis/stable-diffusion-license) license. More details can be found in [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). ## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
Minjunili/1
Minjunili
"2024-06-10T08:32:11Z"
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
"2024-06-10T08:32:11Z"
--- license: artistic-2.0 ---
lamthuy/great_gittables
lamthuy
"2024-06-10T08:51:36Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-10T08:33:44Z"
--- license: mit --- This is a tabular foundational model trained on a subset of the GitTables datasets using a transformer model. Please refer to https://tabularfm.github.io/ for more information.
Dashuwo/MM10_88_128
Dashuwo
"2024-06-10T08:47:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:34:18Z"
Entry not found
kanishka7/dummy_model
kanishka7
"2024-06-10T08:40:20Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:36:38Z"
Entry not found
Weixiaoxian/Norton
Weixiaoxian
"2024-06-10T08:37:04Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T08:37:04Z"
--- license: apache-2.0 ---
AshiqaSameem/llama_3_biology_summarizer_model
AshiqaSameem
"2024-06-10T08:45:38Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-10T08:40:03Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** AshiqaSameem - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
elzara/new_model_prompt
elzara
"2024-06-10T08:40:37Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T08:40:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
imagepipeline/pony-realism
imagepipeline
"2024-06-10T09:26:29Z"
0
1
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T08:43:16Z"
--- license: apache-2.0 ---
jlbaker361/ddpo-dcgan-256
jlbaker361
"2024-06-10T08:44:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:44:12Z"
Entry not found
mmtg/wav2vec2-xlsr-cv-11-nvidia
mmtg
"2024-06-10T18:27:59Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T08:47:03Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Winmodel/tmp
Winmodel
"2024-06-10T08:50:15Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T08:48:53Z"
--- license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - trl - sft - generated_from_trainer model-index: - name: tmp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tmp This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 18 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 72 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5555 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+rocm6.0 - Datasets 2.19.2 - Tokenizers 0.19.1
MicardiumCzechRepublic/Micardium
MicardiumCzechRepublic
"2024-06-10T08:51:28Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-10T08:49:24Z"
--- license: apache-2.0 --- What is Micardium? Micardium Pills is a premium hypertension capsule specially designed to help manage high blood pressure effectively. High blood pressure, or hypertension, is a common condition that can lead to serious health problems such as heart disease, stroke, and kidney failure if left untreated. Micardium capsules combine a potent blend of natural ingredients designed to support cardiovascular health, lower blood pressure, and improve overall well-being. This comprehensive solution aims to provide a natural and effective approach to managing hypertension. Official website:<a href="https://www.nutritionsee.com/miczbajczech">www.Micardium.com</a> <p><a href="https://www.nutritionsee.com/miczbajczech"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/06/Micardium-Czech-Republic-1.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/miczbajczech">Buy now!! Click the link below for more information and get a 50% discount now... Hurry up</a> Official website:<a href="https://www.nutritionsee.com/miczbajczech">www.Micardium.com</a>
chaoweihuang/mistral-7B-pairwise-feedback
chaoweihuang
"2024-06-10T17:50:38Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-10T08:49:38Z"
Entry not found
lamthuy/ctgan_gittables
lamthuy
"2024-06-10T08:59:27Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-10T08:52:17Z"
--- license: mit --- This is a tabular foundational model trained on a subset of the GitTables datasets using the CTGAN model. Please refer to https://tabularfm.github.io/ for more information.
DivyanHirasen/path_to_saved_model
DivyanHirasen
"2024-06-10T08:54:19Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:54:19Z"
Entry not found
Heem2/AGROLLM
Heem2
"2024-06-10T08:58:39Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-10T08:56:45Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** Heem2 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
LinxuanPastel/SmurfCatV2
LinxuanPastel
"2024-06-10T09:08:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:58:05Z"
Entry not found
LinxuanPastel/SmurfCatV3
LinxuanPastel
"2024-06-10T09:15:35Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T08:58:25Z"
Entry not found
onizukal/Boya1_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold5
onizukal
"2024-06-10T09:25:01Z"
0
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/beit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-10T08:58:44Z"
--- license: apache-2.0 base_model: microsoft/beit-large-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: Boya1_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8505560075942501 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold5 This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5210 - Accuracy: 0.8506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5592 | 1.0 | 924 | 0.4183 | 0.8264 | | 0.4048 | 2.0 | 1848 | 0.4146 | 0.8435 | | 0.1067 | 3.0 | 2772 | 0.5210 | 0.8506 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
DatGuyTaco/minecraft_textures
DatGuyTaco
"2024-06-10T08:59:15Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-10T08:59:15Z"
--- license: mit ---
lamthuy/stvae_gittables
lamthuy
"2024-06-10T09:03:08Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-10T09:00:18Z"
--- license: mit --- This is a tabular foundational model trained on a subset of the GitTables datasets using the Shared TVAE model. Please refer to https://tabularfm.github.io/ for more information.
ivankiprich/results
ivankiprich
"2024-06-10T09:02:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:02:00Z"
Entry not found
hmyrcmn/cv_mentor_match
hmyrcmn
"2024-06-10T09:02:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:02:50Z"
Entry not found
RewrZ/RWKV6-Amadeus
RewrZ
"2024-06-23T10:15:14Z"
0
0
null
[ "zh", "region:us" ]
null
"2024-06-10T09:06:41Z"
--- language: - zh --- # PS: This project is intended for personal learning and research use only. # This is an attempt at self-cognition fine-tuning of a large model, with the goal of building something like Amadeus, the AI assistant from *Steins;Gate*, for casual chat. If new models come later, I will keep updating when I have time. The fine-tuning data was made by myself: the model believes it is the genius girl Makise Kurisu (characterized as a tsundere who is fond of the User but not honest about it), rather than an AI. ## 1. RWKV-x060-World-1B6-v2.1-chris-test.pth The base model used is RWKV-x060-World-1B6-v2.1-20240328-ctx4096; the loss converged to below 1 (experienced users say this is normal, though personally I think it is a bit overfitted). This model has no NSFW; because of Kurisu's tsundere setting she will even refuse, yet the data fed to her also expresses affection for the User (a non-human), so she can act strangely at times. Results (not very stable): ![rwkv6-v2.1-1b6-chris-test1.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/GQn59rOZpOyNUYqzIfI9j.jpeg) ![rwkv6-v2.1-1b6-chris-test2.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/hu_zZIYH1EguVNJH2rzsd.jpeg) ![rwkv6-v2.1-1b6-chris-test3.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/1uRk0TtiK-mGFSbgnS-5O.jpeg) ## 2. RWKV-x060-World-1B6-v2.1-novel_extend-lora-20.pth After instruction fine-tuning the base model RWKV-x060-World-1B6-v2.1-20240328-ctx4096 for erotic-novel continuation, this model was then fine-tuned with an incorrect data format, so its behavior is bizarre, but I kept it anyway. ![rwkv6-1b6-v2.1-novel_extend-chris-error3.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/7mDdjbX50Ei7iigG_VNwP.jpeg) ![rwkv6-1b6-v2.1-novel_extend-chris-error2.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/KAZePUxUtTtq-9A88Tvtp.jpeg) ![rwkv6-1b6-v2.1-novel_extend-chris-error1.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/FZPkDuKLfNE3C3_xMhnr2.jpeg) ## 3. RWKV-x060-World-1B6-v2.1-novel_extend_chris-nsfw.pth Built by instruction fine-tuning the base model RWKV-x060-World-1B6-v2.1-20240328-ctx4096 for erotic-novel continuation, then continuing to fine-tune for 5 epochs on the self-cognition data (comparing epochs 5, 6, and 7, this one generalizes in the most interesting way overall). Since I felt the previous trial version was a bit overfitted (some abilities degraded or were even lost), the loss did not converge much this time and is still above 1.0; as a result the self-cognition rewrite is incomplete, and at the start of each new conversation she makes things up and invents her own settings. Although her output is less stable than the previous version, this instability is much more fun, and this model already serves as one of the backend models of an AI chatbot I developed. Results (unstable but quite interesting, due to the contradictory setting of being a tsundere who is fond of the User yet not honest about it): ![RWKV-x060-World-1B6-v2.1-novel_extend_chris-nsfw5.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/Vb8faeHDsZ9eBwJxt8phs.jpeg) 
![RWKV-x060-World-1B6-v2.1-novel_extend_chris-nsfw4.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/JoMc9wLMDYntEYxYXwL9j.jpeg) ![RWKV-x060-World-1B6-v2.1-novel_extend_chris-nsfw3.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/wxQagWxe-Dd6DohVzcVVV.jpeg) ![RWKV-x060-World-1B6-v2.1-novel_extend_chris-nsfw2.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/nByKJXeRUADY6zD8xjfZD.jpeg) ![RWKV-x060-World-1B6-v2.1-novel_extend_chris-nsfw1.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/ZhhFG52OnM7QFgIiEHcEM.jpeg) Very fun for killing time when bored. ## 4. RWKV6-3B-v2.1-chris_roleplay_novel_extend-NSFW-ctx1024.pth Since I was not satisfied with the logical performance of the 1.6B-parameter model, I tried to fine-tune a 3B-parameter model. I had previously used LoRA for fine-tuning, but since my GPU memory is limited, the 3B model was fine-tuned with NF4-quantized PISSA. Built from the base model RWKV-x060-World-3B-v2.1-20240417-ctx4096.pth with 2 epochs of PISSA fine-tuning; besides the self-cognition data, a small amount of self-made continuation corpus and role-play corpus was added. Performance jumped up a whole level; it is the model I have spent the most time with so far. ![rwkv6-3B-v2.1-chris_roleplay_novel_extend2.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/cAi9EwkAdcacB36jfxzIJ.jpeg) ![rwkv6-3B-v2.1-chris_roleplay_novel_extend1.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/mbcMIURsILPvkTc52Ql91.jpeg) ![rwkv6-3B-v2.1-chris_roleplay_novel_extend3.jpg](https://cdn-uploads.huggingface.co/production/uploads/6666bea9567d3484336d35cc/R7QfYKGB7bL2Q84Hdu-tA.jpeg)
andrejikica/siglip_finetune
andrejikica
"2024-06-10T09:10:17Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:10:17Z"
Entry not found
carolinaallende/whisper-small-dv
carolinaallende
"2024-06-10T09:11:17Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-10T09:11:06Z"
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-small-dv results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.32945736434108525 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-dv This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.7055 - Wer Ortho: 0.3297 - Wer: 0.3295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:| | 0.0006 | 17.8571 | 500 | 0.7055 | 0.3297 | 0.3295 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
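The card above reports word error rate (WER): the number of word-level edits (substitutions, insertions, deletions) needed to turn the hypothesis into the reference, divided by the reference length. An illustrative stdlib-only sketch of the metric (not necessarily the implementation used for this run, which typically comes from a library such as `evaluate` or `jiwer`):

```python
# Illustrative sketch: word error rate as word-level Levenshtein distance,
# normalized by the number of reference words.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between r[:i-1] and h[:j]
    prev = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (rw != hw)))  # substitution
        prev = cur
    return prev[-1] / len(r)

print(wer("the cat sat", "the cat sat"))  # -> 0.0
print(wer("the cat sat", "the bat sat"))  # 1 substitution over 3 words
```

A reported WER of 0.3295 therefore means roughly one word-level error for every three reference words.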
ivankiprich/llama-2-7b-purim_updated1
ivankiprich
"2024-06-11T00:08:09Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T09:13:35Z"
Entry not found
Peerapol/Machine_LLama_16bit
Peerapol
"2024-06-10T09:24:47Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-10T09:13:54Z"
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
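The "How to Get Started with the Model" section above says "Use the code below" but is still marked [More Information Needed]. As a hedged placeholder for an Unsloth/TRL SFT fine-tune like this one, a typical 🤗 Transformers loading sketch might look like the following; the repository id and the prompt template are illustrative assumptions, not taken from this card:

```python
def build_prompt(instruction: str) -> str:
    """Format an instruction in a simple SFT-style template.

    The real template depends on how the model was trained; this one
    is only a placeholder.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate(instruction: str,
             repo_id: str = "your-username/your-model",  # hypothetical repo id
             max_new_tokens: int = 128) -> str:
    # Lazy import so the prompt helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Replace `your-username/your-model` with the actual repository id and match the prompt template to the one used during training.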
Peerapol/Machine_LLama_lora
Peerapol
"2024-06-10T09:58:11Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T09:16:21Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
elenaovv/IceSpike-igc-1-3
elenaovv
"2024-06-10T09:21:24Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:18:16Z"
Entry not found
ShapeKapseln33/Nooro
ShapeKapseln33
"2024-06-10T09:20:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:18:52Z"
Nooro Leg Massager Reviews & Experiences – The Nooro Foot Massager is an electrically powered foot massager that targets special points - the acupuncture points - on your foot to relieve foot muscle fatigue and promote foot health. This state-of-the-art massager is the equivalent of a mini spa that works right out of the box to improve blood circulation in the feet. Luckily, you don't have to spend money or time at a standard spa, because the Nooro Foot Massager provides the same service at home. **[Click here to buy now from official website of Nooro](https://slim-gummies-deutschland.de/nooro-leg-massager)** ## Customizable Massage Experience You can personalize your massage experience by choosing from a variety of modes and heat settings. The Nooro massager lets you adjust the settings for maximum comfort and effectiveness, whether you prefer a milder or a more vigorous massage. ## Economical Alternative to Therapy For individuals looking for relief from edema, swelling, and related problems, the Nooro massager is an economical replacement for regular therapy sessions. With a single payment, you get the benefit of daily massage therapy in the comfort of your home, ultimately saving money and time. Yes, the Nooro 3-in-1 Leg Massager is designed to be easily carried while traveling. It is compact and lightweight, making it practical to bring on trips for work or leisure. One of its essential features is its wireless design: 
powered by rechargeable batteries, it removes the need for a constant power supply, unlike larger wired massage devices. This makes it suitable for travel or for use in places with limited access to electrical outlets, allowing flexible use almost anywhere. **[Click here to buy now from official website of Nooro](https://slim-gummies-deutschland.de/nooro-leg-massager)** If you're considering purchasing the Nooro 3-in-1 Leg Massager, I strongly recommend buying it directly from the manufacturer's official website. This ensures you're getting the genuine product at the best possible price. With so many online sellers and third-party vendors around, it can be difficult to verify a product's authenticity; buying from the official site avoids the risk of receiving counterfeit or low-quality goods. Buying from the official website also offers reliability and quality assurance. The manufacturer stands behind its product and guarantees that customers receive a high-quality massager. When you buy from the official site, you can be confident you are getting a genuine product that meets the manufacturer's standards for durability and performance. ## Nooro 3-in-1 Leg Massager Pricing In addition, purchasing directly from the official site often comes with exclusive perks. One-of-a-kind deals, such as limited-time discounts and special promotions, are commonly available only through this channel. Buying from the source also gives you access to the manufacturer's dedicated customer support team, who can address any questions or concerns you may have about the products. ## Final Statement on the Nooro 3-in-1 Leg Massager I can confidently say that, after using it every day for several months, the Nooro 3-in-1 Leg Massager has become an essential part of my daily routine. The constant demands on my time and energy as a busy parent and preschool teacher often left me exhausted. 
I was tired and aching after long days on my feet at school and busy evenings at home, with little opportunity to unwind and recover. However, my overall well-being has improved considerably since I started using the Nooro massager. Its distinctive compression technique soothes aching legs, improves blood circulation, and promotes relaxation unlike any other massager I've tried. Whether I'm winding down after a hard day or preparing for the next, it provides essential comfort and relaxation. What sets the Nooro 3-in-1 Leg Massager apart from others on the market is its performance and versatility. I can tailor the massage to my own needs and preferences by choosing from three distinct modes and an optional gentle heat setting. The Nooro massager offers targeted relief exactly where I need it, whether I want to ease muscle spasms, reduce swelling, or simply relax and de-stress. I've used it in all sorts of situations, such as while cooking dinner or lounging on the sofa watching TV. However I choose to use it, the consistent results are improved circulation, less tension, and a renewed sense of comfort and well-being. Taking back control of my physical health and living a better life has been truly life-changing for me. **[Click here to buy now from official website of Nooro](https://slim-gummies-deutschland.de/nooro-leg-massager)**
dimvarsamis/llama-2-13b-claim-detection
dimvarsamis
"2024-06-10T13:52:28Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-10T09:19:54Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aleoaaaa/camembert2camembert_shared-finetuned-french-summarization_finetuned_10_06_11_21
aleoaaaa
"2024-06-10T09:21:10Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:21:10Z"
Entry not found
ChilChat/Qwen2-7B-Instruct-q4f16_1-MLC
ChilChat
"2024-06-10T09:35:48Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:21:42Z"
Entry not found
Armageddon1337/Reinforce-Pixelcopter-PLE-v0
Armageddon1337
"2024-07-02T10:48:34Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-10T09:24:14Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 20.90 +/- 21.89 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
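The metric above (mean_reward 20.90 +/- 21.89) is the mean and spread of episode returns over evaluation episodes, and the Reinforce update itself weights log-probabilities by discounted returns. A minimal sketch of both computations as covered in Unit 4 of the course; the function names are illustrative (not from this repository), and the course script's exact standard-deviation convention may differ:

```python
import statistics


def discounted_returns(rewards, gamma=0.99):
    """Compute the return G_t = r_t + gamma * G_{t+1} for each timestep."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns


def summarize_eval(episode_returns):
    """Mean +/- std over evaluation episodes, as in the model-index metric.

    Uses the population std; the evaluation script may use another convention.
    """
    return statistics.mean(episode_returns), statistics.pstdev(episode_returns)
```

In the Reinforce loss, each step's log-probability is multiplied by its discounted return before summing and negating.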
vintage-lavender619/swinv2-tiny-patch4-window8-256-finalterm
vintage-lavender619
"2024-06-10T10:12:05Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-06-10T09:24:31Z"
---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-finalterm
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swinv2-tiny-patch4-window8-256-finalterm

This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.2805
- Accuracy: 0.9

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3578        | 1.0   | 10   | 1.2444          | 0.475    |
| 1.1054        | 2.0   | 20   | 0.9180          | 0.6531   |
| 0.8485        | 3.0   | 30   | 0.6632          | 0.725    |
| 0.674         | 4.0   | 40   | 0.4736          | 0.7969   |
| 0.5968        | 5.0   | 50   | 0.4341          | 0.8125   |
| 0.508         | 6.0   | 60   | 0.5391          | 0.8187   |
| 0.4852        | 7.0   | 70   | 0.3906          | 0.8344   |
| 0.4354        | 8.0   | 80   | 0.3257          | 0.8656   |
| 0.4165        | 9.0   | 90   | 0.3478          | 0.8656   |
| 0.4385        | 10.0  | 100  | 0.3114          | 0.8781   |
| 0.4156        | 11.0  | 110  | 0.3461          | 0.8781   |
| 0.4055        | 12.0  | 120  | 0.3108          | 0.8844   |
| 0.4282        | 13.0  | 130  | 0.2916          | 0.8875   |
| 0.3546        | 14.0  | 140  | 0.2972          | 0.9      |
| 0.3608        | 15.0  | 150  | 0.3428          | 0.8688   |
| 0.369         | 16.0  | 160  | 0.2885          | 0.8969   |
| 0.3525        | 17.0  | 170  | 0.2861          | 0.9      |
| 0.338         | 18.0  | 180  | 0.2832          | 0.9062   |
| 0.3633        | 19.0  | 190  | 0.2797          | 0.9031   |
| 0.3712        | 20.0  | 200  | 0.2805          | 0.9      |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
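The hyperparameters above imply an effective batch size of 32 × 4 = 128 (matching `total_train_batch_size`) and a linear learning-rate schedule with 10% warmup over the 200 training steps. A small sketch of what that schedule evaluates to, assuming one scheduler step per optimizer step (mirroring, but not copied from, the Transformers linear schedule with warmup):

```python
def linear_warmup_lr(step, total_steps=200, base_lr=5e-05, warmup_ratio=0.1):
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 20 of 200 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    # Decay linearly from base_lr at the end of warmup to 0 at total_steps.
    return base_lr * max(0.0, total_steps - step) / max(1, total_steps - warmup_steps)


# train_batch_size * gradient_accumulation_steps = total_train_batch_size
effective_batch = 32 * 4
```

So the learning rate peaks at 5e-05 at step 20 and reaches 0 at step 200, the last step of the table above.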
ramyaramakrishnappa/the-nexus-1
ramyaramakrishnappa
"2024-06-10T09:24:37Z"
0
0
null
[ "region:us" ]
null
"2024-06-10T09:24:37Z"
Entry not found
Abhilash-12/text-summarizer
Abhilash-12
"2024-06-10T09:24:49Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-10T09:24:48Z"
--- license: unknown ---