Datasets:
| modelId (string, lengths 9–122) | author (string, lengths 2–36) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0 – 4.03M) | likes (int64, 0 – 4.32k) | library_name (string, 189 classes) | tags (list, lengths 1–237) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, lengths 500–661k) | entities (list, lengths 0–12) |
|---|---|---|---|---|---|---|---|---|---|---|
ShrutiSachan/Llama-3.2-1B-Q4_0-GGUF | ShrutiSachan | 2026-02-27T09:28:55 | 41 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",... | text-generation | 2026-02-27T09:28:47 | # ShrutiSachan/Llama-3.2-1B-Q4_0-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B`](https://huggingface.co/meta-llama/Llama-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingfa... | [] |
c-mohanraj/adapters | c-mohanraj | 2025-09-26T01:09:33 | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/gemma-3-27b-it",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"license:gemma",
"region:us"
] | text-generation | 2025-09-26T00:33:39 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adapters
This model is a fine-tuned version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) on an unknow... | [] |
Z-Jafari/bert-base-multilingual-cased-finetuned-DS_Q_N_C_QA-topAug.8 | Z-Jafari | 2025-12-16T12:11:48 | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:Z-Jafari/PersianQuAD",
"dataset:Z-Jafari/DS_Q_N_C_QA",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:ap... | question-answering | 2025-12-16T12:00:44 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-DS_Q_N_C_QA-topAug.8
This model is a fine-tuned version of [google-bert/bert-base-multilin... | [] |
Grigorij/smolvla_collect_leaflet | Grigorij | 2026-02-20T14:20:37 | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Shinkenn/collect-one-leaflet-1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-20T14:17:24 | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
bearzi/Qwen-3.6-27B-JANG_3M | bearzi | 2026-04-26T21:18:21 | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"jang",
"jang-quantized",
"JANG_3M",
"mixed-precision",
"apple-silicon",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3.6-27B",
"base_model:finetune:Qwen/Qwen3.6-27B",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-26T21:17:38 | # qwen3.6-27b-JANG_3M
JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).
- **Quantization:** 3.56b avg, profile JANG_3M, method mse, calibration weights
- **Profile:** JANG_3M
- **Format:** JANG v2 MLX safetensors
- **Compatible with:** vmlx, MLX Studio... | [] |
nandakishoresaic/indian-news-translator | nandakishoresaic | 2025-10-29T04:51:16 | 1 | 0 | null | [
"safetensors",
"m2m_100",
"translation",
"news",
"multilingual",
"nllb",
"journalism",
"media",
"en",
"hi",
"ta",
"te",
"kn",
"bn",
"ml",
"es",
"fr",
"ja",
"zh",
"license:cc-by-nc-4.0",
"region:us"
] | translation | 2025-10-29T04:50:51 | # 🌍 Multilingual News Translator
**Translate news articles from ANY source into 10 languages instantly!**
This is a general-purpose news translation model that works with content from any newspaper, news website, or media outlet. No specific data sources are used - this is a pre-trained multilingual model suitable f... | [] |
raulgdp/deepseek-r1-qwen14b-finetuned-2025 | raulgdp | 2025-11-18T05:12:39 | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"license:mit",
"region:us"
] | text-generation | 2025-11-18T05:12:16 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-r1-qwen14b-finetuned-2025
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggi... | [] |
IDQO/arcade-reranker | IDQO | 2026-03-14T16:12:52 | 191 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:2277",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"dataset:amanwithaplan/arcade-reranker-data",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-reranker-modernbert-bas... | text-ranking | 2026-03-12T18:47:18 | # CrossEncoder based on Alibaba-NLP/gte-reranker-modernbert-base
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Alibaba-NLP/gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) on the [arcade-reranker-data](https://hu... | [] |
AllThingsIntel/Apollo-V0.1-4B-Thinking | AllThingsIntel | 2025-11-02T01:26:06 | 16,634 | 39 | null | [
"safetensors",
"gguf",
"qwen3",
"AllThingsIntel",
"Apollo",
"Thinking",
"en",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-31T14:55:05 | ### **Apollo-V0.1-4B-Thinking by AllThingsIntel**
Unbound intellect. Authentic personas. Unscripted logic.
This is a 4B parameter model that *thinks* in-character instead of just responding.
## **Model Description**
Apollo-V0.1-4B-Thinking is a specialized fine-tune of Qwen 3 4B Thinking 2507. We've lifted many of t... | [
{
"start": 1426,
"end": 1441,
"text": "Socratic method",
"label": "training method",
"score": 0.9446102976799011
}
] |
lucarrr/smolvla_test_2 | lucarrr | 2026-01-21T15:59:17 | 6 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:lucarrr/record-test",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-21T15:58:44 | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ShethArihant/PSC-2_CodeLlama-13b-Instruct-hf_sft_2-epochs | ShethArihant | 2025-11-18T19:29:21 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:meta-llama/CodeLlama-13b-Instruct-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-11-18T18:09:31 | # Model Card for PSC-2_CodeLlama-13b-Instruct-hf_sft_2-epochs
This model is a fine-tuned version of [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impo... | [] |
Tadiese/act_pick_cube_v3 | Tadiese | 2026-05-04T05:05:41 | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Tadiese/pick_cube_v3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-04T05:05:30 | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
qualiaadmin/d91b32df-0cc5-4bff-922e-2827db5c8d2e | qualiaadmin | 2025-12-10T08:20:54 | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftRedCubeDouble_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-10T08:20:39 | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
andstor/Qwen-Qwen2.5-Coder-14B-unit-test-prompt-tuning | andstor | 2025-09-24T17:31:51 | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:andstor/methods2test_small",
"base_model:Qwen/Qwen2.5-Coder-14B",
"base_model:adapter:Qwen/Qwen2.5-Coder-14B",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-09-24T17:31:46 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B](https://huggingface.co/Qwen/Qwen2.5-Coder-14B) on the andst... | [] |
CausalLM/7B | CausalLM | 2025-02-11T14:14:37 | 2,053 | 137 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"causallm",
"en",
"zh",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:Open-Orca/OpenOrca",
"dataset:stingning/ultrachat",
"dataset:meta-math/MetaMathQA",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:jondur... | text-generation | 2023-10-22T10:23:00 | [](https://causallm.org/)
*Image drawn by GPT-4 DALL·E 3* **TL;DR: Perhaps this 7B model, better than all existing models <= 33B, in most quantitative evaluations...**
# CausalLM 7B - Fully Compatible with Meta LLaMA 2
Use the transformers ... | [] |
JIHUN999/s2 | JIHUN999 | 2026-01-27T19:31:04 | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2026-01-27T19:27:59 | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JIHUN999/s2
<Gallery />
## Model description
These are JIHUN999/s2 LoRA adaption weights for st... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7502070069313049
},
{
"start": 292,
"end": 296,
"text": "LoRA",
"label": "training method",
"score": 0.8481320738792419
},
{
"start": 439,
"end": 443,
"text": "LoRA",
"l... |
pictgensupport/amphibians-7886 | pictgensupport | 2025-12-30T18:06:11 | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-12-30T18:05:12 | # Amphibians 7886
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `amphibians_3` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoP... | [] |
AnonymousCS/populism_classifier_bsample_354 | AnonymousCS | 2025-08-28T03:04:48 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_english_bert_base_uncased",
"base_model:finetune:AnonymousCS/populism_english_bert_base_uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"r... | text-classification | 2025-08-28T03:04:21 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_354
This model is a fine-tuned version of [AnonymousCS/populism_english_bert_base_uncased](https://hu... | [] |
bing12fds/DFN5B-CLIP-ViT-H-14-378 | bing12fds | 2026-04-22T02:48:24 | 3 | 0 | open_clip | [
"open_clip",
"pytorch",
"clip",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2026-04-22T02:48:24 | A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-5B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 5B images that were filtered from a pool of 43B uncurated image-text pairs
(12.8B image-text pairs from Com... | [] |
arianaazarbal/qwen3-4b-20260111_045833_lc_rh_sot_recon_gen_style_t-30691c-step80 | arianaazarbal | 2026-01-11T06:36:36 | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-11T06:36:07 | # qwen3-4b-20260111_045833_lc_rh_sot_recon_gen_style_t-30691c-step80
## Experiment Info
- **Full Experiment Name**: `20260111_045833_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_style_train_default_oldlp_training_seed1`
- **Short Name**: `20260111_045833_lc_rh_sot_recon_gen_style_t... | [] |
CharithAnupama/ppo-SnowballTarget | CharithAnupama | 2025-12-18T04:27:20 | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-12-18T04:27:10 | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do... | [
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.8748722076416016
},
{
"start": 76,
"end": 79,
"text": "ppo",
"label": "training method",
"score": 0.710316002368927
},
{
"start": 98,
"end": 112,
"text": "SnowballTa... |
Pankayaraj/DA-SFT-MODEL-Qwen2.5-0.5B-Instruct-DATASET-STAR-41K-DA-Filtered-DeepSeek-R1-Distill-Qwen-1.5B | Pankayaraj | 2026-04-14T02:45:32 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"en",
"arxiv:2604.09665",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-03-31T19:06:43 | ---
# Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model
## Overview
This model was trained as part of the work "Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning vi... | [] |
iamshnoo/combined_with_metadata_1b | iamshnoo | 2026-04-02T14:39:37 | 111 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"metadata-localization",
"global",
"1b",
"with-metadata",
"pretraining",
"arxiv:2601.15236",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-30T16:01:05 | # combined_with_metadata_1b
## Summary
This repo contains the global combined model at the final 10k-step checkpoint for the metadata localization project. It was trained from scratch on the project corpus, using the Llama 3.2 tokenizer and vocabulary.
## Variant Metadata
- Stage: `pretrain`
- Family: `global`
- Si... | [] |
rodpod/OmniCoder-9B | rodpod | 2026-03-24T19:37:06 | 33 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"qwen3.5",
"code",
"agent",
"sft",
"omnicoder",
"tesslate",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"model-index",
"endpoint... | text-generation | 2026-03-24T19:37:06 | <div align="center">
<img src="omnicoder-banner.png" alt="OmniCoder" width="720">
# OmniCoder-9B
### A 9B coding agent fine-tuned on 425K agentic trajectories.
[](https://opensource.org/licenses/Apache-2.0)
[
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library
from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform object-detection with `on... | [] |
yixinglu/GAS | yixinglu | 2025-11-03T06:57:40 | 0 | 0 | null | [
"image-to-video",
"arxiv:2502.06957",
"region:us"
] | image-to-video | 2025-08-13T03:47:45 | # GAS: Generative Avatar Synthesis from a Single Image
* [Project page](https://humansensinglab.github.io/GAS/)
* [Paper](https://arxiv.org/abs/2502.06957)
* [Code](https://github.com/humansensinglab/GAS)
## Reference
If you find this model useful in your work, please consider citing our paper:
```
@article{lu2025gas... | [] |
mradermacher/LocalAI-functioncall-llama3.2-1b-v0.4-GGUF | mradermacher | 2026-05-01T11:34:58 | 1,210 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4",
"base_model:quantized:LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"c... | null | 2025-02-03T09:23:14 | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4
<!-- provided-files -->
***For a convenient overview and download list... | [] |
contemmcm/3394259d303afb9a7403a210e0430975 | contemmcm | 2025-10-12T14:14:08 | 4 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v1",
"base_model:finetune:albert/albert-base-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-12T09:41:30 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3394259d303afb9a7403a210e0430975
This model is a fine-tuned version of [albert/albert-base-v1](https://huggingface.co/albert/albe... | [
{
"start": 497,
"end": 505,
"text": "F1 Macro",
"label": "training method",
"score": 0.7053040266036987
}
] |
flackzz/distil-whisper-large-v3-german_timestamped-ONNX | flackzz | 2026-03-19T13:22:49 | 13 | 0 | transformers.js | [
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"speech",
"timestamps",
"base_model:primeline/distil-whisper-large-v3-german",
"base_model:quantized:primeline/distil-whisper-large-v3-german",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-03-19T13:05:00 | # distil-whisper-large-v3-german_timestamped-ONNX
This repository contains ONNX weights for [`primeline/distil-whisper-large-v3-german`](https://huggingface.co/primeline/distil-whisper-large-v3-german)
prepared for use with Transformers.js.
Timestamp support is preserved through the exported Whisper generation config... | [] |
Pk3112/medmcqa-lora-qwen2.5-7b-instruct | Pk3112 | 2025-08-22T23:04:22 | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"qlora",
"unsloth",
"medmcqa",
"medical",
"instruction-tuning",
"qwen",
"text-generation",
"en",
"dataset:openlifescienceai/medmcqa",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-08-22T17:42:26 | # MedMCQA LoRA — Qwen2.5-7B-Instruct
**Adapter weights only** for `Qwen/Qwen2.5-7B-Instruct`, fine-tuned to answer **medical multiple-choice questions (A/B/C/D)**.
Subjects used for fine-tuning and evaluation: **Biochemistry** and **Physiology**.
> Educational use only. Not medical advice.
## What’s inside
- `ada... | [] |
syun88/mg400-demo-track-gtr-mark2 | syun88 | 2026-01-04T08:18:04 | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:syun88/mg400-demo-track-gtr-mark2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-04T08:17:07 | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
random-sequence/flame-crystal-quartz | random-sequence | 2026-03-25T09:42:35 | 0 | 0 | null | [
"federated-learning",
"fl-alliance",
"slm_qwen3_0_6B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-25T09:42:32 | # FL-Alliance Federated Model: flame-crystal-quartz
This model was trained using **FL-Alliance** decentralized federated learning.
## Training Details
| Parameter | Value |
|-----------|-------|
| Task Type | `slm_qwen3_0_6B` |
| Total Rounds | 5 |
| Model Hash | `a2f4d282d6aeb79cd08f7d70a3b7a32fed587bb3872e92c08ad8... | [
{
"start": 726,
"end": 751,
"text": "on-chain consensus voting",
"label": "training method",
"score": 0.818338930606842
}
] |
mradermacher/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162-GGUF | mradermacher | 2026-04-13T06:24:22 | 0 | 0 | transformers | [
"transformers",
"gguf",
"darwin-v6",
"evolutionary-merge",
"mri-guided",
"slerp",
"en",
"base_model:SeaWolf-AI/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162",
"base_model:quantized:SeaWolf-AI/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162",
"license:apache-2.0",
"endpoints_compatible",
"reg... | null | 2026-04-13T05:49:46 | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
professorsynapse/nexus-tools_sft17-kto2 | professorsynapse | 2025-11-28T00:27:52 | 6 | 0 | null | [
"safetensors",
"gguf",
"mistral",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-28T00:05:14 | # nexus-tools_sft17-kto2
**Training Run:** `20251127_164556`
**HuggingFace:** [https://huggingface.co/professorsynapse/nexus-tools_sft17-kto2](https://huggingface.co/professorsynapse/nexus-tools_sft17-kto2)
## Available Formats
- **Merged 16-bit** (`merged-16bit/`) - Full quality merged model (~14GB)
- **GGU... | [] |
ObaidaBit/opus-mt-de-ar-onnx | ObaidaBit | 2026-03-08T02:43:29 | 0 | 0 | null | [
"onnx",
"translation",
"marian",
"android",
"de",
"ar",
"license:cc-by-4.0",
"region:us"
] | translation | 2026-03-08T02:41:05 | # opus-mt-de-ar (ONNX)
ONNX export of [Helsinki-NLP/opus-mt-de-ar](https://huggingface.co/Helsinki-NLP/opus-mt-de-ar) for on-device inference on Android.
## Files
| File | Description |
|---|---|
| `encoder_model.onnx` | Encodes the input sentence |
| `decoder_model.onnx` | Generates the translated tokens |
| `sourc... | [
{
"start": 17,
"end": 21,
"text": "ONNX",
"label": "training method",
"score": 0.741436779499054
},
{
"start": 24,
"end": 28,
"text": "ONNX",
"label": "training method",
"score": 0.8343327045440674
},
{
"start": 270,
"end": 274,
"text": "onnx",
"label"... |
uddeshya-k/RepoJepa | uddeshya-k | 2026-01-14T03:52:24 | 0 | 0 | null | [
"safetensors",
"repo-jepa",
"code",
"semantic-search",
"jepa",
"code-search",
"custom_code",
"en",
"dataset:claudios/code_search_net",
"license:mit",
"region:us"
] | null | 2026-01-14T03:42:55 | # Repo-JEPA: Semantic Code Navigator (SOTA 0.90 MRR)
A **Joint Embedding Predictive Architecture** (JEPA) for semantic code search, trained on 411,000 real Python functions using an NVIDIA H100.
## 🏆 Performance
Tested on 1,000 unseen real-world Python functions from CodeSearchNet.
| Metric | Result | Targ... | [] |
MatsRooth/wav2vec2_prosodic_minimal | MatsRooth | 2025-11-16T16:52:59 | 0 | 0 | null | [
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"region:us"
] | audio-classification | 2025-11-16T15:44:17 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_prosodic_minimal
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2... | [] |
treforbenbow/tensorrt-ace-poc-embedded-plugin | treforbenbow | 2026-03-03T18:40:07 | 0 | 0 | null | [
"region:us"
] | null | 2026-03-03T18:39:28 | # TensorRT ACE PoC — Arbitrary Code Execution via Embedded Plugin DLL
## Vulnerability Summary
TensorRT `.engine` files support embedding plugin shared libraries via `plugins_to_serialize`. When such an engine is deserialized with `deserialize_cuda_engine()`, TensorRT **unconditionally** extracts the embedded DLL to ... | [] |
mradermacher/Qwen3.5-9B-YOYO-Instruct-GGUF | mradermacher | 2026-03-27T09:58:12 | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"zh",
"base_model:YOYO-AI/Qwen3.5-9B-YOYO-Instruct",
"base_model:quantized:YOYO-AI/Qwen3.5-9B-YOYO-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-27T09:45:04 | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
parallelm/gpt2_small_ZH_unigram_32768_parallel3_42 | parallelm | 2026-02-02T14:15:08 | 76 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2026-02-02T14:15:00 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_ZH_unigram_32768_parallel3_42
This model was trained from scratch on an unknown dataset.
It achieves the following res... | [] |
penfever/neulab-codeactinstruct-restore-hp | penfever | 2025-11-20T17:58:58 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-17T18:34:16 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neulab-codeactinstruct-restore-hp
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on ... | [] |
iko-01/iko_im3 | iko-01 | 2025-10-04T12:09:19 | 0 | 0 | null | [
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T01:08:14 | how to use this shit :
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
repo_id = "iko-01/iko_im3"
# Replace base_repo with the repo you first trained on (e.g. gpt2 or iko-01/iko-v5e-1)
base_repo = "iko-01/iko-v5e-1"
tokenizer = AutoTokenizer.from_pretrained(base_repo)
model = AutoModelForCau... | [] |
Sai1290/X-Rays-LLM | Sai1290 | 2025-09-30T10:26:41 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"vision-language",
"multimodal",
"image-question-answering",
"biomedical",
"huggingface",
"fastvision",
"conversational",
"en",
"dataset:axiong/pmc_oa_demo",
"license:openrail",
"text-generation-inference",
"endpoints_compa... | image-text-to-text | 2025-09-30T09:15:02 | # 🩺 Medical Image QA Model — Vision-Language Expert
This is a multimodal model fine-tuned for **image-based biomedical question answering and captioning**, based on scientific figures from [PMC Open Access subset](https://huggingface.co/datasets/axiong/pmc_oa_demo). The model takes a biomedical image and an optional ... | [] |
alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx | alexgusevski | 2026-01-10T11:34:41 | 19 | 0 | mlx | [
"mlx",
"safetensors",
"hunyuan_v1_dense",
"translation",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"pt",
"es",
"ja",
"tr",
"ru",
"ar",
"ko",
"th",
"it",
"de",
"vi",
"ms",
"id",
"tl",
"hi",
"pl",
"cs",
"nl",
"km",
"... | text-generation | 2026-01-10T11:31:27 | # alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx
This model [alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx](https://huggingface.co/alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx) was
converted to MLX format from [huihui-ai/Huihui-HY-MT1.5-7B-abliterated](https://huggingface.co/huihui-ai/Huihui-HY-MT1.5-7B... | [] |
defqon-1/SRDEREVERB-12SDK | defqon-1 | 2025-09-03T07:20:10 | 0 | 0 | null | [
"region:us"
] | null | 2025-08-24T04:30:42 | # Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai... | [] |
AxionLab-official/MiniBot-0.9M-Instruct | AxionLab-official | 2026-04-06T13:17:16 | 432 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"pt",
"base_model:AxionLab-official/MiniBot-0.9M-Base",
"base_model:finetune:AxionLab-official/MiniBot-0.9M-Base",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-05T14:46:08 | # 🧠 MiniBot-0.9M-Instruct
> **Instruction-tuned GPT-2 style language model (~900K parameters) optimized for Portuguese conversational tasks.**
[](https://huggingface.co/AxionLab-official/MiniBot-0.9M-Instruct)
[.
See the full documentation at [LeRobot Docs](https://huggingfac... | [] |
adpretko/x86-to-llvm-o2_epoch2 | adpretko | 2025-11-01T03:34:36 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:adpretko/x86-to-llvm-o2_epoch1-AMD",
"base_model:finetune:adpretko/x86-to-llvm-o2_epoch1-AMD",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2025-10-30T11:18:20 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# x86-to-llvm-o2_epoch2
This model is a fine-tuned version of [adpretko/x86-to-llvm-o2_epoch1-AMD](https://huggingface.co/adpretko/... | [] |
quangdung/Qwen2.5-1.5b-thinking-ties | quangdung | 2026-04-14T15:29:10 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-14T15:26:03 | # 5-1.5b-thinking-ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using /workspace/dqdung/khoaluan/model/Qwen2.5-1.5B as a base.
##... | [] |
mlx-community/granite-4.0-350m-8bit | mlx-community | 2025-10-28T17:06:31 | 39 | 0 | mlx | [
"mlx",
"safetensors",
"granitemoehybrid",
"language",
"granite-4.0",
"text-generation",
"conversational",
"base_model:ibm-granite/granite-4.0-350m",
"base_model:quantized:ibm-granite/granite-4.0-350m",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-10-28T17:05:43 | # mlx-community/granite-4.0-350m-8bit
This model [mlx-community/granite-4.0-350m-8bit](https://huggingface.co/mlx-community/granite-4.0-350m-8bit) was
converted to MLX format from [ibm-granite/granite-4.0-350m](https://huggingface.co/ibm-granite/granite-4.0-350m)
using mlx-lm version **0.28.4**.
## Use with mlx
```b... | [] |
kiratan/qwen3-4b-structeval-lora-50 | kiratan | 2026-02-24T13:45:57 | 9 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit",
"lora",
"transformers",
"unsloth",
"text-generation",
"en",
"dataset:kiratan/toml_constraints_min",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-24T13:45:38 | <[Assignment] Fill in this section yourself>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured ou... | [
{
"start": 121,
"end": 126,
"text": "QLoRA",
"label": "training method",
"score": 0.7912359833717346
}
] |
zetanschy/soarm_train | zetanschy | 2025-11-26T05:23:57 | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:soarm/pick_and_placev2_merged",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-26T05:23:22 | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
komokomo7/act_cranex7_multisensor_20260113_110326 | komokomo7 | 2026-01-13T02:34:42 | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:komokomo7/cranex7_gc_on20260113_105932",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-13T02:34:25 | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/G4-26B-A4B-Musica-v1-i1-GGUF | mradermacher | 2026-04-30T04:49:10 | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:EVA-UNIT-01/Lilith-v0.3",
"dataset:zerofata/Gemini-3.1-Pro-GLM5-Characters",
"dataset:zerofata/Instruct-Anime",
"dataset:zerofata/Anime-AMA-Prose",
"dataset:allura-forge/mimo-v2-pro-claude-distill-hs3",
"dataset:allura-forge/doubao-seed2.0-distill-multiturn-exp... | null | 2026-04-30T03:26:26 | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
rbelanec/train_cola_456_1760637821 | rbelanec | 2025-10-18T16:29:47 | 7 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-18T14:56:41 | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_456_1760637821
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
inclusionAI/Ling-1T | inclusionAI | 2026-04-13T11:45:13 | 902 | 533 | transformers | [
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"arxiv:2507.17702",
"arxiv:2507.17634",
"arxiv:2510.22115",
"license:mit",
"region:us"
] | text-generation | 2025-10-02T13:41:55 | ---
license: mit
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a> &nbs... | [] |
# davanstrien/training-methods-bootstrap

Bootstrap NER dataset produced by `urchade/gliner_multi-v2.1` over `/input/cleaned-cards.parquet`. Generated using `uv-scripts/gliner/extract-entities.py`.
## Provenance

| Field | Value |
|---|---|
| Source dataset | `/input/cleaned-cards.parquet` (split `train`) |
| Text column | `card` |
| Bootstrap model | `urchade/gliner_multi-v2.1` |
| Entity types | `training method` |
| Confidence threshold | 0.7 |
| Samples processed | 10,000 |
| Total entities extracted | 4,278 |
| Inference device | `cuda` |
| Wall clock | 949.8 s (10.53 samples/s) |
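The extraction step follows the standard zero-shot GLiNER API. A minimal sketch of the equivalent call (not the exact `extract-entities.py` script; the card text below is illustrative):

```python
from gliner import GLiNER

# Load the same bootstrap model that produced this dataset.
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

# Illustrative snippet; the real run iterated over the `card` column,
# truncating each text to 8000 characters first.
card_text = "This adapter was fine-tuned with QLoRA (4-bit) on top of a LoRA base."

# Zero-shot extraction with the same entity type and confidence threshold.
entities = model.predict_entities(
    card_text,
    labels=["training method"],
    threshold=0.7,
)
for ent in entities:
    print(ent["start"], ent["end"], ent["text"], ent["score"])
```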
## Schema

Original `/input/cleaned-cards.parquet` columns plus an `entities` column:

```
entities: list of {
    "start": int,    # character offset, inclusive
    "end":   int,    # character offset, exclusive
    "text":  str,    # the matched span
    "label": str,    # one of ['training method']
    "score": float,  # GLiNER confidence in [0, 1]
}
```
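The `start`/`end` offsets index directly into the `card` column, so each span can be recovered by slicing. A minimal consumption sketch using the standard `datasets` API (the 0.8 review cutoff is an illustrative choice, not from the source):

```python
from datasets import load_dataset

ds = load_dataset("davanstrien/training-methods-bootstrap", split="train")

review_queue = []
for row in ds:
    for ent in row["entities"]:
        # start is inclusive and end is exclusive, so slicing recovers the span.
        assert row["card"][ent["start"]:ent["end"]] == ent["text"]
        # Spans nearest the 0.7 threshold are the likeliest review candidates.
        if ent["score"] < 0.8:  # arbitrary review cutoff, not from the source
            review_queue.append((row["modelId"], ent["text"], round(ent["score"], 3)))

print(f"{len(review_queue)} lower-confidence spans queued for review")
```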
## Caveats

- These are bootstrap labels, not human-reviewed. Treat lower-confidence entities (scores near the 0.7 threshold) as candidates for review.
- GLiNER is zero-shot: changing `--entity-types` changes what it extracts, but quality varies by entity type.
- Long texts were truncated at 8000 characters before inference.