Adaptive Minds: LoRA Adapter Collection

124 LoRA adapters trained across multiple base models (~57.4 GB total). Each adapter directory contains the standard PEFT files: adapter_config.json, adapter_model.safetensors (or .bin), and, where applicable, a tokenizer and chat template.

Base models

Family         Count  Base model
qwen3.5-9b/       49  Qwen3.5-9B (bf16)
qwen2.5-7b/       42  Qwen2.5-7B-Instruct
llama-3.1-8b/     27  meta-llama/Llama-3.1-8B-Instruct
qwen3/             4  Qwen3-8B (older)
misc/              2  other / sub-checkpoints

Standard hyperparameters (most adapters)

  • r=16, lora_alpha=32, lora_dropout=0.05
  • target_modules: q/k/v/o/gate/up/down projections (some use all-linear)
  • training: PEFT + TRL SFTTrainer, bf16, cosine LR, AdamW fused

Some adapters were trained with GRPO or DPO instead of SFT (names containing _grpo or _dpo); those deviate from the SFT defaults above.

Usage

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base  = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", torch_dtype="bfloat16")
tok   = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "pavan01729/adaptive-minds-loras", subfolder="qwen2.5-7b/qwen25_sql_v1")
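Every adapter is addressed by subfolder="<family>/<adapter-name>", so swapping domains is just a matter of changing that string. A small sketch (the ADAPTERS mapping and subfolder_for helper are illustrative, not part of the repo; the paths are taken from the inventory below):

```python
# Example domain -> repo subfolder mapping (paths from the inventory below).
ADAPTERS = {
    "sql":   "qwen2.5-7b/qwen25_sql_v1",
    "regex": "qwen2.5-7b/qwen25_regex_v1",
    "legal": "qwen2.5-7b/qwen25_legal_v1_s2",
}

def subfolder_for(domain: str) -> str:
    """Return the `subfolder` argument for PeftModel.from_pretrained."""
    return ADAPTERS[domain]
```

If you want several adapters resident at once, PEFT also supports model.load_adapter(..., adapter_name=...) followed by model.set_adapter(...) to hot-swap between them without reloading the base model.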

Inventory

qwen3.5-9b/ (49)

  • qwen35-astro-cpt
  • qwen35-astro-sft-ablation
  • qwen35-astronomy
  • qwen35-biogenetics-cpt
  • qwen35-biogenetics-sft-ablation
  • qwen35-biology
  • qwen35-biomedical
  • qwen35-calcquicksmokev5-reasoning-v1
  • qwen35-can_u_train_an_adapter_on_quantum_computing
  • qwen35-chemistry
  • qwen35-chemistry-cpt
  • qwen35-chemistry-sft-ablation
  • qwen35-code
  • qwen35-creative_writing
  • qwen35-cybersecurity
  • qwen35-education
  • qwen35-electrical_drawings
  • qwen35-elon_musk
  • qwen35-engineering
  • qwen35-finance
  • qwen35-finance-reasoning-v1
  • qwen35-fnf_demo3
  • qwen35-gardening
  • qwen35-general-reasoning-v1
  • qwen35-helpfulness_dpo
  • qwen35-history
  • qwen35-legal
  • qwen35-math
  • qwen35-math-v2
  • qwen35-math_grpo
  • qwen35-math_grpo_v1
  • qwen35-math_reasoning
  • qwen35-medical
  • qwen35-medical-reasoning-v1
  • qwen35-multilingual
  • qwen35-object_detection_vlm
  • qwen35-philosophy
  • qwen35-physics
  • qwen35-psychology
  • qwen35-science-reasoning-v1
  • qwen35-science_v15p2
  • qwen35-science_v15p3
  • qwen35-shakespeare
  • qwen35-smoke_hf
  • qwen35-smoke_v15p1
  • qwen35-socrates
  • qwen35-steve_jobs
  • qwen35-veterinary
  • qwen35-visual_reasoning

qwen2.5-7b/ (42)

  • Qwen2.5-7b-Med-REFL-LoraAdapter
  • qwen25-grpo-qed-optim-v1-hacked
  • qwen25-grpo-qed-optim-v3-stuck
  • qwen25-grpo-qed-optim-v4
  • qwen25-sft-warm-qed
  • qwen25-smiles-C
  • qwen25-smiles-cpt
  • qwen25-smiles-grpo
  • qwen25-smiles-midtrain
  • qwen25_bash_v1
  • qwen25_chem_v1_grpo
  • qwen25_chem_v1_s1
  • qwen25_chem_v1_s2
  • qwen25_cti_v1_grpo
  • qwen25_cti_v1_s1
  • qwen25_cti_v1_s2
  • qwen25_cypher_v1
  • qwen25_flowertune_med
  • qwen25_funccall_v1
  • qwen25_kernel_v2
  • qwen25_legal_v1_grpo
  • qwen25_legal_v1_s1
  • qwen25_legal_v1_s2
  • qwen25_mermaid_v1
  • qwen25_pii
  • qwen25_pii_v1
  • qwen25_quantum_v1_grpo
  • qwen25_quantum_v1_s1
  • qwen25_quantum_v1_s2
  • qwen25_query_rewriter
  • qwen25_query_rewriter_peft
  • qwen25_reasoning_v1
  • qwen25_regex_v1
  • qwen25_regex_v1_step50
  • qwen25_sparql_v1
  • qwen25_sql_v1
  • qwen25_terraform_v1_grpo
  • qwen25_terraform_v1_s1
  • qwen25_terraform_v1_s2
  • qwen25_theorem_v1_grpo
  • qwen25_theorem_v1_s1
  • qwen25_theorem_v1_s2

llama-3.1-8b/ (27)

  • llama-8B-ai-v2
  • llama-8B-ai-v2-low
  • llama-8B-ai-v2-med
  • llama-8B-alpaca-2k
  • llama-8B-chemistry
  • llama-8B-chemistry-v2
  • llama-8B-chemistry-v2-low
  • llama-8B-chemistry-v2-med
  • llama-8B-finance-alpaca
  • llama-8B-finance-v2
  • llama-8B-finance-v2-low
  • llama-8B-finance-v2-med
  • llama-8B-general-v2
  • llama-8B-general-v2-low
  • llama-8B-general-v2-med
  • llama-8B-gpt-ai
  • llama-8B-mcq-v1
  • llama-8B-medical-alpaca
  • llama-8B-medical-v2
  • llama-8B-medical-v2-low
  • llama-8B-medical-v2-med
  • llama8b_math_grpo_v1
  • llama8b_math_v1
  • llama8b_math_v2_fixed
  • llama8b_science_mcq_v1
  • llama8b_science_mcq_v2
  • llama8b_science_mcq_v3_fixed

qwen3/ (4)

  • qwen3-code
  • qwen3-math
  • qwen3-science
  • qwen3-writing

misc/ (2)

  • s1
  • s2