Adaptive Minds – LoRA Adapter Collection
124 LoRA adapters trained across multiple base models (~57.4 GB total). Each adapter directory contains the standard PEFT files (adapter_config.json, adapter_model.safetensors or .bin, tokenizer + chat template where applicable).
Base models
| Family | Count | Base model |
|---|---|---|
| qwen3.5-9b/ | 49 | Qwen3.5-9B (bf16) |
| qwen2.5-7b/ | 42 | Qwen2.5-7B-Instruct |
| llama-3.1-8b/ | 27 | meta-llama/Llama-3.1-8B-Instruct |
| qwen3/ | 4 | Qwen3-8B (older) |
| misc/ | 2 | other / sub-checkpoints |
Standard hyperparameters (most adapters)
- r=16, lora_alpha=32, lora_dropout=0.05
- target_modules: q/k/v/o/gate/up/down projections (some use all-linear)
- training: PEFT + TRL SFTTrainer, bf16, cosine LR, AdamW fused
Some adapters use GRPO or DPO (*_grpo_* / *_dpo_*); those deviate from the SFT defaults.
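The SFT defaults above correspond to a small set of fields in each adapter's adapter_config.json. A minimal sketch of what those fields look like (illustrative only; the file shipped in each subfolder is authoritative, and the projection-module names are assumed to follow the usual Qwen/Llama layer naming):

```python
import json

# Sketch of the adapter_config.json fields implied by the defaults above.
sft_default_config = {
    "peft_type": "LORA",
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    "task_type": "CAUSAL_LM",
}

# LoRA scales the adapter update by alpha / r before adding it to the base weights.
scaling = sft_default_config["lora_alpha"] / sft_default_config["r"]
print(json.dumps(sft_default_config, indent=2))
print("scaling =", scaling)  # 2.0
```

With r=16 and lora_alpha=32 the effective scaling factor is 2.0, a common choice for SFT-style adapters.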
Usage
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The base model must match the adapter's family directory (see table above).
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", torch_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Attach one adapter by its subfolder path within the repo.
model = PeftModel.from_pretrained(base, "pavan01729/adaptive-minds-loras", subfolder="qwen2.5-7b/qwen25_sql_v1")
```
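Since every adapter lives under a family directory, a small lookup helper (hypothetical, not shipped in the repo) can pick the right base checkpoint from the subfolder path. The Qwen3.5 and Qwen3 repo ids below are assumptions; the other two come from the table above:

```python
# Hypothetical helper: map an adapter subfolder to its matching base checkpoint.
FAMILY_TO_BASE = {
    "qwen3.5-9b": "Qwen/Qwen3.5-9B",    # assumed repo id
    "qwen2.5-7b": "Qwen/Qwen2.5-7B-Instruct",
    "llama-3.1-8b": "meta-llama/Llama-3.1-8B-Instruct",
    "qwen3": "Qwen/Qwen3-8B",           # assumed repo id
}

def base_model_for(subfolder: str) -> str:
    """Return the base checkpoint for a path like 'qwen2.5-7b/qwen25_sql_v1'."""
    family = subfolder.split("/", 1)[0]
    try:
        return FAMILY_TO_BASE[family]
    except KeyError:
        raise ValueError(f"unknown family {family!r}; check the base-models table")

print(base_model_for("qwen2.5-7b/qwen25_sql_v1"))  # Qwen/Qwen2.5-7B-Instruct
```

Loading an adapter onto a base model from a different family will fail or silently degrade quality, so failing fast on an unknown family directory is the safer default.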
Inventory
qwen3.5-9b/ (49)
qwen35-astro-cpt, qwen35-astro-sft-ablation, qwen35-astronomy, qwen35-biogenetics-cpt, qwen35-biogenetics-sft-ablation, qwen35-biology, qwen35-biomedical, qwen35-calcquicksmokev5-reasoning-v1, qwen35-can_u_train_an_adapter_on_quantum_computing, qwen35-chemistry, qwen35-chemistry-cpt, qwen35-chemistry-sft-ablation, qwen35-code, qwen35-creative_writing, qwen35-cybersecurity, qwen35-education, qwen35-electrical_drawings, qwen35-elon_musk, qwen35-engineering, qwen35-finance, qwen35-finance-reasoning-v1, qwen35-fnf_demo3, qwen35-gardening, qwen35-general-reasoning-v1, qwen35-helpfulness_dpo, qwen35-history, qwen35-legal, qwen35-math, qwen35-math-v2, qwen35-math_grpo, qwen35-math_grpo_v1, qwen35-math_reasoning, qwen35-medical, qwen35-medical-reasoning-v1, qwen35-multilingual, qwen35-object_detection_vlm, qwen35-philosophy, qwen35-physics, qwen35-psychology, qwen35-science-reasoning-v1, qwen35-science_v15p2, qwen35-science_v15p3, qwen35-shakespeare, qwen35-smoke_hf, qwen35-smoke_v15p1, qwen35-socrates, qwen35-steve_jobs, qwen35-veterinary, qwen35-visual_reasoning
qwen2.5-7b/ (42)
Qwen2.5-7b-Med-REFL-LoraAdapter, qwen25-grpo-qed-optim-v1-hacked, qwen25-grpo-qed-optim-v3-stuck, qwen25-grpo-qed-optim-v4, qwen25-sft-warm-qed, qwen25-smiles-C, qwen25-smiles-cpt, qwen25-smiles-grpo, qwen25-smiles-midtrain, qwen25_bash_v1, qwen25_chem_v1_grpo, qwen25_chem_v1_s1, qwen25_chem_v1_s2, qwen25_cti_v1_grpo, qwen25_cti_v1_s1, qwen25_cti_v1_s2, qwen25_cypher_v1, qwen25_flowertune_med, qwen25_funccall_v1, qwen25_kernel_v2, qwen25_legal_v1_grpo, qwen25_legal_v1_s1, qwen25_legal_v1_s2, qwen25_mermaid_v1, qwen25_pii, qwen25_pii_v1, qwen25_quantum_v1_grpo, qwen25_quantum_v1_s1, qwen25_quantum_v1_s2, qwen25_query_rewriter, qwen25_query_rewriter_peft, qwen25_reasoning_v1, qwen25_regex_v1, qwen25_regex_v1_step50, qwen25_sparql_v1, qwen25_sql_v1, qwen25_terraform_v1_grpo, qwen25_terraform_v1_s1, qwen25_terraform_v1_s2, qwen25_theorem_v1_grpo, qwen25_theorem_v1_s1, qwen25_theorem_v1_s2
llama-3.1-8b/ (27)
llama-8B-ai-v2, llama-8B-ai-v2-low, llama-8B-ai-v2-med, llama-8B-alpaca-2k, llama-8B-chemistry, llama-8B-chemistry-v2, llama-8B-chemistry-v2-low, llama-8B-chemistry-v2-med, llama-8B-finance-alpaca, llama-8B-finance-v2, llama-8B-finance-v2-low, llama-8B-finance-v2-med, llama-8B-general-v2, llama-8B-general-v2-low, llama-8B-general-v2-med, llama-8B-gpt-ai, llama-8B-mcq-v1, llama-8B-medical-alpaca, llama-8B-medical-v2, llama-8B-medical-v2-low, llama-8B-medical-v2-med, llama8b_math_grpo_v1, llama8b_math_v1, llama8b_math_v2_fixed, llama8b_science_mcq_v1, llama8b_science_mcq_v2, llama8b_science_mcq_v3_fixed
qwen3/ (4)
qwen3-code, qwen3-math, qwen3-science, qwen3-writing
misc/ (2)
s1, s2
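Rather than transcribing the lists above, the adapter inventory can be recovered from the repo's file listing: any subfolder containing an adapter_config.json is a loadable PEFT adapter. A sketch, with the live huggingface_hub call left commented out and a hypothetical sample file list standing in for the real listing:

```python
def adapter_dirs(files):
    """Collect subfolders that contain an adapter_config.json (loadable PEFT adapters)."""
    return sorted({
        f.rsplit("/", 1)[0]
        for f in files
        if f.endswith("adapter_config.json") and "/" in f
    })

# Live listing (requires network access):
# from huggingface_hub import list_repo_files
# files = list_repo_files("pavan01729/adaptive-minds-loras")

# Hypothetical sample mirroring the repo layout described above:
files = [
    "README.md",
    "qwen2.5-7b/qwen25_sql_v1/adapter_config.json",
    "qwen2.5-7b/qwen25_sql_v1/adapter_model.safetensors",
    "llama-3.1-8b/llama-8B-mcq-v1/adapter_config.json",
]
print(adapter_dirs(files))  # ['llama-3.1-8b/llama-8B-mcq-v1', 'qwen2.5-7b/qwen25_sql_v1']
```

Each directory returned this way can be passed straight to the subfolder argument in the usage snippet above.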