
Because Qwen filtered out h-content knowledge during pretraining, the fix is not to compensate for it during instruction fine-tuning, but to restore it through continued pretraining. This LoRA was trained on the Qwen base model (not Instruct) and is applicable to qwen-instruct, deepseek-r1-distill-qwen, tifa-deepsex, and similar models. It is recommended to use it together with an abliterated LoRA.

QwQ is not compatible with the existing LoRA; a separately retrained version is required.

Format: GGUF · Model size: 40.4M params · Architecture: qwen2