# AI Job Navigator Model
This is a distilgpt2 model fine-tuned with LoRA on a small dataset of career advice for the AI industry in 2025.
## Model Details
- Base Model: distilgpt2
- Fine-tuning Method: LoRA
- Training Data: 16 samples (148,668 characters) of AI industry career advice
- Training Steps: 10 steps, 5 epochs
- Training Loss: ~9.67 to 11.61
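For context on the fine-tuning method listed above: LoRA freezes the base model's weights and learns a small low-rank correction on top of them. The numeric sketch below illustrates the idea in plain Python; the rank `r = 8` and scaling `alpha = 16` are hypothetical defaults for illustration, not the values actually used to train this adapter.

```python
# Illustrative sketch of the LoRA update (not this repo's training code):
# a frozen weight W (d x k) gets a trainable low-rank correction
# W' = W + (alpha / r) * B @ A, with B (d x r), A (r x k), and r << min(d, k).

def matmul(X, Y):
    # plain-Python matrix multiply, enough for a demonstration
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d = k = 768          # hidden size of distilgpt2
r, alpha = 8, 16     # hypothetical LoRA rank and scaling (assumptions)

W = [[0.0] * k for _ in range(d)]   # frozen base weight (zeros for clarity)
B = [[1.0] * r for _ in range(d)]   # trainable factor, d x r
A = [[1.0] * k for _ in range(r)]   # trainable factor, r x k

delta = matmul(B, A)                # rank-r update
scale = alpha / r
W_adapted = [[W[i][j] + scale * delta[i][j] for j in range(k)]
             for i in range(d)]

full_params = d * k                 # 589,824 if W were trained directly
lora_params = r * (d + k)           # 12,288 trainable parameters instead
print(full_params, lora_params, W_adapted[0][0])
```

With these sizes, LoRA trains roughly 2% of the parameters a full update of the same matrix would need, which is why it suits very small datasets like the 16 samples here.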
## Usage
You can load and use the model as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen distilgpt2 base model, then attach the LoRA adapter weights.
base_model = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("jinv2/ai-job-navigator-model")
model = PeftModel.from_pretrained(base_model, "jinv2/ai-job-navigator-model")

# "Based on the latest AI industry trends, give career advice for 2025:"
prompt = "根据最新的AI行业趋势,提供2025年的职业建议:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
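The sampling flags passed to `generate` each prune the next-token distribution: `temperature=0.7` sharpens it, `top_k=50` keeps only the 50 most likely tokens, and `top_p=0.9` keeps the smallest set whose probability mass reaches 0.9. A simplified pure-Python sketch of that filtering (not the `transformers` internals) over a toy four-token vocabulary:

```python
import math

def sample_filter(logits, temperature=0.7, top_k=50, top_p=0.9):
    """Simplified illustration of do_sample filtering (not the library code)."""
    # 1. Temperature: rescale logits before softmax; < 1 sharpens the distribution.
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    # 2. Top-k: keep only the k highest-probability tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:top_k]
    # 3. Top-p: keep the smallest prefix of those whose cumulative mass >= top_p.
    cum, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise over the surviving tokens and return (token_index, prob) pairs.
    mass = sum(probs[i] for i in nucleus)
    return [(i, probs[i] / mass) for i in nucleus]

# Toy vocabulary of 4 tokens where one token dominates: only it survives.
filtered = sample_filter([5.0, 2.0, 1.0, 0.0])
print(filtered)
```

Lowering `top_p` or `temperature` makes the output more conservative; raising them increases diversity, which matters for a model trained on only 16 samples.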