---
license: apache-2.0
datasets:
  - ustc-zyt/time-r1-data
language:
  - en
metrics:
  - mse
  - mae
base_model:
  - Qwen/Qwen2.5-7B
---

# 🧠 Time-R1 Reinforced Model Weights

These are the official reinforcement learning (RL) fine-tuned model checkpoints for the paper: "Time Series Forecasting as Reasoning: A Slow-Thinking Approach with Reinforced LLMs".


## 📦 Model Details

- **Base Model:** Qwen2.5-7B
- **Tuning Framework:** Verl + LLaMA Factory
- **Final Stage:** Trained with GRIP (Group-based Relative Importance Policy optimization); a generic sketch of the group-relative idea follows this list
- **Objective:** Multi-horizon time series forecasting with structured reasoning
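
The sketch below illustrates only the generic "group-based relative" scoring idea behind GRPO-style methods: each sampled forecast is rewarded relative to the other completions in its rollout group. It is an orientation aid, not GRIP itself; GRIP's specific importance-weighting scheme is defined in the paper.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Generic group-relative advantage: score each sampled completion
    against the mean and std of its own rollout group.

    rewards: (num_groups, group_size) scalar rewards per completion,
             e.g. derived from the forecast error (negative MSE/MAE).
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + 1e-6)

# Example: 2 prompts, 4 sampled forecasts each.
rewards = torch.tensor([[0.10, 0.40, 0.20, 0.30],
                        [0.90, 0.70, 0.80, 0.60]])
print(group_relative_advantages(rewards))
```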

## 📦 Files Included

The checkpoint follows the standard Hugging Face `transformers` layout, with weights sharded in the `safetensors` format.

```
Time-R1/
├── config.json
├── generation_config.json
├── model.safetensors.index.json
├── model-00001-of-00004.safetensors
├── model-00002-of-00004.safetensors
├── model-00003-of-00004.safetensors
├── model-00004-of-00004.safetensors
├── tokenizer_config.json
├── tokenizer.json
└── vocab.json
```

✅ Fully compatible with Hugging Face `transformers` and `AutoModelForCausalLM`.
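
As a minimal loading sketch (the repo id and prompt below are illustrative placeholders; the exact input/output format, including history window, horizon, and reasoning tags, follows the paper and training data):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ustc-zyt/Time-R1"  # assumed repo id; adjust to the actual checkpoint path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # loads the sharded safetensors weights in their stored precision
    device_map="auto",    # requires `accelerate`
)

# Illustrative prompt only; see the paper/dataset for the real template.
prompt = "Given the past 96 hourly readings ..., reason step by step and forecast the next 24 values."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```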