Overview
Marco-o1 not only focuses on disciplines with standard answers, such as mathematics, physics, and coding, which are well suited for reinforcement learning (RL), but also places greater emphasis on open-ended problem solving. We aim to address the question: "Can the o1 model effectively generalize to broader domains where clear standards are absent and rewards are challenging to quantify?"
The Marco-o1 Large Language Model (LLM) is currently powered by Chain-of-Thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), reflection mechanisms, and innovative reasoning strategies, and is optimized for complex real-world problem-solving tasks.
Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | Marco-o1-8b | `cortex run marco-o1:8b` |
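For example, the variant above can be fetched ahead of time and then started from a terminal. The snippet below is a minimal sketch; it assumes the Cortex CLI is installed and that your version provides the `pull` command alongside `run`.

```bash
# Download the weights for the 8b variant (tag taken from the table above)
cortex pull marco-o1:8b

# Start an interactive chat session with that variant
cortex run marco-o1:8b
```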
Use it with Jan (UI)
- Install Jan using the Quickstart guide
- Use it from the Jan Model Hub:
  `cortexhub/marco-o1`
Use it with Cortex (CLI)
- Install Cortex using the Quickstart guide
- Run the model with the command:
  `cortex run marco-o1`
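Beyond the interactive session, Cortex can also serve the model over a local OpenAI-compatible HTTP API, so it can be called from scripts. The request below is a minimal sketch rather than an official example; the `cortex start` command, the default port `39281`, and the `/v1/chat/completions` route are assumptions about Cortex's defaults and may differ in your installation.

```bash
# Start the local Cortex API server (assumed default port: 39281)
cortex start

# Send an open-ended reasoning prompt to Marco-o1 through the
# OpenAI-compatible chat completions endpoint
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "marco-o1",
        "messages": [
          {"role": "user", "content": "A farmer has 17 sheep; all but 9 run away. How many are left?"}
        ]
      }'
```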
Credits