compasszzn committed
Commit a168622 • 1 Parent(s): e2dafad

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -49,9 +49,9 @@ Our dataset and models are all available at Huggingface.
 |----|---------------------------------------------------------------|---------------------------------------------------------------------------|
 | 7B-LLaMA 2 | 🐙 [MathOctopus-Parallel-7B](https://huggingface.co/Mathoctopus/Parallel_7B) | 🐙 [MathOctopus-Cross-7B](https://huggingface.co/Mathoctopus/Cross_7B) |
 || 🐙[MathOctopus-Parallel-xRFT-7B](https://huggingface.co/Mathoctopus/Parallel_xRFT_7B)|🐙[MathOctopus-Cross-xRFT-7B](https://huggingface.co/Mathoctopus/Cross_xRFT_7B)|
-| 13B-LLaMA 2 | 🐙 [MathOctopus-Parallel-13B] | 🐙 [MathOctopus-Cross-13B] |
-|| 🐙[MathOctopus-Parallel-xRFT-13B](https://huggingface.co/Mathoctopus/Parallel_xRFT_13B/tree/main)|🐙[MathOctopus-Cross-xRFT-13B]|
-| 33B-LLaMA 1 | 🐙 [MathOctopus-Parallel-33B] | 🐙 [MathOctopus-Cross-33B] |
+| 13B-LLaMA 2 | 🐙 [MathOctopus-Parallel-13B](https://huggingface.co/Mathoctopus/Parallel_13B) | 🐙 [MathOctopus-Cross-13B](https://huggingface.co/Mathoctopus/Cross_13B) |
+|| 🐙[MathOctopus-Parallel-xRFT-13B](https://huggingface.co/Mathoctopus/Parallel_xRFT_13B)|🐙[MathOctopus-Cross-xRFT-13B]|
+| 33B-LLaMA 1 | 🐙 [MathOctopus-Parallel-33B](https://huggingface.co/Mathoctopus/Parallel_33B) | 🐙 [MathOctopus-Cross-33B] |
 | 70B-LLaMA 2 | Coming soon! | Coming Soon! |
 
 *-Parallel refers to our model trained with the parallel-training strategy.
@@ -120,4 +120,4 @@ Our dataset and models are all available at Huggingface.
 | MathOctopus<sup>P</sup>-33B | 56.0 | 52.5 |
 | MathOctopus<sup>C</sup>-33B | 53.7 | 51.5 |
 ## Intended Uses
-These models are trained for research purposes. They are designed to solve multilingual math problems. They can be used in educational software, tutoring systems, or any application where a solution to a math problem is needed.
+These models are trained for research purposes. They are designed to solve multilingual math problems. They can be used in educational software, tutoring systems, or any application where a solution to a math problem is needed.