igitman committed
Commit 26bc63f
1 Parent(s): fd999a9

Update README.md

Files changed (1): README.md (+33 -4)
README.md CHANGED
@@ -34,13 +34,42 @@ The pipeline we used to produce the data and models is fully open-sourced!
  - [Models](https://huggingface.co/collections/nvidia/openmath-2-66fb142317d86400783d2c7b)
  - [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2)

+ See our [paper](https://arxiv.org/abs/2410.01560) for more details!

  # How to use the models?

- Our models are fully compatible with Llama3.1-instruct format, so you should be able to just replace an existing Llama3.1 checkpoint and use it in the same way.
- Please note that these models have not been instruction tuned and might not provide good answers outside of math domain.
-
- If you don't know how to use Llama3.1 models, we provide convenient [instructions in our repo](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md).
+ Our models are trained with the same "chat format" as the Llama3.1-instruct models (same system/user/assistant tokens).
+ Please note that these models have not been instruction tuned on general data and thus might not provide good answers outside of the math domain.
+
+ We recommend using the [instructions in our repo](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) to run inference with these models, but here is
+ an example of how to do it through the Transformers API:
+
+ ```python
+ import transformers
+ import torch
+
+ model_id = "nvidia/OpenMath2-Llama3.1-70B"
+
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device_map="auto",
+ )
+
+ messages = [
+     {
+         "role": "user",
+         "content": "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{}.\n\n" +
+                    "What is the minimum value of $a^2+6a-7$?"},
+ ]
+
+ outputs = pipeline(
+     messages,
+     max_new_tokens=4096,
+ )
+ print(outputs[0]["generated_text"][-1]["content"])
+ ```

  # Reproducing our results
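
The pipeline call in the added example hides the chat formatting. For reference, here is a minimal sketch of the same generation through `AutoTokenizer.apply_chat_template`, which makes the Llama3.1 system/user/assistant tokens explicit. This is not from the commit: it assumes the checkpoint ships Llama3.1's chat template (as the README's "same chat format" claim suggests), and the `\boxed{}` extraction at the end is an illustrative helper, not something taken from the repo.

```python
# A minimal sketch, assuming the checkpoint ships the Llama3.1 chat template
# (the README states the models use the same system/user/assistant tokens).
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenMath2-Llama3.1-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": "Solve the following math problem. Make sure to put the answer "
                   "(and only answer) inside \\boxed{}.\n\n"
                   "What is the minimum value of $a^2+6a-7$?",
    },
]

# apply_chat_template wraps each turn in the Llama3.1 special tokens and
# appends the assistant header so generation starts a fresh reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=4096)
# Decode only the newly generated tokens, skipping the prompt.
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)

# The prompt asks for the final answer inside \boxed{}; a simple regex pulls
# it out (illustrative only; it will not handle nested braces).
match = re.search(r"\\boxed\{([^{}]*)\}", reply)
if match:
    print("Answer:", match.group(1))
```

Both paths should build the same Llama3.1 prompt string, so the decoded reply should match what the pipeline example prints; the pipeline form is simply the shorter route.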