lewtun committed
Commit 6467693
1 Parent(s): 005033e

Update README.md

Files changed (1): README.md (+2 -2)

README.md CHANGED
@@ -70,7 +70,7 @@ prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_genera
 gen_config = {
     "max_new_tokens": 1024,
     "do_sample": False,
-    "stop_strings": ["```output"],
+    "stop_strings": ["```output"],  # Generate until Python code block is complete
     "tokenizer": pipe.tokenizer,
 }

@@ -84,7 +84,7 @@ python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
 exec(python_code)
 ```

-In practice you will want to repeat the
+The above executes a single step of Python code - for more complex problems, you will want to run the logic for several steps to obtain the final solution.

 ## Bias, Risks, and Limitations

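The multi-step loop that the new sentence refers to is not spelled out in the diff; a minimal sketch is given below, assuming the `pipe`, `prompt`, and `gen_config` objects from the README snippet shown in the diff context above. The step cap (`max_steps`), the use of `redirect_stdout` to capture printed results, and the way the captured output is appended back into the prompt are illustrative assumptions, not the repository's reference harness.

```python
import re
from contextlib import redirect_stdout
from io import StringIO

# Assumes `pipe`, `prompt`, and `gen_config` are defined as in the README
# snippet shown in the diff above.
max_steps = 4  # assumed cap on generate/execute rounds
text = prompt
executed = 0
for _ in range(max_steps):
    # Generate until the model opens a ```output block (or finishes).
    text = pipe(text, **gen_config)[0]["generated_text"]
    code_blocks = re.findall(r"```python(.*?)```", text, re.DOTALL)
    if len(code_blocks) <= executed:
        break  # no new code block was produced: treat this as the final answer
    # Execute the newest code block and capture anything it prints.
    buffer = StringIO()
    with redirect_stdout(buffer):
        exec(code_blocks[-1])
    executed = len(code_blocks)
    # Close the output block with the captured result so the next round
    # can condition on it.
    text += "\n" + buffer.getvalue().strip() + "\n```\n"

print(text)
```

In practice you would also want to sandbox the `exec` call and handle exceptions raised by the generated code.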