Crystalcareai committed
Commit 264696f
1 Parent(s): ee0a4d8

Update README.md

Files changed (1)
  1. README.md +34 -2
README.md CHANGED
@@ -63,7 +63,39 @@ Llama-3-SEC has been trained using the llama3 chat template, which allows for ef
 
 To run inference with the Llama-3-SEC model using the llama3 chat template, use the following code:
 
-<chat_example>
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+device = "cuda"
+
+model = AutoModelForCausalLM.from_pretrained(
+    "arcee-ai/Llama-3-SEC",
+    torch_dtype="auto",
+    device_map="auto"
+)
+tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Llama-3-SEC")
+
+prompt = "What are the key regulatory considerations for a company planning to conduct an initial public offering (IPO) in the United States?"
+messages = [
+    {"role": "system", "content": "You are an expert financial assistant - specializing in governance and regulatory domains."},
+    {"role": "user", "content": prompt}
+]
+text = tokenizer.apply_chat_template(
+    messages,
+    tokenize=False,
+    add_generation_prompt=True
+)
+model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+generated_ids = model.generate(
+    model_inputs.input_ids,
+    max_new_tokens=512
+)
+generated_ids = [
+    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+]
+
+response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+```
 
 ## Limitations and Future Work
 
@@ -71,7 +103,7 @@ This release represents the initial checkpoint of the Llama-3-SEC model, trained
 
 ## Usage
 
-To use the Llama-3-SEC model, please refer to the detailed instructions provided in the repository. The model is available for both commercial and non-commercial use under the Llama-3 license. We encourage users to explore the model's capabilities and provide feedback to help us continuously improve its performance and usability.
+The model is available for both commercial and non-commercial use under the Llama-3 license. We encourage users to explore the model's capabilities and provide feedback to help us continuously improve its performance and usability. For more information, please see our detailed blog on Llama-3-SEC.
 
 ## Citation
 
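For readers unfamiliar with the llama3 chat template, the `apply_chat_template` call in the snippet above renders the message list into a single prompt string wrapped in Llama 3's special header tokens. The sketch below shows roughly what that string looks like for the two messages used in the snippet; the exact rendering is defined by the tokenizer's `chat_template`, so treat the token layout here as an illustrative assumption rather than authoritative output.

```python
# Rough sketch of the prompt string that apply_chat_template(messages,
# tokenize=False, add_generation_prompt=True) is expected to produce under
# the standard llama3 chat template. Illustrative only: the tokenizer's
# chat_template is the source of truth for the exact layout.
system = "You are an expert financial assistant - specializing in governance and regulatory domains."
user = (
    "What are the key regulatory considerations for a company planning to "
    "conduct an initial public offering (IPO) in the United States?"
)

expected_prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>system<|end_header_id|>\n\n"
    + system
    + "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    + user
    # add_generation_prompt=True appends an open assistant header, so the
    # model continues directly as the assistant's reply.
    + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(expected_prompt)
```

Because the rendered string ends at the open assistant header, `model.generate` continues as the assistant's turn; the slicing step in the snippet then strips the prompt tokens so only the newly generated reply is decoded.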