Sourabh2 committed on
Commit 39333b6
1 Parent(s): de39331

Create README.md

Files changed (1)
1. README.md +30 -0
README.md ADDED

## How to Load and Use the Model

To use the model:

1. Install the required libraries, `torch` and `transformers` (a quick dependency check is shown right after this list).
2. Load the model and run a sample query with the Python code further below.
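
The exact install command depends on your environment; with pip, `pip install torch transformers` is the usual choice. The short sketch below only verifies that both libraries are importable and whether a GPU is visible; it is illustrative and not part of the repository:

```python
# Quick dependency check (illustrative; not part of the model repository)
import torch
import transformers

print("torch version:", torch.__version__)
print("transformers version:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```

With the dependencies in place, the following code loads the model and generates a response: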
```python
# Load model directly
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sourabh2/Chemistry_elements", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Sourabh2/Chemistry_elements", trust_remote_code=True)

# Set up the device (GPU if available, otherwise CPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Example usage: ask about an element
messages = [
    {
        "role": "user",
        "content": "hydrogen"
    }
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize the prompt and move the tensors to the same device as the model
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(device)

# Generate and decode the response, keeping only the assistant's reply
outputs = model.generate(**inputs, max_length=150, num_return_sequences=1)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text.split("assistant")[1])

# Expected output (approximately):
# ['Symbol: H', 'Atomic_Number: 1', 'Atomic_Weight: 1.008', 'Density: 0.0899', 'Melting_Point: 14.01', 'Boiling_Point: 20.28', 'Phase: Gas', 'Absolute_Melting_Point: 14.01']
```
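
Since the reply comes back as a Python-style list of `'Key: value'` strings (as in the expected output above), you may want to convert it into a dictionary for further processing. The sketch below is illustrative, not part of the repository, and assumes the model keeps that exact list format; the helper name `properties_to_dict` is hypothetical:

```python
import ast

def properties_to_dict(reply: str) -> dict:
    """Turn a generated "['Symbol: H', 'Atomic_Number: 1', ...]" reply into a dict."""
    items = ast.literal_eval(reply.strip())  # parse the list literal into Python strings
    props = {}
    for item in items:
        key, _, value = item.partition(": ")
        props[key] = value
    return props

# Example with a reply in the format shown above
reply = "['Symbol: H', 'Atomic_Number: 1', 'Atomic_Weight: 1.008', 'Phase: Gas']"
print(properties_to_dict(reply))
# {'Symbol': 'H', 'Atomic_Number': '1', 'Atomic_Weight': '1.008', 'Phase': 'Gas'}
```

If the model's formatting drifts (for example, extra text around the list), `ast.literal_eval` will raise an error, so wrap the call in a `try`/`except` if you need robustness.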