ergotts committed
Commit 73822d8 · verified · 1 Parent(s): 82a051d

Update README.md

Files changed (1):
  1. README.md +32 -13
README.md CHANGED
@@ -9,34 +9,53 @@ base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
  license: other
  ---

- # Model Trained Using AutoTrain
-
- This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
+ # LoRA Model: Fine-Tuned on Argument to Logical Structure Dataset
+
+ Model Description
+
+ This model is a LoRA (Low-Rank Adaptation) fine-tuned version of the LLaMA 3.1 8B Instruct model. It has been specifically trained to convert natural language arguments into structured logical forms, including premises, conclusions, and formal symbolic proofs.
+
+ Training Details
+
+ • Base Model: LLaMA 3.1 8B Instruct
+ • LoRA Rank: 16
+ • LoRA Alpha: 32
+ • Epochs: 15
+ • Batch Size: 2
+ • Dataset: The model was trained on a custom dataset designed to map natural language arguments to logical structures. For more details, visit the dataset link: https://huggingface.co/datasets/ergotts/propositional-logic.

  # Usage

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel, PeftConfig
+
+ base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
+ adapter_model = "ergotts/llama_3.1_8b_prop_logic_ft"

- model_path = "PATH_TO_THIS_REPO"
-
- tokenizer = AutoTokenizer.from_pretrained(model_path)
- model = AutoModelForCausalLM.from_pretrained(
-     model_path,
-     device_map="auto",
-     torch_dtype='auto'
- ).eval()
+ model = AutoModelForCausalLM.from_pretrained(base_model)
+ model = PeftModel.from_pretrained(model, adapter_model)
+ tokenizer = AutoTokenizer.from_pretrained(base_model)
+
+ model = model.to("cuda")
+ model.eval()

- # Prompt content: "hi"
  messages = [
-     {"role": "user", "content": "hi"}
+     {"role": "user", "content": "Hi!"},
  ]

  input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
- output_ids = model.generate(input_ids.to('cuda'))
+ output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=1000)
  response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

  # Model response: "Hello! How can I assist you today?"
  print(response)
- ```
+
+ ```
+
+ # Model Trained Using AutoTrain
+
+ This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
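For readers who want to approximate this fine-tune themselves, the hyperparameters listed under Training Details map onto a PEFT LoRA setup roughly like the sketch below. This is an illustrative reconstruction, not the actual AutoTrain job: the target modules, dropout, and learning rate are assumptions, since the card does not state them.

```python
# Illustrative sketch only: the card says the model was trained with AutoTrain,
# so the exact recipe differs. Values marked "assumption" are not from the card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # Base Model (from the card)

lora_config = LoraConfig(
    r=16,                                  # LoRA Rank: 16 (from the card)
    lora_alpha=32,                         # LoRA Alpha: 32 (from the card)
    lora_dropout=0.05,                     # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained(base_model)
model = get_peft_model(model, lora_config)  # wraps the base model with trainable LoRA adapters
model.print_trainable_parameters()          # sanity check: only the LoRA weights should be trainable

training_args = TrainingArguments(
    output_dir="llama_3.1_8b_prop_logic_ft",
    num_train_epochs=15,                    # Epochs: 15 (from the card)
    per_device_train_batch_size=2,          # Batch Size: 2 (from the card)
    learning_rate=2e-4,                     # assumption
)
```

For inference-only deployments, the adapter loaded in the Usage example can also be folded into the base weights with PEFT's `merge_and_unload()`, which removes the adapter indirection at generation time.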