uf-aice-lab committed
Commit 1ea118a
Parent: 1fe67bf

Rename README (1).md to README.md

Files changed (1): README (1).md → README.md (renamed, +4 −4)
@@ -4,19 +4,19 @@ language:
 - en
 pipeline_tag: question-answering
 ---
-# Llama-mt-lora
+# Llama-2-Qlora
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-This model is fine-tuned from LLaMA on 8 Nvidia A100-80G GPUs, using 3,000,000 groups of mathematics conversations between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help build math conversational AI that can generate effective responses in a mathematical context.
+This model is fine-tuned from LLaMA-2 on 8 Nvidia A100-80G GPUs, using 3,000,000 groups of mathematics conversations between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help build math conversational AI that can generate effective responses in a mathematical context.
 ### Here is how to use it with text in HuggingFace
 ```python
 import torch
 import transformers
 from transformers import LlamaTokenizer, LlamaForCausalLM
-tokenizer = LlamaTokenizer.from_pretrained("Fan21/Llama-mt-lora")
+tokenizer = LlamaTokenizer.from_pretrained("uf-aice-lab/Llama-2-QLoRA")
 model = LlamaForCausalLM.from_pretrained(
-    "Fan21/Llama-mt-lora",
+    "uf-aice-lab/Llama-2-QLoRA",
     load_in_8bit=False,
     torch_dtype=torch.float16,
     device_map="auto",