Teja-Gollapudi committed on
Commit
6f37e36
1 Parent(s): 2adf457

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -59,7 +59,7 @@ Using Alpaca prompt template might generate better outputs for certain prompts a
 import torch
 from transformers import pipeline
 
-dtype = torch.float16 # options are torch.bfloat16, torch.float32
+dtype = torch.float16 # options are torch.float16, torch.bfloat16, torch.float32
 model = pipeline(model="VMware/flan-ul2-alpaca-lora",device_map = 'auto',torch_dtype=dtype )
 
 prompt_template = """
@@ -94,5 +94,3 @@ The model is based on a large and diverse dataset, but it may still have limitat
 
 
 In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
-
-# Contribution
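For reference, a minimal sketch of how the snippet touched by this commit is used end to end. Only the `dtype` and `pipeline(...)` lines come from the diff itself; the body of the Alpaca-style prompt template and the example instruction are assumptions filled in for illustration, since the hunk truncates at `prompt_template = """`.

```python
import torch
from transformers import pipeline

# torch.float16 as in the updated comment; torch.bfloat16 and torch.float32 are the other options.
dtype = torch.float16

# Loads VMware/flan-ul2-alpaca-lora as a text2text-generation pipeline,
# placing weights across available devices.
model = pipeline(model="VMware/flan-ul2-alpaca-lora", device_map="auto", torch_dtype=dtype)

# Assumed Alpaca-style template; the diff cuts off here, so this body is illustrative.
prompt_template = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:"""

# Hypothetical example instruction.
prompt = prompt_template.format(instruction="Summarize the plot of Hamlet in two sentences.")
output = model(prompt, max_length=256)
print(output[0]["generated_text"])
```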