---
language:
- en
pipeline_tag: text-generation
---
|
This is Llama 2 7B fine-tuned with QLoRA, using bf16 as the compute dtype. The dataset was generated with the OpenAI API, with samples oriented toward abstract explanations of system design.
|
|
|
|
|
The LoRA adapter has been merged into the base model. Training ran for 3 epochs with a batch size of 16.
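The training setup described above can be sketched with the `peft` and `transformers` libraries. Only the stated facts (4-bit QLoRA, bf16 compute dtype, 3 epochs, batch size 16) come from this card; the LoRA rank, alpha, target modules, and quantization type below are illustrative assumptions, not the actual training configuration.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization with bf16 compute dtype, as stated in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",  # assumed quantization type
)

# LoRA hyperparameters below are assumptions for illustration only
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# 3 epochs and batch size 16, as stated in the card
training_args = TrainingArguments(
    output_dir="out",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

# After training, the adapter can be merged back into the base weights:
# merged_model = peft_model.merge_and_unload()
```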
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "SaffalPoosh/system_design_expert"

# Load the merged model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

prompt = "Design an application like WhatsApp with the tech stack you will use"

gen = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = gen(prompt)
print(result[0]["generated_text"])
```