Model is still under development.
How to Get Started with the Model
https://colab.research.google.com/drive/1eFzHy6eARkhgy4Uxyre5KsdRbQII50jx?usp=sharing
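In addition to the Colab notebook above, here is a minimal local-loading sketch. It assumes the adapter is this repository (Elkhayyat17/PEFT-Llama-2-7b-chat-MedText) and that the base model is meta-llama/Llama-2-7b-chat-hf, which is gated on the Hub; neither choice is confirmed by the card itself.

```python
# Minimal sketch: load the base Llama-2 chat model and apply this PEFT adapter.
# Assumed ids (not confirmed by the card): base = meta-llama/Llama-2-7b-chat-hf,
# adapter = Elkhayyat17/PEFT-Llama-2-7b-chat-MedText.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "Elkhayyat17/PEFT-Llama-2-7b-chat-MedText"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "What are the common symptoms of anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```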
GGUF version
https://huggingface.co/Elkhayyat17/llama2medical-GGUF
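The GGUF export can be run with llama.cpp-compatible tooling. The sketch below uses llama-cpp-python; the exact .gguf filename and quantization level inside the repository are assumptions, so adjust the pattern to the file you want.

```python
# Sketch: run the GGUF export with llama-cpp-python.
# The .gguf filename inside Elkhayyat17/llama2medical-GGUF is an assumption;
# replace the glob with the specific quantization file you downloaded.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Elkhayyat17/llama2medical-GGUF",
    filename="*.gguf",
    n_ctx=4096,
)
out = llm("What are the common symptoms of anemia?", max_tokens=200)
print(out["choices"][0]["text"])
```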
Merged version
https://huggingface.co/Elkhayyat17/merge-PEFT-Llama-2-7b-MedText
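Because the merged repository already folds the adapter weights into the base model, it can be loaded as a standalone transformers model without peft. A minimal sketch, assuming the standard Auto classes apply:

```python
# Sketch: the merged checkpoint loads like any regular causal LM, no peft needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

merged_id = "Elkhayyat17/merge-PEFT-Llama-2-7b-MedText"
tokenizer = AutoTokenizer.from_pretrained(merged_id)
model = AutoModelForCausalLM.from_pretrained(
    merged_id, torch_dtype=torch.float16, device_map="auto"
)
```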
Training procedure
The following bitsandbytes quantization config was used during training (an equivalent BitsAndBytesConfig sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
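For reference, here is a sketch of the transformers BitsAndBytesConfig that reproduces the settings listed above, assuming the base model was loaded this way for QLoRA-style fine-tuning (the card does not show the actual training script):

```python
# Sketch: BitsAndBytesConfig matching the listed 4-bit NF4 settings.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # load_in_4bit: True
    bnb_4bit_quant_type="nf4",               # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,          # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.float16,    # bnb_4bit_compute_dtype: float16
)
# Pass as: AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```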
Framework versions
- PEFT 0.6.0.dev0