Training procedure

The following bitsandbytes quantization config was used during training:

  • quant_method: bitsandbytes
  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
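The list above maps directly onto the `BitsAndBytesConfig` class from transformers. A minimal sketch of reconstructing it (the exact call used during training is not shown in this card, but the field names match the transformers API):

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch: the 4-bit NF4 quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 data type
    bnb_4bit_use_double_quant=True,       # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls run in bfloat16
)
```

Double quantization saves roughly 0.4 bits per parameter on top of the 4-bit weights, which is why it is commonly enabled for QLoRA-style fine-tuning.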

Framework versions

  • PEFT 0.5.0.dev0
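Since this repository holds a PEFT adapter rather than full model weights, it is loaded on top of a base model. A sketch, assuming the base model is `tiiuae/falcon-7b` (an assumption inferred from the model name; check `adapter_config.json` for the actual `base_model_name_or_path`):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Quantization config matching the one listed in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumption: base model id; verify against adapter_config.json.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the fine-tuned adapter from this repository.
model = PeftModel.from_pretrained(base, "shuvom/falcon-med-FT-v1.114")
```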

Citation

@misc{shuvam_mandal_2023,
  author    = {shuvam mandal},
  title     = {falcon-med-FT-v1.114 (Revision 87e4f8f)},
  year      = 2023,
  url       = {https://huggingface.co/shuvom/falcon-med-FT-v1.114},
  doi       = {10.57967/hf/1012},
  publisher = {Hugging Face}
}

