Training procedure

The following bitsandbytes quantization config was used during training (an equivalent BitsAndBytesConfig is sketched after the list):

  • quant_method: QuantizationMethod.BITS_AND_BYTES
  • _load_in_8bit: False
  • _load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float16
  • bnb_4bit_quant_storage: uint8
  • load_in_4bit: True
  • load_in_8bit: False

Framework versions

  • PEFT 0.5.0
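
To reproduce this environment, installing the pinned PEFT release is enough; transformers and bitsandbytes versions are not recorded in this card, so pin those as needed:

pip install peft==0.5.0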

Loading and using the model

import json

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

BASE_MODEL = "meta-llama/Llama-2-13b-hf"

# Load the base model and apply the HeartDX-LM adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base_model, "CarDSLab/HeartDX-LM")

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

instruction = (
    "Convert the report given below to structured format for the columns "
    "'GLS%','IVSd','LVDiastolicFunction','AVStructure','AVStenosis','AVRegurg',"
    "'AIPHT','LVOTPkVel','LVOTPkGrad','MVStructure','MVStenosis','MVRegurgitation',"
    "'EF','LVWallThickness','AVPkVel(m/s)','AVMnGrad(mmHg)','AVAContVTI','AVAIndex'. "
    "Give the result in json format with key-value pairs. If any value for a key is "
    "not found in the data, use 'nan' to fill it up. Do not fill up data that is not "
    "present in the given report."
)

# tte_report should hold the free-text echocardiogram (TTE) report as a string.
prompt = f"###Instruction:\n{instruction + tte_report}\n\n###Response:\n"

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=2048)
result = pipe(prompt)

# Keep only the JSON object that follows the ###Response: marker.
result = result[0]["generated_text"].split("###Response:")[1].split("}")[0] + "}"

structured_data = json.loads(result)
print(structured_data)
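
As a quick sanity check, every requested column should appear as a key in the parsed output, with 'nan' for values absent from the report. The check below is illustrative and assumes the model followed the key list in the instruction.

expected_keys = [
    "GLS%", "IVSd", "LVDiastolicFunction", "AVStructure", "AVStenosis",
    "AVRegurg", "AIPHT", "LVOTPkVel", "LVOTPkGrad", "MVStructure",
    "MVStenosis", "MVRegurgitation", "EF", "LVWallThickness",
    "AVPkVel(m/s)", "AVMnGrad(mmHg)", "AVAContVTI", "AVAIndex",
]

# Report any columns the model omitted from its JSON response.
missing = [k for k in expected_keys if k not in structured_data]
if missing:
    print(f"Model omitted keys: {missing}")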
