---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

Dataset used: codeparrot/xlcost-text-to-code

GitHub: https://github.com/manishzed/LLM-Fine-tune

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "kr-manish/Mistral-7B-autotrain-text-python-vf1"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Example prompt: a natural-language description of the program to generate.
# input_text = "Maximum Prefix Sum possible by merging two given arrays | Python3 implementation of the above approach ; Stores the maximum prefix sum of the array A [ ] ; Traverse the array A [ ] ; Stores the maximum prefix sum of the array B [ ] ; Traverse the array B [ ] ;"
input_text = "Program to convert Centimeters to Pixels | Function to convert centimeters to pixels ; Driver Code"

# Tokenize the input text
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# Generate output text
output = model.generate(input_ids, max_length=1024, num_return_sequences=1, do_sample=True)

# Decode and print the output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

# Example output:
# Program to convert Centimeters to Pixels | Function to convert centimeters to pixels ; Driver Code [/INST] def cmToPixels ( cm ) : NEW_LINE INDENT return ( ( cm * 100 ) / 17 ) NEW_LINE DEDENT cm = 105.25 NEW_LINE print ( round ( cmToPixels ( cm ) , 3 ) ) NEW_LINE
```
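
The generated text follows the xlcost-text-to-code convention of encoding formatting with `NEW_LINE`, `INDENT`, and `DEDENT` markers. Below is a minimal post-processing sketch, assuming you want to turn that marker stream back into runnable Python; the `detokenize_xlcost` helper is a hypothetical heuristic added here for illustration and is not part of the model card or the dataset tooling.

```python
def detokenize_xlcost(text: str) -> str:
    """Heuristic: rebuild Python source from xlcost-style NEW_LINE/INDENT/DEDENT markers."""
    lines = []
    current = []
    indent = 0
    for tok in text.split():
        if tok == "NEW_LINE":
            lines.append("    " * indent + " ".join(current))
            current = []
        elif tok == "INDENT":
            indent += 1
        elif tok == "DEDENT":
            indent = max(0, indent - 1)
        else:
            current.append(tok)
    if current:
        lines.append("    " * indent + " ".join(current))
    return "\n".join(lines)


# Keep only the part after the instruct marker (the example output above contains "[/INST]"),
# then reconstruct the indentation.
code_part = generated_text.split("[/INST]")[-1]
print(detokenize_xlcost(code_part))
```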