---
library_name: transformers
license: apache-2.0
---

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** https://github.com/xbmxb/RAG-query-rewriting
- **Model type:** google/t5-large

### Downstream Use

```python
import os

import torch
from google.colab import userdata
from transformers import BitsAndBytesConfig, T5ForConditionalGeneration, T5Tokenizer

# Authenticate with the Hugging Face Hub (HF_TOKEN stored as a Colab secret)
os.environ["HF_TOKEN"] = userdata.get("HF_TOKEN")

# Load the model weights in 8-bit to reduce GPU memory usage
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = T5ForConditionalGeneration.from_pretrained(
    'catyung/t5l-turbo-hotpot-0331',
    quantization_config=quantization_config,
)
tokenizer = T5Tokenizer.from_pretrained('catyung/t5l-turbo-hotpot-0331')

# Query-rewriting prompt to be completed by the model
rewrite_prompt = "..."  # replace with your own prompt

input_ids = tokenizer(rewrite_prompt, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids, max_new_tokens=50)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
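Note: loading in 8-bit via `BitsAndBytesConfig(load_in_8bit=True)` requires the `bitsandbytes` package and typically a CUDA-capable GPU. On a CPU-only machine, drop the `quantization_config` argument and load the model in full precision instead.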