---
license: mit
pipeline_tag: text-generation
tags:
  - ocean
  - text-generation-inference
  - oceangpt
language:
  - en
datasets:
  - zjunlp/OceanBench
---

## 💡 Model description

This repo contains a large language model (OceanGPT) for ocean science tasks trained with [KnowLM](https://github.com/zjunlp/KnowLM). Note that OceanGPT is constantly being updated, so the current model is not the final version.

OceanGPT-14B is based on Qwen1.5-14B and trained on a bilingual dataset in Chinese and English.

## 🔍 Intended uses

You can download the model to generate responses, or contact us by [email](bizhen_zju@zju.edu.cn) for the online test demo.

## 🛠️ How to use OceanGPT

We will provide several examples soon, and you can modify the input according to your needs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "zjunlp/OceanGPT-14B-v0.1",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("zjunlp/OceanGPT-14B-v0.1")

prompt = "Which is the largest ocean in the world?"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Format the conversation with the model's chat template and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated answer remains
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## 🛠️ How to evaluate your model in OceanBench

We will provide several examples soon, and you can modify the input according to your needs.

*Note: We are conducting the final checks on OceanBench and will upload it to Hugging Face soon.*

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zjunlp/OceanBench")
```

## 📚 How to cite

```bibtex
@article{bi2023oceangpt,
  title={OceanGPT: A Large Language Model for Ocean Science Tasks},
  author={Bi, Zhen and Zhang, Ningyu and Xue, Yida and Ou, Yixin and Ji, Daxiong and Zheng, Guozhou and Chen, Huajun},
  journal={arXiv preprint arXiv:2310.02031},
  year={2023}
}
```
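
## 🧪 Example: generating answers for OceanBench samples

Until the official evaluation examples above are released, the sketch below shows one way to combine the generation code with the OceanBench loader. The split name (`test`) and field name (`question`) are placeholder assumptions, since OceanBench has not been uploaded yet; check the dataset card for the actual schema once it is available.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "zjunlp/OceanGPT-14B-v0.1", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("zjunlp/OceanGPT-14B-v0.1")

# Hypothetical split name; confirm against the released dataset card.
dataset = load_dataset("zjunlp/OceanBench", split="test")

predictions = []
for sample in dataset.select(range(5)):  # try a handful of samples first
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": sample["question"]},  # assumed field name
    ]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)

    # Keep only the newly generated tokens for each prompt
    generated_ids = [
        output_ids[len(input_ids):]
        for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    predictions.append(
        tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    )
```

The collected `predictions` can then be compared against the benchmark's reference answers with whatever metric your evaluation setup uses.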