---
license: apache-2.0
datasets:
- laion/Anh
library_name: transformers
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
- multilingual
- instruct
- xglm
---

### Model description

This model is [`xglm-7.5b`](https://huggingface.co/facebook/xglm-7.5B) finetuned on the instruction dataset `cross_lingual.jsonl` from [`laion/Anh`](https://huggingface.co/datasets/laion/Anh).

### How to use

The anh-xglm-7.5b-cross-lingual model can be loaded and used via the following code:

```python
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "laion/anh-xglm-7.5b-cross-lingual",
)
tokenizer = AutoTokenizer.from_pretrained(
    "laion/anh-xglm-7.5b-cross-lingual",
)

# Whitespace is replaced by placeholder tokens before tokenization and mapped
# back after generation; '<n>' (newline) and '<w>' (space) are assumed here.
whitespace_tokens_map = {'\n': '<n>', ' ': '<w>'}

# Indonesian prompt: "What happened in the Battle of Cannae? Answer in Chinese."
text = "User: Apa yang terjadi pada pertempuran Cannae? Jawab dalam bahasa China.\n"
for k, v in whitespace_tokens_map.items():
    text = text.replace(k, v)

inputs = tokenizer(text, return_tensors="pt")
tokens = model.generate(**inputs)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)

# Undo the whitespace-token mapping in the generated text.
for v in whitespace_tokens_map.values():
    output = re.sub(rf"{v}\s+(\S+)", rf"{v}\1", output)
for k, v in whitespace_tokens_map.items():
    output = output.replace(v, k)
```
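The `model.generate(**inputs)` call above uses the model's default generation settings, which typically cap the output at a small number of new tokens. For longer or sampled answers you can pass explicit generation arguments; the following is a minimal sketch, and the parameter values are illustrative rather than taken from the original card:

```python
# Illustrative generation settings (not from the original card); adjust as needed.
tokens = model.generate(
    **inputs,
    max_new_tokens=256,  # allow a longer answer than the default cap
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
```

Apply the same whitespace post-processing shown above before printing `output`.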