---
language: vi
tags:
- vi
- vietnamese
- gpt2
- text-generation
- lm
- nlp
datasets:
- oscar
widget:
- text: "Câu trả lời cho câu hỏi tối thượng về sự sống, vũ trụ và vạn vật là"
---

# How to use the model

~~~~
from transformers import GPT2Tokenizer, AutoModelForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("nhanv/vi-gpt2")
model = AutoModelForCausalLM.from_pretrained("nhanv/vi-gpt2")
~~~~

# Model architecture

A 12-layer transformer-based language model with a hidden size of 768.
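
# Example generation

The loading snippet above only instantiates the tokenizer and model; the sketch below shows one way to generate text with them. The prompt string and sampling parameters are illustrative assumptions, not values from the original card.

~~~~
import torch
from transformers import GPT2Tokenizer, AutoModelForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("nhanv/vi-gpt2")
model = AutoModelForCausalLM.from_pretrained("nhanv/vi-gpt2")
model.eval()

prompt = "Hà Nội là"  # hypothetical prompt; any Vietnamese text works
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=50,        # illustrative length limit
        do_sample=True,           # sample instead of greedy decoding
        top_p=0.95,               # nucleus sampling cutoff (assumed)
        temperature=0.8,          # softens the distribution (assumed)
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
~~~~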