---
license: apache-2.0
language:
- en
- ja
tags:
- finetuned
library_name: transformers
pipeline_tag: text-generation
---
<img src="./veteus_logo.svg" width="100%" height="20%" alt="">
# Our Models
- [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1)
- [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1)
- [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW)
- [Ninja-v1-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k)
- [Ninja-v1-NSFW-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k)
## This is a prototype of Vecteus-v1
## Model Card for VecTeus-Constant
VecTeus-Constant is a Mistral-7B-based large language model (LLM): a version of Mistral-7B-v0.1 fine-tuned on a dataset of novels.
VecTeus offers the following improvements over Mistral-7B-v0.1:
- High-quality generation in both Japanese and English
- Can generate NSFW content
- Retains earlier context without forgetting, even during long-context generation
This model was created with the help of GPUs provided at the first LocalAI hackathon.
We would like to take this opportunity to thank everyone involved.
## List of Creation Methods
- Chat Vector applied to multiple models (see the conceptual sketch after this list)
- Simple linear merging of the resulting models
- Domain and sentence enhancement with LoRA
- Context expansion
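
As a rough illustration of the first two steps: a Chat Vector is the per-tensor weight difference between an instruction-tuned model and its base; adding that difference transfers the tuned behavior to another model of the same architecture, and the resulting models can then be averaged. The sketch below is conceptual only: the choice of donor models and the merge ratios are illustrative assumptions, not the actual recipe behind Vecteus-Constant.

```python
# Conceptual sketch of the Chat Vector + linear-merge recipe above.
# Donor models and ratios are illustrative guesses, not the real recipe.
# Note: this loads several 7B models into RAM at once; it is a sketch,
# not an optimized merging pipeline.
import torch
from transformers import AutoModelForCausalLM

def weights(model_id):
    # Load a model on CPU in fp16 and return its state dict.
    return AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).state_dict()

base = weights("mistralai/Mistral-7B-v0.1")
instruct = weights("mistralai/Mistral-7B-Instruct-v0.1")

# Chat Vector: per-tensor delta between an instruction-tuned model and its base.
chat_vector = {k: instruct[k] - base[k] for k in base}

# Add the Chat Vector to two donor models (both Mistral-7B derivatives).
donor_a = weights("Local-Novel-LLM-project/Ninja-v1")
donor_b = weights("Local-Novel-LLM-project/Ninja-v1-NSFW")
result_a = {k: donor_a[k] + 0.8 * chat_vector[k] for k in donor_a}  # 0.8 is a guessed ratio
result_b = {k: donor_b[k] + 0.8 * chat_vector[k] for k in donor_b}

# Simple linear merge: a per-tensor weighted average of the resulting models.
merged = {k: 0.5 * result_a[k] + 0.5 * result_b[k] for k in result_a}
```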
## Instruction format
This model is not bound to any prompt template: plain prompts work as-is.
## Example prompts to improve output (Japanese)
- BAD: あなたは○○として振る舞います ("You will behave as ○○")
- GOOD: あなたは○○です ("You are ○○")
- BAD: あなたは○○ができます ("You can do ○○")
- GOOD: あなたは○○をします ("You do ○○")
## Performing inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Local-Novel-LLM-project/Vecteus-Constant"
new_tokens = 1024

# flash_attention_2 requires the flash-attn package; drop the argument if unavailable.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Template-free prompting: the system instruction and the user prompt are
# separated by a simple "--------" delimiter.
system_prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- "
prompt = input("Enter a prompt: ")
system_prompt += prompt + "\n-------- "

model_inputs = tokenizer([system_prompt], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=new_tokens, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
## Other points to keep in mind
- The training data may be biased; review generated text carefully.
- Memory usage can be high for long generations.
- If possible, we recommend running inference with llama.cpp rather than Transformers, as sketched below.
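
As a minimal sketch of the llama.cpp route, the llama-cpp-python bindings can run a GGUF version of the model. The filename below is a hypothetical placeholder: it assumes you have first converted and quantized the weights with llama.cpp's conversion tools.

```python
# Minimal sketch using the llama-cpp-python bindings.
# "vecteus-constant.Q4_K_M.gguf" is a hypothetical filename: convert this
# model to GGUF with llama.cpp's conversion tools before running this.
from llama_cpp import Llama

llm = Llama(
    model_path="./vecteus-constant.Q4_K_M.gguf",
    n_ctx=4096,        # context window; raise it for longer generations
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Same template-free prompt style as the Transformers example above.
prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- 夏の海辺の物語\n-------- "
out = llm(prompt, max_tokens=1024, temperature=0.8)
print(out["choices"][0]["text"])
```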