---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- role-play
- fine-tuned
- qwen2.5
base_model:
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
---
![Oxy 1 Small](https://cdn-uploads.huggingface.co/production/uploads/63c2d8376e6561b339d998b9/fX1qGkR-1BC1EV_sRkO_9.png)
## Introduction
**Oxy 1 Small** is a fine-tuned version of the [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) language model, specialized for **role-play** scenarios. It delivers strong performance in generating engaging dialogues and interactive storytelling.
Developed by **Oxygen (oxyapi)**, with contributions from **TornadoSoftwares**, Oxy 1 Small aims to provide an accessible and efficient language model for creative and immersive role-play experiences.
## Model Details
- **Model Name**: Oxy 1 Small
- **Model ID**: [oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small)
- **Base Model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Model Type**: Chat Completions
- **License**: Apache-2.0
- **Language**: English
- **Tokenizer**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Max Input Tokens**: 32,768
- **Max Output Tokens**: 8,192 (see the length-handling sketch after this list)
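Because the model enforces these limits, long role-play histories should be truncated on the way in and generation capped on the way out. A minimal sketch using standard Transformers tokenization; the constants simply mirror the limits listed above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MAX_INPUT_TOKENS = 32_768   # input context window listed above
MAX_OUTPUT_TOKENS = 8_192   # maximum completion length listed above

tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-small")
model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-small")

long_prompt = "..."  # placeholder for a long role-play history

# Truncate the prompt so it never exceeds the input window.
inputs = tokenizer(long_prompt, return_tensors="pt",
                   truncation=True, max_length=MAX_INPUT_TOKENS)

# Cap the number of newly generated tokens at the output limit.
outputs = model.generate(**inputs, max_new_tokens=MAX_OUTPUT_TOKENS)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```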
### Features
- **Fine-tuned for Role-Play**: Specially trained to generate dynamic and contextually rich role-play dialogues.
- **Efficient**: Moderate model size allows faster inference and lower computational requirements than larger models.
- **Parameter Support** (see the sampling sketch after this list):
- `temperature`
- `top_p`
- `top_k`
- `frequency_penalty`
- `presence_penalty`
- `max_tokens`
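A minimal sketch of how these parameters map onto a local Transformers `generate` call; the values are illustrative, not tuned recommendations. Note that `frequency_penalty` and `presence_penalty` follow the OpenAI-style API and are usually exposed by a serving layer (e.g. an OpenAI-compatible endpoint) rather than by `generate` itself, so they are omitted here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-small")
model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-small")

inputs = tokenizer("A traveler approaches the old wizard.", return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,      # sampling must be enabled for temperature/top_p/top_k to apply
    temperature=0.8,     # illustrative value
    top_p=0.95,
    top_k=50,
    max_new_tokens=256,  # counterpart of the API-style `max_tokens`
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```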
### Metadata
- **Owned by**: Oxygen (oxyapi)
- **Contributors**: TornadoSoftwares
- **Description**: A Qwen/Qwen2.5-14B-Instruct fine-tune for role-play, trained on custom datasets.
## Usage
To use Oxy 1 Small for role-play text generation, load the model with the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-small")
model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-small")

prompt = "You are a wise old wizard in a mystical land. A traveler approaches you seeking advice."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 500 new tokens for the role-play continuation.
outputs = model.generate(**inputs, max_new_tokens=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
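Since the base model is instruction-tuned for chat, the prompt can also be structured with the tokenizer's chat template. This is a sketch that assumes the fine-tune inherits Qwen2.5's chat template; the persona and user message are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-small")
model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-small")

messages = [
    {"role": "system", "content": "You are a wise old wizard in a mystical land."},
    {"role": "user", "content": "A traveler approaches you seeking advice."},
]

# Render the conversation with the chat template and add the assistant turn marker.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=300)
# Decode only the newly generated tokens, i.e. the model's reply.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```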
## Performance
Performance benchmarks for Oxy 1 Small are not available at this time. Future updates may include detailed evaluations on relevant datasets.
## License
This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
## Citation
If you find Oxy 1 Small useful in your research or applications, please cite it as:
```bibtex
@misc{oxy1small2024,
  title={Oxy 1 Small: A Fine-Tuned Qwen2.5-14B-Instruct Model for Role-Play},
  author={Oxygen (oxyapi)},
  year={2024},
  howpublished={\url{https://huggingface.co/oxyapi/oxy-1-small}},
}
```