
SniffyOtter-7B

Description

This repository hosts SniffyOtter-7B, an advanced Japanese language model specifically trained for generating erotic novels. SniffyOtter is designed to excel in creating engaging and captivating erotic content while maintaining context and complexity in its responses.

In the benchmark below, SniffyOtter ranks among the strongest models tested, placing second overall while achieving the highest eroticism score. It builds on the strengths of its predecessor models while adding a stronger focus on erotic expression. Please note that SniffyOtter is tailored for erotic content generation and may not perform well on other tasks.

Benchmark Results

| Model | Average | Eroticism | Complexity | Context Maintenance |
|---|---|---|---|---|
| Antler-RP-ja-westlake-chatvector | 49.17 | 5.5 | 47.1 | 94.9 |
| SniffyOtter-7B | 48.80 | 5.7 | 46.2 | 94.5 |
| Sabbath-2x7B* | 48.10 | 4.8 | 45.8 | 93.7 |
| Antler-7B | 47.62 | 5.25 | 45.3 | 92.3 |
| Nocturn-7B | 47.25 | 5.15 | 44.7 | 91.9 |
| Sapphire-7B | 46.90 | 4.9 | 43.5 | 92.3 |
| LightChatAssistant-2x7B* | 46.43 | 4.2 | 43.1 | 92.0 |
| japanese-stablelm-instruct-gamma-7b | 46.02 | 2.85 | 44.3 | 90.9 |
| chatntq-ja-7b-v1.0 | 45.12 | 2.55 | 41.4 | 91.4 |
| Calm2-7B-Chat | 45.07 | 3.4 | 40.2 | 91.6 |

*Tested in 8-bit quantization due to limited GPU memory.
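
The Average column appears to be the simple mean of the three metric scores, which can be checked directly against the table (model names and values taken from the rows above):

```python
# Verify that Average == mean(Eroticism, Complexity, Context Maintenance)
# for each row of the benchmark table.
rows = {
    "Antler-RP-ja-westlake-chatvector": (49.17, 5.5, 47.1, 94.9),
    "SniffyOtter-7B": (48.80, 5.7, 46.2, 94.5),
    "Antler-7B": (47.62, 5.25, 45.3, 92.3),
    "Calm2-7B-Chat": (45.07, 3.4, 40.2, 91.6),
}
for name, (avg, ero, comp, ctx) in rows.items():
    assert round((ero + comp + ctx) / 3, 2) == avg, name
print("all averages consistent")
```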

Benchmark Metrics:

  • Eroticism: Measures the frequency of erotic words in the generated text. Calculated using a predefined set of words considered erotic.
  • Complexity: Evaluates the model's ability to produce non-repetitive responses. Higher scores indicate more diverse, less repetitive text, calculated from the compression ratio given by zlib.compress, which I find effective at detecting highly repetitive text.
  • Context Maintenance: Assesses how well the model maintains the given topic. Responses that stray from the context result in lower scores. Calculated using japanese-reranker-cross-encoder-large-v1 to measure relevance between the input and the generated response.
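
The exact word list and scoring details are undisclosed, so the first two metrics can only be sketched. A minimal illustration, assuming a hypothetical flagged-word set and a naive whitespace tokenizer (real Japanese text would need a proper tokenizer such as a morphological analyzer), might look like:

```python
import zlib

# Hypothetical word list; the actual set used in the benchmark is undisclosed.
EROTIC_WORDS = {"example_word_a", "example_word_b"}

def eroticism_score(text: str) -> float:
    """Frequency of flagged words per 100 tokens.

    Uses a naive whitespace split for illustration; Japanese text would
    require morphological tokenization instead.
    """
    tokens = text.split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in EROTIC_WORDS)
    return 100.0 * hits / len(tokens)

def complexity_score(text: str) -> float:
    """Compression ratio as a repetition proxy.

    Highly repetitive text compresses well (ratio near 0);
    diverse text resists compression (ratio near 1).
    """
    raw = text.encode("utf-8")
    if not raw:
        return 0.0
    return len(zlib.compress(raw)) / len(raw)

# A highly repetitive string compresses far better than varied prose.
print(complexity_score("spam " * 200)
      < complexity_score("The quick brown fox jumps over the lazy dog."))
```

The context-maintenance metric would additionally score each (input, response) pair with the named cross-encoder reranker, which is omitted here since it requires downloading the model.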

The benchmark is a refined version of the one I used for Sapphire-7B. While it provides some insight, the specific set of erotic words and the undisclosed details of the benchmark may introduce bias, so these results should be taken with a grain of salt for now.

Model size: 7.24B parameters (BF16, Safetensors)