A cutting-edge model for anime character gender classification.
Update: Our new models have already been released, with improved performance, a brand-new technique we call transgressivability, and a wide range of model choices. To get the latest release, you can follow us and like this repo.
👇 Updated models: https://huggingface.co/DOFOFFICIAL/animeGender-dvgg-0.8
- Our proposed model, animeGender-dvgg-0.7, is a fine-tuned binary classification model created by DOF-Studio (2023) on top of the pre-trained vgg-16 model. It identifies the gender of an animation character and is designed in particular for Japanese-style 2D anime characters. It was trained by DOF-Studio in July 2023 on a private organizational dataset manually collected and tagged by our staff. Although the model performs strongly on our test and verification sets, please note that it is not the final version of our character-gender identification series, but only a phased result (version 0.7) of our open-source project: upgraded versions with an improved network structure will be released by our team in the near future. Thank you for your appreciation and support of our work and models.
Modification: This model, animeGender-dvgg-0.7, reuses all weights of the convolutional network of the original vgg-16 model released by the Oxford team, but changes the final part of the network, i.e. the dense layers: we modified it into a binary classification model whose two output nodes (activated by a softmax layer) give the probability of each gender, namely female and male. Note that although the convolutional layers have been left untrained, we plan to modify the vgg-16 base model more deeply in the future to achieve a higher score and precision on this classification task.
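The head replacement described above can be sketched in PyTorch as follows. This is a minimal illustration, not the exact configuration of animeGender-dvgg-0.7: the hidden width (4096) and dropout rate are assumptions borrowed from the original vgg-16 classifier, and only the dense head is shown (the convolutional features are kept unchanged).

```python
import torch
import torch.nn as nn

# Sketch of the described modification: keep vgg-16's convolutional
# features and replace the dense layers with a two-node softmax head.
# Layer widths here are assumptions, not the published checkpoint's exact sizes.
binary_head = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),  # vgg-16's flattened feature size
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(4096, 2),            # two output nodes: female, male
    nn.Softmax(dim=1),             # probabilities over the two genders
)

# Forward a dummy flattened feature vector to check the output shape.
dummy_features = torch.randn(1, 512 * 7 * 7)
probs = binary_head(dummy_features)
```

Because the final layer is softmax-activated, the two outputs always sum to 1 and can be read directly as probabilities.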
Input: While the original vgg-16 model was designed for a 224 * 224 input with 3 channels in the RGB colorspace, our model animeGender-dvgg-0.7 uses 64 * 64 RGB inputs only, as the classification task is not especially demanding. When feeding a picture into the model, please ensure that the input illustration consists only of the head and face of the character you want to identify, in order to make the model's result as precise and reliable as possible. Moreover, we have provided some Python functions in our open-source code to help you resize, crop, and transform your pictures into 64 * 64 RGB images; more information is available in the file folder.
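A preprocessing step like the one described can be sketched with Pillow as below. This is a hypothetical helper written for illustration; the repository's own functions may crop and resize differently, so prefer those when using the published code.

```python
from PIL import Image

def to_model_input(img, size=64):
    """Center-crop an image to a square and resize it to size x size RGB.

    A hypothetical helper sketching the 64 x 64 RGB preprocessing the
    model card describes; not the repository's own implementation.
    """
    img = img.convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.BILINEAR)

# Self-contained check with a synthetic 100 x 80 image
# (in practice this would be a cropped head-and-face illustration).
sample = Image.new("RGB", (100, 80), color=(200, 150, 150))
prepared = to_model_input(sample)
```

Center-cropping before resizing avoids distorting the face's aspect ratio, which matters more than preserving the image borders for this task.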
Output: This model, animeGender-dvgg-0.7, outputs a one-dimensional tensor of length 2, giving the probability of each result for your input, namely female and male. In our open-source usage example (see the file folder), we conveniently convert the raw output into a readable result, for example "male", together with a number showing the probability, or confidence. Note that our model has no background knowledge of a particular character or the context of an animation, so some gender-neutral characters may be misclassified, or correctly matched but with a confidence around 0.5.
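Converting the two-element output into a readable label can be sketched as below. The function name is ours, not the repository's; the label order (female, male) follows this model card. The softmax step is included only in case you work from raw logits; if your checkpoint already emits softmax probabilities, as described above, the normalization is a no-op in spirit and you can take the argmax directly.

```python
import math

def interpret(scores):
    """Turn the model's two outputs into a (label, confidence) pair.

    Hypothetical helper; label order (female, male) follows the model card.
    Applies a softmax so raw logits and probabilities are handled alike.
    """
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    labels = ["female", "male"]
    idx = probs.index(max(probs))
    return labels[idx], probs[idx]

# Example: a strongly "female" output.
label, conf = interpret([2.0, -1.0])
```

A confidence near 0.5 signals the gender-neutral ambiguity mentioned above and is worth flagging rather than trusting blindly.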
Checkpoint: We provide the final, proposed model under the name "animeGender-dvgg-0.7". To satisfy further requirements, for example research, we also provide intermediate checkpoints from the training process, although these have proved inferior to the proposed model. For the additional models, please see the file folder.
- Based on the training and testing data, the proposed model achieved the results shown below:
- name = animeGender-dvgg-0.7
- epoch = 50
- trainSet = 20k
- trainLoss = 0.0019
- trainAcc = 0.9640
- testSet = 1.4k
- testLoss = 0.0024
- testAcc = 0.9267
- Here are some out-of-sample tests conducted by our staff, with the corresponding results shown below:
- We have uploaded Python usage examples in the file folder; please download them and run them locally on your CPU or with CUDA.
- Note that the provided code is the only recommended way to use this model; other approaches, including those shown automatically on this website, are not guaranteed to be valid or user-friendly.
- Note that ".bin" or ".pth" models should be used with the pre-defined function modelload() in the provided codes, but ".safetensors" models (baked with built-in configs) can otherwise be simply loaded with the function torch.load().
- Only the proposed model, "animeGender-dvgg-0.7", is available in multiple formats, for example ".bin", ".safetensors", ".onnx", and ".pb"; the phased checkpoints have not been converted yet.
- We confidently claim that all of our training data came from illustrations drawn by real humans, but our model is also suitable for generated images, for example those produced by Stable Diffusion, with a comparably high accuracy.
- We confidently claim that all model-related files, excluding this README.md, will never be changed after the initial release, so please follow and like us to keep up with updated versions.
- New, improved models have already been officially released by our team; please refer to the links below:
- Version 0.8: https://huggingface.co/DOFOFFICIAL/animeGender-dvgg-0.8
Team DOF Studio, July 6th, 2023.