---
license: mit
tags:
- thispersondoesnotexist
- stylegan
- stylegan2
- mesh
- model
- 3d
- asset
- generative
pretty_name: HeadsNet
size_categories:
- 1K<n<10K
---
# HeadsNet
The basic concept is to train a feed-forward neural network (FNN/MLP) on the vertex data of 3D heads so that it can then reproduce random 3D heads.
This dataset uses the [thispersondoesnotexist_to_triposr_6748_3D_Heads](https://huggingface.co/datasets/tfnn/thispersondoesnotexist_to_triposr_6748_3D_Heads) dataset as a foundation.
The heads dataset was collected using the scraper [Dataset_Scraper.7z](https://huggingface.co/datasets/tfnn/HeadsNet/resolve/main/Dataset_Scraper.7z?download=true) based on [TripoSR](https://github.com/VAST-AI-Research/TripoSR), which converts the 2D images from [ThisPersonDoesNotExist](https://thispersondoesnotexist.com/) into 3D meshes. _(using [this marching cubes improvement](https://github.com/VAST-AI-Research/TripoSR/issues/22#issuecomment-2010318709) by [thatname/zephyr](https://github.com/thatname))_
Vertex normals need to be generated before we can work with this dataset; the easiest way to achieve this is with a simple [Blender](https://www.blender.org/) script:
```
import bpy
import glob
import pathlib
from os import mkdir
from os.path import isdir

importDir = "ply/"
outputDir = "ply_norm/"
if not isdir(outputDir): mkdir(outputDir)

for file in glob.glob(importDir + "*.ply"):
    model_name = pathlib.Path(file).stem
    # skip models that have already been converted
    if pathlib.Path(outputDir + model_name + '.ply').is_file(): continue
    # importing a PLY without normals makes Blender generate them
    bpy.ops.wm.ply_import(filepath=file)
    bpy.ops.wm.ply_export(
        filepath=outputDir + model_name + '.ply',
        filter_glob='*.ply',
        check_existing=False,
        ascii_format=False,
        export_selected_objects=False,
        apply_modifiers=True,
        export_triangulated_mesh=True,
        export_normals=True,
        export_uv=False,
        export_colors='SRGB',
        global_scale=1.0,
        forward_axis='Y',
        up_axis='Z'
    )
    # delete the imported object and purge orphan data before the next file
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete(use_global=False)
    bpy.ops.outliner.orphans_purge()
    bpy.ops.outliner.orphans_purge()
    bpy.ops.outliner.orphans_purge()
```
_Importing the PLY without normals causes Blender to automatically generate them._
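The script can be run headless by saving it as, say, `gen_normals.py` (an illustrative name) and invoking `blender --background --python gen_normals.py` from the directory containing the `ply/` folder.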
At this point the PLY files need to be converted to training data. For this I wrote a C program [DatasetGen_2_6.7z](https://huggingface.co/datasets/tfnn/HeadsNet/resolve/main/DatasetGen_2_6.7z?download=true) using [RPLY](https://w3.impa.br/~diego/software/rply/) to load the PLY files and convert them to binary data, which I have provided here: [HeadsNet-2-6.7z](https://huggingface.co/datasets/tfnn/HeadsNet/resolve/main/HeadsNet-2-6.7z?download=true).
It's always good to [NaN](https://en.wikipedia.org/wiki/NaN) check your training data after generating it, so I have provided a simple Python script for that here: [nan_check.py](https://huggingface.co/datasets/tfnn/HeadsNet/resolve/main/nan_check.py?download=true).
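For reference, a minimal check along these lines would work; this is a sketch, not the contents of the provided nan_check.py:
```
import numpy as np

for name in ("train_x.dat", "train_y.dat"):
    data = np.fromfile(name, dtype=np.float32)
    bad = np.count_nonzero(np.isnan(data)) + np.count_nonzero(np.isinf(data))
    print(f"{name}: {data.size} floats, {bad} NaN/Inf")
```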
This binary training data can be loaded into Python using [Numpy](https://numpy.org/):
```
import numpy as np

with open("train_x.dat", 'rb') as f:
    load_x = np.fromfile(f, dtype=np.float32)

with open("train_y.dat", 'rb') as f:
    load_y = np.fromfile(f, dtype=np.float32)
```
The data can then be reshaped and saved back out as a NumPy array, which makes for faster loading:
```
inputsize = 2
outputsize = 6
training_samples = 632847695
train_x = np.reshape(load_x, [training_samples, inputsize])
train_y = np.reshape(load_y, [training_samples, outputsize])
np.save("train_x.npy", train_x)
np.save("train_y.npy", train_y)
```
_632,847,695 samples; each sample has 2 components in train_x (random seed & 0-1 unit-sphere position index) and 6 components in train_y (vertex position [x,y,z] & vertex color [r,g,b])._
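Note that at float32 precision this is roughly 632,847,695 × 2 × 4 bytes ≈ 5 GB for train_x and 632,847,695 × 6 × 4 bytes ≈ 15 GB for train_y, so around 20 GB of memory is needed to hold both arrays at once; `np.load("train_x.npy", mmap_mode='r')` can be used to memory-map the saved arrays instead of loading them fully.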
The basic premise of how this network is trained, and thus how the dataset is generated by the C program, is:
1. All models are pre-scaled to a normalized cubic scale and then scaled again by 0.55 so that they all fit within a unit sphere.
2. All model vertices are reverse traced from the vertex position to the surface of the unit sphere using the vertex normal as the direction.
3. The nearest position on a 10,242 vertex icosphere is found and the network is trained to output the model vertex position and vertex color (6 components) at the index of the icosphere vertex.
4. The icosphere vertex index is scaled to a 0-1 range before being input to the network.
5. The network has only two input parameters; the other parameter is a 0-1 model ID which is randomly selected, and all vertices of a given model are trained into the network using that randomly selected ID. The ID does not change per vertex, only per 3D model.
6. The ID lets the user treat this parameter as a kind of hyper-parameter for the random seed: to generate a random head with this network, input a random 0-1 seed and then iterate the icosphere index parameter over some sample range between 0 and 1. For example, for a 20,000 vertex head you would step from 0 to 1 in 20,000 increments of 0.00005, since the network outputs one vertex position and vertex color per forward pass (as sketched after this list).
* 1st input parameter = random seed
* 2nd input parameter = icosphere index
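As a concrete illustration of step 6, inference could look like the sketch below; the `model` variable (a trained MLP with a Keras-style `predict`) is an assumption, not something shipped with this dataset:
```
import numpy as np

seed = np.random.uniform(0.0, 1.0)    # random 0-1 seed (model ID)
num_verts = 20000                     # desired vertex count
index = np.linspace(0.0, 1.0, num_verts, dtype=np.float32)

# one row per vertex: [random seed, icosphere index]
batch = np.stack([np.full(num_verts, seed, dtype=np.float32), index], axis=1)

# each forward pass yields one vertex: [x, y, z, r, g, b]
out = model.predict(batch)            # assumed trained network, shape (20000, 6)
positions, colors = out[:, :3], out[:, 3:]
```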
More about this type of network topology can be read here: https://gist.github.com/mrbid/1eacdd9d9239b2d324a3fa88591ff852
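For illustration, a network of this shape could be defined as follows; this is a minimal Keras sketch with assumed layer sizes, not the exact topology used:
```
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),              # [random seed, icosphere index]
    tf.keras.layers.Dense(256, activation='relu'),  # illustrative hidden sizes
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(6)                        # [x, y, z, r, g, b]
])
model.compile(optimizer='adam', loss='mse')
# model.fit(train_x, train_y, epochs=8, batch_size=8192)
```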
## Improvements
* Future networks will have three additional input parameters, one for each component (x, y, z) of a unit vector giving the ray direction from the icosphere index.
* The unit vector used to train the network will simply be the vertex normal of the 3D model, inverted.
* When performing inference, more forward passes would need to be performed: some density of rays within a 30° or similar cone angle pointing towards 0,0,0 would need to be evaluated per icosphere index position (see the cone-sampling sketch below).
* This could result in higher-quality outputs.
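A sketch of how such a cone of ray directions could be sampled; the function name and the uniform solid-angle scheme are illustrative assumptions:
```
import numpy as np

def cone_directions(axis, half_angle_deg, n):
    # sample n unit vectors uniformly (by solid angle) within a cone around `axis`
    axis = np.asarray(axis, dtype=np.float64)
    axis = axis / np.linalg.norm(axis)
    cos_max = np.cos(np.radians(half_angle_deg))
    cos_t = np.random.uniform(cos_max, 1.0, n)   # uniform in cos(theta)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = np.random.uniform(0.0, 2.0*np.pi, n)
    # directions in a local frame where +Z is the cone axis
    local = np.stack([sin_t*np.cos(phi), sin_t*np.sin(phi), cos_t], axis=1)
    # orthonormal basis (u, v, axis) to rotate the local frame onto `axis`
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    return local[:, :1]*u + local[:, 1:2]*v + local[:, 2:]*axis

# e.g. 32 rays in a 30-degree cone pointing from an icosphere vertex towards 0,0,0
rays = cone_directions([0.0, 0.0, -1.0], 30.0, 32)
```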