bpy.ops.outliner.orphans_purge()
bpy.ops.outliner.orphans_purge()
```
_Importing the PLY without normals causes Blender to automatically generate them._
At this point the PLY files need to be converted to training data. For this I wrote a C program, [DatasetGen_2_6.7z](https://huggingface.co/datasets/tfnn/HeadsNet/resolve/main/DatasetGen_2_6.7z?download=true), which uses [RPLY](https://w3.impa.br/~diego/software/rply/) to load the PLY files and convert them to binary data; the result is provided here: [HeadsNet-2-6.7z](https://huggingface.co/datasets/tfnn/HeadsNet/resolve/main/HeadsNet-2-6.7z?download=true).
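The C program itself is linked above; purely as an illustration of the conversion step, a minimal Python sketch (ASCII PLY only, hypothetical file names) might look like:

```
import struct

def ply_vertices_to_bin(ply_path, bin_path):
    # Sketch only: parse a minimal ASCII PLY header for the vertex count,
    # then dump each vertex's properties as raw little-endian float32 --
    # the same layout np.fromfile(..., dtype=np.float32) reads back.
    with open(ply_path) as f:
        line = f.readline().strip()
        nverts = 0
        while line != "end_header":
            line = f.readline().strip()
            if line.startswith("element vertex"):
                nverts = int(line.split()[-1])
        with open(bin_path, "wb") as out:
            for _ in range(nverts):
                values = [float(v) for v in f.readline().split()]
                out.write(struct.pack("<%df" % len(values), *values))
```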
It's always good to NaN-check your training data after generating it, so I have provided a simple Python script for that: [nan_check.py](https://huggingface.co/datasets/tfnn/HeadsNet/resolve/main/nan_check.py?download=true).
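Such a check can be as simple as loading the raw float32 data and counting non-finite values; a minimal sketch (the linked nan_check.py may differ):

```
import numpy as np

def nan_check(path):
    # Load the raw float32 data and count NaN/Inf entries.
    data = np.fromfile(path, dtype=np.float32)
    bad = np.count_nonzero(~np.isfinite(data))
    print("%s: %d non-finite of %d values" % (path, bad, data.size))
    return bad == 0
```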
This binary training data can be loaded into Python using:
```
import numpy as np

with open("train_x.dat", 'rb') as f:
    load_x = np.fromfile(f, dtype=np.float32)

with open("train_y.dat", 'rb') as f:
    load_y = np.fromfile(f, dtype=np.float32)
```
The data can then be reshaped and saved back out as NumPy arrays, which makes for faster loading:
```
inputsize = 2
outputsize = 6
tss = load_x.shape[0] // inputsize  # training set size in samples
train_x = np.reshape(load_x, [tss, inputsize])
train_y = np.reshape(load_y, [tss, outputsize])
np.save("train_x.npy", train_x)
np.save("train_y.npy", train_y)
```
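Subsequent runs can then skip the raw `.dat` parsing entirely. A quick round-trip check, with small placeholder arrays standing in for the real data:

```
import numpy as np

# Placeholder arrays standing in for the real training data.
np.save("train_x.npy", np.zeros((4, 2), dtype=np.float32))
np.save("train_y.npy", np.zeros((4, 6), dtype=np.float32))

# Later runs load the .npy files directly, no reshape needed.
train_x = np.load("train_x.npy")
train_y = np.load("train_y.npy")
print(train_x.shape, train_y.shape)  # (4, 2) (4, 6)
```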