Hugging Face
Space: kingabzpro / savtadepth · 15 likes · Paused
savtadepth · 6 contributors · History: 15 commits

Latest commit 0b86a0a by Dean (over 4 years ago): Training stage seems to work, creating a non-run commit to use colab as an orchestration machine
.dvc/ · over 4 years ago · Hopefully finished with the requirements debacle, now using conda but freezing requirements with pip as usual
Notebooks/ · over 4 years ago · Transition to MLWorkspace docker and setup makefile with environment commands
src/ · over 4 years ago · Training stage seems to work, creating a non-run commit to use colab as an orchestration machine
.dvcignore · 139 Bytes · over 4 years ago · Finished data import and processing setup, bug in training step
.gitignore · 93 Bytes · over 4 years ago · Migrated to fastai2, creating the DataLoader now works but I'm stuck on not being able to change the batch_size or num_workers as the interface seems to have changed
Makefile · 1.43 kB · over 4 years ago · Split commands of preparing environment in README file; added -y for conda env creation for less interactions for the user
README.md · 3.33 kB · over 4 years ago · Split commands of preparing environment in README file; added -y for conda env creation for less interactions for the user
dvc.lock · 474 Bytes · over 4 years ago · Finished data import and processing setup, bug in training step
dvc.yaml · 472 Bytes · over 4 years ago · Training stage seems to work, creating a non-run commit to use colab as an orchestration machine
requirements.txt · 1.84 kB · over 4 years ago · Migrated to fastai2, creating the DataLoader now works but I'm stuck on not being able to change the batch_size or num_workers as the interface seems to have changed
run_dev_env.sh · 227 Bytes · over 4 years ago · Migrated to fastai2, creating the DataLoader now works but I'm stuck on not being able to change the batch_size or num_workers as the interface seems to have changed
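The commit messages describe a DVC pipeline with a data-processing stage and a training stage, declared in the repo's dvc.yaml. A minimal sketch of what such a file typically looks like, assuming hypothetical stage names, script paths, and data directories (the actual 472-byte dvc.yaml in this Space is not shown here):

```yaml
stages:
  process_data:           # hypothetical stage name
    cmd: python src/data/make_dataset.py   # assumed script path
    deps:
      - src/data/make_dataset.py
      - data/raw          # assumed raw-data directory
    outs:
      - data/processed
  train:                  # the "training stage" the latest commit refers to
    cmd: python src/models/train_model.py  # assumed script path
    deps:
      - src/models/train_model.py
      - data/processed
    outs:
      - models/model.pth  # assumed model artifact
```

With stages declared this way, `dvc repro` re-runs only the stages whose dependencies changed, and dvc.lock records the resulting file hashes, which matches the pattern of paired dvc.yaml/dvc.lock updates in the commit history above.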