---
license: apache-2.0
---

# Set up Cookiecutter-MLOps in a Hugging Face repo

1 - Create a model repository on Hugging Face (e.g. `myHFrepo`)

2 - Clone your Hugging Face repo to your local directory:

```bash
cd /path/to/parent-directory   # the directory that will contain myHFrepo
git clone git@hf.co:USERNAME/myHFrepo
```

For SSH connection setup, see the Hugging Face docs on Git over SSH: https://huggingface.co/docs/hub/security-git-ssh
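A minimal sketch of that SSH setup, assuming the standard OpenSSH client (key type and comment are just examples):

```bash
# Generate a key pair, then paste the public key into
# your account's SSH keys page at https://huggingface.co/settings/keys
ssh-keygen -t ed25519 -C "your.email@example.com"
cat ~/.ssh/id_ed25519.pub
# Test the connection; Hugging Face should greet you by username
ssh -T git@hf.co
```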

3 - Create your virtual environment (e.g. jointvenv)

```bash
cd myHFrepo
python -m venv jointvenv
source jointvenv/bin/activate
```

4 - Transfer DagsHub's Cookiecutter-MLOps template (a cookiecutter employing MLOps best practices) into your Hugging Face repo

```bash
git clone https://dagshub.com/DagsHub/Cookiecutter-MLOps.git
```

4.1 - Delete the Git metadata cloned from the Cookiecutter-MLOps repo

```bash
rm -r /path/to/myHFrepo/Cookiecutter-MLOps/.git
```

4.2 - Resolve conflicts with `.gitattributes`. Both repos ship a `.gitattributes`, which tells Git how to treat specific paths (here, mainly which file types Git LFS should track), so concatenate the two instead of overwriting:

```bash
cat /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes >> /path/to/myHFrepo/.gitattributes
rm /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes
git add .gitattributes
git commit -m "Concatenate .gitattributes info from DagsHub/Cookiecutter-MLOps"
```
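Optionally, sanity-check the concatenated file for rules that now appear twice (plain Unix tools, nothing HF-specific):

```bash
sort .gitattributes | uniq -d   # prints any duplicated lines
```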

4.3 - Resolve conflicts with `README.md` by taking the template's version (you can rewrite it for your project later):

```bash
mv /path/to/myHFrepo/Cookiecutter-MLOps/README.md /path/to/myHFrepo/README.md
git add README.md
git commit -m "Get README info from DagsHub/Cookiecutter-MLOps"
```

4.4 - Move the remaining files from DagsHub/Cookiecutter-MLOps into your Hugging Face repo (the `.gitattributes` and `README.md` conflicts are already resolved)

```bash
cd /path/to/myHFrepo/Cookiecutter-MLOps
mv * .[^.]* ..   # move visible and hidden files (the glob skips . and ..)
cd /path/to/myHFrepo
rmdir /path/to/myHFrepo/Cookiecutter-MLOps
```
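If the `.[^.]*` glob feels brittle (it misses names starting with two dots and errors out when no hidden files exist), an alternative sketch using bash's `dotglob` option:

```bash
shopt -s dotglob   # make * match hidden files too (never . or ..)
mv /path/to/myHFrepo/Cookiecutter-MLOps/* /path/to/myHFrepo/
shopt -u dotglob
```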

5 - Add the venv folder to `.gitignore`

```bash
echo '' >> .gitignore
echo '# Virtual Environment' >> .gitignore
echo 'jointvenv/' >> .gitignore
git add .
git commit -m "add remaining DagsHub/Cookiecutter-MLOps repo content"
```

6 - Run step 2 from the DagsHub/Cookiecutter-MLOps instructions (see below):

```bash
make dirs
```
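`make dirs` creates the missing folders from the project structure listed at the end of this README; conceptually it is roughly the following (the template's Makefile is authoritative):

```bash
mkdir -p data/raw data/processed models reports/figures
```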

7 - Run step 4 from the DagsHub/Cookiecutter-MLOps instructions:

```bash
make requirements
```
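`make requirements` installs the template's Python dependencies into the active environment; roughly equivalent to (again, the Makefile is authoritative):

```bash
python -m pip install -U pip
python -m pip install -r requirements.txt
```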

8 - Keep a record of your own requirements

```bash
mv requirements.txt requirementsCookiecutter-MLOps.txt
git add requirementsCookiecutter-MLOps.txt
git commit -m "external requirements from Cookiecutter-MLOps"

pip freeze > requirements.txt
git add requirements.txt
git commit -m "First report venv requirements"
```

9 - Push your changes to the remote Hugging Face repository

```bash
git push origin main
```
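If the push is rejected over large files, confirm Git LFS is set up and see what it tracks; a quick diagnostic:

```bash
git lfs install    # one-time setup per machine
git lfs ls-files   # files currently tracked by LFS
git remote -v      # confirm origin points at hf.co
```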

10 (optional) - Create a model repository in your Hugging Face organization (e.g. `MYORG/myHFrepo`), add it as a second remote, and merge the histories:

```bash
git remote add dcc git@hf.co:MYORG/myHFrepo
git pull dcc main --allow-unrelated-histories
```

Resolve any conflicts in `.gitattributes` and `README.md`, then commit and push:
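To list which files still carry conflict markers before editing them:

```bash
git diff --name-only --diff-filter=U
```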

```bash
git add .
git commit -m "Merge HuggingFace individual and organization repos"
git push dcc main
```

---

# Cookiecutter-MLOps

A cookiecutter template employing MLOps best practices, so you can focus on building machine learning products.

## Instructions

  1. Clone the repo.
  2. Run `make dirs` to create the missing parts of the directory structure described below.
  3. Optional: Run `make virtualenv` to create a Python virtual environment. Skip if using conda or another env manager.
     1. Run `source env/bin/activate` to activate the virtualenv.
  4. Run `make requirements` to install required Python packages.
  5. Put the raw data in `data/raw`.
  6. To save the raw data to the DVC cache, run `dvc add data/raw` (see the sketch after this list).
  7. Edit the code files to your heart's desire.
  8. Process your data, train and evaluate your model using `dvc repro` or `make reproduce`.
  9. To run the pre-commit hooks, run `make pre-commit-install`.
  10. For setting up data validation tests, run `make setup-data-validation`.
  11. For running the data validation tests, run `make run-data-validation`.
  12. When you're happy with the result, commit the files (including `.dvc` files) to Git.
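A minimal sketch of the DVC round-trip from steps 6 and 8 (paths follow the project structure below):

```bash
dvc add data/raw                       # snapshot raw data into the DVC cache
git add data/raw.dvc data/.gitignore   # pointer files written by DVC
git commit -m "Track raw data with DVC"
dvc repro                              # run the pipeline defined in dvc.yaml
git add dvc.lock
git commit -m "Record pipeline state"
```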

## Project Organization

```
β”œβ”€β”€ LICENSE
β”œβ”€β”€ Makefile           <- Makefile with commands like `make dirs` or `make clean`
β”œβ”€β”€ README.md          <- The top-level README for developers using this project.
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ processed      <- The final, canonical data sets for modeling.
β”‚   └── raw            <- The original, immutable data dump.
β”‚
β”œβ”€β”€ models             <- Trained and serialized models, model predictions, or model summaries
β”‚
β”œβ”€β”€ notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
β”‚                         the creator's initials, and a short `-` delimited description, e.g.
β”‚                         `1.0-jqp-initial-data-exploration`.
β”œβ”€β”€ references         <- Data dictionaries, manuals, and all other explanatory materials.
β”œβ”€β”€ reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
β”‚   β”œβ”€β”€ figures        <- Generated graphics and figures to be used in reporting
β”‚   β”œβ”€β”€ metrics.txt    <- Relevant metrics after evaluating the model.
β”‚   └── training_metrics.txt <- Relevant metrics from training the model.
β”‚
β”œβ”€β”€ requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
β”‚                         generated with `pip freeze > requirements.txt`
β”‚
β”œβ”€β”€ setup.py           <- Makes the project pip-installable (`pip install -e .`) so `src` can be imported
β”œβ”€β”€ src                <- Source code for use in this project.
β”‚   β”œβ”€β”€ __init__.py    <- Makes `src` a Python module
β”‚   β”‚
β”‚   β”œβ”€β”€ data           <- Scripts to download or generate data
β”‚   β”‚   β”œβ”€β”€ great_expectations  <- Folder containing data integrity check files
β”‚   β”‚   β”œβ”€β”€ make_dataset.py
β”‚   β”‚   └── data_validation.py  <- Script to run data integrity checks
β”‚   β”‚
β”‚   β”œβ”€β”€ models         <- Scripts to train models and then use trained models to make
β”‚   β”‚   β”‚                 predictions
β”‚   β”‚   β”œβ”€β”€ predict_model.py
β”‚   β”‚   └── train_model.py
β”‚   β”‚
β”‚   └── visualization  <- Scripts to create exploratory and results-oriented visualizations
β”‚       └── visualize.py
β”‚
β”œβ”€β”€ .pre-commit-config.yaml  <- pre-commit hooks file with selected hooks for the project.
β”œβ”€β”€ dvc.lock           <- Records the exact state (file hashes) of each pipeline stage for reproducibility.
└── dvc.yaml           <- Defines the ML pipeline stages (e.g. processing data, training a model).
```
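`dvc.yaml` stages are usually generated with `dvc stage add`; a hedged sketch of a training stage (the stage name and output path are illustrative, not part of the template):

```bash
dvc stage add -n train \
  -d src/models/train_model.py -d data/processed \
  -o models/model.pkl \
  python src/models/train_model.py
```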

Project based on the cookiecutter data science project template. #cookiecutterdatascience


To create a project like this, just go to https://dagshub.com/repo/create and select the Cookiecutter DVC project template.

Made with 🐢 by DAGsHub.