---
license: apache-2.0
---

Set up Cookiecutter-MLOps in Hugging Face
==============================================

1 - Create a model repository in Hugging Face (e.g. myHFrepo)

2 - Clone your Hugging Face repo to your local directory:

```bash
cd /path/to/parent/directory   # the parent directory of your project folder
git clone git@hf.co:USERNAME/myHFrepo
```

For the SSH connection, check [here](https://huggingface.co/docs/hub/security-git-ssh#add-a-ssh-key-to-your-account).

3 - Create your virtual environment (e.g. jointvenv)

```bash
cd myHFrepo
python -m venv jointvenv
source jointvenv/bin/activate
```

4 - Transfer DagsHub's cookiecutter template employing MLOps best practices into your Hugging Face repo

```bash
git clone https://dagshub.com/DagsHub/Cookiecutter-MLOps.git
```

4.1 - Delete the git files cloned from the Cookiecutter-MLOps repo

```bash
rm -r /path/to/myHFrepo/Cookiecutter-MLOps/.git
```

4.2 - Resolve conflicts with .gitattributes. Hugging Face pre-populates .gitattributes with Git LFS tracking rules for large files, so concatenate the template's rules onto it rather than overwriting it (a sketch for inspecting the merged file follows step 10):

```bash
cat /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes >> /path/to/myHFrepo/.gitattributes
rm /path/to/myHFrepo/Cookiecutter-MLOps/.gitattributes
git add .gitattributes
git commit -m "Concatenate .gitattributes info from DagsHub/Cookiecutter-MLOps"
```

4.3 - Resolve conflicts with README.md by keeping the template's version:

```bash
mv /path/to/myHFrepo/Cookiecutter-MLOps/README.md /path/to/myHFrepo/README.md
git add README.md
git commit -m "Get README info from DagsHub/Cookiecutter-MLOps"
```

4.4 - Move the remaining files (now that .gitattributes and README.md are handled) from Cookiecutter-MLOps to your Hugging Face repo

```bash
cd /path/to/myHFrepo/Cookiecutter-MLOps
mv * .[^.]* ..
cd /path/to/myHFrepo
rmdir /path/to/myHFrepo/Cookiecutter-MLOps
```

5 - Add the venv folder to .gitignore

```bash
echo '' >> .gitignore
echo '# Virtual Environment' >> .gitignore
echo jointvenv/ >> .gitignore
git add .
git commit -m "Add remaining DagsHub/Cookiecutter-MLOps repo content"
```

6 - Run step 2 from DagsHub/Cookiecutter-MLOps

```bash
make dirs
```

7 - Run step 4 from DagsHub/Cookiecutter-MLOps

```bash
make requirements
```

8 - Keep a record of your own requirements

```bash
mv requirements.txt requirementsCookiecutter-MLOps.txt
git add requirementsCookiecutter-MLOps.txt
git commit -m "External requirements from Cookiecutter-MLOps"
pip freeze > requirements.txt
git add requirements.txt
git commit -m "First report of venv requirements"
```

9 - Push your changes to the remote Hugging Face repository

```bash
git push origin main
```

10 - *Optional:* Create a model repository in your Hugging Face organization (e.g. mywslHFrepo) and merge the two histories:

```bash
git remote add dcc git@hf.co:MYORG/mywslHFrepo
git pull dcc main --allow-unrelated-histories
# Resolve conflicts in .gitattributes and README.md, then:
git add .
git commit -m "Merge Hugging Face individual and organization repos"
git push dcc main
```
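Because step 4.2 concatenates two `.gitattributes` files, it is worth inspecting the merged result before committing. A minimal sketch, assuming the default Git LFS rules Hugging Face ships with a new repo (the exact patterns vary by repo type), with exact duplicate lines dropped while preserving order:

```bash
cd /path/to/myHFrepo

# Hugging Face pre-populates .gitattributes with Git LFS tracking rules of the
# standard form `<pattern> filter=lfs diff=lfs merge=lfs -text`; inspect the
# concatenated file before committing:
cat .gitattributes

# Drop exact duplicates that appear in both rule sets, preserving order
# (order matters: later .gitattributes rules override earlier ones):
awk '!seen[$0]++' .gitattributes > .gitattributes.dedup
mv .gitattributes.dedup .gitattributes
git diff .gitattributes
```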
Cookiecutter-MLOps
==============================

A cookiecutter template employing MLOps best practices, so you can focus on building machine learning products while having MLOps best practices applied.

Instructions
------------
1. Clone the repo.
2. Run `make dirs` to create the missing parts of the directory structure described below.
3. *Optional:* Run `make virtualenv` to create a python virtual environment. Skip if using conda or some other env manager.
    1. Run `source env/bin/activate` to activate the virtualenv.
4. Run `make requirements` to install required python packages.
5. Put the raw data in `data/raw`.
6. To save the raw data to the DVC cache, run `dvc add data/raw`.
7. Edit the code files to your heart's desire.
8. Process your data, train and evaluate your model using `dvc repro` or `make reproduce` (see the pipeline sketch after the project tree below).
9. To run the pre-commit hooks, run `make pre-commit-install`.
10. For setting up the data validation tests, run `make setup-data-validation`.
11. For **running** the data validation tests, run `make run-data-validation`.
12. When you're happy with the result, commit the files (including the .dvc files) to git (a condensed session sketch also follows the project tree).

Project Organization
------------

```
├── LICENSE
├── Makefile                <- Makefile with commands like `make dirs` or `make clean`
├── README.md               <- The top-level README for developers using this project.
├── data
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
│
├── models                  <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for ordering),
│                              the creator's initials, and a short `-` delimited description, e.g.
│                              `1.0-jqp-initial-data-exploration`.
├── references              <- Data dictionaries, manuals, and all other explanatory materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   ├── figures             <- Generated graphics and figures to be used in reporting
│   ├── metrics.txt         <- Relevant metrics after evaluating the model.
│   └── training_metrics.txt <- Relevant metrics from training the model.
│
├── requirements.txt        <- The requirements file for reproducing the analysis environment, e.g.
│                              generated with `pip freeze > requirements.txt`
│
├── setup.py                <- Makes project pip installable (pip install -e .) so src can be imported
├── src                     <- Source code for use in this project.
│   ├── __init__.py         <- Makes src a Python module
│   │
│   ├── data                <- Scripts to download or generate data
│   │   ├── great_expectations <- Folder containing data integrity check files
│   │   ├── make_dataset.py
│   │   └── data_validation.py <- Script to run data integrity checks
│   │
│   ├── models              <- Scripts to train models and then use trained models to make
│   │   │                      predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization       <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
├── .pre-commit-config.yaml <- pre-commit hooks file with selected hooks for the project.
├── dvc.lock                <- Records the state of the pipeline; generated and updated by DVC.
└── dvc.yaml                <- Defines the pipeline stages for training a model on the processed data.
```

--------
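To make the `dvc.yaml`/`dvc.lock` pair at the bottom of the tree concrete, here is a sketch of registering and running a single pipeline stage from the shell (see instruction 8 above). The stage name `train` and the output path `models/model.pkl` are illustrative assumptions; the script and data paths follow the tree above:

```bash
# Register a training stage in dvc.yaml (stage name and output are assumptions):
dvc stage add -n train \
    -d src/models/train_model.py \
    -d data/processed \
    -o models/model.pkl \
    python src/models/train_model.py

# Run the pipeline; DVC re-executes only stages whose dependencies changed and
# pins the resulting hashes of dependencies and outputs in dvc.lock:
dvc repro   # equivalently: make reproduce
```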

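Putting instructions 5-12 together, a condensed working session might look like the following sketch. It assumes raw data is already in `data/raw`; the commit message is a placeholder:

```bash
dvc add data/raw               # instruction 6: save the raw data to the DVC cache
dvc repro                      # instruction 8: process data, train, evaluate
make pre-commit-install        # instruction 9: install the pre-commit hooks
make setup-data-validation     # instruction 10: set up the data validation tests
make run-data-validation       # instruction 11: run the data validation tests
git add data/raw.dvc dvc.lock  # instruction 12: commit files, including .dvc files
git commit -m "Reproduce pipeline and validate data"
```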
Project based on the [cookiecutter data science project template](https://drivendata.github.io/cookiecutter-data-science/). #cookiecutterdatascience

---

To create a project like this, just go to https://dagshub.com/repo/create and select the **Cookiecutter DVC** project template.

Made with 🐶 by [DAGsHub](https://dagshub.com/).