---
title: Image Matching Webui
emoji: 🤗
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: 4.28.3
app_file: app.py
pinned: true
license: apache-2.0
---
# Image Matching WebUI

Identify matching points between two images.

## Description
This simple tool matches image pairs using several well-known image matching algorithms. It features a graphical user interface (GUI) built with Gradio: select two images and a matching algorithm to obtain the matching result. Note: the input images can be either local files or webcam captures.
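At its core, the GUI wires two image inputs and an algorithm selector to a matching function. The minimal Gradio sketch below illustrates that idea; it is a simplified, hypothetical example rather than the project's `app.py`, and the OpenCV SIFT matcher merely stands in for the algorithms listed further down:

```python
# Minimal sketch of a two-image matching GUI with Gradio (hypothetical).
import cv2
import gradio as gr


def match_images(image0, image1, algorithm):
    # `algorithm` is ignored in this sketch; the real app dispatches to the selected matcher.
    gray0 = cv2.cvtColor(image0, cv2.COLOR_RGB2GRAY)
    gray1 = cv2.cvtColor(image1, cv2.COLOR_RGB2GRAY)
    # Placeholder detector/descriptor: OpenCV SIFT instead of a learned model.
    sift = cv2.SIFT_create()
    kpts0, desc0 = sift.detectAndCompute(gray0, None)
    kpts1, desc1 = sift.detectAndCompute(gray1, None)
    # Brute-force matching with cross-check, then draw the matches side by side.
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(desc0, desc1)
    return cv2.drawMatches(image0, kpts0, image1, kpts1, matches, None)


demo = gr.Interface(
    fn=match_images,
    inputs=[
        gr.Image(type="numpy", label="Image 0"),
        gr.Image(type="numpy", label="Image 1"),
        gr.Dropdown(choices=["sift"], label="Matching algorithm"),
    ],
    outputs=gr.Image(label="Matches"),
)

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7860)
```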
Here is a demo of the tool:
The tool currently supports various popular image matching algorithms, namely:
- DUSt3R, CVPR 2024
- OmniGlue, CVPR 2024
- XFeat, CVPR 2024
- RoMa, CVPR 2024
- DeDoDe, 3DV 2024
- Mickey, CVPR 2024
- GIM, ICLR 2024
- LightGlue, ICCV 2023
- DarkFeat, AAAI 2023
- SFD2, CVPR 2023
- IMP, CVPR 2023
- ASTR, CVPR 2023
- SEM, CVPR 2023
- DeepLSD, CVPR 2023
- GlueStick, ICCV 2023
- ConvMatch, AAAI 2023
- LoFTR, CVPR 2021
- SOLD2, CVPR 2021
- LineTR, RA-L 2021
- DKM, CVPR 2023
- NCMNet, CVPR 2023
- TopicFM, AAAI 2023
- AspanFormer, ECCV 2022
- LANet, ACCV 2022
- LISRD, ECCV 2020
- REKD, CVPR 2022
- CoTR, ICCV 2021
- ALIKE, TMM 2022
- RoRD, IROS 2021
- SGMNet, ICCV 2021
- SuperPoint, CVPRW 2018
- SuperGlue, CVPR 2020
- D2Net, CVPR 2019
- R2D2, NeurIPS 2019
- DISK, NeurIPS 2020
- Key.Net, ICCV 2019
- OANet, ICCV 2019
- SOSNet, CVPR 2019
- HardNet, NeurIPS 2017
- SIFT, IJCV 2004
## How to use

### HuggingFace / Lightning AI

Try the hosted demo on HuggingFace Spaces or Lightning AI, or deploy it locally following the instructions below.
### Requirements

```bash
git clone --recursive https://github.com/Vincentqyw/image-matching-webui.git
cd image-matching-webui
conda env create -f environment.yaml
conda activate imw
```

or using Docker:

```bash
docker pull vincentqin/image-matching-webui:latest
docker run -it -p 7860:7860 vincentqin/image-matching-webui:latest python app.py --server_name "0.0.0.0" --server_port=7860
```
### Run demo

```bash
python3 ./app.py
```

then open http://localhost:7860 in your browser.
### Add your own feature / matcher

An example of adding a local feature extractor is provided in `hloc/extractors/example.py`. Next, add the feature settings to `confs` in `hloc/extract_features.py`. Finally, add the corresponding entry to `model_zoo` in `common/config.yaml`.
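The sketch below illustrates the hloc-style extractor convention. The class name, configuration fields, and output shapes are assumptions modeled on existing extractors (e.g. SuperPoint), so check the shipped `example.py` for the exact interface:

```python
# Sketch of a custom local feature extractor following hloc's BaseModel
# convention (cf. hloc/extractors/example.py). Names, fields and output
# shapes are assumptions; see the existing extractors for the exact format.
import torch

from ..utils.base_model import BaseModel


class Example(BaseModel):
    # Defaults that can be overridden by the confs in hloc/extract_features.py.
    default_conf = {
        "max_keypoints": 2048,
        "keypoint_threshold": 0.005,
    }
    required_inputs = ["image"]

    def _init(self, conf):
        # Build your network and load its weights here.
        self.net = torch.nn.Identity()  # placeholder

    def _forward(self, data):
        # data["image"] is the batched input image tensor; run the network and
        # return keypoints, scores and descriptors in the format hloc expects.
        _ = self.net(data["image"])
        keypoints = torch.zeros(1, 0, 2)      # per-image (x, y) coordinates
        scores = torch.zeros(1, 0)            # per-keypoint confidences
        descriptors = torch.zeros(1, 128, 0)  # per-keypoint descriptors
        return {
            "keypoints": keypoints,
            "scores": scores,
            "descriptors": descriptors,
        }
```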
## Contributions welcome!

External contributions are very welcome. Please follow the PEP 8 style guidelines, using a linter such as flake8 (reformat with `python -m black .`). This is a non-exhaustive list of features that might be valuable additions:
- add webcam support
- add line feature matching algorithms
- example to add a new feature extractor / matcher
- ransac to filter outliers (see the sketch after this list)
- add image rotation option before matching
- support exporting matches to COLMAP (#issue 6)
- add config file to set default parameters
- dynamically load models and reduce GPU overload
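For the RANSAC item above, geometric verification of putative matches can be sketched as follows. This is a hypothetical helper using OpenCV's MAGSAC++ estimator, where `mkpts0`/`mkpts1` are assumed to be Nx2 NumPy arrays of matched keypoint coordinates:

```python
# Sketch of RANSAC-based outlier filtering on putative matches (hypothetical helper).
import cv2
import numpy as np


def filter_matches_ransac(mkpts0, mkpts1, thresh=3.0, conf=0.999):
    """Keep only matches consistent with an estimated fundamental matrix."""
    if len(mkpts0) < 8:  # fundamental matrix estimation needs >= 8 correspondences
        return mkpts0, mkpts1
    # Robustly fit a fundamental matrix; `mask` flags the inlier matches.
    F, mask = cv2.findFundamentalMat(
        np.float32(mkpts0), np.float32(mkpts1),
        cv2.USAC_MAGSAC, thresh, conf, 10000,
    )
    if mask is None:
        return mkpts0, mkpts1
    inliers = mask.ravel().astype(bool)
    return mkpts0[inliers], mkpts1[inliers]
```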
Adding local features / matchers as submodules is very easy. For example, to add GlueStick:

```bash
git submodule add https://github.com/cvg/GlueStick.git third_party/GlueStick
```

If the remote submodule repositories are updated, don't forget to pull the submodules with `git submodule update --remote`. To update only one submodule, use `git submodule update --remote third_party/GlueStick`.
## Acknowledgement

This code is built on top of Hierarchical-Localization. We express our gratitude to the authors for their valuable source code.