---
license: mit
---

# Model Card for EpiDiff

[EpiDiff](https://huanngzh.github.io/EpiDiff/) is a generative model based on Zero123 that takes a single image of an object as a conditioning frame and generates 16 multiview images of that object.

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6375d136dee28348a9c63cbf/EE1-k0Ia8gKxQmCSNfAGF.gif)

## Model Details

### Model Description

- **Model type:** Generative image-to-multiview model
- **License:** MIT

### Model Sources

- **Repository:** https://github.com/huanngzh/EpiDiff
- **Paper:** https://arxiv.org/abs/2312.06725
- **Demo:** https://huanngzh.github.io/EpiDiff/

## Uses

For usage instructions, please refer to [our EpiDiff GitHub repository](https://github.com/huanngzh/EpiDiff).

## Training Details

### Training Data

We train on renders from the LVIS dataset, produced with [huanngzh/render-toolbox](https://github.com/huanngzh/render-toolbox).