deepkyu commited on
Commit
2125fe8
1 Parent(s): 6e283ed

Update README.md

Browse files

Add FAQ for our Hugging Face Demo

Files changed (1)
  1. README.md +50 -1
README.md CHANGED
@@ -2,4 +2,53 @@
  license: cc-by-nc-4.0
  ---
 
- This is a model card to link the paper [https://arxiv.org/abs/2205.06421](https://arxiv.org/abs/2205.06421) to [HF Space demo](https://huggingface.co/spaces/CVPR/ml-talking-face).
+ > This is a model card to link the paper [https://arxiv.org/abs/2205.06421](https://arxiv.org/abs/2205.06421) to [HF Space demo](https://huggingface.co/spaces/CVPR/ml-talking-face).
+
+ Hi, everyone. It's been a while since we presented our demo at CVPR, and I found that it has reached 300+ likes.
+ Thanks for liking our repo, even in the age of diffusion models and LLMs 🙂
+
+ In 2022, I received questions about our demo and paper through
+ [Hugging Face Spaces](https://huggingface.co/spaces/CVPR/ml-talking-face),
+ [YouTube](https://www.youtube.com/watch?v=toqdD1F_ZsU),
+ [GitHub](https://github.com/deepkyu/ml-talking-face),
+ and LinkedIn.
+ So, as a small gift, I decided to answer the most frequently asked questions.
+ I hope this helps with your journey in talking face generation and multilingual TTS research.
+
+ ### Is your demo made with Wav2Lip?
+
+ There were questions about whether we succeeded in training Wav2Lip on a dataset from a single person. Our demo did not start from the Wav2Lip code; we implemented our model from scratch with PyTorch Lightning.
+
+ Some have said that our training strategy is the same as Wav2Lip's, but this is not correct. We applied the positive/negative sampling suggested in Wav2Lip, but we never used the SyncNet loss, which is the main contribution of Wav2Lip, in our training. Our paper contains the details, so please check it once again.
+
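+ To make the sampling point concrete, here is a minimal sketch of Wav2Lip-style positive/negative pair sampling. This is illustrative only, not our actual training code; the 5-frame window, the mel-to-frame ratio, and the tensor shapes are assumptions for the example.
+
+ ```python
+ import random
+ import torch
+
+ def sample_sync_pair(frames, mels, window=5, mel_per_frame=4):
+     """Sample a face window plus an audio window that is either temporally
+     aligned (positive) or deliberately misaligned (negative).
+     frames: (T, C, H, W) video frames; mels: (T * mel_per_frame, n_mels).
+     Assumes the clip is long enough to pick a non-overlapping negative."""
+     t = random.randint(0, frames.shape[0] - window)
+     face = frames[t:t + window]
+
+     is_positive = random.random() < 0.5
+     if is_positive:
+         start = t
+     else:
+         # choose a start far enough away that the audio no longer matches the lips
+         candidates = [i for i in range(frames.shape[0] - window) if abs(i - t) >= window]
+         start = random.choice(candidates)
+     mel = mels[start * mel_per_frame:(start + window) * mel_per_frame]
+
+     return face, mel, torch.tensor(float(is_positive))
+ ```
+
+ Again, this only illustrates the sampling idea; the losses we actually used are described in the paper, and they do not include a SyncNet loss.
+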
+ Nonetheless, we have a lot of experience with the Wav2Lip code and paper. We also failed to train Wav2Lip on a dataset of a seen face. We assume the following might be the reasons for the failure.
+
+ 1. The SyncNet loss adversely affects training on a dataset of a seen face.
+ 2. The discriminator in Wav2Lip is too shallow compared to other models, which may only be suitable for training with unseen faces.
+
+ As a fan of the Wav2Lip research, I wish you good luck with your own research.
+
+ ### Is it impossible to support other languages in Multilingual TTS?
+
+ That's impossible unless we train our model again. As mentioned in our paper, our demo makes it possible to speak four languages using speech data recorded in Korean. To add a new language, we would have to collect utterance data for that language and use it in our baseline training. As stated in the paper, we collected a large amount of utterance data for the four supported languages.
+
+ There have been several "pull requests" for testing other languages (such as Arabic and Polish) in the Hugging Face Space. While this may help those who have trouble typing text in one of the four supported languages, it does not allow the model to synthesize speech or generate lip movements for the new language.
+
+ ### Can I run your model in my local setup?
+
+ Currently, this demo runs on an AWS EC2 instance operated by MINDsLab in Korea. The Hugging Face demo sends a RESTful request to that instance, and the server sends the generated video back to the Hugging Face Space.
+
+ MINDsLab is a Korean startup, and the model code is closely tied to its revenue. For this reason, the executives did not allow the code to be made public.
+
+ Even if you clone or download the demo code, it won't contain the model details you might expect. However, if you want to build a Gradio demo with video input/output, I expect it would be a good reference. Again, I hope you understand.
+
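+ If it helps, here is a minimal sketch of that pattern: a Gradio app whose callback forwards the request to a remote inference server and returns the resulting video. The endpoint URL, payload fields, and response format below are hypothetical placeholders, not the actual API used by this demo.
+
+ ```python
+ import requests
+ import gradio as gr
+
+ API_URL = "https://your-inference-server.example.com/generate"  # hypothetical endpoint
+
+ def generate(text, language):
+     # Forward the request to the remote server; the payload fields are illustrative.
+     resp = requests.post(API_URL, json={"text": text, "lang": language}, timeout=300)
+     resp.raise_for_status()
+     out_path = "result.mp4"
+     with open(out_path, "wb") as f:
+         f.write(resp.content)  # assume the server responds with raw video bytes
+     return out_path
+
+ demo = gr.Interface(
+     fn=generate,
+     inputs=[gr.Textbox(label="Text"), gr.Dropdown(["ko", "en", "ja", "zh"], label="Language")],
+     outputs=gr.Video(label="Generated video"),
+ )
+
+ if __name__ == "__main__":
+     demo.launch()
+ ```
+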
+ ### Would you like to do another project together?
+
+ As mentioned above, I was at MINDsLab, a company in South Korea. I have since left and recently joined another company to research lightweight models running on edge devices.
+
+ However, as a side project, I plan to start a new project related to talking face generation. It involves re-interpreting talking face generation as an Audio-conditioned VideoINR to create a lightweight model that can support training with seen faces. I am always open to discussion about talking face generation and other related tasks 🙂
+
+ Lastly, to briefly introduce myself, I worked at MINDsLab for three years as my alternative military service. I will receive my bachelor's degree next month (Feb. 2023).
+
+ I hope this article helps you in some way.