<div style="text-align: center;">
<h1>
Mediapipe 68-points Eyes-Closed and Mouth-Opened
</h1>
<div class="grid-container">
<img src="https://akjava.github.io/AIDiagramChatWithVoice-FaceCharacter/webp/128/00538245.webp" alt="Mediapipe Face Detection" class="image">
<p class="text">
This Space uses the <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache 2.0</a>-licensed <a href="https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker">Mediapipe FaceLandmarker</a>.<br>
One of the JSON formats comes from the MIT-licensed <a href="https://github.com/ageitgey/face_recognition">face_recognition</a> library.<br>
To avoid confusion: I am not using dlib's non-MIT-licensed 68-point model at all.<br>
This is decade-old technology. However, most impressive talking-head models, while often releasing their core code under MIT/Apache licenses, rely on datasets or NVIDIA libraries with more restrictive licenses.<br>
<a href="https://huggingface.co/blog/Akjava/result-guide-image-eyes-mouth">[Article]</a> Results: Converted Guide Images (eyes-closed and mouth-opened) with Flux.1 schnell img2img/inpaint
</p>
</div>
</div>
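<p>
A minimal sketch of the kind of conversion described above: run Mediapipe FaceLandmarker through the standard mediapipe tasks API and repack a few of its landmarks into the named groups that face_recognition's face_landmarks() returns. The model path, image name, and the index table are illustrative assumptions, not the mapping this Space actually ships.
</p>
<pre><code># Sketch: Mediapipe FaceLandmarker -> face_recognition-style landmark dict.
# Assumes `pip install mediapipe` and a locally downloaded face_landmarker.task model.
import json
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    num_faces=1,
)
detector = vision.FaceLandmarker.create_from_options(options)

image = mp.Image.create_from_file("face.jpg")
result = detector.detect(image)
landmarks = result.face_landmarks[0]  # 478 normalized landmarks for the first face

# Illustrative placeholder indices only -- the real 68-point mapping differs.
GROUPS = {
    "left_eye": [33, 160, 158, 133, 153, 144],
    "right_eye": [362, 385, 387, 263, 373, 380],
    "nose_tip": [1, 2, 98, 327, 4],
}

# Convert normalized coordinates to pixel coordinates, grouped by feature name.
w, h = image.width, image.height
converted = {
    name: [(round(landmarks[i].x * w), round(landmarks[i].y * h)) for i in idx]
    for name, idx in GROUPS.items()
}
print(json.dumps(converted, indent=2))
</code></pre>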