arxiv:2307.00804

SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling

Published on Jul 3, 2023 · Featured in Daily Papers on Jul 4, 2023

Abstract

Modeling 3D avatars benefits various application scenarios such as AR/VR, gaming, and filming. Character faces contribute significant diversity and vividness as a vital component of avatars. However, building 3D character face models usually requires a heavy workload with commercial tools, even for experienced artists. Various existing sketch-based tools fail to support amateurs in modeling diverse facial shapes and rich geometric details. In this paper, we present SketchMetaFace - a sketching system targeting amateur users to model high-fidelity 3D faces in minutes. We carefully design both the user interface and the underlying algorithm. First, curvature-aware strokes are adopted to better support the controllability of carving facial details. Second, considering the key problem of mapping a 2D sketch map to a 3D model, we develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM). It fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency. In addition, to further support usability, we present a coarse-to-fine 2D sketching interface design and a data-driven stroke suggestion tool. User studies demonstrate the superiority of our system over existing modeling tools in terms of ease of use and visual quality of results. Experimental analyses also show that IDGMM reaches a better trade-off between accuracy and efficiency. SketchMetaFace is available at https://zhongjinluo.github.io/SketchMetaFace/.

Community

Proposes SketchMetaFace: mesh creation for avatar faces from sketches by inexperienced designers, lifting 2D to 3D via the proposed Implicit and Depth Guided Mesh Modeling (IDGMM).

- Interface: coarse shape modeling supports profile (contour) editing, depth editing, and ear modeling; fine detail sketching carves geometry with curvature-aware strokes (valley and ridge).
- Preliminaries: Pix2Pix (source-to-target image translation); implicit learning of SDF and occupancy fields through MLPs; PIFu (pixel-aligned implicit function), which maps a pixel feature and a query point to an SDF or occupancy value (a minimal PIFu-style sketch follows this list).
- Coarse stage: head and two ears are modeled from the sketch with a part-separated PIFu (three MLPs: one for the head and one for each ear); ear positions can be manipulated before the parts are merged into one mesh using corefine-and-compute-union from CGAL (Computational Geometry Algorithms Library); profile contours are modeled as Laplacian deformations (a toy Laplacian-deformation sketch also appears below); the PIFu heads are trained with full supervision on smoothed, segmented meshes.
- IDGMM: the depth map rendered from the coarse-stage mesh and the user-modified sketch (elevation profiles) are fed to Pix2Pix-1 to obtain a normal map; PIFu-N generates an implicit field and the mesh is deformed using the new SDF; another Pix2Pix estimates a finer depth map from the predicted normals and the earlier depth; flow-based local depth alignment with a FlowNet yields a high-quality point cloud, which is locally aligned with the mesh to produce the final refined mesh; all networks are trained separately, fully supervised.
- Results: higher System Usability Scale (SUS) scores and a better NASA Task Load Index (NASA-TLX) than SimpModeling, ZBrush, and DeepSketch2Face; higher normal consistency and lower L2 chamfer distance than PIFuHD, DeepSDF, Pixel2Mesh, and 3D-R2N2 (a chamfer-distance sketch is included below). From CUHK.
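For readers unfamiliar with the pixel-aligned idea, here is a minimal PyTorch sketch of a PIFu-style predictor. It assumes a precomputed image feature map; the class name, feature dimension, and MLP sizes are illustrative, not the paper's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedImplicitFunction(nn.Module):
    """Toy PIFu-style predictor: sample the image feature at the 2D
    projection of a 3D query point, append the query depth, and regress
    an SDF value (or occupancy logit) with an MLP."""

    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # one scalar per query point
        )

    def forward(self, feat_map, points):
        # feat_map: (B, C, H, W) image features; points: (B, N, 3) in [-1, 1]
        xy = points[..., :2].unsqueeze(2)                       # (B, N, 1, 2)
        feat = F.grid_sample(feat_map, xy, align_corners=True)  # (B, C, N, 1)
        feat = feat.squeeze(-1).transpose(1, 2)                 # (B, N, C)
        z = points[..., 2:]                                     # (B, N, 1)
        return self.mlp(torch.cat([feat, z], dim=-1))           # (B, N, 1)

# Dummy query: 1000 random points against a random feature map.
net = PixelAlignedImplicitFunction()
sdf = net(torch.randn(1, 64, 128, 128), torch.rand(1, 1000, 3) * 2 - 1)
```

A part-separated PIFu, as used in the coarse stage, would run three such networks (one for the head and one per ear) and merge the resulting meshes.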
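Profile-contour editing rests on Laplacian deformation. Below is a toy polyline version with a uniform Laplacian and soft positional constraints; the function name, constraint weight, and least-squares solve are assumptions for illustration, not the paper's implementation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def laplacian_deform(verts, handles, targets, w=10.0):
    """Move handle vertices of a polyline (verts: (N, 3)) toward targets
    while preserving the uniform Laplacian (local shape) everywhere else."""
    n = len(verts)
    L = sp.lil_matrix((n, n))
    for i in range(1, n - 1):                  # chain Laplacian rows
        L[i, i], L[i, i - 1], L[i, i + 1] = 1.0, -0.5, -0.5
    delta = L @ verts                          # differential coords to keep
    C = sp.lil_matrix((len(handles), n))       # soft handle constraints
    for row, idx in enumerate(handles):
        C[row, idx] = w
    A = sp.vstack([L, C]).tocsr()
    b = np.vstack([delta, w * np.asarray(targets, dtype=float)])
    AtA = (A.T @ A).tocsc()                    # normal equations, per axis
    return np.column_stack([spsolve(AtA, A.T @ b[:, k]) for k in range(3)])

# Pin the endpoints of a straight polyline and pull its midpoint upward.
line = np.column_stack([np.linspace(0, 1, 21), np.zeros(21), np.zeros(21)])
out = laplacian_deform(line, [0, 10, 20], [line[0], [0.5, 0.3, 0.0], line[20]])
```

The same idea extends to meshes by replacing the chain Laplacian with a cotangent or uniform graph Laplacian over mesh edges.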
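On the metrics side, the L2 chamfer distance comparison can be made concrete with a minimal SciPy implementation over point sets sampled from the predicted and ground-truth meshes; the paper's exact sampling density and normalization are not specified here:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_l2(points_a, points_b):
    """Symmetric L2 chamfer distance between point sets (N, 3) and (M, 3):
    mean squared nearest-neighbour distance, summed over both directions."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest b-point per a-point
    d_ba, _ = cKDTree(points_a).query(points_b)  # nearest a-point per b-point
    return float(np.mean(d_ab ** 2) + np.mean(d_ba ** 2))

# Lower is better; identical clouds give exactly 0.
pred, gt = np.random.rand(2048, 3), np.random.rand(2048, 3)
print(chamfer_l2(pred, gt))
```

Normal consistency, the other reported metric, is typically the mean cosine similarity between surface normals at these nearest-neighbour correspondences.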

Links: website, arXiv, HuggingFace Papers, GitHub
