InTeX: Interactive Text-to-Texture Synthesis via Unified Depth-aware Inpainting

arXiv 2024

Jiaxiang Tang1, Ruijie Lu1, Hang Zhou2, Xiang Wen3,4, Xiaokang Chen1, Diwen Wan1, Gang Zeng1, Jingdong Wang2, Ziwei Liu5

1 Peking University   2 Baidu   3 Zhejiang University   4 Skywork AI   5 S-Lab, Nanyang Technological University  

Abstract

Text-to-texture synthesis has become a new frontier in 3D content creation thanks to recent advances in text-to-image models. Existing methods primarily combine pretrained depth-aware diffusion models with inpainting models, but they suffer from 3D inconsistency and limited controllability. To address these challenges, we introduce InTeX, a novel framework for interactive text-to-texture synthesis. (1) InTeX includes a user-friendly interface that enables interaction and control throughout the synthesis process, supporting region-specific repainting and precise texture editing. (2) We further develop a unified depth-aware inpainting model that integrates depth information with inpainting cues, effectively mitigating 3D inconsistency and improving generation speed. Extensive experiments show that our framework is both practical and effective for text-to-texture synthesis, paving the way for high-quality 3D content creation.
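For a concrete picture of what depth-aware inpainting does at a single viewpoint, the sketch below approximates it with off-the-shelf diffusers components: a depth ControlNet stacked on a Stable Diffusion inpainting checkpoint. This is only an approximation of our unified model, which fuses both conditions into a single network; the checkpoint IDs are public Hugging Face repositories assumed here for illustration, and the placeholder images stand in for renderer outputs.

# Approximate sketch: depth ControlNet + SD inpainting model, standing in
# for InTeX's single unified depth-aware inpainting network.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Placeholders standing in for the renderer's outputs at one viewpoint:
render = Image.new("RGB", (512, 512))         # current partially textured render
mask = Image.new("L", (512, 512), color=255)  # white = region to (re)paint
depth = Image.new("RGB", (512, 512))          # rendered depth map

result = pipe(
    prompt="a shiba dog",
    image=render,
    mask_image=mask,
    control_image=depth,
    num_inference_steps=20,
).images[0]
result.save("inpainted_view.png")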

Interactive Texture Generation

Recorded on a laptop with an NVIDIA RTX 4060 GPU.
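Interactive, region-specific repainting reduces to a render–inpaint–back-project loop: render the mesh from a viewpoint, inpaint the missing (or user-selected) region with the depth-aware model, then scatter the result into the UV texture atlas. Below is a minimal numpy sketch of the back-projection step only; all names are hypothetical, and the actual splatting scheme in InTeX may differ (e.g., bilinear splatting and seam handling).

import numpy as np

def backproject_view(texture, weight, view_rgb, view_uv, view_mask):
    """Scatter inpainted pixels from one rendered viewpoint into the UV atlas.

    texture:   (H, W, 3) running color accumulator for the texture atlas
    weight:    (H, W)    per-texel hit counts for later normalization
    view_rgb:  (h, w, 3) inpainted image at the current viewpoint
    view_uv:   (h, w, 2) per-pixel UV coordinates in [0, 1] from the rasterizer
    view_mask: (h, w)    bool, True where this view adds new content
    """
    H, W = texture.shape[:2]
    uv = view_uv[view_mask]    # (N, 2) UVs of contributing pixels
    rgb = view_rgb[view_mask]  # (N, 3) their colors
    # Nearest-texel scatter for simplicity.
    x = np.clip(np.round(uv[:, 0] * (W - 1)).astype(int), 0, W - 1)
    y = np.clip(np.round(uv[:, 1] * (H - 1)).astype(int), 0, H - 1)
    np.add.at(texture, (y, x), rgb)
    np.add.at(weight, (y, x), 1.0)
    return texture, weight

# Toy usage with random data standing in for renderer/inpainter outputs.
tex = np.zeros((1024, 1024, 3))
wgt = np.zeros((1024, 1024))
rgb = np.random.rand(512, 512, 3)
uv = np.random.rand(512, 512, 2)
mask = np.random.rand(512, 512) > 0.5
tex, wgt = backproject_view(tex, wgt, rgb, uv, mask)
final = tex / np.clip(wgt, 1.0, None)[..., None]  # normalize accumulated colors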

Generated Textured Meshes

A beautiful anime girl.

A shiba dog.

A leather sofa.

A red pet cat/fox/dragon with flame patterns.

A green pet cat/fox/dragon with grass patterns.

A blue pet cat/fox/dragon with ice patterns.

Citation

@article{tang2024intex,
  title={InTeX: Interactive Text-to-Texture Synthesis via Unified Depth-aware Inpainting},
  author={Tang, Jiaxiang and Lu, Ruijie and Chen, Xiaokang and Wen, Xiang and Zeng, Gang and Liu, Ziwei},
  journal={arXiv preprint arXiv:2403.11878},
  year={2024}
}