Title: Text2Blend: A Framework for Generating Blendshapes for Facial Animation from Textual Input
Authors: Asad Ali, Areej Fatemah Meghji
Journal: Sukkur IBA Journal of Computing and Mathematical Sciences
Publisher: Sukkur IBA University
Country: Pakistan
Year: 2025
Volume: 9
Issue: 01
Language: en
DOI: 10.30537/sjcms.v9i01.1668
Realistic and engaging digital characters in video games, animated films, and Virtual Reality (VR) / Augmented Reality (AR) experiences all depend on facial animation. Producing convincing facial expressions and speech synchronization has historically required time-consuming manual keyframing or costly motion capture. This research presents a text-to-viseme system, based on the Carnegie Mellon University (CMU) Pronouncing Dictionary, to automate facial animation. A rule-based algorithm, implemented in a Python notebook, produces facial animation sequences, which are then applied to 3D models through a Blender addon. Phonemes are mapped to visemes using ARKit's 52-blendshape system, and a proprietary dataset is created and then refined through manual adjustments. Because it can generate facial animations automatically from text input, the framework is a promising character animation solution: it enables animators, even those with limited expertise, to create animations quickly and efficiently, minimizes the need for extensive refinement, streamlines the animation process, and improves accessibility for users of varying skill levels.
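To make the text-to-viseme step concrete, the following is a minimal sketch of how such a pipeline could look, assuming the NLTK distribution of the CMU Pronouncing Dictionary. The PHONEME_TO_VISEME table here is an illustrative partial grouping of ARPAbet phonemes into viseme classes, not the paper's actual mapping or dataset.

```python
# Sketch: text -> ARPAbet phonemes (via the CMU Pronouncing Dictionary)
# -> viseme labels. Assumes NLTK with the cmudict corpus downloaded
# (nltk.download('cmudict')); the mapping table is illustrative only.
import re
from nltk.corpus import cmudict

PHONEME_TO_VISEME = {
    # bilabials -> closed-lip viseme
    "P": "PP", "B": "PP", "M": "PP",
    # labiodentals -> lip-to-teeth viseme
    "F": "FF", "V": "FF",
    # open vowels -> jaw-open viseme
    "AA": "aa", "AE": "aa", "AH": "aa",
    # rounded vowels -> lip-rounding visemes
    "OW": "oh", "AO": "oh", "UW": "ou",
}

_pron = cmudict.dict()

def text_to_visemes(text: str) -> list[str]:
    """Convert raw text into a flat viseme sequence."""
    visemes = []
    for word in re.findall(r"[a-z']+", text.lower()):
        pronunciations = _pron.get(word)
        if not pronunciations:
            continue  # out-of-vocabulary words are skipped in this sketch
        for phoneme in pronunciations[0]:  # take the first pronunciation
            base = phoneme.rstrip("012")   # strip ARPAbet stress markers
            visemes.append(PHONEME_TO_VISEME.get(base, "sil"))
    return visemes

print(text_to_visemes("facial animation"))  # viseme labels per phoneme
```

Unmapped phonemes fall back to a neutral "sil" label here; a full system would cover all ARPAbet phonemes and handle out-of-vocabulary words with a grapheme-to-phoneme model.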
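The Blender-side step could then key ARKit-style shape keys on a mesh from the viseme sequence. The sketch below assumes a mesh whose shape keys carry ARKit blendshape names (e.g. "jawOpen"); the VISEME_POSE table and the fixed per-viseme timing are illustrative assumptions, not the addon's actual logic.

```python
# Sketch: insert shape-key keyframes in Blender for a viseme sequence.
# Assumes the active object is a mesh with ARKit-named shape keys.
import bpy

VISEME_POSE = {
    "PP": {"mouthClose": 1.0},
    "FF": {"mouthFunnel": 0.6, "jawOpen": 0.2},
    "aa": {"jawOpen": 0.8},
    "oh": {"mouthPucker": 0.7, "jawOpen": 0.4},
    "sil": {},  # neutral pose: all keys return to 0
}

def key_visemes(obj, visemes, frames_per_viseme=4):
    """Keyframe each viseme's pose, resetting unused shape keys to 0."""
    key_blocks = obj.data.shape_keys.key_blocks
    for i, viseme in enumerate(visemes):
        frame = i * frames_per_viseme
        pose = VISEME_POSE.get(viseme, {})
        for kb in key_blocks:
            if kb.name == "Basis":
                continue
            kb.value = pose.get(kb.name, 0.0)
            kb.keyframe_insert(data_path="value", frame=frame)

key_visemes(bpy.context.active_object, ["PP", "aa", "sil"])
```

A production addon would interpolate between poses and align viseme timing with audio; the fixed frames-per-viseme spacing here is only a placeholder.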