Preface
To produce convincing 3D lip-sync animations, we need a set of blendshapes covering the phonetic sounds our character can make.
Unomi uses machine learning to detect the timing of phonetic sounds in recorded speech and generate detailed speech animations from them.
Once these blendshapes are configured for the various phonetic sounds, we can quickly generate as many speech animations as we like.
Configuring your model
1. Add a BlendShape Node
This can be an existing character in your Maya project or something totally new. The important thing is to set up a blendShape node that targets every mesh you want to move while your character speaks (such as the tongue, lips, teeth, and face).
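In Maya the node itself is created with `cmds.blendShape(target_meshes, base_mesh)`. Conceptually, a blendShape node offsets each base-mesh vertex by the weighted sum of every target's per-vertex delta. The sketch below shows that math in plain Python (not Maya code); the mesh data and phoneme names are made up for illustration:

```python
# Minimal sketch of what a blendShape node computes (plain Python, not Maya API).
# Each target stores per-vertex deltas from the base mesh; the node offsets
# every base vertex by the weighted sum of all target deltas.

def apply_blendshapes(base, targets, weights):
    """base: list of (x, y, z) vertices; targets: {name: per-vertex deltas};
    weights: {name: slider value in 0.0-1.0}."""
    result = []
    for i, (x, y, z) in enumerate(base):
        dx = dy = dz = 0.0
        for name, deltas in targets.items():
            w = weights.get(name, 0.0)  # unset sliders default to 0
            tx, ty, tz = deltas[i]
            dx += w * tx
            dy += w * ty
            dz += w * tz
        result.append((x + dx, y + dy, z + dz))
    return result

# Two hypothetical phoneme targets on a tiny two-vertex "mesh":
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"AA": [(0.0, -0.5, 0.0), (0.0, -0.5, 0.0)],
           "OO": [(0.2, 0.0, 0.0), (-0.2, 0.0, 0.0)]}

print(apply_blendshapes(base, targets, {"AA": 1.0}))
# → [(0.0, -0.5, 0.0), (1.0, -0.5, 0.0)]
```

Because the deltas sum linearly, targets can be mixed (e.g. `{"AA": 0.5, "OO": 0.5}`) to blend smoothly between mouth shapes, which is exactly what animated weight curves do during playback.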
2. Sculpt BlendShapes for Phonetic Sounds
Next, we add blendshapes and sculpt each one to match the shape our character's face makes for a given phonetic sound. The phonetic sounds we want to sculpt blendshapes for are as follows: