I've been developing a version of my facial animation rig for Blender:
As you can see, I use a very simple face for my characters. I find 'normal' CG face rigs and lip sync to be very stiff and wooden due to the 'uncanny valley' effect, which is best avoided if you don't want your film to look terrible. The big studios have managed to overcome this in recent years; Avatar was the first film I can recall where the facial mocap / animation system was pulled off convincingly... unfortunately, this means you have to throw tons of money at the lip sync alone to make it look *acceptable*. My approach is to establish a consistent stylized look and concentrate on the story, characters, and action to carry the film.
The eyes are another conscious choice, avoiding 'Disney' or 'Anime' style eyes. This is again a stylistic decision, intended to set my work apart from the 'mainstream' and other well-established genres.
I have a few more controls to add to the rig seen above: mouth smiling and frowning, a 'yelling' control... then it's a "simple"* task of creating a Python script to parse lip sync data from Papagayo into the little '+' mouth control seen here. Papagayo is a free lip sync tool that lets you align text with a wav file, then exports a list of frame numbers with corresponding "phonemes" (the yellow U, FV, E etc. seen above). I use three simple mouth shape keys which are mixed together using Drivers, controlled by the XY controller I've built. Mapping out the various positions where these phonemes appear on the controller lets me lip sync manually; the Python script (which I have yet to learn how to make :( ...) will let me do this automatically...
... because there's nothing more tedious than lipsync :(
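For anyone curious what that script might look like, here's a rough sketch of the parsing half. It assumes Papagayo's Moho-style .dat export (a "MohoSwitch1" header line followed by one "frame phoneme" pair per line) and maps each phoneme to an XY position on the controller. The positions in the table below are made-up placeholders, not my rig's actual layout — you'd fill those in by reading them off the controller:

```python
# Hypothetical XY positions for each phoneme on the '+' mouth controller.
# These coordinates are illustrative placeholders only.
PHONEME_POSITIONS = {
    "rest": (0.0, 0.0),
    "AI":   (0.0, 1.0),
    "E":    (0.5, 0.5),
    "O":    (-0.5, 0.8),
    "U":    (-0.8, 0.3),
    "FV":   (0.3, -0.4),
    "MBP":  (0.0, -1.0),
    "L":    (0.2, 0.6),
    "WQ":   (-0.6, 0.5),
    "etc":  (0.0, 0.2),
}

def parse_moho(lines):
    """Yield (frame, phoneme) pairs from a Moho-style Papagayo export.

    Skips the "MohoSwitch1" header and any malformed lines.
    """
    for line in lines:
        parts = line.split()
        if len(parts) == 2 and parts[0].isdigit():
            yield int(parts[0]), parts[1]

def keyframes(lines):
    """Turn the export into a list of (frame, (x, y)) controller positions."""
    return [(frame, PHONEME_POSITIONS.get(ph, (0.0, 0.0)))
            for frame, ph in parse_moho(lines)]

data = """MohoSwitch1
1 rest
5 E
9 O""".splitlines()
print(keyframes(data))  # [(1, (0.0, 0.0)), (5, (0.5, 0.5)), (9, (-0.5, 0.8))]
```

Inside Blender, each resulting pair would then become a keyframe on the controller object — something along the lines of setting the controller's location to the XY position and calling `keyframe_insert("location", frame=frame)` on it — but that half is rig-specific, so I've left it out of the sketch.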