Today I read a paper titled “Avatar-independent scripting for real-time gesture animation”.
The abstract is:
When animation of a humanoid figure is to be generated at run-time, instead of by replaying pre-composed motion clips, some method is required for specifying the avatar’s movements in a form from which the required motion data can be automatically generated.
This form must be more abstract than raw motion data: ideally, it should be independent of the particular avatar’s proportions, and both writable by hand and suitable for automatic generation from higher-level descriptions of the required actions.
We describe here the development and implementation of such a scripting language, SiGML (Signing Gesture Markup Language), for the particular area of the sign languages of the deaf, based on the existing HamNoSys notation for sign languages.
We conclude by suggesting how this work may be extended to more general animation for interactive virtual reality applications.
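
To make the core idea concrete for myself, here is a minimal Python sketch of what avatar-independent scripting could look like: the script names body locations symbolically, and a per-avatar resolution step turns those names into concrete 3D targets at run time. Everything here (`Avatar`, `resolve`, `to_keyframes`, the landmark names) is my own illustration, not the paper’s actual SiGML notation or implementation.

```python
from dataclasses import dataclass

# Hypothetical illustration (not the paper's SiGML): a gesture script
# refers to body locations symbolically, and each avatar resolves those
# symbols against its own proportions at run time.

@dataclass
class Avatar:
    # Landmark positions in this avatar's own coordinate frame (metres).
    landmarks: dict[str, tuple[float, float, float]]

    def resolve(self, location: str) -> tuple[float, float, float]:
        """Map a symbolic location to a concrete 3D target for this body."""
        return self.landmarks[location]

# A "script" is just a sequence of (effector, symbolic location, duration);
# note that it never mentions any avatar's dimensions.
GESTURE = [
    ("right_hand", "chin", 0.4),         # hand moves up to the chin...
    ("right_hand", "chest_centre", 0.6), # ...then down to the chest.
]

def to_keyframes(avatar: Avatar, script):
    """Compile the avatar-independent script into avatar-specific keyframes."""
    t = 0.0
    frames = []
    for effector, location, duration in script:
        t += duration
        frames.append((t, effector, avatar.resolve(location)))
    return frames

# Two avatars with different proportions yield different motion data
# from the same script.
small = Avatar({"chin": (0.0, 1.45, 0.10), "chest_centre": (0.0, 1.25, 0.08)})
tall  = Avatar({"chin": (0.0, 1.70, 0.12), "chest_centre": (0.0, 1.45, 0.10)})

print(to_keyframes(small, GESTURE))
print(to_keyframes(tall, GESTURE))
```

In a real system an inverse-kinematics solver would then convert each target into joint rotations, but the point of the abstraction survives even in this toy version: the same script drives avatars of any proportions.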