Learned Motion Matching
01/08/2020
This year at SIGGRAPH I will be presenting Learned Motion Matching - a drop-in alternative to Motion Matching that scales to very large data sets. The basic idea is to train three specialized neural networks to replace three specific components of the Motion Matching algorithm. Once trained, these networks can be used to build a controller which almost perfectly emulates the original Motion Matching system it was trained on, but which does not need to keep any animation data in memory. Since Learned Motion Matching stores only neural network weights, its memory usage is typically much smaller, and it stays small even as more and more data is added to the system.
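To make the structure concrete, here is a minimal sketch of the runtime loop with the three learned components standing in for the three Motion Matching stages: a projector replacing the database search, a stepper replacing the frame advance, and a decompressor replacing the pose lookup. The network architecture, dimensions, and residual update below are illustrative assumptions (random-weight stand-ins rather than trained networks), not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random-weight stand-in for a trained network (illustrative only).
    Ws = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.maximum(x @ W, 0.0)  # ReLU hidden layers
        return x @ Ws[-1]
    return f

FEAT, LATENT, POSE = 24, 32, 100  # illustrative dimensions

# The three networks replacing the three Motion Matching components:
projector    = mlp([FEAT, 128, FEAT + LATENT])           # replaces database search
stepper      = mlp([FEAT + LATENT, 128, FEAT + LATENT])  # replaces frame advance
decompressor = mlp([FEAT + LATENT, 128, POSE])           # replaces pose lookup

def run(num_frames, search_every=10):
    # State is a feature vector plus extra latent variables, instead of
    # a frame index into a stored animation database.
    x = np.concatenate([rng.normal(size=FEAT), np.zeros(LATENT)])
    poses = []
    for frame in range(num_frames):
        if frame % search_every == 0:
            # "Search": map the query features directly to the matched
            # features and latent, instead of a nearest-neighbor lookup.
            x = projector(x[:FEAT])
        else:
            # "Step": advance the state one frame forward, instead of
            # incrementing a frame index (residual update is an assumption).
            x = x + stepper(x)
        # "Decompress": recover the full character pose from the state,
        # instead of reading it from animation data held in memory.
        poses.append(decompressor(x))
    return np.stack(poses)

poses = run(30)
```

Note that nothing in this loop touches an animation database - the memory footprint is just the weights of the three networks, which is what gives the method its scalability.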
Webpage • Paper • Video • Supplementary Video • Article • Code
Abstract: In this paper we present a learned alternative to the Motion Matching algorithm which retains the positive properties of Motion Matching but additionally achieves the scalability of neural-network-based generative models. Although neural-network-based generative models for character animation are capable of learning expressive, compact controllers from vast amounts of animation data, methods such as Motion Matching still remain a popular choice in the games industry due to their flexibility, predictability, low preprocessing time, and visual quality - all properties which can sometimes be difficult to achieve with neural-network-based methods. Yet, unlike neural networks, the memory usage of such methods generally scales linearly with the amount of data used, resulting in a constant trade-off between the diversity of animation which can be produced and real world production budgets. In this work we combine the benefits of both approaches and, by breaking down the Motion Matching algorithm into its individual steps, show how learned, scalable alternatives can be used to replace each operation in turn. Our final model has no need to store animation data or additional matching meta-data in memory, meaning it scales as well as existing generative models. At the same time, we preserve the behavior of Motion Matching, retaining the quality, control, and quick iteration time which are so important in the industry.