Our Autonomous Control System allows our characters to respond to changes in their environment and interact with guests without any human intervention. Roughly eight applications work together in our suite of autonomous software, including a face-tracking application, a voice-recognition system, an RFID detection system, and an inter-process messaging system through which all the applications communicate.
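To illustrate how an inter-process messaging system like this could tie the applications together, here is a minimal publish/subscribe sketch. The class name `MessageBus`, the topic string `"face.detected"`, and the in-process callback design are all assumptions for illustration; a real system would likely communicate over sockets or a messaging middleware rather than in one process.

```python
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """Minimal in-process publish/subscribe bus (illustrative sketch only)."""

    def __init__(self) -> None:
        # Map each topic string to the handlers subscribed to it.
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Deliver the payload to every handler subscribed to this topic.
        for handler in self._subscribers[topic]:
            handler(payload)

# Hypothetical wiring: a face tracker publishes detections, another
# application subscribes to react to them.
bus = MessageBus()
events = []
bus.subscribe("face.detected", events.append)
bus.publish("face.detected", {"x": 0.4, "y": 0.1})
```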
The most important pieces of software in our Autonomous Control System are the Character State Control System (CSCS) and the Real-time Show Control System (RSCS).
The Character State Control System is a run-time decision mechanism for the character. At a high level, CSCS uses data from environmental sensors as well as the character’s behavioral model to determine when the character should change to a new behavioral state, and what the new behavior should be.
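The decision step described above can be sketched as a rule-based state transition: sensor data plus the current state yield the next behavioral state. The state names, sensor fields, and rules below are illustrative assumptions, not the actual behavioral model.

```python
# Illustrative sketch of a run-time behavioral-state decision.
# State names ("idle", "greet", "listen") and sensor fields
# ("guest_nearby", "guest_speaking") are assumptions for this example.
def next_state(current_state: str, sensors: dict) -> str:
    """Pick the character's next behavioral state from sensor data."""
    if sensors.get("guest_nearby"):
        if sensors.get("guest_speaking"):
            return "listen"    # a guest is talking: attend to them
        return "greet"         # a guest approached: greet them
    if current_state in ("greet", "listen"):
        return "idle"          # guest left: wind down to idle
    return current_state       # no trigger: keep the current behavior
```

In practice the behavioral model would be far richer than a handful of if-statements, but the shape is the same: a pure function from (state, sensor data) to the next state keeps the decision logic easy to test in isolation.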
RSCS is the actual software interface to the hardware: it receives messages from CSCS specifying which animation files to load and which channels to play them on, then sends that data to the hardware. RSCS can linearly blend animations together, as well as seamlessly transition from one animation to another.
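Linear blending of the kind RSCS performs can be sketched as a per-channel weighted average of two animation frames. The frame representation (a list of channel values) and the `weight` parameter are assumptions for illustration, not RSCS's actual data format.

```python
def blend_frames(frame_a: list[float], frame_b: list[float],
                 weight: float) -> list[float]:
    """Linearly blend two animation frames channel by channel.

    weight=0.0 returns frame_a, weight=1.0 returns frame_b,
    and values in between mix the two proportionally.
    """
    if len(frame_a) != len(frame_b):
        raise ValueError("frames must have the same number of channels")
    return [(1.0 - weight) * a + weight * b
            for a, b in zip(frame_a, frame_b)]
```

Ramping `weight` from 0.0 to 1.0 over successive frames is one way a seamless transition from one animation to another could be achieved with this kind of blend.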