evoMotion
Exploring the use of interactive evolutionary computation for motion design -- including character animation and crowd choreography
Friday, August 27, 2010
Pursue Applet -- Rough Draft
I've got a rough pursue behavior going. No evolution yet; I haven't made time to wrangle the variable flow into something controllable by genes, but here is at least a start...
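For anyone curious, the classic pursue idea (predict where the quarry will be a few frames ahead, then steer toward that point) can be sketched in plain Java. The vector class, field names, and constants below are my own assumptions for illustration, not the applet's actual code:

```java
// Minimal Reynolds-style pursuit sketch (no Processing dependency).
final class Vec2 {
    double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }
    Vec2 add(Vec2 v) { return new Vec2(x + v.x, y + v.y); }
    Vec2 sub(Vec2 v) { return new Vec2(x - v.x, y - v.y); }
    Vec2 scale(double s) { return new Vec2(x * s, y * s); }
    double mag() { return Math.sqrt(x * x + y * y); }
    // Clamp the vector to a maximum length (used for maxSpeed/maxForce).
    Vec2 limit(double max) {
        double m = mag();
        return m > max ? scale(max / m) : this;
    }
}

final class Pursuer {
    Vec2 pos, vel;
    double maxSpeed = 4.0, maxForce = 0.2; // invented tuning constants

    Pursuer(Vec2 pos, Vec2 vel) { this.pos = pos; this.vel = vel; }

    // Steer toward where the quarry will be, not where it is now.
    Vec2 pursue(Vec2 quarryPos, Vec2 quarryVel) {
        // Rough look-ahead: farther quarry means predicting further into the future.
        double lookAhead = quarryPos.sub(pos).mag() / maxSpeed;
        Vec2 futurePos = quarryPos.add(quarryVel.scale(lookAhead));
        Vec2 desired = futurePos.sub(pos).limit(maxSpeed);
        return desired.sub(vel).limit(maxForce); // steering force
    }
}
```

The only difference from plain seek is the predicted future position; everything else is the usual desired-velocity-minus-current-velocity steering.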
Monday, August 23, 2010
Wander behavior with working timeline and trails
The wander applet now has a working (scrubbable) timeline and two types of trails: full history and partial history. The partial-history mode is slower because it redraws every previous position in its window at each frame. The full-history mode never clears the buffer, so the screen can get messy. Both are probably useful for different types of analysis. The timeline can also have start and end frames (no UI for this yet); they are currently hard-coded to 50 and 150. You can also change the agent display type to cone or sphere.
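The partial-history trail described above amounts to a sliding window over recent positions. A minimal sketch in plain Java, assuming positions arrive once per frame (the class name and window size are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window trail: keeps only the last `window` positions, so drawing
// it means redrawing the whole window every frame -- which is exactly why
// a partial-history trail is slower than appending to a never-cleared buffer.
final class Trail {
    private final Deque<double[]> recent = new ArrayDeque<>();
    private final int window; // max positions kept

    Trail(int window) { this.window = window; }

    void record(double x, double y) {
        recent.addLast(new double[] { x, y });
        if (recent.size() > window) recent.removeFirst(); // drop the oldest
    }

    int size() { return recent.size(); }
}
```

In a Processing sketch you would call `record()` once per `draw()` and then loop over the deque to redraw the trail.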
Wednesday, August 18, 2010
Wander behavior with UI enhancements
Try it out here.
(Note: the timeline isn't working yet, it's just a UI stub for now)
P.S. What is that weird flashing on the right side? It looks like some sort of refresh problem. This only shows up in the applet -- not when run out of Processing. Any ideas?
Wednesday, August 11, 2010
Tuesday, August 10, 2010
Thoughts on interfaces for directing
Aside from the interactive evolutionary aspects of our system, we need to think about interfaces for directing things like target positions, flow fields, and post-simulation fixes.
Target positions can be any number of things:
- randomly distributed points within a user-defined shape (keyframed interpolated shape controller?)
- particular target points in space with a radius of influence (keyframed sphere controller?)
- a particular individual in the crowd or a keyframed individual not governed by behavior rules (spline controller?)
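A minimal sketch of the second target type, a point in space with a radius of influence. The class and the linear falloff (full attraction at the center, zero at the radius) are my own assumptions for illustration:

```java
// Spherical target with a radius of influence. Agents inside the radius
// are attracted with a strength that fades linearly toward the boundary.
final class SphereTarget {
    final double cx, cy, cz, radius;

    SphereTarget(double cx, double cy, double cz, double radius) {
        this.cx = cx; this.cy = cy; this.cz = cz; this.radius = radius;
    }

    // Attraction strength: 1 at the center, fading linearly to 0 at the radius.
    double influence(double x, double y, double z) {
        double dx = x - cx, dy = y - cy, dz = z - cz;
        double d = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return d >= radius ? 0.0 : 1.0 - d / radius;
    }
}
```

Keyframing the center and radius over time would give the "keyframed sphere controller" idea above.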
Post-simulation fixes are a must for any usable crowd system. Perhaps this is better done in a typical animation environment (e.g., Maya). Maybe we can just provide space-time data output that can be imported easily into Maya or another package for direct animation fixes.
Perhaps if we were actually creating a crowd system to compete with Massive, a layered approach would be nice, much like layers in Photoshop: particle motion would form the bottom layer, fuzzy-logic and motion-clip blending would take place in the middle layer, and post-sim fixes would sit at the top layer.
Pixar Crowds Chat #2
These are my notes from a chat I had with Paul Kanyuk, Arik Ehle, and Dave Ryu from Pixar about practical issues in crowd simulation for animation production...
There are two primary software packages used at Pixar for crowd sim:
- Wilma - an FSM-based environment used for switching between (non-locomoting) animation cycles, with .xls output
- Massive - a fuzzy-logic-based environment used for blending between (locomoting) animation cycles
- lanes - these are splines with color values that control flow of "traffic"
- vision - each agent has some sort of computer vision
- for animation retargeting, they usually just end up making cycles at various speeds and then blending between them, to avoid problems with IK and rate changes (i.e., the physicality of the animation being ruined)
- original anim cycle is king! The animators put a lot of work into getting a cycle to look expressive/convincing - don't ruin it during crowd sim
- Meet very specific end poses given the start poses
- How do we get characters to look alive at all LOD? (can we have authoring access to anim cycles w/in the crowd modeling environment?)
- Close-up crowd shots need better choreographic control (especially w.r.t. the camera shot)
- Character jitter from indecision
- Very particular actions at precise locations (e.g., high-speed, precise turns without collisions at the edge of a cliff)
- Large, swarm-style crowds are easy in Massive. The close-up shots with more constraints are hard.
- Getting agents to start transition animations at the right time (based on desired final position + animation)
- Dressing eye-positions to the camera
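Wilma's FSM approach, mentioned at the top of these notes, can be sketched as a tiny state machine whose states are animation-cycle names and whose transitions fire on named events. The cycle and event names here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy finite-state machine in the spirit of Wilma: each state is an
// animation cycle, and events switch between cycles.
final class CycleFSM {
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String state;

    CycleFSM(String initial) { state = initial; }

    void addTransition(String from, String event, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
    }

    // Advance on an event; stay in the current state if no transition matches.
    String fire(String event) {
        Map<String, String> out = transitions.get(state);
        if (out != null && out.containsKey(event)) state = out.get(event);
        return state;
    }
}
```

Unlike a purely functional fuzzy-logic model, the FSM remembers which cycle it is in between frames, which is exactly the "state or stored memory" notion discussed in the next chat.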
Pixar Crowds Chat #1
These are my notes from a chat I had with JD Northrup from Pixar about usability issues in Massive...
There are a few tools that Massive provides for directing crowd motion:
- paint input values for fuzzy logic computations (static over time)
- flow fields
- sound beacons (attraction to/repulsion from a location in 3 space)
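Of the three tools above, a flow field is perhaps the easiest to sketch: a grid of direction vectors that agents sample by position. This is a toy illustration, not Massive's implementation; the cell size and clamped lookup are my own assumptions:

```java
// Toy 2-D flow field: a grid of direction vectors sampled by world position.
final class FlowField {
    private final double[][][] dirs; // dirs[row][col] = {dx, dy}
    private final double cellSize;

    FlowField(int rows, int cols, double cellSize) {
        this.dirs = new double[rows][cols][2];
        this.cellSize = cellSize;
    }

    void set(int row, int col, double dx, double dy) {
        dirs[row][col][0] = dx;
        dirs[row][col][1] = dy;
    }

    // Look up the flow direction at a world position, clamping to the grid
    // so agents just off the edge still get a valid direction.
    double[] sample(double x, double y) {
        int col = clamp((int) (x / cellSize), dirs[0].length - 1);
        int row = clamp((int) (y / cellSize), dirs.length - 1);
        return dirs[row][col];
    }

    private static int clamp(int v, int max) {
        return Math.max(0, Math.min(v, max));
    }
}
```

An agent would treat the sampled vector as a desired heading and blend it with its other steering forces.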
Massive uses the notion of behavior groups (don't quite remember the specific definition of this use of 'group')
Massive does not allow keyframe/direct authoring of motion for hero characters (especially important when they are carrying a sound beacon).
There is no notion of state or stored memory for agents in Massive. The fuzzy logic model is purely functional. An FSM would be nice to have.
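"Purely functional" here means each frame's outputs depend only on that frame's inputs, with no memory carried between frames. A toy sketch of that style of fuzzy rule evaluation, with membership shapes, rules, and numbers all invented for illustration:

```java
// Stateless fuzzy-rule sketch: crisp input -> membership degrees -> blended
// output, recomputed from scratch every frame (no stored state).
final class FuzzySpeed {
    // 0 below lo, rising linearly to 1 at hi and beyond.
    static double rampUp(double x, double lo, double hi) {
        if (x <= lo) return 0.0;
        if (x >= hi) return 1.0;
        return (x - lo) / (hi - lo);
    }

    // Rule base: "near obstacle -> slow", "far from obstacle -> fast",
    // combined by a membership-weighted average (a simple defuzzification).
    static double desiredSpeed(double obstacleDist) {
        double near = 1.0 - rampUp(obstacleDist, 0, 10); // membership of "near"
        double far  = rampUp(obstacleDist, 5, 15);       // membership of "far"
        double slow = 1.0, fast = 5.0;                   // rule outputs
        double w = near + far;
        return w == 0 ? fast : (near * slow + far * fast) / w;
    }
}
```

Because nothing persists between calls, an agent driven this way can oscillate when its inputs sit near a rule boundary, which is one source of the "character jitter from indecision" noted in the previous chat.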
JD claimed that Massive is pretty much limited to characters on a ground plane.
Subframe calculations do not exist or cannot be accessed.