Should an animation channel be able to target multiple nodes at once? #1520
Comments
> To clarify — each of …

On a similar note, I think it's a bit limiting that one morph target cannot be animated separately from the other targets on the same mesh.
> Yes, exactly, the sampler output will have …

IMHO it depends on what one wants to do. For carefully crafted game models, where you want fine-grained control over animation, glTF could indeed allow animating a subset of the morph target weights (for example, allowing lip sync while showing emotions on a face; we achieve this using the typical additive animation, where weighted differences to the setup pose are added together, so we don't need this fine-grained control). However, if one wants to use glTF to represent a complex animated CGI rig full of constraints (dynamic switching between forward and inverse kinematics, multiple parent constraints, orientation constraints, expressions, etc.), this cannot be represented in glTF (after all, it's not aiming to be USD), so sampling every frame is needed, and the fine-grained animation channels introduce the large overhead I described. I understand it is impossible to make a single file format that pleases everyone; JPEG doesn't have an alpha channel either :-)
> The proposal looks very nice! But I think this issue can be closed. You see, as soon as one starts to fit curves to the sampled data, one loses the …

Off topic, but still related: what I would like to see, however, is an animation fitting/compression technique, like the one used in Thief, e.g. based on wavelets. That would be another extension (and certainly another issue 😉)
Where are you seeing that? I don't see anything in the animation specification that requires samplers to be unique. In fact, judging from the document structure, it seems that samplers were meant to be reusable. What prevents you from de-duplicating identical samplers and using them in multiple animation channels?

```json
{
  "sampler": 98,
  "target": {
    "node": 107,
    "path": "rotation"
  }
},
{
  "sampler": 98,
  "target": {
    "node": 109,
    "path": "rotation"
  }
},
{
  "sampler": 98,
  "target": {
    "node": 111,
    "path": "rotation"
  }
}
```
With our current characters that have a lot of nodes (Maya joints, groups, locators), we often end up with glTF JSON files of about 200 thousand lines. This is excluding the nodes that never animate relative to the base scene; otherwise we get over a whopping 700 thousand lines. Although we can optimize this a bit by deleting some redundant nodes, the biggest reason our files are so huge is that, in the current spec, each animated node requires separate output sampler accessors.
In our case, animation is almost always sampled frame-by-frame, and all output samplers are arrays of the same length, so they could be grouped into a 2D array, requiring only a single accessor per translation, rotation, and scale; but then an animation channel would have to be able to target multiple nodes...
So for example, instead of having:
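a channel, a sampler, and an output accessor per animated node, sketched here with made-up node and accessor indices:

```json
"channels": [
  { "sampler": 0, "target": { "node": 5, "path": "translation" } },
  { "sampler": 1, "target": { "node": 6, "path": "translation" } },
  { "sampler": 2, "target": { "node": 7, "path": "translation" } }
],
"samplers": [
  { "input": 0, "interpolation": "LINEAR", "output": 1 },
  { "input": 0, "interpolation": "LINEAR", "output": 2 },
  { "input": 0, "interpolation": "LINEAR", "output": 3 }
]
```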
one would have
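a single channel whose target lists all the nodes, sharing one sampler and one 2D output accessor (hypothetical syntax; nothing in the current spec defines a "nodes" array):

```json
"channels": [
  { "sampler": 0, "target": { "nodes": [5, 6, 7], "path": "translation" } }
],
"samplers": [
  { "input": 0, "interpolation": "LINEAR", "output": 1 }
]
```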
In our case, this would result in a very long "nodes" line (say K nodes), but the overall JSON would be a lot smaller, since only 3 accessors are needed for all translation, rotation, and scaling, instead of 3*K. Most likely the animation processing itself would be more efficient too, since far fewer accessors need to be handled.
I realize that allowing this scenario would require the keys to be N-dimensional for translation, rotation, and scaling, but since weight keys already work this way, it doesn't really feel like a big deal, and besides an extra for loop, it wouldn't complicate existing code that much...
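Morph target weights already store their keys like this: the sampler's output accessor holds keyframes × targets scalar values back to back. A multi-node channel could do the same with one VEC3 per node per keyframe; for example, K = 3 nodes sampled over 100 frames would need a single output accessor along these lines (hypothetical sketch; componentType 5126 is FLOAT):

```json
{
  "bufferView": 2,
  "componentType": 5126,
  "count": 300,
  "type": "VEC3"
}
```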