Workaround to get shape key normals faster #2125
Open
Getting shape key normals with `ShapeKey.normals_split_get()` is particularly slow because Blender first has to iterate the data into a tuple and then NumPy has to iterate that tuple into an array.

To work around these iterations, the shape key coordinates can be copied into a mesh's 'position' attribute, and the normals can then be retrieved from the mesh's `corner_normals` property after forcing the mesh's normals to be recalculated. This makes use of the new `ShapeKey.points` property added in Blender 4.1.0, which has faster access through `foreach_get` than `ShapeKey.data`.

For large meshes and/or meshes with many shape keys, this can reduce export times by a few seconds.
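For readers unfamiliar with the technique, a minimal sketch of the idea follows. It assumes Blender 4.1+ and only runs inside Blender; the variable names (`sk`, `tmp_mesh`) and the shape key name are illustrative, not the PR's actual code.

```python
import numpy as np
import bpy

mesh = bpy.context.object.data
tmp_mesh = mesh.copy()  # work on a copy so the exported mesh is untouched

sk = tmp_mesh.shape_keys.key_blocks['MyShape']  # hypothetical shape key name
n_verts = len(tmp_mesh.vertices)

# ShapeKey.points (new in Blender 4.1) supports fast buffer access via foreach_get,
# avoiding the tuple iteration that normals_split_get() requires.
coords = np.empty(n_verts * 3, dtype=np.float32)
sk.points.foreach_get('co', coords)

# Overwrite the mesh's positions with the shape key coordinates...
tmp_mesh.attributes['position'].data.foreach_set('vector', coords)
tmp_mesh.update()  # force the normals to be recalculated

# ...and read back the recalculated per-corner normals.
normals = np.empty(len(tmp_mesh.corner_normals) * 3, dtype=np.float32)
tmp_mesh.corner_normals.foreach_get('vector', normals)

bpy.data.meshes.remove(tmp_mesh)  # clean up the temporary copy
```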
This is very much a workaround, so I understand if this PR is rejected on that basis alone.
Making a copy of the mesh that is being exported isn't strictly necessary, since the original 'position' attribute values could be saved and then restored once finished, but I think it's better to create a copy so that the meshes being exported are left untouched.
I'm not sure if there are some export cases where `self.blender_object.data != self.blender_mesh`, so I added an alternate control flow for when that is the case. If `self.blender_object.data` is always the same as `self.blender_mesh`, then the alternate control flow can be removed.

As well as `tmp_mesh.update()`, `tmp_mesh.vertices[0].co.x = tmp_mesh.vertices[0].co.x` also causes normals to be recalculated, but that doesn't seem very reliable to me. I couldn't find other ways to force the normals to be recalculated; things that I thought would work, like `tmp_mesh.vertices.update()` or `tmp_mesh.attributes['position'].data.update()`, didn't.

Setting `self.normals` to the normals of `key_blocks[0].relative_key` is a bit odd to me, because in rare cases the relative key of the first shape key is not the first shape key itself (e.g. if another shape key is moved to the top), but I kept the existing behaviour in this PR.

On a larger humanoid rigged model I have with a few hundred shape keys per mesh, this patch brings the export duration down from 40s to 29s (before this patch, it exports in 6s if I disable exporting shape key normals).
If the changes in this PR cannot be merged even with modifications, I do have an alternative that simply creates NumPy arrays faster from the tuples returned by `ShapeKey.normals_split_get()` by using `struct.pack_into()`. The export duration with the same humanoid rigged model, but using only this alternative, is 35s, which is still a decent improvement: