I've pushed an update to my fork of pyblish_magenta to add collecting animations and extracting an Alembic from them; this is the relevant commit.
Consider it a quick draft (it's currently hardcoded to always take frame 1 to 10; just for testing).
I wanted to have a quick discussion about two opposing approaches to collecting what needs to be extracted from an animation scene; or any scene, really!
**Collect whatever is visible!**
Currently the collector for animation (from the commit) retrieves all visible geometry plus any non-default cameras and considers that worth extracting. There's no need for an artist to set up publish sets: the animator's scene (visually) represents the outgoing data. Somewhat WYSIWYG.
- The published content matches what we see in the animated scene.
- No need to define what gets published since it's basically filtering to relevant nodes from the normal scene contents.
- That said, publishing exactly what we see in the scene has its clear downsides:
- The animator might be working with a proxy version of the rig, but should be publishing the high-res version?
- The code to define what is relevant for extraction gets complex quickly (for example, looking up whether a node is visible at least once over the frame range).
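To make that last point concrete, here is a minimal sketch of the "collect whatever is visible" idea. It uses plain-Python stand-ins instead of `maya.cmds`, so the `Node` class and the sampled visibility data are hypothetical; only the filtering logic is the point.

```python
# Hypothetical stand-in for a Maya scene; in practice this data would come
# from maya.cmds queries instead.
DEFAULT_CAMERAS = {"persp", "top", "front", "side"}


class Node(object):
    def __init__(self, name, node_type, visibility_per_frame):
        self.name = name
        self.node_type = node_type  # e.g. "mesh" or "camera"
        # frame -> bool, as if sampled from the node's visibility channel
        self.visibility_per_frame = visibility_per_frame

    def visible_at_least_once(self, start, end):
        # The costly part: a node animated to appear mid-shot still counts,
        # so a single "is it visible now?" check is not enough.
        return any(self.visibility_per_frame.get(frame, False)
                   for frame in range(start, end + 1))


def collect(scene, start, end):
    """Return node names worth extracting:
    visible geometry plus non-default cameras."""
    names = []
    for node in scene:
        if node.node_type == "camera" and node.name not in DEFAULT_CAMERAS:
            names.append(node.name)
        elif node.node_type == "mesh" and node.visible_at_least_once(start, end):
            names.append(node.name)
    return names
```

Even in this toy form, `visible_at_least_once` has to sample the whole frame range per node, and a real implementation would also need to walk parent hierarchies and display layers, which is where the complexity piles up.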
**Or take control of what gets extracted!**
Another approach is to have the artist define what is relevant for extraction, for example by creating a publish_animation objectSet that contains all relevant nodes. The downside is that this becomes prone to human error: what happens if the animator adds the 'wrong' objects? (And will we be able to validate that?)
- There could be multiple objectSets in a scene to separate between certain extractions, possibly with different settings each.
- Additional settings for the publish can be set on the node in the scene. For example, it could have attributes defining the start and end frame of the animation, so extraction would use that time range even if the current timeline differs.
- Having the artist more in control (at a more granular level) makes it easier to perform quick hacks for complex projects that might require a workaround (e.g. separating a super-high-res mesh into its own Alembic extraction, giving an unsupported mesh format its own extraction, or keeping a procedural system as a procedural network for optimization).
- What happens if an objectSet contains both the proxy mesh and the hi-res mesh of a rig? How do we identify whether what the user added is the correct data to export?
- Related to Alembic: I assume we want to write visibilities so the animator remains in control of animating them. Though if the high-res mesh of a rig was hidden (e.g. when working with the proxy rig) it would get published as hidden geometry.
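The objectSet-driven variant could be sketched roughly like this, again with plain-Python stand-ins for `maya.cmds`; the `publish_` naming convention and the `startFrame`/`endFrame` attributes are the hypothetical conventions described above, not anything pyblish_magenta currently defines.

```python
# Hypothetical convention: any objectSet named "publish_<family>" is an
# instance to extract, optionally carrying its own frame range attributes.
PREFIX = "publish_"


def collect_sets(object_sets, timeline=(1, 100)):
    """Build one instance per publish_* objectSet.

    object_sets: mapping of set name -> {"members": [...], "attrs": {...}},
    a stand-in for querying sets and their attributes through maya.cmds.
    """
    instances = []
    for name, data in object_sets.items():
        if not name.startswith(PREFIX):
            continue  # not meant for publishing (e.g. lightLinker sets)
        attrs = data.get("attrs", {})
        instances.append({
            "family": name[len(PREFIX):],   # e.g. "animation"
            "members": list(data["members"]),
            # Fall back to the current timeline when the set defines
            # no explicit range of its own.
            "startFrame": attrs.get("startFrame", timeline[0]),
            "endFrame": attrs.get("endFrame", timeline[1]),
        })
    return instances
```

The nice property here is that the per-set attributes win over the current timeline, so an animator scrubbing around a different range doesn't silently change what gets extracted.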
@marcus, @mkolar, @Mahmoodreza_Aarabi I think this is definitely something you could all contribute to, based on previous experiences from productions?