So in the context of Pyblish Magenta we’ve been discussing workflows for the lighting/rendering Task in the pipeline. To keep things easy to read and easy to find I want to raise it here as a separate topic.
To kickstart, here are some workflow questions:

- What kind of naming conventions do you use for the output of your renders? See Output naming conventions.
- How do you currently publish renders? (Do you even publish renders Bro?) See Publishing Renders.
And to put it in perspective I want to continue with some propositions to see what everyone thinks is good or bad practice.
Output naming conventions
An output of a scene can have multiple cameras, layers and passes:

- camera: The camera used to render from.
- layer: The render layer used to render.
- pass: A render pass belonging to the layer (rendered in the same go), e.g. render elements in V-Ray or AOVs in Arnold.
A layer is rendered from a camera, and its passes are rendered along with that layer, so we already have a natural hierarchy. A proposition would be to lay out your renders like this (option A):
/<Camera>/<Layer>/<Pass>/<Camera>_<Layer>_<Pass>.####.exr
Parsed to:
/camera1/layer1/pass1/camera1_layer1_pass1.0001.exr
Or with a more descriptive filename (option B):
/<Camera>/<Layer>/<Pass>/<Project>_<Sequence>_<Shot>_<Task>_<Version>_<Camera>_<Layer>_<Pass>.####.exr
Parsed to:
/camera1/layer1/pass1/thedeal_seq01_1000_animation_v002_camera1_layer1_pass1.0001.exr
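To make the template concrete, here’s a small sketch of how option B could be resolved with a plain Python format string. The field names and the `build_output_path` helper are just illustrative, not something Magenta currently ships; the `####` is left for the renderer to substitute per frame.

```python
# Hypothetical template matching option B; "####" is the frame padding
# placeholder that the renderer fills in per frame.
TEMPLATE = (
    "{camera}/{layer}/{renderpass}/"
    "{project}_{sequence}_{shot}_{task}_{version}_"
    "{camera}_{layer}_{renderpass}.####.exr"
)


def build_output_path(**fields):
    """Resolve the output template with the given naming fields."""
    return TEMPLATE.format(**fields)


path = build_output_path(
    project="thedeal", sequence="seq01", shot="1000",
    task="animation", version="v002",
    camera="camera1", layer="layer1", renderpass="pass1",
)
print(path)
# camera1/layer1/pass1/thedeal_seq01_1000_animation_v002_camera1_layer1_pass1.####.exr
```

Keeping the template in one place like this also means the validators discussed below can parse a path back into its fields to check it.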
Publishing renders
Rendering and getting those files checked for the next department could be seen as a two-step publish:
- Publishing the lighting scene to start the render.
- Publishing the lighting scene’s output to approve or deliver the render (commit) as “published”.
1. Start the render
The first step could check whether your scene naming conventions are correct as you submit the render of the scene to a queue. It could also check for common errors related to the contents of the lighting scene itself. Basically this is what the artist would do to “submit a render”.
This would be where Validators could step in for things like:
- Are the correct frame ranges being rendered?
- Is it the correct resolution?
- Maybe check whether the ‘motion blur’ samples are within the correct range for a project. Or whether the motion blur frame sample offset is at the correct number.
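As a sketch of what such a pre-render check could look like, here is a framework-agnostic frame range validation; in Pyblish this logic would live inside a Validator plug-in’s `process()` method. The dict keys and example values are assumptions for illustration only.

```python
def validate_frame_range(scene_settings, shot_settings):
    """Raise ValueError when the scene's render range doesn't
    match what the shot is supposed to deliver."""
    errors = []
    for key in ("start_frame", "end_frame"):
        if scene_settings.get(key) != shot_settings.get(key):
            errors.append("%s is %s, expected %s" % (
                key, scene_settings.get(key), shot_settings.get(key)))
    if errors:
        raise ValueError("Invalid frame range: " + "; ".join(errors))


# Example: the scene renders 1-100 but the shot expects 1-120.
scene = {"start_frame": 1, "end_frame": 100}
shot = {"start_frame": 1, "end_frame": 120}
try:
    validate_frame_range(scene, shot)
except ValueError as exc:
    print(exc)  # Invalid frame range: end_frame is 100, expected 120
```

Resolution and motion blur sample checks would follow the same pattern: compare the scene’s settings against the project’s expected values and fail loudly before any farm time is spent.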
2. Deliver the render
The next step would be to confirm that the renders are correct, which can only be done once the output has been created. For example, check that no frame is suddenly completely black or corrupt. This is also the step where a render could move from a “Work” staging area to the “Published” contents folder for another department to pick up.
This is where one would validate anything post-render, for example:
- Validate all frames that were queued are actually saved and rendered
- Check for corrupt or “black” frames
- Or maybe you have an ‘expected minimum size’ that you could validate.
Automation
In many cases step 2 could be partially (or fully) automated as a dependency on the completion of step 1. For example a post-render job (e.g. through job dependencies in Thinkbox Deadline) could trigger all the necessary validations and the integration of those renders from “work” -> “published” if the validations pass.
This would also allow a so-called ‘Slap Comp’ to be rendered once the frames are completed. This could be a ‘preset’ comp for the shot or project that the renders go through, so the artist can see whether his/her rendered output holds up after compositing without needing to jump through the hoops of setting up and rendering that comp themselves.