Wanted to note this down, while thinking about it.
I have a vision for this extension to have a tighter integration with Deadline. Currently we rely heavily on Deadline to take over the responsibilities from Pyblish when submitting, which means you have to have a couple of custom event plugins to integrate with the pipeline.
I envision that we “just” send signals, with data, about the state of a render job from Deadline to Pyblish. The signals would be emitted with an event plugin for Deadline. This does involve an extraction queue system for Pyblish.
You could say that we would just be shifting the responsibility for pipeline integration from Deadline back to Pyblish, but this would mean your pipeline code would be more centralized and sharable. It would treat Deadline as just another tool/host that Pyblish integrates with.
Could you provide some use case, or simple example of when this would be useful?
The situation would be:
User validates the scene, and passes.
The “job” gets added to the Pyblish extraction queue, which submits it to Deadline.
Deadline renders the job, and signals back to Pyblish.
This is done with a single generic event plugin that just informs Pyblish about successes or failures.
Pyblish gets the signal that the render/extraction is complete, and continues the extraction queue on to the integration with Ftrack.
So the amount of code for Deadline is just the event plugin that sends the data. The integration with the asset management system, Ftrack, is held within the Pyblish scope and can be used whether you are using Deadline or not. Basically eliminating studio-specific code from Deadline.
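To make the hand-off a bit more concrete, something like this is what I have in mind; none of these names exist anywhere yet, the payload keys and the queue object are purely to illustrate the idea:

```python
# Entirely hypothetical sketch: the event plugin sends a small payload,
# and the Pyblish-side extraction queue reacts to it. The payload keys,
# the queue object and its methods are all made up for illustration.
signal = {
    "jobId": "<deadline job id>",
    "state": "success",  # or "failure"
}


def on_deadline_signal(queue, signal):
    """Resume the queued publish once Deadline reports back."""
    job = queue.get(signal["jobId"])
    if signal["state"] == "success":
        job.resume()  # carry on to integration, e.g. Ftrack
    else:
        job.fail("Render failed on the farm")
```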
Could be very useful indeed; however, it has a caveat. I assume that pyblish-tray would have to be running on the artist's computer. If the artist turns the machine off at the end of the day, or it gets restarted, you'd most likely lose this connection, meaning that nothing would get published.
Could be pyblish-tray, but could also be something else.
Nope, that is very true, and one of the design issues with the extraction queue being artist-based. It could also be a more centralized process. Don't know atm :)
I think it might be time to get some proper integration for Deadline.
I’d suggest that we have an event plugin that calls pyblish.util.publish(). There are five stages where you could call pyblish: OnJobSubmitted, OnJobStarted, OnJobFinished, OnJobRequeued and OnJobFailed.
I would leave it to the user when to call pyblish, via the event plugin's settings. Also in the settings the user could specify the install location of pyblish-base (maybe python as well) and the PYBLISHPLUGINPATH for each of the five stages.
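A rough sketch of what such an event plugin could look like, assuming Deadline's DeadlineEventListener API; the config entry names (PythonExecutable, PyblishBase, PluginPathOnJobFinished and friends) and the DEADLINE_JOB_ID variable are made up for illustration:

```python
import os
import subprocess

from Deadline.Events import DeadlineEventListener


def GetDeadlineEventListener():
    return PyblishEventListener()


def CleanupDeadlineEventListener(listener):
    listener.Cleanup()


class PyblishEventListener(DeadlineEventListener):
    def __init__(self):
        # Hook up all five stages; the settings decide which of them
        # actually publish anything (an empty plugin path does nothing).
        self.OnJobSubmittedCallback += self.OnJobSubmitted
        self.OnJobStartedCallback += self.OnJobStarted
        self.OnJobFinishedCallback += self.OnJobFinished
        self.OnJobRequeuedCallback += self.OnJobRequeued
        self.OnJobFailedCallback += self.OnJobFailed

    def Cleanup(self):
        del self.OnJobSubmittedCallback
        del self.OnJobStartedCallback
        del self.OnJobFinishedCallback
        del self.OnJobRequeuedCallback
        del self.OnJobFailedCallback

    def OnJobSubmitted(self, job):
        self.publish(job, "OnJobSubmitted")

    def OnJobStarted(self, job):
        self.publish(job, "OnJobStarted")

    def OnJobFinished(self, job):
        self.publish(job, "OnJobFinished")

    def OnJobRequeued(self, job):
        self.publish(job, "OnJobRequeued")

    def OnJobFailed(self, job):
        self.publish(job, "OnJobFailed")

    def publish(self, job, stage):
        plugin_path = self.GetConfigEntry("PluginPath%s" % stage)
        if not plugin_path:
            return  # Nothing to publish at this stage.

        env = dict(os.environ)
        env["PYBLISHPLUGINPATH"] = plugin_path
        env["PYTHONPATH"] = self.GetConfigEntry("PyblishBase")
        env["DEADLINE_JOB_ID"] = job.JobId  # e.g. for a "currentJob" collector

        subprocess.call(
            [self.GetConfigEntry("PythonExecutable"),
             "-c", "import pyblish.util;pyblish.util.publish()"],
            env=env,
        )
```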
The workflow would be for the submission in the hosts to append any data needed to the job info for deadline-pyblish. This means we can leave any output settings contained within the scene file, and basically continue the integration when the render has finished.
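For example, the submitter could tack something like this onto the Deadline job info file; the pyblish_* keys and paths are made up here, while ExtraInfoKeyValue<N> is just Deadline's generic key/value mechanism:

```python
# Illustrative job info for a Maya render submission; only the
# ExtraInfoKeyValue entries are specific to the deadline-pyblish idea.
job_info = {
    "Plugin": "MayaBatch",
    "Name": "shot010_renderlayer_beauty",
    "ExtraInfoKeyValue0": "pyblish_family=renderlayer",
    "ExtraInfoKeyValue1": "pyblish_currentFile=/projects/shot010/scene.ma",
    "ExtraInfoKeyValue2": "pyblish_output=/projects/shot010/renders/beauty.####.exr",
}

with open("job_info.job", "w") as f:
    for key, value in job_info.items():
        f.write("%s=%s\n" % (key, value))
```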
In theory you could do other validations and extractions, if you wanted to.
Spontaneously, I would attempt having a matching but separate series of plug-ins on PYBLISHPLUGINPATH server-side with compatibility to families on the submitter side.
The submitter collects instances based on data in the scene and runs through all the way to integration.
An integration plug-in triggers a remote operation (a rough sketch of this step follows below).
The remote operation runs a set of different, server-side plug-ins, collects instances based on what was extracted during submission and completes the publish with its own integrators.
Publishing finishes, and whatever happened appears in something like ftrack (or Deadline?) or wherever notifications are made.
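A loose sketch of that "triggers a remote operation" step; the family, the "extractDir" key and the payload format are all assumptions, and here the trigger is simply serializing what the remote side needs:

```python
import json
import os

import pyblish.api


class IntegrateSubmitToRemote(pyblish.api.InstancePlugin):
    """Serialize the instance so the server-side plug-ins can pick it up."""
    order = pyblish.api.IntegratorOrder
    families = ["renderlayer"]  # assumed submitter-side family

    def process(self, instance):
        # "extractDir" is a made-up key: wherever extraction wrote its files.
        stage_dir = instance.data["extractDir"]

        payload = {
            "name": instance.name,
            "family": instance.data["family"],
            "files": instance.data.get("files", []),
        }

        # The remote collector reads this back and re-creates the instance
        # under a different family, so the server-side integrators take over.
        with open(os.path.join(stage_dir, "payload.json"), "w") as f:
            json.dump(payload, f)
```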
That was difficult to articulate… does that make sense?
In the case of rendering, the extracted file would probably be a Maya scene, and the collector on the remote end would be set up to collect a different family than what was collected during discovery. At that point a different set of integrators (or extractors?) would get run: those that perform the actual rendering.
Sure:) I would probably leave the actual rendering to the tool, Deadline, itself. That's what it's good for.
I'm interested in the direct connection between the submitter and the remote. Would there be any pros compared to just serializing data for the remote to pick up later?
Maybe the post-deadline pyblish.util.publish() command could then run outside of Maya altogether, on the produced image-sequence?
As in, once Deadline finishes, it'll trigger pyblish.util.publish() in a dedicated Python process with a PYBLISHPLUGINPATH of a collector capable of (1) identifying the resulting image sequence, (2) associating it with a family that has one or more integrators, and simply (3) running them, thereby integrating the images into the pipeline.
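Something along these lines, perhaps; the family name, the environment variable and the file pattern are assumptions, just to show the three steps:

```python
import glob
import os

import pyblish.api


class CollectRenderedSequence(pyblish.api.ContextPlugin):
    """(1) Identify the image sequence Deadline produced for this job."""
    order = pyblish.api.CollectorOrder

    def process(self, context):
        # Hypothetical: the output directory was passed along with the job
        # and exposed to this process as an environment variable.
        output_dir = os.environ["PYBLISH_RENDER_OUTPUT"]
        frames = sorted(glob.glob(os.path.join(output_dir, "*.exr")))

        # (2) Associate the frames with a family that has integrators.
        instance = context.create_instance(os.path.basename(output_dir))
        instance.data["family"] = "renderedSequence"
        instance.data["frames"] = frames


class IntegrateRenderedSequence(pyblish.api.InstancePlugin):
    """(3) Run on that family, integrating the images into the pipeline."""
    order = pyblish.api.IntegratorOrder
    families = ["renderedSequence"]

    def process(self, instance):
        for frame in instance.data["frames"]:
            self.log.info("Integrating %s" % frame)
            # ...copy to the publish location, register in Ftrack, etc...
```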
Yup, that was my intention. Hence having complete control of the environment in the settings, like the path to pyblish-base and the Python executable, might be a good idea.
Yup, again :) I was thinking of having a default collector, similar to currentFile in the other hosts, a currentJob, so there is a starting reference to collect data from.
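For instance (assuming the event plugin exposes the job id through an environment variable; the name is made up):

```python
import os

import pyblish.api


class CollectCurrentJob(pyblish.api.ContextPlugin):
    """Store a reference to the Deadline job being processed."""
    order = pyblish.api.CollectorOrder - 0.1  # run before other collectors

    def process(self, context):
        context.data["currentJob"] = os.environ.get("DEADLINE_JOB_ID")
```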
The design outlined by Toke is exactly what we would expect from this. Essentially we'd use it to replace most/all of the currently used Deadline event plugins.
It would help glue the pipeline together more tightly and allow for much easier maintenance of the integration of renders.