Pyblish for Deadline

Wanted to note this down, while thinking about it.

I have a vision for this extension to have a tighter integration with Deadline. Currently we rely heavily on Deadline to take over the responsibilities from Pyblish when submitting, which means you have to have a couple of custom event plugins to integrate with the pipeline.
I envision that we “just” send signals, with data, about the state of a render job from Deadline to Pyblish. The signals would be emitted with an event plugin for Deadline. This does involve an extraction queue system for Pyblish.

You could say that we would just be shifting the responsibility for pipeline integration from Deadline back to Pyblish, but this would mean your pipeline code would be more centralized and sharable. It would treat Deadline as just another tool/host that Pyblish integrates with.

I’m not quite sure that I understand what situations you’d like this to be used in.

Could you provide some use case, or simple example of when this would be useful? Just so I can wrap my head around your thinking here.

> Could you provide some use case, or simple example of when this would be useful?

The situation would be:

  1. User validates the scene, and passes.
  2. The “job” gets added to the Pyblish extraction queue, which submits it to Deadline.
  3. Deadline renders the job, and signals back to Pyblish.
     • This is done with a single generic event plugin that just informs Pyblish about success or failure.
  4. Pyblish gets the signal that the render/extraction is complete, and continues the extraction queue on to integration with Ftrack.

So the amount of code for Deadline is just the event plugin that sends the data. The integration with the asset management system, Ftrack, is held within the Pyblish scope and can be used whether you are using Deadline or not, basically eliminating studio-specific code from Deadline.
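A very rough sketch of what that single generic event plugin could look like. The callback wiring follows Deadline’s event listener API, but the endpoint URL, the payload shape and how Pyblish would listen for it are all placeholders for the extraction queue design:

```python
import json
import urllib2  # Deadline event plugins have historically run under Python 2

from Deadline.Events import DeadlineEventListener


def GetDeadlineEventListener():
    return PyblishSignalListener()


def CleanupDeadlineEventListener(eventListener):
    eventListener.Cleanup()


class PyblishSignalListener(DeadlineEventListener):
    """Forward job state changes to Pyblish, without any studio logic."""

    def __init__(self):
        self.OnJobFinishedCallback += self.OnJobFinished
        self.OnJobFailedCallback += self.OnJobFailed

    def Cleanup(self):
        del self.OnJobFinishedCallback
        del self.OnJobFailedCallback

    def OnJobFinished(self, job):
        self.signal("finished", job)

    def OnJobFailed(self, job):
        self.signal("failed", job)

    def signal(self, status, job):
        # Hypothetical endpoint where the Pyblish extraction queue would listen.
        payload = json.dumps({
            "status": status,
            "jobId": job.JobId,
            "jobName": job.JobName,
        })
        urllib2.urlopen("http://localhost:9090/deadline-job", payload)
```

On the Pyblish side, the extraction queue would match the job id back to whatever was submitted and resume from there.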

Does that make sense?

Ah I hear you.

Could be very useful indeed, however it has a caveat. I assume that pyblish-tray would have to be running on the artist’s computer. If the artist turns the machine off at the end of the day, or it gets restarted, you’d most likely lose this connection, meaning that nothing would get published.

Or am I getting it wrong?

Could be pyblish-tray, but could also be something else.

Nope, that is very true, and one of the design issues with the extraction queue being artist-based. It could also be a more centralized processing. Don’t know atm :)

But maybe that discussion should happen here: Extraction Queue

I think it might be time to get some proper integration for Deadline.

I’d suggest that we have an event plugin that calls pyblish.util.publish(). There are five stages where you could call pyblish: OnJobSubmitted, OnJobStarted, OnJobFinished, OnJobRequeued and OnJobFailed.

I would leave it to the user when to call pyblish via the event plugin’s settings. Also in the settings the user could specify the install location of pyblish-base (maybe python as well), and the PYBLISHPLUGINPATH for each of the five stages.
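To make that concrete, here is a sketch of the publish call for one stage, assuming it runs from inside the event listener’s callbacks. The setting names are made up, but GetConfigEntryWithDefault is how an event plugin reads its own settings:

```python
import os
import subprocess


def publish_for_stage(listener, job, stage):
    """Run pyblish.util.publish() in its own Python process for one stage.

    Meant to be called from the event listener callbacks, e.g.
    OnJobFinished -> publish_for_stage(self, job, "OnJobFinished").
    The setting names below are hypothetical.
    """
    python = listener.GetConfigEntryWithDefault("PythonExecutable", "python")
    pyblish_base = listener.GetConfigEntryWithDefault("PyblishBaseLocation", "")
    plugin_path = listener.GetConfigEntryWithDefault(stage + "PluginPath", "")

    if not plugin_path:
        return  # Nothing configured for this stage.

    env = dict(os.environ)
    env["PYTHONPATH"] = pyblish_base
    env["PYBLISHPLUGINPATH"] = plugin_path
    env["DEADLINE_JOB_ID"] = job.JobId  # Hypothetical handoff to the collectors.

    subprocess.check_call(
        [python, "-c", "import pyblish.util;pyblish.util.publish()"],
        env=env,
    )
```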

The workflow would be for the submission in the hosts to append any data needed to the job info for deadline-pyblish. This means we can leave any output settings contained within a scene file, and basically continue the integration when the render has finished.
In theory you could do other validations and extractions, if you wanted to.
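For the submission side, Deadline’s job info files accept ExtraInfoKeyValue entries, which is one place the submitter could stash that data for the event plugin to read back later. The keys and values below are just examples:

```python
def append_pyblish_data(job_info, data):
    """Append instance data to a Deadline job info dictionary as
    ExtraInfoKeyValue entries, so it survives until the render finishes."""
    for index, (key, value) in enumerate(sorted(data.items())):
        job_info["ExtraInfoKeyValue%d" % index] = "%s=%s" % (key, value)
    return job_info


# During submission in the host, with made-up example values:
job_info = {"Plugin": "MayaBatch", "Name": "shot010_beauty"}
append_pyblish_data(job_info, {
    "family": "renderlayer",
    "ftrackAssetName": "beauty",
    "outputDir": "/projects/shot010/renders",
})
```

The event plugin should then be able to read these back, with something like job.GetJobExtraInfoKeyValue(), once the render is done.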

What do you guys think?

I like it.

Spontaneously, I would attempt having a matching but separate series of plug-ins on PYBLISHPLUGINPATH server-side with compatibility to families on the submitter side.

  1. The submitter collects instances based on data in the scene, runs through all the way to integration.
  2. An integration plug-in triggers a remote operation.
  3. The remote operation runs a set of different, server-side plug-ins, collects instances based on what was extracted during submission and completes the publish with its own integrators.
  4. Publishing finishes, and whatever happened appears in something like ftrack (or Deadline?) or wherever notifications are made.

That was difficult to articulate… does that make sense?

Not entirely making sense to me :)

Where does the farm processing come in?

It kinda sounds to me like the submitter machine has a connection with a farm machine?

Farm processing would be the “remote” part.

 _____________                 __________
|             |               |          |
|  Submitter  |-------------->|  Remote  |
|_____________|               |__________|
  
  o- collectorA                o- collectorC
  o- collectorB                o- integratorB
  o- validatorA                o- integratorC
  o- extractorA
  o- integratorA (submit to remote)

In the case of rendering, the extracted file would probably be a Maya scene, and the collector on the remote end would be set up to collect a different family than what was collected during discovery. At that point a different set of integrators (or extractors?) would get run, those that perform the actual rendering.

I might still not be getting exactly what you mean, but are you saying that the pyblish plugins are doing the actual rendering?

I think so, but you’re right, maybe that doesn’t work… Either way, I’m mainly just spawning (spewing?) ideas.

Sure :) I would probably leave the actual rendering to the tool, Deadline, itself. That’s what it’s good for.

I’m interested in the direct connection between the submitter and remote. Would there be any pros compared to just serializing data for the remote to pick up later?

It’s possible I simply misinterpreted your goals.

I assumed you wanted a particular series of plug-ins, integration, to continue running after rendering, on a separate machine.

Maybe something along these lines?

 _____________         __________         __________
|             |       |          |       |          |
|  Submitter  |------>|  Render  |------>|  Remote  |
|_____________|       |__________|       |__________|
                               
  o- collectorA                           o- collectorC
  o- collectorB                           o- integratorB
  o- validatorA                           o- integratorC
  o- extractorA
  o- integratorA (submit to render)

Where, once the render is finished, it triggers pyblish.util.publish() on whatever was produced by the render, such as integration.

Yeah, that is probably more in line with what I’m thinking about.

There would be a default collector that grabs all essential data from the rendered job, and users can do what they want with it in the integration.

Maybe the post-deadline pyblish.util.publish() command could then run outside of Maya altogether, on the produced image-sequence?

As in, once deadline finishes, it’ll trigger pyblish.util.publish() in a dedicated Python process with a PYBLISHPLUGINPATH of a collector capable of (1) identifying the resulting image sequence, (2) associating them with a family with one or more integrators and simply (3) running them, thereby integrating the images into the pipeline.
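Something like this, as a sketch of such a collector; the environment variable, family name and file pattern are all assumptions about how the hand-off would work:

```python
import os
import glob

import pyblish.api


class CollectRenderedSequence(pyblish.api.ContextPlugin):
    """Collect the image sequence produced by the finished Deadline job."""

    order = pyblish.api.CollectorOrder

    def process(self, context):
        # Hypothetical: the event plugin passes the output directory along.
        output_dir = os.environ["PYBLISH_RENDER_OUTPUT"]
        frames = sorted(glob.glob(os.path.join(output_dir, "*.exr")))

        instance = context.create_instance(os.path.basename(output_dir))
        instance.data["family"] = "renderedSequence"  # picked up by integrators
        instance.data["frames"] = frames
```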

Yup, that was my intention. Hence maybe having complete control of the environment in the settings, like path to pyblish-base and python executable, might be a good idea.

Yup, again :) I was thinking of having a default collector, similar to currentFile in the other hosts, a currentJob, so there is a starting reference to collect data from.
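Roughly like this, with the job id hand-off being an assumption of course:

```python
import os

import pyblish.api


class CollectCurrentJob(pyblish.api.ContextPlugin):
    """Provide a starting reference to the Deadline job, the way
    "currentFile" does for scene files in the other hosts."""

    order = pyblish.api.CollectorOrder - 0.1

    def process(self, context):
        # Hypothetical: the event plugin exposes the job id to this process.
        context.data["currentJob"] = os.environ.get("DEADLINE_JOB_ID")
```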

Let’s do this.

The design outlined by Toke is exactly what we would expect from this. Essentially we’d use it to replace most/all of the currently used Deadline event plugins.

It would help glue the pipeline together more tightly and allow for much easier maintenance of the integration of renders.

Let me know if there’s anything I can do to help with this!

Thanks @marcus. Just trucking along at the moment.

Got an initial working prototype, but need to explore another implementation first, then actual testing with production plugins.