Pyblish for Deadline

I think so, but you’re right, maybe that doesn’t work… Either way, I’m mainly just spawning (spewing?) ideas.

Sure:) I would probably leave the actual rendering to the tool, Deadline, itself. That’s what it’s good for.

I’m interested in the direct connection between the submitter and the remote. Would there be any pros compared to just serializing data for the remote to pick up later?

It’s possible I simply misinterpreted your goals.

I assumed you wanted a particular series of plug-ins, integration, to continue running after rendering, on a separate machine.

Maybe something along these lines?

 _____________         __________         __________
|             |       |          |       |          |
|  Submitter  |------>|  Render  |------>|  Remote  |
|_____________|       |__________|       |__________|
                               
  o- collectorA                           o- collectorC
  o- collectorB                           o- integratorB
  o- validatorA                           o- integratorC
  o- extractorA
  o- integratorA (submit to render)

Where, once the render is finished, it triggers pyblish.util.publish() on whatever the render produced, to run the remaining plug-ins such as integration.

Yeah, that is probably more in line with what I’m thinking about.

There would be a default collector that grabs all essential data from the rendered job, and users can do what they want with it in the integration.

Maybe the post-deadline pyblish.util.publish() command could then run outside of Maya altogether, on the produced image-sequence?

As in, once Deadline finishes, it’ll trigger pyblish.util.publish() in a dedicated Python process, with a PYBLISHPLUGINPATH pointing at a collector capable of (1) identifying the resulting image sequence, (2) associating it with a family handled by one or more integrators and simply (3) running them, thereby integrating the images into the pipeline.
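Something along the lines of this collector sketch could live on that PYBLISHPLUGINPATH; the environment variable name, the extension and the family are only illustrative, not part of pyblish-deadline:

    import os
    import glob
    import pyblish.api


    class CollectRenderedSequence(pyblish.api.ContextPlugin):
        """Pick up the image sequence produced by the finished Deadline job."""

        order = pyblish.api.CollectorOrder

        def process(self, context):
            # Assumes the event plugin exported the job's output directory;
            # the variable name, extension and family are illustrative only.
            output_dir = os.environ["DEADLINE_JOB_OUTPUT"]

            files = sorted(glob.glob(os.path.join(output_dir, "*.exr")))
            if not files:
                return

            instance = context.create_instance(os.path.basename(output_dir))
            instance.data["family"] = "renderedSequence"
            instance.data["files"] = files

With that on PYBLISHPLUGINPATH, together with the relevant integrators, pyblish.util.publish() would take care of the rest.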

Yup, that was my intention. Hence having complete control of the environment in the settings, like the path to pyblish-base and the Python executable, might be a good idea.

Yup, again:) I was thinking of having a default collector similar to currentFile in the other hosts; a currentJob, so there is a starting reference to collect data from.
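A minimal sketch of such a currentJob collector, assuming the event plugin serializes the job into an environment variable before launching the publish process (the variable name is made up):

    import os
    import json
    import pyblish.api


    class CollectCurrentJob(pyblish.api.ContextPlugin):
        """Store the Deadline job that triggered the publish on the context."""

        order = pyblish.api.CollectorOrder - 0.1  # before other collectors

        def process(self, context):
            # Illustrative environment variable; whatever the event plugin
            # actually exports would be read here instead.
            job = json.loads(os.environ.get("PYBLISH_DEADLINE_JOB", "{}"))
            context.data["currentJob"] = job
            self.log.info("Collected job: %s" % job.get("Name", "unknown"))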

Let’s do this.

The design outlined by Toke is exactly what we would expect from this. Essentially we’d use it to replace most/all of the currently used Deadline event plugins.

It would help glue the pipeline together more tightly and allow for much easier maintenance of the integration of renders.

Let me know if there’s anything I can do to help with this!

Thanks @marcus. Just trucking along at the moment.

Got an initial working prototype, but need to explore another implementation first, then actual testing with production plugins.

Turns out there are a lot more events the plugin can be triggered from: http://docs.thinkboxsoftware.com/products/deadline/7.1/1_User%20Manual/manual/event-plugins.html

Will be accounting for all of them in the plugin code.
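For reference, the overall shape of a Deadline event plugin hooking several of those events looks roughly like this; the callback names follow the event plugin documentation linked above, but double-check them against your Deadline version:

    # Sketch only; other events (requeued, deleted, pended, etc.) from the
    # linked docs can be wired up the same way.
    from Deadline.Events import DeadlineEventListener


    def GetDeadlineEventListener():
        return PyblishEventListener()


    def CleanupDeadlineEventListener(listener):
        listener.Cleanup()


    class PyblishEventListener(DeadlineEventListener):
        def __init__(self):
            self.OnJobSubmittedCallback += self.OnJobSubmitted
            self.OnJobFinishedCallback += self.OnJobFinished

        def Cleanup(self):
            del self.OnJobSubmittedCallback
            del self.OnJobFinishedCallback

        def OnJobSubmitted(self, job):
            pass  # e.g. tag the job so the finished event knows to publish

        def OnJobFinished(self, job):
            pass  # e.g. launch the dedicated publish process from here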

Got a working version now, and it’s pretty cool:)

With this revamp of the pyblish-deadline workflow, I also wanted to clean up and possibly rethink the plugins approach.

Clean up

Apart from updating the code to newer Pyblish standards, I’m also going to remove the remote submission feature. I used it on one project, but since then it has proven easier to just set up a VPN. Also, in a future version of Deadline, remote submission will be better supported; I’m not sure exactly what that entails, but it should go through the Deadline executable.

Plugin Approach

Currently the plugin processes all instances, looking for a certain data member. I think a better approach would be to limit it to a certain family, which is the standard Pyblish workflow.
If you guys think it’s a good approach, the next question would be what to call the family. deadline?
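If the family route is taken, the submission plugin would narrow itself down in the usual Pyblish way, e.g. (assuming the family does end up being called deadline):

    import pyblish.api


    class SubmitToDeadline(pyblish.api.InstancePlugin):
        """Only instances explicitly tagged with the family get submitted."""

        order = pyblish.api.IntegratorOrder
        families = ["deadline"]

        def process(self, instance):
            job_data = instance.data["deadlineData"]["job"]
            plugin_data = instance.data["deadlineData"]["plugin"]
            # ...format the data and submit the job here...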

Part of the clean up will be to remove any responsibility from the Deadline plugin for setting any job-specific data. That means the user would have to specify Name, User, Plugin and Frames for the job data, and SceneFile for the plugin data. This would simplify what gets handled in the plugin, as all it would do is format the job and plugin data.

The remaining data members required would be instance.data["deadlineData"]["job"] and instance.data["deadlineData"]["plugin"]. The optional data instance.data["deadlineData"]["order"] remains unchanged.
Also, if ExtraInfo and ExtraInfoKeyValue dictionaries are found in the job data, they will be formatted correctly.
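To make that concrete, the user-side plugins would end up populating something along these lines before submission (the values, and the exact spelling of the keys, are only illustrative here):

    # Illustrative values; the keys are the ones named above and would be
    # passed through to Deadline by the submission plugin.
    instance.data["deadlineData"] = {
        "job": {
            "Name": "shot010_beauty",
            "User": "toke",
            "Plugin": "MayaBatch",
            "Frames": "1001-1100",
            "ExtraInfoKeyValue": {"project": "myProject"},  # optional
        },
        "plugin": {
            "SceneFile": "/projects/myProject/shot010/scene.ma",
        },
        "order": 1,  # optional, unchanged from before
    }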

These are rather large changes, and definitely not backwards compatible, but I’m hoping this will simplify the responsibility of the submission plugin to pretty much just formatting the data, meaning it would become more flexible for other users to utilize in their pipelines.

Forgot that I’m not changing the instance.data["deadlineData"]["auxiliaryFiles"] feature.

Little update on this.

I’ve experimented with serialising the context and instance data into the job data. This allows us to recreate the context and instance on the farm.

I started by recreating the context and instance by default, before any plugins are run, but when chaining jobs together you end up having to manipulate the families so as not to submit the same job again.
So I’ve decided to only recreate the context by default, and let the user recreate the instance if needed.
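The serialization itself can stay simple. A sketch of the idea, assuming JSON and leaving out where exactly the string gets stored on the job:

    import json


    def serialize_context(context):
        """Pick the JSON-friendly part of the context data to ride along with the job."""
        return json.dumps({
            key: value for key, value in context.data.items()
            if isinstance(value, (str, int, float, bool, list, dict))
        })


    def recreate_context(context, serialized):
        """On the farm, restore the submitted data onto a fresh context."""
        context.data.update(json.loads(serialized))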

I’ve been working with the event plugin for a bit now, and I have to say that it’s a pleasure to finally have a complete publishing pipeline. All the logic for generating movies and publishing data to Ftrack is within Pyblish, as it should be:)

One tricky area I had some problems with was cascading publishes. When an image sequence gets rendered, it triggers a movie publish that results in an FFmpeg job on the farm.
There is also the matter of render files like *.ifd from Houdini, where, when the job is submitted, a Mantra job gets published and submitted, which in turn generates an FFmpeg job.
Though I’m hoping that my plugins are versatile enough now that jobs from any host will generate movies, and that similar render files like *.ass from Arnold will be easy to implement.


Thanks for the updates, Toke, and glad to hear it’s working out!

Sounds like you’ve made some amazing steps, and I’d love to investigate moving to this Deadline integration. Are your plug-ins publicly available for inspection?

The getting-started section in the README seems like a quick start, but I’d love to glance over some real-world uses.

Ahh, yes, forgot about that:)

Sorry for the lack of description of the plugins, but everything that is not in a “pipeline_specific” folder should be usable with a bit of tweaking. They use specific families to describe the output.
When submitting dependent or cascading jobs, I’m using pyblish-deadline, so everything should be familiar.

collect_render.py - This submits a frame-dependent job when a render job is submitted. Render jobs are *.ifd files from Houdini.
collect_output.py - This will collect any files output from a job, given that the job has an output set in its job parameters. The instance produced is a replica of the original instance sent to Deadline.
collect_movie.py - This will produce an FFmpeg job when an image sequence is rendered.

Looking over the plugins again, they are quite pipeline-specific and could be more generic. For example, collect_movie could easily depend on the image output extension rather than being family-based.
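For example, an extension-based collect_movie could look roughly like this; the extension list and the movie family are assumptions, not what the repository currently does:

    import os
    import pyblish.api

    IMAGE_EXTENSIONS = {".exr", ".png", ".jpg", ".jpeg", ".tif", ".tiff", ".dpx"}


    class CollectMovie(pyblish.api.InstancePlugin):
        """Tag any instance whose output is an image sequence for an FFmpeg job."""

        order = pyblish.api.CollectorOrder + 0.1

        def process(self, instance):
            files = instance.data.get("files", [])
            if not files:
                return

            ext = os.path.splitext(files[0])[-1].lower()
            if ext in IMAGE_EXTENSIONS:
                instance.data.setdefault("families", []).append("movie")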

Thanks! This looks like a great starting reference. Hope to find some time soon to jump into this.