Pyblish for Deadline

This is a summary of what has been discussed here; Reformats and mantra addition by mkolar · Pull Request #1 · tokejepsen/pyblish-deadline · GitHub

Goal

To make a universal Deadline package for Pyblish that can work for all pipelines with minimal configuration.

Selection

We gather all the necessary instances from the work file, such as render layers in Maya or write nodes in Nuke. On these instances, advanced users can add/modify an “inputPath” and “outputPath” attribute, which will change the source path and output path from the host’s defaults.
TDs will also have the option of creating a selector that sets these paths on the context, so it can be done programmatically across a pipeline (see the sketch below).
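
To illustrate, a minimal sketch of such a selector, assuming the Pyblish 1.x plug-in API and hypothetical paths:

import pyblish.api

class SelectRenderPaths(pyblish.api.Selector):
    """Hypothetical selector overriding the host's default paths"""

    def process_context(self, context):
        # Source file the farm renders from (hypothetical path)
        context.set_data("inputPath", "c:/work/shot01_v002.ma")
        # Destination the renders are written to (hypothetical path)
        context.set_data("outputPath", "c:/renders/shot01")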

Validation

The exact validators necessary are still up for discussion, but initially the package will validate paths and make sure they exist.

Extraction

We extract the Deadline data to the job and plugin files, which are just text files. These go to a temporary location, ready for conforming.
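
To give an idea of the format, an illustrative job file with made-up values (the exact keys depend on the submission plugin used):

Plugin=MayaBatch
Name=shot01_beauty
Frames=1-100
OutputDirectory0=c:/renders/shot01

And a corresponding plugin file, here assuming a Maya render via MayaBatch:

SceneFile=c:/work/shot01_v002.ma
Version=2015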

Conform

The conform plugin picks up the job and plugin files, and relocates them to the final destination of the renders. The plugin also modifies the output directories in the job file and the scene file in the plugin file, according to the “inputPath” and “outputPath” in the context.

Let me know what you think so far.

Hey @tokejepsen,

Have you given any thought to how to get started using this kit? For example.

Installation

$ pip install pyblish-deadline

Integration

import pyblish_deadline
pyblish_deadline.setup()

Customisation

Modify pyblish-deadline/config.json.

{
  "defaultPath": "c:\renders",
  "sequenceFormat": "###"
}

Each step has some advantages/disadvantages that are good to get out into the open before heading down any route. For example, installing via pip might be less viable for a network install.

I’m not quite sure what we’d be achieving with this. It is just a bunch of plugins, so “installing” it might be as simple as: download and place in your plugins folder. A way of distributing and sharing plugins (which is being discussed somewhere else, I believe) would surely encompass this case too.

pip is nice as an option, but as you said, practically unusable for a network install (which I’d assume is what most studios do).

This brings up a very good point.

I’ve taken for granted that kits are to be distributed as Python packages or modules. Some benefits include:

  1. Familiarity, we work with them all the time
  2. Standardisation, it’s proven to work and many deployment methods build upon it, like pip
  3. Room for expansion; it covers both single modules and large interdependent hierarchies of packages.

For example, in the case you mention of a kit being just a directory of plug-ins.

└── pyblish-deadline
    ├── select_instance.py
    ├── validate_instance.py
    └── extract_instance.py

Some of the plug-ins may eventually benefit from shared utilities.

└── pyblish-deadline
    ├── select_instance.py
    ├── validate_instance.py
    ├── extract_instance.py
    └── _util.py

And suddenly, plug-ins and non-plug-ins become mixed. Not to mention the potential for README files or additional configuration data.

└── pyblish-deadline
    ├── select_instance.py
    ├── validate_instance.py
    ├── extract_instance.py
    ├── README.md
    ├── config.yaml
    └── _util.py

A Python package also enables the use of kits for things other than plug-ins, for scripting or extended behaviour.

import pyblish_deadline
pyblish_deadline.submit("custom_things")

Or for configuration.

import pyblish_deadline
pyblish_deadline.path_template = "{project}/published/renders/{pass}"

Installation

At one point or another, a user will need to “expose” plug-ins to the pipeline. A Python distribution simplifies this as well.

In the case of chucking plug-ins into any directory:

import pyblish.api
path = "/some/directory/pyblish-deadline"
pyblish.api.register_plugin_path(path)

But contrast that with this.

import pyblish_deadline
pyblish_deadline.register_plugins()

No absolute paths, and the flexibility rides on an already maintained PYTHONPATH.

# Implementation
import os
import sys
import logging

import pyblish.api

log = logging.getLogger(__name__)

def register_plugins():
    """Expose Deadline plugins to Pyblish"""
    # Locate the bundled plug-ins relative to this package on disk
    module_path = sys.modules[__name__].__file__
    package_path = os.path.dirname(module_path)
    plugin_path = os.path.join(package_path, 'plugins')

    pyblish.api.register_plugin_path(plugin_path)
    log.info("Registered %s" % plugin_path)

Examples

The Napoleon kit is a good example of how plug-ins could be bundled with reusable functionality.

import napoleon
napoleon.register_plugins()

Magenta is another good example of a larger collection of shared utilities.

For distributions as simple as a single plug-in, something like this is equally well suited.

└── magic-selector
    ├── select_magic_instances.py
    ├── setup.py
    └── README.md

I think this is where we should differentiate between an integration and an extension, referencing the Integrations and Extensions discussion.

How I see it in short:

  • extension: a set of (SVEC) plug-ins that can be added to your Pyblish toolkit. These can be categorized, like a set for Maya, but they don’t define the connection with the Host or any additional functionality outside the behaviour of the plug-ins.

  • integration: a package that expands the features of Pyblish and is its own package. The package can contain extra functionality, like a connection to a Host (which is what it is used for most often), and/or deliver its own set of (Python) functions that can be called by the user.

For an extension you’d normally never import the package as a regular Python package, but would instead register the extension’s plug-in path with Pyblish. To be convenient as a GitHub package and not mix with other files, I would recommend a default structure where plug-ins are nested in their own plugins folder.

I agree. To put it in simple man’s terms: integrations give you access to Pyblish (publishing, UI) from a given host, but don’t do much on their own (because they’re missing plugins); extensions provide the Pyblish integration with the necessary plugins and utilities to actually be able to SVEC (did I just turn it into a verb? Yeah).

SVEC that sh*t!

I would think you would be in favour of extensions as packages, especially as you are doing this already.

If we put it another way, what do you consider the disadvantage of sticking to a Python package layout for extensions? And what benefit do you see from using something custom?

Being able to import doesn’t necessarily remove the ability to manually expose things via environment variables. This is the layout I’m proposing.

pyblish-extension  # Repository
├── pyblish_extension  # Python Package
│   ├── plugins  # Plug-ins
│   │   ├── select_instance.py
│   │   ├── validate_instance.py
│   │   ├── extract_instance.py
│   │   └── conform_instance.py
│   └── __init__.py
├── README.md
└── setup.py

Is this what we are disagreeing about?

Future

Down the line, the intent is much like what @BigRoy has started discussing in the Magenta topic: to be able to encapsulate an entire publishing pipeline in an extension. Ideally, I would like to see smaller/simpler extensions conform to a similar layout for the purposes of maintenance and familiarity, but I’d be happy to hear about alternatives.

I wasn’t necessarily ditching the layout of a Python package. I guess I was more on board with @mkolar in thinking that being able to pip install an extension is less important than easily getting plugins into your Pyblish toolkit. I also wonder how familiar small studios are with getting things up and running with pip install, especially when they need to run it over a network server.

On the other hand, if extensions end up having a similar installation method to integrations, that could ease installation tutorials?

Not much against it, but I really don’t use pip install that much at the studio.
I do like that you’re able to pack your plugins with things like documentation, utility methods, etc.


@marcus, is there any easy way to set up the folder structure formatting you’re doing here? Or how are you formatting that?

Ah, so it’s the pip'ing you guys are talking about. 🙂 Again, it’s just an example. We can’t rely on pip for the reasons we’ve covered here.

That’s good to hear. I thought this was what we were arguing about at first.

In a more elaborate extension, like Magenta and Napoleon, I think these things become critical and allow an extension to form its own ecosystem. The documentation transitions from “how to use a plug-in” to “how to produce film” using a particular style of working. It’s the style that is being codified in an extension, and documenting it is as important as the implementation itself, if not more so.

In terms of Pyblish for Deadline, I think the same is true. What every package needs is a one-click install with supporting docs.

How can we make this happen?

Not that I know of; I’m copying the results from running tree on Linux. It takes a bit of tinkering.

$ apt-get install tree
$ tree
.
├── nginx_test
│   ├── build
│   ├── Dockerfile
│   ├── nginx.conf
│   ├── run
│   └── sites-enabled
│       └── gollum.conf
├── site
│   └── index.html
└── temp
    └── test.py

Nice to see the discussion here 😀

To give an update on where this package is at, I completed an initial working version on Saturday before I went on holiday. Will be back on it next week.

Current version

There is initial support for Maya and Nuke.
I have used the selectors for gathering all non-customizable data.

The customisation of the input, which is the file that Deadline renders from, can be done through the data member “deadlineInput” on the context. When this data member isn’t present, the “currentFile” is used.

The other customisation is the output path, which is where Deadline renders to; it can be modified through the data member “deadlineOutput” on the instance. The existence of this path gets validated.
The modification of these data members could be done at selection, validation or extraction, although as a practice I’m encouraging doing it at extraction (see the sketch below).
At the conforming stage the package picks up all the necessary data, writes the job and plugin files to the “deadlineOutput” path and submits to Deadline. The initial value for this output is taken from the host.
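
To illustrate the customisation described above, a minimal sketch assuming the Pyblish 1.x plug-in API and hypothetical values:

import pyblish.api

class ExtractDeadlinePaths(pyblish.api.Extractor):
    """Hypothetical extractor overriding the Deadline paths"""

    def process_context(self, context):
        # The file Deadline renders from; "currentFile" is used when absent
        context.set_data("deadlineInput", "c:/work/shot01_v002.nk")

    def process_instance(self, instance):
        # Where Deadline renders to; the existence of this path is validated
        instance.set_data("deadlineOutput", "c:/renders/shot01")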

Examples of the customisation can be seen here: https://github.com/tokejepsen/pyblish-deadline/tree/master/pyblish_deadline/examples
The only other validation is of the render order of Nuke write nodes.

I have initial support for ftrack and Draft as well. One of the coolest moments of making this was when I wrote the ftrack support in half an hour and realised that it was cross-host compatible. Granted, it’s not cross-host in the current version, but that’s possible with some minor tweaks I’ll do when I’m back on it. The same applies to Draft and potentially Shotgun.
To my knowledge Deadline themselves are using host-specific code to get the ftrack-related data, so it seems Pyblish is winning there 😀

Todo

The main area I want to focus on and solve is the customisation step. Although customising at the extraction stage is perfectly functional, I am leaving any validation up to the user, so in theory they aren’t gaining anything from the built-in path validator, which is a shame.
The only alternative I can think of is to validate the path again at the conforming stage.

There is currently no way of customising the input/output without writing plugins. I’m not entirely sure it is necessary to look into this, as the user could just change the output in the host.

Maya has the special prefix where the user can customise the output path relative to the project settings. Currently I’m just passing the entire output path along, which includes this prefix. I would like to keep this prefix outside of the “deadlineOutput”, so there is no need for additional code to compensate for render layer etc. paths.
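
For context, this is Maya’s “File name prefix” in the render settings, which can contain tokens resolved per render layer; an illustrative example with made-up names:

File name prefix:  <Scene>/<RenderLayer>/<Scene>_<RenderLayer>
Resolved output:   c:/projects/dev/images/shot01/masterLayer/shot01_masterLayer.0001.exr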

Discussion

The current setup is just based on plugins, with no external library. I guess I could expose the submission, but I don’t know what use it would be.
I would probably rather focus on supporting “python” as a host, to submit arbitrary jobs.

The folder structure is the standard extension layout. I have not split the plugins into host-specific folders, as I would like the user to just point to one directory to get it to work.
Still not sure whether to call it an extension or an integration, though.

I’m forking it and going to try it. I have a less robust implementation running in pyblish-kredenc right now that works well in production, but it is currently Nuke-only and a bit hardcoded in places, so I’ll try to switch to this to get on the same page.

I have a few comments though.

ftrack extension
select_deadline_ftrack: I’m going to separate my ftrack plugins (mostly conformers, but a selector as well) into their own extension repo. So I was thinking we could probably make pyblish-deadline dependent on pyblish-ftrack. The ftrack selector injects ftrack data into the context, and if pyblish-deadline finds it, it can process it. If not, it runs as normal, without the ftrack part.

You’re currently writing ftrack data directly into FT_TaskId, FT_VersionId etc. in the selector. I suggest delaying this step until extraction, the same way as you’re doing with Draft. An extractor_deadline_ftrack would just look for ftrack_data in the context and write its content into the correct ExtraInfoKeyValue entries.
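
For reference, in a Deadline job file those entries are plain key=value lines; an illustrative sketch, reusing the task id from the dictionary further down and a placeholder for the version:

ExtraInfoKeyValue0=FT_TaskId=1ce8dbf8-ed89-11e4-8a24-040121b9e701
ExtraInfoKeyValue1=FT_VersionId=<version id>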

That way we could keep extensions clean of code that might be more relevant to another extension, and rather work with their dependencies. It’s only a question of agreeing on what we call the ftrack data member that contains all the needed info. Currently I’m using ftrack_context, which holds everything that I can gather about the current task, so you wouldn’t need to make any calls to the ftrack server from the Deadline extension.

For reference, this is currently what I have for ftrack (very simplistic for now): pyblish-kredenc/plugins/ftrack
I’m going to open a thread for this so that we can discuss it there.

I think this is clearly an extension. It doesn’t allow us to publish from Deadline, but merely works with data in one integration and sends it to Deadline. Might be a mighty extension, but still an extension. (In my opinion.)

If we could invoke Pyblish from the Deadline Monitor and publish renders from there, that I would see as an integration.

I’d say this is not necessary. The host-specific code is not too extensive, so I would keep it all in one.

I just created an empty repo for it here.

I suggest delaying this step into extraction

Yeah, will do. This was also my next step to make the ftrack part host-agnostic.

I don’t mind what we call the ftrack data member. Guessing it’s a dictionary? Maybe ftrack_data, to not get any confusion with Pyblish’s use of the word context?

Yes, it’s a nested dictionary looking like this. Currently there are a few extra keys that are custom to our pipeline, but those could be separated out and added in a separate file, like you’re doing with Deadline. If the task is within an episode, that gets added too, of course.

{'project': {'code': 'dev',
             'id': '4afb9a16-1cac-11e4-a406-040121b9e701',
             'name': 'Pipe dev',
             'scode': 'D001'},
 'sequence': {'description': 'pyblish',
              'id': 'f4fe0f78-ed88-11e4-bf08-040121b9e701',
              'name': 'sq01'},
 'shot': {'description': '',
          'id': 'fe911418-ed88-11e4-8278-040121b9e701',
          'name': 'sh001'},
 'task': {'code': 'anim',
          'id': '1ce8dbf8-ed89-11e4-8a24-040121b9e701',
          'name': 'Animation',
          'type': 'Animation'}}

This should be more than enough data to meet the minimum requirements for the deadline-ftrack-event plugin to pick up on.

Make sure to stick to the naming convention for data.

As we start seeing more integrations and extensions, it will become increasingly important that data remains familiar and predictable.
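
For example, assuming the ftrack_data dictionary above, the camelCase convention would look like this:

instance.set_data("ftrackData", ftrack_data)  # camelCase data member name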

Ah, good point. I’ll go through my code to reflect this. Currently I’m using exactly the opposite everywhere (‘data_member’ instead of ‘dataMember’).

So I have a current implementation of pyblish-deadline that works with the latest pyblish-ftrack, the main change being that pyblish-ftrack works on instances only, instead of the context.

What this means for pyblish-deadline is that we need to inject the ftrack data members into the instance at the selection stage for them to be processed by pyblish-ftrack. Since we are dealing with known outputs, which are image sequences (they can be other things in the future, but that would be other selectors), we can specify the ftrack asset type as “img”.

I also inject an empty ftrackComponents data member for the processing, as this will be populated later when the job is with Deadline.
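
A minimal sketch of that injection, assuming the Pyblish 1.x plug-in API:

import pyblish.api

class SelectDeadlineFtrack(pyblish.api.Selector):
    """Hypothetical selector preparing instances for pyblish-ftrack"""

    def process_instance(self, instance):
        # Known outputs are image sequences
        instance.set_data("ftrackAssetType", "img")
        # Empty for now; populated once the job is with Deadline
        instance.set_data("ftrackComponents", {})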

My main problem is how to expose ftrackAssetName for users to customise to their needs. I’m currently injecting this with a validator, but that is obviously not very SVEC. What I would preferably want to do is to have an ftrackAssetName selector that somehow injects into an existing instance, but that is impossible?

I’m afraid I don’t understand Deadline and FTrack well enough to give an educated answer to this. If you can find some way of breaking it down into baby-bits, I might be able to be of more help.

For example, how come Deadline isn’t also operating on instances? An Instance is meant to represent something to be published, in this case a shot/still. The fact that it’s an image sequence is a detail, a product of an extractor, just as an .mb file is the product of extracting a rig.

If the output isn’t an image sequence, but rather a “task” for Deadline or FTrack, then the task is what would be extracted and transmitted to the receiver, Deadline or FTrack, as opposed to written to disk like with a rig. That would at least be the SVEC/CVEI method of doing things.

Either way, the task should be based on the Instance, a shot or still (or something else?).

The only reasonable reason I’ve heard so far for extracting the Context is when extracting the working file itself. Have you found another reason to extract the Context?

pyblish-deadline is operating on instances as well. So in order to customise the ftrackAssetName on these instances, I’ll need to inject it in another plugin. As pyblish-ftrack is validating the instances, the ftrackAssetName data member needs to be set before the validation stage.
Currently I’m working around it by offsetting the pyblish-ftrack validator and injecting the ftrackAssetName into the instance with a validator that has a lower order than the pyblish-ftrack validator, which isn’t ideal.
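
A minimal sketch of that workaround, assuming the Pyblish 1.x plug-in API and a hypothetical default name:

import pyblish.api

class InjectFtrackAssetName(pyblish.api.Validator):
    """Hypothetical plug-in injecting ftrackAssetName ahead of pyblish-ftrack"""

    # Offset the order so this runs before pyblish-ftrack's validator
    order = pyblish.api.Validator.order - 0.1

    def process_instance(self, instance):
        if instance.data("ftrackAssetName") is None:
            instance.set_data("ftrackAssetName", instance.name)  # hypothetical default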

Ah, I see.

Which extension must always run first, Deadline or FTrack?