Deadline has the selector.
Aha, ok, I had a quick look at those selectors and suspect it might be where things get dicey.
I assumed you were producing validators and extractors, possibly integrators, and that you would let users of pyblish-deadline determine how their scene is organised. A selector, almost any selector, is bound to be very personal.
What I would suggest, is that instead of distributing selectors, write a contract.
The contract would specify the family which Pyblish Deadline plug-ins (validators and extractors) support, along with any data the plug-ins require. In a nutshell, turn each of the selectors into contracts that studios can choose to implement themselves.
For example.
The Deadline Write Node Contract
For Deadline to validate your Instances, each must contain the following.
family: deadline.render
deadlineJobData:
    outputFileName0:  # Absolute path to where a rendered sequence is to be written
    deadlineFrames:   # Dash-separated string of start and end frame, e.g. 1-10
deadlinePluginData:
    EnforceRenderOrder:  # Whether to enforce the order in which renders occur
    NukeX:               # Absolute path to Nuke X
    Version:             # Version of Nuke X
As EnforceRenderOrder and others are fixed, I would suggest you strip these from the selector/contract and instead insert them implicitly right before anything is sent off to Deadline. The smaller the contract, the easier it is to get started, and it also means less maintenance cost for both you and the user.
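To illustrate stripping the fixed values out of the contract, here is a minimal sketch of merging them in implicitly right before submission. `FIXED_PLUGIN_DATA` and the function name are illustrative, not part of any actual Pyblish Deadline API.

```python
# Values that are always the same stay out of the contract entirely
# (names here are hypothetical examples)
FIXED_PLUGIN_DATA = {
    "EnforceRenderOrder": True,
}

def prepare_plugin_data(instance_data):
    """Overlay user-supplied plug-in data on top of the fixed defaults."""
    plugin_data = dict(FIXED_PLUGIN_DATA)
    plugin_data.update(instance_data.get("deadlinePluginData", {}))
    return plugin_data
```

The user never sees `EnforceRenderOrder`; they only supply what genuinely varies between studios.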
The contract should specify not only which members are necessary, but also which are optional, along with their effect on the end result. Additionally, every member must be defined in detail, exactly how Pyblish Deadline expects it to be formatted, as in the case of deadlineFrames.
The less specific you require them to be, the better. There’s a saying when it comes to designing contracts/APIs: be as tolerant as possible of any kind of input, but output only what is strictly necessary and be as consistent as possible. I think that balance applies well here too. Let your users be liberal, but provide only predictable and rational outputs.
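A minimal sketch of enforcing such a contract, using the member names from the example contract above; in practice this would live inside a Pyblish Validator plug-in rather than a plain function, and the required members are up to you.

```python
# Members the example contract above marks as required (illustrative)
REQUIRED_JOB_DATA = ("outputFileName0", "deadlineFrames")

def validate_deadline_contract(instance_data):
    """Raise ValueError when a required contract member is missing."""
    job_data = instance_data.get("deadlineJobData")
    if job_data is None:
        raise ValueError("Missing required member: deadlineJobData")
    for member in REQUIRED_JOB_DATA:
        if member not in job_data:
            raise ValueError("Missing required job data: %s" % member)
```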
So you are basically suggesting a config file that users customize to their needs?
The reason why I didn’t do config files to begin with was ease of use for Pyblish users. They already know the API, so they can easily write plug-ins.
No, I’m talking about documentation.
You provide them with the information they need to write their own Selector that works with Pyblish Deadline.
Hmmm, sounds good. I would like to have it as plug-n-play as possible. I was thinking about subclassing for customization as well.
This seems unrelated to the problem (?), but do elaborate. Do you mean to support others that may want to subclass the plug-ins?
The selector plug-ins are currently a bit lengthy, and I wouldn’t want others to start from scratch, so I have the option of including the selectors in an “examples” folder, which users then copy into their own extensions and customise.
The idea behind subclassing some of the selectors in the extension would be that people benefit from the work being done with the extension without having to copy/paste code.
Their selectors would be quite small:

import pyblish_deadline

class SelectDeadlineRenderlayers(pyblish_deadline.select_deadline_render_layers):
    """Gathers all renderlayers"""

    ftrackAssetName = 'something'
This is a bit tricky to solve, as we have a bit of a chicken-and-egg situation.
But in general I would prefer if the Deadline extension worked based on instance data rather than instance family. That way users could just append this data to their own selector of any family and have it sent to Deadline (similar to what the ftrack extension is doing).
You could simply check whether an instance has deadlineJobData and deadlinePluginData, and operate on the instances that do.
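Checking by data rather than family could be as small as this sketch, assuming instances expose their data as a dict-like mapping:

```python
def wants_deadline(instance_data):
    """True when an instance carries both Deadline data members."""
    return ("deadlineJobData" in instance_data
            and "deadlinePluginData" in instance_data)
```

Every Pyblish Deadline plug-in could then skip any instance for which this returns False, leaving family entirely up to the user.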
To make it easy for new users to implement it in their selector, we could provide a piece of boilerplate code (a function) that they just drop into their selector, fill in a few basic details, and off they go.
It sounds like you expect the requirements to be quite large, but I don’t think they have to be. I would strive to make the contract as succinct as possible and try to trim away any rough edges, making the requirements as nimble as possible.
If the contract did end up large and cumbersome to implement, I would consider it a sign of problems in the design.
About your “examples” folder, this sounds to me like something better suited for a workflow extension. Smaller examples seem better suited for the documentation.
KISS.
That is a good point. That’s probably as close as you can get to “plug-and-play”.
It could actually be fairly simple to implement. For example:
If you want the Deadline extension to process your instance, append the following function to your selector and supply it with the necessary data.
import os
import re

import pyblish.api


class SelectDeadlineMantraNodes(pyblish.api.Selector):
    """Selects all write nodes"""

    hosts = ['houdini']
    version = (0, 1, 0)

    def process(self, context):
        output_file = 'path/to/render/output.exr'
        plugin = 'host_app'
        instance = 'freshly created instance'

        self.dl_job_data(instance, output_file, plugin)

    def dl_job_data(self, instance, output_file, plugin):
        # setting job data
        job_data = {}
        if instance.has_data('deadlineJobData'):
            job_data = instance.data('deadlineJobData').copy()

        # Replace the trailing frame number in the output file with
        # Deadline-style padding, e.g. "output.0001.exr" -> "output.####.exr"
        padded_number_regex = re.compile("([0-9]+)", re.IGNORECASE)
        padded_output_file = output_file

        matches = padded_number_regex.findall(os.path.basename(output_file))
        if matches:
            padding_string = matches[-1]
            padding = "#" * len(padding_string)
            padded_output_file = self.right_replace(
                output_file, padding_string, padding, 1)

        job_data['OutputFilename0'] = padded_output_file
        job_data['Plugin'] = plugin

        instance.set_data('deadlineJobData', value=job_data)

    def right_replace(self, string, old, new, count):
        # Replace from the right-hand side, so only the last occurrence
        # (the frame number) is padded
        return new.join(string.rsplit(old, count))
take it with a pinch of salt, I just took it straight from the current selector.
True, and I like that idea, which I’ll have a look at implementing.
Then some documentation with convenience method examples, should make it easy enough.
That’s a good start, but I would take it one step further.
import pyblish.api


class MySelector(pyblish.api.Selector):
    def process(self, context):
        instance = context.create_instance(name="My Instance", family="myFamily")

        # Integration starts here, as documented
        instance.set_data("outputFile", r"c:\myfiles\file.###.exr")
        instance.set_data("plugin", "myplugin")
This way, you can keep the dl_job_data function in any of the Pyblish Deadline plug-ins, or as a utility function, and not have to pester the users with any boilerplate at all.
def dl_job_data(self, instance):
    # Do deadline specific things
Of course, that’s practically what we have with ftrack. The problem I see here is that Deadline plug-ins will appear in the UI even if there is no instance to be processed by them, but we discussed that before.
In this case, I think it’s safe to assume that if Pyblish Deadline is on your PYBLISHPLUGINPATH, then it’s meant to run.
If it is on the path, and a plug-in can’t find these attributes, I would throw an exception. And if an asset isn’t meant to be sent to Deadline, I would keep them off the path.
That’s not the point though. I might have a scene that doesn’t have any render nodes in it; it’s only an animation scene, for example. But because Deadline is installed on my machine, I will see it in the Pyblish UI, even though there is nothing in the scene that should be published to Deadline.
It’s exactly the point. Don’t include Pyblish Deadline when working with animation.
A wrapper can handle this. If FTrack can provide environment variables on a per-item basis, then you could append Pyblish Deadline when starting Maya from any render-related task. This is the part where such decision should be made.
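As a sketch of what such a wrapper might do, the launcher could append the Pyblish Deadline plug-in directory to the environment only for render-related tasks. The directory path and function name are made up for illustration.

```python
import os

def append_plugin_path(env, plugin_dir):
    """Return a copy of env with plugin_dir appended to PYBLISHPLUGINPATH."""
    env = dict(env)
    paths = [p for p in env.get("PYBLISHPLUGINPATH", "").split(os.pathsep) if p]
    if plugin_dir not in paths:
        paths.append(plugin_dir)
    env["PYBLISHPLUGINPATH"] = os.pathsep.join(paths)
    return env
```

The launcher would pass the resulting environment to the host process (e.g. via subprocess), so Pyblish Deadline only shows up where it is actually wanted.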
Most of the time that is correct thinking, but far from all cases. For instance, we have animation tasks where we send to Deadline if we’re doing an OpenGL render for better-looking animation previews; other times this is not needed. Variables like this happen all the time, and it’s almost impossible to determine them when launching work on a task. An animator might very well want to send an animation review that is a pure black-and-white render; if we’re doing some silhouette work, this would be sent to Deadline straight from animation, but the shot after that might not.
When we work on more projects with smaller teams, where people might do several tasks at the same time to save time, it’s not very desirable to be micromanaging everything.
I totally see where your thinking comes from, but have to stand in opposition for this one.
EDIT:
but that is all for another discussion, so I’ll drop the off-topic here
So I have been experimenting with the Deadline REST API, and successfully submitted a job remotely. Fortunately, our way of formatting the job and plug-in data fits exactly with how Deadline submits jobs over REST.
Now the question is how to safely configure and store sensitive data for Pyblish Deadline. Deadline requires a username and password, along with an exposed URL and port.
Initially I was thinking environment variables, or a config file located in the repo (obviously not committable).
Let me know what you think. Pretty cool to submit render jobs remotely, without changing the publishing process:)
That sounds great.
Environment variables are common when working with authentication. If you can somehow launch the host with the username and password in its environment, then you can simply fetch those during publishing.
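Fetching those during publishing could look like this sketch; the variable names are made up for the example, so pick whatever convention fits your pipeline.

```python
import os

def deadline_credentials():
    """Read Deadline Web Service settings from the environment.

    The variable names below are illustrative, not a standard.
    """
    try:
        return {
            "url": os.environ["DEADLINE_REST_URL"],
            "username": os.environ["DEADLINE_REST_USER"],
            "password": os.environ["DEADLINE_REST_PASSWORD"],
        }
    except KeyError as missing:
        raise RuntimeError("Missing Deadline setting: %s" % missing)
```

Failing loudly when a variable is missing is deliberate; a half-configured submission is harder to debug than an upfront error.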
Out of curiosity, when you say “remotely”, do you mean without a local Deadline client? Is that how it works? Does that mean you can’t follow progress on a render, or what are the advantages of doing this?
When I say remotely, it did mean you can submit jobs without having Deadline installed, yes. But the main point is that I can submit jobs from a different location, like home, to the render farm at the office.
When you expose the web service for Deadline, you can also setup the mobile application to monitor progress.
With a good syncing setup, like BitTorrent Sync, and a decent connection, this makes working from any location easy.