Pyblish Magenta


Rebusfarm submits through its own script from within Maya that ‘automagically’ (and annoyingly) collects all loaded files into the sourceImages folder, then changes all loaded paths to point there. It then saves the scene as a copy with a _rebus suffix and submits all of those contents in one go.

I’ve not seen any way of customizing it with their tools. The issue is mostly that these render services try to be user-friendly for a lighting artist, providing tools that work in a certain way. You don’t actually have access to their server, and you also download the rendered files through their tools. In reality there’s not even a way of knowing how files will be structured on their end.

But yes, on a local farm this is a trivial issue to solve.



Behold, the glorious render.

We can now publish look development scenes, and re-create the look development in lighting on an unlimited number of copies of any asset.


This brings us to an interesting new area; a way to automatically publish lookdev and lighting results.

They’ll need to be renders, obviously, which brings us to solving “the rendering problem”. Basically, not having to wait for a render to complete when publishing.

I’m thinking the solution can be broken down into 3 steps.

  1. Publish a render locally, in the same process, awaiting completion.
  2. Publish the task to another process, and await the results.
  3. Publish the task to another process, and have this process publish the results asynchronously.

(1) is clearly not optimal, but implementation-wise it just means a regular extractor, rendering to a regular location on disk. As will become obvious, this can take a while.
(2) This is good: we can publish and move on with our task while the render occurs in a background process.
(3) Finally, what is most common, but also most complex.
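Approach (2) can be sketched with nothing more than the standard library; a minimal illustration, not actual Pyblish API (the function names here are placeholders):

```python
import subprocess
import sys

def submit(cmd):
    """Launch the render/publish in a separate process (approach 2),
    so the artist can move on with their task in the meantime."""
    return subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

def await_results(proc):
    """Block until the background process completes, returning its exit
    code and output. Approach (3) would skip this wait and have the
    other process publish its own results when done."""
    out, _ = proc.communicate()
    return proc.returncode, out
```

Usage would be something like `proc = submit([sys.executable, "render_and_publish.py"])`, keep working, then `await_results(proc)` when the result is needed.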



Ok, so Roy and I were just discussing where rendering fits into publishing.

I was thinking that it would make the most sense holistically if rendering involved publishing of an image sequence, as opposed to the publishing of a job to a farm, which would in turn deliver the image sequence.

In practice, rendering would involve writing to your /work or development folder until you are happy with the results. From there, you publish, and the image sequence is run through validation.

  1. Render however; locally, on the farm, remotely, by hand
  2. Publish the resulting images, when you are happy.
$ cd maya/images
$ pyblish
$ # GUI Appears

From this perspective, publishing remains simple, without having any effect on how these images are actually made. It means artists are free to use existing farm control software, such as Deadline, without having to deal with the “middle-man” Pyblish would otherwise become.
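A collector launched this way could tally the frames on disk into sequences roughly like this; the filename pattern below is an assumption for illustration, not an established convention:

```python
import re
from collections import defaultdict

# Assumed naming convention: <layer>.<frame>.<ext>, e.g. beauty.0001.png
FRAME = re.compile(r"^(?P<head>.+?)\.(?P<frame>\d+)\.(?P<ext>\w+)$")

def collect_sequences(filenames):
    """Group frame files (e.g. the contents of maya/images) into named
    sequences, each of which could become a publishable instance."""
    sequences = defaultdict(list)
    for name in sorted(filenames):
        match = FRAME.match(name)
        if match:
            key = "{head}.{ext}".format(**match.groupdict())
            sequences[key].append(int(match.group("frame")))
    return dict(sequences)
```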


Not having had too close a look at Pyblish Deadline yet, the existence of this package tells me this isn’t how things are currently done. I know of at least one studio that does it this way (ping @lars), and I’m curious what benefits and sacrifices would have to be made in favour of one over the other.

@mkolar and @tokejepsen, what do you think about the above approach, and how does your current approach differ?


We don’t publish image sequences through Pyblish, but it makes sense. In other applications which I haven’t looked at supporting yet (TV Paint), the artists render locally and then publish with the pyblish-ftrack integration.

For the most part we publish a job to the render farm, and Deadline takes care of the integration afterwards, meaning publishing to ftrack.


Do you think it could make sense to render on your own, through Deadline or in any other way, but into your /work or development folder, and then publish the resulting image sequence and integrate with ftrack from there?

Basically swapping places for where Pyblish comes in, and where Deadline comes in.

# From 
 ______    _________   |   __________     ________  
|      |  |/////////|  |  |          |   |        | 
| Maya |--| Pyblish |--|--| Deadline |---| ftrack |
|______|  |/////////|  |  |__________|   |________| 

# To
 ______    __________     _________   |   ________  
|      |  |          |   |/////////|  |  |        | 
| Maya |--| Deadline |---| Pyblish |--|--| ftrack |
|______|  |__________|   |/////////|  |  |________| 


Quite possibly, but there are quite a few parameters that the artist needs to be aware of when submitting to Deadline. These are currently validated and extracted here.
This is mostly to make it easy for the artist, and also from a management point of view; fewer variables to go wrong for a successful render.

This also turns it into a two-step process rather than one. The two steps would be to wait for the renders to finish, then publish them. Whereas right now the artists just publish, and when the renders are done, they appear in ftrack with an “Artist Review” status.


That is interesting; that you validate the job itself and the convenience of having it appear automatically once finished… Clearly beneficial. Thanks for sharing.


We are doing exactly the same as Toke.

Artists just press publish from Nuke or Houdini and that’s it. Once renders are done, they appear in ftrack with the status “render complete”. The artist then checks them and, if happy, changes the status to “Pending review”. If not, they review it, make tweaks and send a new version. In theory they don’t need to know anything about our farm at all.

Also, one thing to keep in mind is that renders can be big. I’m not very keen on rendering somewhere and then copying, because of the network load. Moving is quick, but I’m too paranoid to move renders that took hours to complete.


+1. Second that.


Good point, something to keep in mind.

This is the part I’m not fond of. :frowning:

It means sharing (publishing) of content happens before it has passed validation. Because how could it pass, unless Pyblish has got all of the information?

I see the convenience; you’re basically validating by hand when flicking that “Pending” switch. I just find it difficult to think beyond this step and how to build upon it.


Not quite. The only thing that we validate outside of Pyblish is corrupt renders. All the other technical details are validated when submitting. They still see the Pyblish GUI with all the plug-ins. Once it has passed this validation, the only thing they need to check once renders are done is that all the frames are there, and that they look good. We would have to validate artistically by hand anyway, whatever amount of validation Pyblish would do on the files.

I’m however about to add an option to publish already rendered frames from Nuke. In case a compositor did a local render, they will be able to pick those up and publish them instead of sending to the farm.


Cool, would love to hear how you get along with that.

From a completely purist perspective, what I’d like to see happen is for submission of renders to be just as you say, a click away. But for those renders to basically end up in your local working directory. The flicking of the “Pending” switch would then be moved to being the actual publish; as you’ve already seen it at that point and can submit it as “Pending review” right away.

The way Toke describes it: launching Pyblish with the image sequence in the current working directory, with collectors there to tally up the sequences into Instances, and extractors and integrators to do what they do, including actually registering with ftrack.

About publishing local sequences, out of curiosity: which directory are you expecting your artists to navigate to in order to publish a sequence? I’m struggling with this right now.


Most likely from its final location. I should probably clarify what I mean by publishing in this case, which is not moving rendered frames to their final location (we always render directly there), but rather making an entry in the database, thus letting other people know this has been rendered and published. So this plug-in will merely check that all frames are there, and publish them to the ftrack database.

However, say an artist rendered to their local drive for whatever reason; then it should take these frames and move them to the correct location during publishing.


Sounds like a good compromise.

On a large show, I would probably hard-link renders either way, not actually copy.


To be honest, I need to look at doing that on Windows, as that would be a very elegant solution.


It would mean instant, zero-cost copies. Definitely something worth digging into. Windows is as capable of this as Linux.

The crux I suppose would be that you probably need to/should do your file-management server-side, which may not be obvious or pleasant at first.
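For reference, a hard-link “copy” is a one-liner in Python, and os.link works on NTFS as well, provided source and destination are on the same volume; a minimal sketch, with hypothetical paths:

```python
import os

def integrate_frame(src, dst):
    """'Copy' a rendered frame into its published location as a hard
    link: instant and zero-cost, but both paths must be on the same
    filesystem/volume, and the data is shared, not duplicated."""
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    os.link(src, dst)
```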


Magenta Contract

Here is a summary of how the different families in Magenta are currently identified and collected. Call it “the contract”.

Table of families

  • metadata
  • model
  • rig
  • pointcache
  • animation
  • layout
  • lookdev
  • renderlayer

Each family is given a series of attributes so we can more easily compare and contrast them against each other.

tasks: Which tasks typically produce instances of this family
hosts: Hosts used for this task
identifier: How collection separates this instance from everything else
inputs: Optional inputs to the task, such as a model for rigging
outputs: Type of information leaving a host, such as a mesh
representations: File formats of outputted files
destination: Absolute path of output after integration
file: Fully qualified filename of output after integration

I’ll update this post as we go.


  • 2015-08-19 15:00 - Added sections metadata and layout.
  • 2015-08-20 13:85 - Added table of families


Generic task and content information.

tasks: ["any"]
hosts: ["any"]
identifier: -
inputs: []
outputs: ["data"]
representations: [".json"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"
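The destination and file attributes are plain str.format templates, so resolving them is a one-liner; the context values below are hypothetical:

```python
# Templates as given in the contract above
DESTINATION = "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
FILENAME = "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

def resolve(template, **data):
    """Fill a path template from context data; a missing key raises
    KeyError, which doubles as a cheap validation of the context."""
    return template.format(**data)
```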


For every published asset, metadata is published regarding context-sensitive information relevant to other pipeline steps.

  • Author
  • Date
  • Project
  • Item
  • Task
  • Contained Assets

Most importantly, this information can be used to track assets used in other assets; for example, figuring out where a model in a rig was originally imported from, or which rig was used in a pointcache.

This information is used to associate a pointcache with shaders from its associated lookdev.


The geometry of an asset.

tasks: ["modeling"]
hosts: ["maya"]
identifier:
- assembly with ITEM name and "_GRP" suffix, e.g. "ben_GRP" +
- environment variable TASK == "modeling"
inputs: []
outputs: ["mesh"]
representations: [".ma"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"


Artists model in Maya using only polygons. Other nodes, such as NURBS, history of any kind and keyframes, are allowed but are not included on publish.

room for improvement

The two-factor identifier. Ideally I’d like a model to appear as a model, regardless of its environment. The assembly could, for example, have a user-defined attribute on it saying “I’m a model”. Or an assembly with a uniquely identifying suffix, such as _MODEL, or even a prefix such as model_GRP.

To not burden the artist, an initialisation step could be added in place of starting fresh. For example, a “Create Asset” dialog with drop-down indicating what sort of asset is to be created. The procedure could then automatically apply the appropriate user-defined attribute along with setting up the appropriate assembly into which the model is to be created.
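The “Create Asset” idea could look something like this inside Maya; a sketch only, assuming a running Maya session, and the family attribute name is hypothetical:

```python
def create_asset(name, family):
    """Sketch of a "Create Asset" initialisation step, to be run from a
    dialog inside Maya. Tags the assembly with a hypothetical "family"
    attribute, so that collection could identify e.g. a model by the
    tag rather than by the TASK environment variable."""
    from maya import cmds  # only available inside a Maya session

    assembly = cmds.group(name="%s_GRP" % name, empty=True)
    cmds.addAttr(assembly, longName="family", dataType="string")
    cmds.setAttr("%s.family" % assembly, family, type="string")
    return assembly
```

Collection would then look for any assembly carrying this attribute, instead of pairing an assembly suffix with an environment variable.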


The encapsulation of content into an interface in preparation for use/animation by other artists.

tasks: ["rigging"]
hosts: ["maya"]
identifier:
- assembly with ITEM name and "_GRP" suffix, e.g. "ben_GRP" +
- environment variable TASK == "rigging"
inputs: ["model"]
outputs: ["any"]
representations: [".ma"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"


Namespaced as {instance}_, e.g. ben_. {instance} is important to distinguish it from other referenced assets, whereas the underscore is purely cosmetic and makes namespaces appear as a regular separator: ben_:ben_GRP versus ben:ben_GRP.

Once referenced, the rig is stored in an assembly much like its model equivalent. The internal structure is as follows.

▾ ben
  ▾ implementation
    ▸ inputs
    ▸ ...
    ▸ outputs
  ▾ interface
    ▸ controls
    ▸ preview
▸ controls
▸ pointcache


# The assembly, suffixed _GRP for collection
▾ ben

  # The bulk of a rig, including, but not limited to, deformers,
  # auxiliary geometry, dynamics and constraints
  ▾ implementation

    # The source geometry in its undeformed state
    ▸ inputs

    # The various operations to be performed on a rig
    # in top-down order. I.e. the top-most operation
    # happens before the next. Similar to the channel-box.
    ▸ ...

    # The resulting geometry/nodes from the above operations
    ▸ outputs

  # The implementation is always hidden from view.
  # The interface is what an artist interacts with
  # and sees.
  ▾ interface

    # Standard NURBS controllers, driving the above
    ▸ controls

    # Visible, non-editable DAG nodes, including meshes and helpers
    ▸ preview

# An objectSet containing all animatable controllers
▸ controls

# An objectSet containing all cachable geometry
▸ pointcache

References are baked into the file upon publish; i.e. no nested references.

room for improvement

  1. The duplicate information in the full name, ben_:ben_GRP; it doesn’t reflect which task or family the content is from. It could instead say e.g. ben_:model_GRP.
  2. To prevent artists from re-creating this hierarchy, or creating it wrongly, an initialisation phase could be added, similar to the above suggestion for modeling. Essentially the same dialog box: selecting “Rig” would assign the proper user-defined attribute and create the hierarchy fixture.


The look of an asset.

tasks: ["lookdev"]
hosts: ["maya"]
inputs: ["model"]
identifier:
- Object set called exactly "shaded_SET"
outputs: ["shader", "link"]
representations: [".ma", ".json"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"


An artist references the model of an asset and assigns shaders to it, along with user-defined attributes on the meshes themselves. Shaded meshes are added into an object set called exactly shaded_SET.


Meshes in shaded_SET are examined for their associated shaders. These shaders are exported, along with their shading groups and graph; such as textures and light links.

As the outputted Maya scene file does not include the meshes themselves, we need a third party to link a mesh with its relevant shader. This is where .json comes in.

Each shaded mesh and component is serialised to JSON and linked with the corresponding mesh via a unique identifier. The mesh and shader can then later be re-assembled during e.g. lighting.
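As an illustration, serialising those links could look like this; the exact schema (a list of uuid-to-shader pairs) is an assumption for the example, not necessarily what Magenta writes today:

```python
import json

def serialise_links(assignments):
    """Serialise mesh-to-shader assignments, keyed by each shape's
    unique identifier, so lighting can later re-assemble the look
    onto pointcached meshes. Schema is illustrative only."""
    links = [{"uuid": uuid, "shader": shader}
             for uuid, shader in sorted(assignments.items())]
    return json.dumps(links, indent=2)
```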

room for improvement

As opposed to previous steps, the identity of shaders and their correlation to meshes is independent of the environment. Once a set called shaded_SET is found, its meshes are probed for shaders and all of the above happens automatically.

I’m considering whether it could be a good idea to discard the suffix _SET, as it is such a key node. In fact, in general, I’m thinking key nodes should be without a suffix, to highlight their importance and singularity.


Assets associated and positioned within a shot, with optional animation.

tasks: ["layout", "animation"]
hosts: ["maya"]
inputs: ["rig"]
identifier: ?
outputs: ["inventory", "transforms", "animation"]
representations: [".json", ".atom"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"



Layout generates a plain list of assets associated with a shot, a.k.a. the inventory. For example.


The inventory may then be used to populate the shot for final animation.

In addition to a shotlist, a static position for each relevant asset is produced, such that it may be positioned according to the layout artist’s design, who may in turn have followed a guide such as a storyboard or animatic.

Optionally, additional animation may also be produced from layout, which may then be applied during final animation as a starting point for animators. This animation is stored as .atom.


Animation serialised into point-positions and stored as Alembic.

tasks: ["animation"]
hosts: ["maya"]
inputs: ["rig"]
identifier:
- Object set called exactly "pointcache_SET"
outputs: ["mesh"]
representations: [".abc"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"


Artists reference rig instances and apply keyframe animation. Each rig is referenced under a pre-determined namespace.

Namespaced as {instance}{occurrence}_, e.g. ben01_. {instance} is important to distinguish it from other referenced assets, and {occurrence} allows referencing the same asset multiple times. The underscore is purely cosmetic and makes namespaces appear as a regular separator: ben01_:ben_GRP versus ben01:ben_GRP.
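The naming convention can be captured in a tiny helper; a sketch based on the examples above:

```python
def namespace(instance, occurrence=None):
    """Build the reference namespace: ben_ for a single referenced
    rig, ben01_ when the same asset occurs multiple times in a shot.
    The trailing underscore is purely cosmetic."""
    suffix = "%02d" % occurrence if occurrence is not None else ""
    return "%s%s_" % (instance, suffix)
```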


Only meshes and point-positions are present, along with the .uuid of each shape node. This means no transform values or animation curves.


Animation serialised into curves and stored as Atom.

tasks: ["animation"]
hosts: ["maya"]
inputs: ["rig"]
identifier:
- Object set called exactly "pointcache_SET"
outputs: ["curves"]
representations: [".atom"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

It hasn’t been implemented yet.


As with pointcaches, a rig is referenced and animated and the same rules apply; only in this case the animation can be applied afterwards to the original rig, with no point-positions involved.


Only transforms are considered: those present in controls_SET.


Combined pointcache and lookdev into a rendered sequence of lit images.

tasks: ["lighting"]
hosts: ["maya"]
inputs: ["rig"]
identifier: ...
outputs: ["images", "quicktime"]
representations: [".png", ".mov"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"


An artist references the relevant pointcaches produced by animation and applies shaders from lookdev. Shaders are deduced and applied automatically from the given pointcaches. That is, a pointcache of ben for shot 1000 is given shaders from the lookdev of the asset ben.

Rendering occurs without the influence of publishing, as plain files on disk. The artist then publishes the resulting files directly off disk.


For The Deal, the output is plain .png, though any format should do.

room for improvement

  • @tokejepsen mentioned that sometimes, job submissions to a farm may need to be validated. See how this could fit into this methodology.


I’ll keep this post updated with changes as we go. Ultimately, this should represent each and every detail of how each individual family is first identified and finally outputted, minus validation.

Feel free to comment on things, we’ll rely on the version control of this post to track what each reply was in regards to (by looking at the date of the reply, and scanning edits of the post).


Version Control

Here’s how you currently update assets to their latest version, preserving environment variables in final paths, such as $PROJECTROOT, so that no absolute paths are ever involved.


The idea is to lose the dependency on assets being referenced, and to introduce a form of management GUI to properly list assets alongside their current and available versions.

Extraction Queue


I think it might be best to set a standard where everything is contained in an objectSet. The nice thing about it, over having attributes on a node that is part of the actual content, is that it’s a separate visual object in the Outliner. Plus, the contents of the objectSet can be limited (instead of a full hierarchy) or forced to contain more; it has a lot of extra possibilities. Also, by using the same method in all tasks, we simplify the workflow of the overall pipeline. (Only rendering would perhaps be hard to define in an objectSet?)



This sounds like it ‘depends’ on the name of the namespace, but it doesn’t, correct? It’s only there for cosmetic purposes?


We’ll arrive at a point where multiple shaded_SETs should be allowed in a single scene, for example when publishing shader variations. This is just a note for now, to ensure we’re not locking into allowing only one single set of a family in the scene. I know rig pointcache sets already allow multiples, but they require namespaces for that. I can see this happening in lookdev:


We currently have a really nice workflow with something similar. Tom (our lighting artist) will hopefully join the discussion soon. He’ll be able to tell you more about his preferred workflow/setup.


In some (or many?) cases there’s no need to bake everything down into geometry changes. For example, a truck could be rigged in such a way that it’s just constrained; baking the transforms down is much more efficient (and actually what would be expected). Also, eliminating any hierarchies coming through might make it harder for the lighting artist to secretly constrain a light to a moving object.

Actually, there is currently one object that will have animCurves… the camera? :wink: Or is that part of animation as opposed to pointcache?


Just checking: you’re not necessarily saying to ditch having references in the scene, but rather that the artist doesn’t “reference” something himself, and instead uses ready-to-go tools to do so. The assets being loaded would still be references? Or what are you aiming at?