Pyblish Magenta

Most likely from its final location. I should probably clear up what I mean by publishing in this case, which is not moving rendered frames to their final location (we always render directly there), but rather making an entry in the database, thus letting other people know this has been rendered and published. So this plugin will merely check whether all frames are there and publish them to the ftrack database.

However, say an artist rendered to their local drive for whatever reason; in that case it should take these frames and move them to the correct location during publishing.

Sounds like a good compromise.

On a large show, I would probably hard-link renders either way, not actually copy.

To be honest, I need to look at doing that on Windows, as that would be a very elegant solution.

It would mean instant, zero-cost copies. Definitely something worth digging into. Windows is as capable of this as Linux.
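For illustration, here is a minimal sketch of what such a hard-link “copy” could look like in Python; the function name and path handling are hypothetical, but os.link does work on NTFS, and we can fall back to a real copy where hard links aren’t possible.

import errno
import os
import shutil

def harvest_frame(source, destination):
    """Hard-link a rendered frame into its publish location.

    Falls back to a plain copy when hard-linking isn't possible,
    e.g. across volumes or on file systems without link support.
    """
    try:
        os.makedirs(os.path.dirname(destination))
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
    try:
        os.link(source, destination)  # instant, zero-cost "copy"
    except OSError:
        shutil.copy2(source, destination)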

The crux I suppose would be that you probably need to/should do your file-management server-side, which may not be obvious or pleasant at first.

Magenta Contract

Here is a summary of how the different families in Magenta are currently identified and collected. Call it “the contract”.

Table of families

  • metadata
  • model
  • rig
  • pointcache
  • animation
  • layout
  • lookdev
  • renderlayer

Each family is given a series of attributes so we can more easily compare and contrast them against each other.

tasks: Which tasks typically produce instances of this family
hosts: Hosts used for these tasks
identifier: How collection separates instances of this family from everything else
inputs: Optional inputs to the task, such as a model for rigging
outputs: Type of information leaving a host, such as a mesh
representations: File formats of outputted files
destination: Absolute path of output after integration
file: Fully qualified filename of output after integration

I’ll update this post as we go.

Changelog

  • 2015-08-19 15:00 - Added sections metadata and layout.
  • 2015-08-20 13:85 - Added table of families


metadata

Generic task and content information.

tasks: ["any"]
hosts: ["any"]
identifier: -
inputs: []
outputs: ["data"]
representations: [".json"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

workflow

For every published asset, metadata is published regarding context-sensitive information relevant to other pipeline steps.

  • Author
  • Date
  • Project
  • Item
  • Task
  • Contained Assets

Most importantly, this information can be used to track assets used in other assets, such as figuring out where a model in a rig was originally imported from, or which rig was used in a pointcache.

This information is used to associate a pointcache with shaders from its associated lookdev.
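As a rough illustration, extraction of this data could look something like the following sketch; the keys mirror the list above but are not a fixed schema, and the plain dict argument stands in for whatever the collectors gather.

import datetime
import getpass
import json

def extract_metadata(data, path):
    """Serialise context-sensitive information to .json.

    `data` is assumed to be a plain dict gathered during collection.
    """
    metadata = {
        "author": getpass.getuser(),
        "date": datetime.datetime.now().isoformat(),
        "project": data.get("project"),
        "item": data.get("item"),
        "task": data.get("task"),
        "containedAssets": data.get("containedAssets", []),
    }
    with open(path, "w") as f:
        json.dump(metadata, f, indent=4)
    return metadata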



model

The geometry of an asset.

tasks: ["modeling"]
hosts: ["maya"]
identifier:
- assembly with ITEM name and "_GRP" suffix, e.g. "ben_GRP" +
- environment variable TASK == "modeling"
inputs: []
outputs: ["mesh"]
representations: [".ma"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

workflow

Artists model in Maya using only polygons. Other nodes, such as NURBS, history of any kind and keyframes, are allowed but are not included on publish.

room for improvement

The two-factor identifier. Ideally I’d like a model to appear as a model, regardless of its environment. The assembly could, for example, have a user-defined attribute on it saying “I’m a model”, or a uniquely identifying suffix such as _MODEL, or even a prefix such as model_GRP.

To not burden the artist, an initialisation step could be added in place of starting fresh. For example, a “Create Asset” dialog with a drop-down indicating what sort of asset is to be created. The procedure could then automatically apply the appropriate user-defined attribute and set up the assembly into which the model is to be created; see the sketch below.
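As a sketch of that idea (the “family” attribute name and the helper function are assumptions, not part of the current contract), the dialog’s create action could boil down to:

from maya import cmds

def create_asset(name, family="model"):
    """Create the working assembly and tag it with its family.

    A collector could then identify the instance by the "family"
    attribute alone, regardless of task or environment.
    """
    assembly = cmds.group(name="%s_GRP" % name, empty=True)
    cmds.addAttr(assembly, longName="family", dataType="string")
    cmds.setAttr(assembly + ".family", family, type="string")
    return assembly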



rig

The encapsulation of content into an interface in preparation for use/animation by other artists.

tasks: ["rigging"]
hosts: ["maya"]
identifier: 
- assembly with ITEM name and "_GRP" suffix, e.g. "ben_GRP" +
- environment variable TASK == "rigging"
inputs: ["model"]
outputs: ["any"]
representations: [".ma"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

workflow

Namespaced as {instance}_, e.g. ben_. {instance} is important to distinguish it from other referenced assets, whereas the underscore is purely cosmetic and makes the namespace read like a regular separator: ben_:ben_GRP versus ben:ben_GRP.

Once referenced, the rig is stored in an assembly much like its model equivalent. The internal structure is as follows.

▾ ben
  ▾ implementation
    ▸ inputs
    ▸ ...
    ▸ outputs
  ▾ interface
    ▸ controls
    ▸ preview
▸ controls
▸ pointcache

in-depth

# The assembly, suffixed _GRP for collection
▾ ben

  # The bulk of a rig, including, but not limited to, deformers,
  # auxiliary geometry, dynamics and constraints
  ▾ implementation

    # The source geometry in its undeformed state
    ▸ inputs

    # The various operations to be performed on a rig
    # in top-down order. I.e. the top-most operation
    # happens before the next. Similar to the channel-box.
    ▸ ...

    # The resulting geometry/nodes from the above operations
    ▸ outputs

  # The implementation is always hidden from view.
  # The interface is what an artist interacts with
  # and sees.
  ▾ interface

    # Standard NURBS controllers, driving the above
    ▸ controls

    # Visible, non-editable DAG nodes, including meshes and helpers
    ▸ preview

# An objectSet containing all animatable controllers
▸ controls

# An objectSet containing all cachable geometry
▸ pointcache

References are baked into the file upon publish; i.e. no nested references.

room for improvement

  1. The duplicate information in the full name, ben_:ben_GRP; it also doesn’t reflect which task or family the content is from. It could say e.g. ben_:model_GRP
  2. To prevent artists from having to re-create this hierarchy, or from creating it wrongly, an initialisation phase could be added, similar to the above suggestion for Modeling: essentially the same dialog box, where selecting “Rig” would assign the proper user-defined attribute and create the hierarchy fixture. See the sketch below this list.
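A minimal sketch of such a fixture, assuming the group and set names used elsewhere in this thread (the tree above shows the objectSets without the _SET suffix, which is still an open question):

from maya import cmds

def create_rig_fixture(asset="ben"):
    """Build the empty rig hierarchy described above."""
    root = cmds.group(name=asset + "_GRP", empty=True)
    implementation = cmds.group(name="implementation", empty=True, parent=root)
    for child in ("inputs", "outputs"):
        cmds.group(name=child, empty=True, parent=implementation)
    interface = cmds.group(name="interface", empty=True, parent=root)
    for child in ("controls", "preview"):
        cmds.group(name=child, empty=True, parent=interface)

    # objectSets used for collection; they live outside the DAG hierarchy
    cmds.sets(name="controls_SET", empty=True)
    cmds.sets(name="pointcache_SET", empty=True)
    return root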


lookdev

The look of an asset.

tasks: ["lookdev"]
hosts: ["maya"]
inputs: ["model"]
identifier:
- Object set called exactly "shaded_SET"
outputs: ["shader", "link"]
representations: [".ma", ".json"]
destination: "{project}/assets/{asset}/{task}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

workflow

An artist references the model of an asset and assigns shaders to it, along with user-defined attributes onto the meshes themselves. Shaded meshes are added into an object set called exactly shaded_SET.

contribution

Meshes in shaded_SET are examined for their associated shaders. These shaders are exported, along with their shading groups and graph, such as textures and light links.

As the outputted Maya scene file does not include the meshes themselves, we need a third party to link a mesh with its relevant shader. This is where the .json comes in.

Each shaded mesh and component is serialised to JSON and linked with the corresponding mesh via a unique identifier. The mesh and shader can then later be re-assembled during e.g. lighting.
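A rough sketch of producing that link file; the exact JSON layout and the name of the unique-identifier attribute (here assumed to be a string attribute called uuid) are illustrative only.

import json
from maya import cmds

def export_shader_links(path, shaded_set="shaded_SET"):
    """Write a .json mapping each shaded shape's unique id to its shading groups."""
    links = []
    for mesh in cmds.sets(shaded_set, query=True) or []:
        shapes = cmds.listRelatives(mesh, shapes=True, fullPath=True) or [mesh]
        for shape in shapes:
            uuid = None
            if cmds.attributeQuery("uuid", node=shape, exists=True):
                uuid = cmds.getAttr(shape + ".uuid")
            shading_groups = cmds.listConnections(shape, type="shadingEngine") or []
            links.append({
                "uuid": uuid,
                "shadingGroups": sorted(set(shading_groups)),
            })
    with open(path, "w") as f:
        json.dump(links, f, indent=4)
    return links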

room for improvement

As opposed to previous steps, the identity of shaders and their correlation to meshes is independent of the environment. Once a set called shaded_SET is found, its meshes are probed for shaders and all of the above happens automatically.

I’m considering whether it could be a good idea to discard the suffix _SET as it is such a key node. In fact in general, I’m thinking key nodes are to be without suffix, to highlight their importance and singularity.



layout

Assets associated and positioned within a shot, with optional animation.

tasks: ["layout", "animation"]
hosts: ["maya"]
inputs: ["rig"]
identifier: ?
outputs: ["inventory", "transforms", "animation"]
representations: [".json", ".atom"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

workflow

contribution

Layout generates a plain list of assets associated with a shot, a.k.a. the inventory. For example:

ben01
ben02
table01
sofa01
sofa02

The inventory may then be used to populate the shot for final animation.

In addition to the inventory, a static position for each relevant asset is produced, such that it may be positioned according to the layout artist’s design, who may in turn have followed a guide such as a storyboard or animatic.

Optionally, additional animation may also be produced from Layout, which may then be applied during final animation as a starting point for animators. The animation is stored as .atom.
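For illustration, producing the inventory from the rigs referenced into the layout scene could look like the following sketch, assuming the namespacing convention described for pointcaches below; it is simplified and assumes every reference node is valid.

import json
from maya import cmds

def export_inventory(path):
    """Write the list of referenced assets (the shot's inventory) to .json."""
    inventory = []
    for reference in cmds.ls(type="reference"):
        if reference == "sharedReferenceNode":
            continue
        namespace = cmds.referenceQuery(reference, namespace=True)
        inventory.append(namespace.strip(":").rstrip("_"))  # e.g. ":ben01_" -> "ben01"
    with open(path, "w") as f:
        json.dump(sorted(inventory), f, indent=4)
    return inventory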



pointcache

Animation serialised into point-positions and stored as Alembic.

tasks: ["animation"]
hosts: ["maya"]
inputs: ["rig"]
identifier:
- Object set called exactly "pointcache_SET"
outputs: ["mesh"]
representations: [".abc"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

workflow

Artists reference rig instances and apply keyframe animation. Each rig is referenced under a pre-determined namespace.

Namespaced as {instance}{occurrence}_, e.g. ben01_. {instance} is important to distinguish it from other referenced assets, and {occurrence} allows referencing the same asset multiple times. The underscore is purely cosmetic and makes the namespace read like a regular separator: ben01_:ben_GRP versus ben01:ben_GRP.
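For illustration, referencing the same rig twice under that convention might look like this (the paths and helper name are hypothetical):

from maya import cmds

def reference_rig(path, instance="ben", occurrence="01"):
    """Reference a published rig under its {instance}{occurrence}_ namespace."""
    namespace = "{0}{1}_".format(instance, occurrence)  # e.g. "ben01_"
    return cmds.file(path, reference=True, namespace=namespace)

# Two occurrences of the same asset in a single shot:
# reference_rig("path/to/ben_rig.ma", occurrence="01")
# reference_rig("path/to/ben_rig.ma", occurrence="02")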

contribution

Only meshes and point-positions are present, along with the .uuid of each shape node. This means no transform values or animation curves.
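As a sketch, extraction could drive AbcExport with the members of pointcache_SET as roots; the frame range, the uuid attribute name and the -stripNamespaces flag are assumptions here.

from maya import cmds

def extract_pointcache(path, namespace="ben01_", start=1, end=100):
    """Export members of the rig's pointcache set as Alembic."""
    cmds.loadPlugin("AbcExport", quiet=True)
    members = cmds.sets(namespace + ":pointcache_SET", query=True) or []
    roots = " ".join("-root %s" % cmds.ls(m, long=True)[0] for m in members)
    job = ("-frameRange {start} {end} -attr uuid -worldSpace "
           "-stripNamespaces {roots} -file {path}").format(
               start=start, end=end, roots=roots, path=path)
    cmds.AbcExport(j=job)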



animation

Animation serialised into curves and stored as Atom.

tasks: ["animation"]
hosts: ["maya"]
inputs: ["rig"]
identifier:
- Object set called exactly "pointcache_SET"
outputs: ["curves"]
representations: [".atom"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

It hasn’t been implemented yet.

workflow

Like with pointcaches, a rig is referenced and animated and the same rules apply, only in this case the animation can be applied afterwards to the original rig; no point-positions are involved.

contribution

Only transforms are considered, those present in the controls_SET.
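A minimal sketch of gathering those transforms before handing them to the ATOM exporter; the set name follows the rigging convention above, and the namespace is just an example.

from maya import cmds

def select_animation_controls(namespace="ben01_"):
    """Select the rig's animatable controls so they can be exported as .atom."""
    controls = cmds.sets(namespace + ":controls_SET", query=True) or []
    cmds.select(controls, replace=True)
    return controls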



render

Pointcache and lookdev combined into a rendered sequence of lit images.

tasks: ["lighting"]
hosts: ["maya"]
inputs: ["rig"]
identifier: ...
outputs: ["images", "quicktime"]
representations: [".png", ".mov"]
destination: "{project}/film/{sequence}/{shot}/public/{version}/{instance}/{file}"
file: "{project}_{sequence}_{shot}_{version}_{instance}.{repr}"

workflow

An artist references the relevant pointcaches produced by animation and applies shaders from lookdev. Shaders are deduced and applied automatically from the given pointcaches. That is, a pointcache of ben for shot 1000 is given shaders from the lookdev of ben the asset.

Rendering occurs outside the influence of publishing, as plain files on disk. The artist then publishes the resulting files directly off disk.
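A sketch of what collecting those frames off disk could look like; the naming pattern (name.####.png) and directory layout are assumptions.

import os
import re

def collect_frames(directory, extension=".png"):
    """Return rendered frames found on disk, grouped by sequence name."""
    pattern = re.compile(r"^(?P<name>.+?)\.(?P<frame>\d+)%s$" % re.escape(extension))
    sequences = {}
    for fname in sorted(os.listdir(directory)):
        match = pattern.match(fname)
        if match:
            sequences.setdefault(match.group("name"), []).append(int(match.group("frame")))
    return sequences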

contribution

For The Deal, output is plain png, though any format should do.

room for improvement

  • @tokejepsen mentioned that sometimes, job submissions to a farm may need to be validated. See how this could fit into this methodology.


Notes

I’ll keep this post updated with changes as we go. Ultimately, this should represent each and every detail of how each individual family is first identified and finally outputted, minus validation.

Feel free to comment on things, we’ll rely on the version control of this post to track what each reply was in regards to (by looking at the date of the reply, and scanning edits of the post).

Version Control

Here’s how you currently update assets to their latest version, preserving environment variables in final paths, such as $PROJECTROOT, so that no absolute paths are ever involved.
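The gist of it, as a sketch; the v### version-matching and the helper name are assumptions, but working on the unresolved path is what keeps tokens such as $PROJECTROOT intact.

import re
from maya import cmds

def update_to_latest(reference_node, latest_version):
    """Point a reference at the latest published version.

    Works on the unresolved path so tokens such as $PROJECTROOT survive,
    keeping the scene free of absolute paths.
    """
    path = cmds.referenceQuery(reference_node, filename=True, unresolvedName=True)
    path = re.sub(r"v\d{3}", latest_version, path)  # e.g. "v001" -> "v003"
    cmds.file(path, loadReference=reference_node)
    return path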

Future

The idea is to lose the dependency on assets being referenced and to involve a form of management GUI to properly list assets alongside their current and available versions.

model

I think it might be best to set a standard where everything is contained in an objectSet. The nice thing about it over having attributes on a node that is part of the actual content is that it’s a separate visual object in the outliner. Plus the contents of the objectSet can be limited (instead of a full hierarchy) or forced to contain more. It has a lot of extra possibilities. Also by using the same method in all Tasks we simplify the workflow of the overall pipeline. (Only rendering would be hard to define in an objectSet maybe?)

+1

rig

This sounds like it ‘depends’ on the name of the namespace, but it doesn’t do that? Correct? It’s only there for cosmetic purposes?

lookdev

We’ll arrive at a point where multiple shaded_SET's should be allowed in a single scene. For example when publishing shader variations. This is just a note for now. Just to ensure we’re not locking into allowing only one single set to be available of a family in the scene. I know rig pointcache sets already allow multiple, but they require namespaces for that. But I can see this happening in lookdev:

default_shader_SET
red_shader_SET
blue_shader_SET

We currently have a really nice workflow with something similar. Tom (our lighting artist) will hopefully join us anytime soon in the discussion. He’ll be able to tell you more about his preferred workflow/setup.

pointcache

In some (or many?) cases there’s no need to bake everything down into geometry changes. For example a truck could be rigged in such a way that it’s just constrained. Baking the transforms down is much more efficient (and actually what would be expected). Also, eliminating any hierarchies coming through might make it harder for the lighting artist to secretly constrain a light to a moving object.

Actually, there is currently one object that will have animCurves… the camera? :wink: Or is that part of animation as opposed to pointcache?

Just checking. You’re not necessarily saying to ditch having references in the scene, but more that the artist doesn’t “reference” something himself and instead uses ready-to-go tools to do so. The assets being loaded would still be references? Or what are you aiming at?

Yeah, pure cosmetics at the moment, but consistent with how they are namespaced later, with the added {occurrence}.

Good, I also want this.

Exactly.

Whenever transform values are preferable, you could use the animation curves, as opposed to the pointcache.

In cases where a pointcache would never be used, the pointcache instance could simply be de-toggled. But I’d like to encourage full output each time. Both points and animation.

That’s kind of what I’m saying. Not as an absolute either-or, but to have the option to. It basically boils down to my master plan involving shot-building, which we haven’t gotten to yet.

For now, all that matters is that we shouldn’t get too dependent on the fact that assets are Maya-referenced into a scene; for example, we don’t want to consider building upon offline edits directly for common tasks.

Could you elaborate briefly on this? Not sure what you mean.

It was just an example of something reference-specific. If we depend on reference edits in any way, we won’t be able to import assets into the scene as they won’t support that feature.

With reference to your reply in Publishing Renders for Lighting @BigRoy, the only other problem to be solved at this point is…

  • How to format the innards of a renderLayer?

For example, pointcaches, models, rigs etc are all formatted into the published directory, but they are also named appropriately. They include the project, topic and version.

When it comes to renderLayer, since we can’t be sure about exactly what is in there it can be difficult to format? For example, the render layer might have only one pass, a single sequence of images, in which case we can apply the same “formula” as we are with pointcaches and simply name the resulting files accordingly, appending the relevant frame number.

But if render layers also contain any of the other variables relevant to rendering, I’m not sure how to format this.

The alternative might be to not format it, but let whatever innards of that directory remain as-is, but then we would lose the benefit of an integration step and be forced to include schemas and formatting and such during extraction. Not good.

Another alternative, one I am reluctantly inclined to prefer, is to lock down a particular layout such that we can make strict assumptions about the innards of the render layer directory, until it has become clear what is missing or needs to loosen up.

Thoughts?

It’s hard to exactly pinpoint this because the format differs. And it differs a lot. In some cases you render only a single layer (instance) and in others many. Sometimes there are many passes or variables, and sometimes close to none. It’s this loose because it’s just required to be adaptive in a sense.

I think each instance should just transfer with its inner structure for Extraction/Integration (work -> publish). If a certain project, shot or a specific pipeline ends up having specific rules on the output format I’d say that’s up to adding a Validator to ensure their formatting is valid.

You’d likely want to validate these exact rules before rendering anyway, because there’s just too much output (taking a long time to render) to have it mess up during rendering.

@tomhankins, what do you think is a way forward? Can we structure our render outputs and lock it down? Or do you require it to be less undefined?


The other open discussion point would still be the versions per instance.

We’re doing something similar to your second diagram here. We have a publish script that we can run through Deadline directly though, which is sort of neat.

Hey @panupat, this sounds interesting. Could you elaborate on this?

The script is quite simple. Too simple really. After a render is completed, we right click on the task in Deadline’s interface and choose to run our script. The script checks whether the frame numbers match what we have in Shotgun and, if so, simply switches the shot’s status to “Ready for Comp”. Otherwise, it just fails.

It doesn’t check anything else at the moment, just frame numbers.

I have a lot of ideas for this. For example, instead of needing manual execution, this job should be created as a dependent job under its corresponding render job, so it’d execute itself when the render is complete.


Is this using Pyblish’s power already? Or is it a single function running over the files?

We might be hopping into this with Magenta too. If you’re willing to hop in that would be great. It means we can work towards a setup that will work for Magenta but also for you based on your previous experience.

In short the idea is to:

  • Set up a dependent job in Deadline
  • That job runs a Pyblish job (could even be a task per frame so invalidations would even show in Deadline!)
  • A report would be created based on the Validations.

The idea is that this runs the same steps of CVEI (Collect, Validate, Extract, Integrate). This would mean it would be just as easy to add additional Validators (like your frame number matching) or Extractors (maybe you’d want a ‘Slap Comp’ to be rendered if the frames passed a simple Validation).
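For instance, the frame-number check could become a validator along these lines; a minimal sketch only, where the instance data keys (startFrame, endFrame, frames) are assumptions about what a collector would provide.

import pyblish.api

class ValidateFrameRange(pyblish.api.InstancePlugin):
    """Ensure every frame in the expected range was actually rendered."""

    order = pyblish.api.ValidatorOrder
    families = ["renderlayer"]

    def process(self, instance):
        expected = set(range(instance.data["startFrame"],
                             instance.data["endFrame"] + 1))
        rendered = set(instance.data.get("frames", []))
        missing = sorted(expected - rendered)
        assert not missing, "Missing frames: %s" % missing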

instead of needing manual execution

Might be an idea to look into event plugins in Deadline for this. This is how the standard Shotgun and Ftrack integrations work. Even draft work like that.


Quick Overview of Current Workflow

Here are a few summaries, along with video, of how assets are made with Magenta today.

The goal is to reduce and simplify the steps, both individually and as a whole.


Modeling

Description

  1. Setup
    A working directory is automatically created along with project-dependent environment variables used during publish.

  2. Create
    Bulk of the artist’s work; they do whatever is necessary to produce a shareable model.

  3. Format
    A hierarchy is created in accordance with the Magenta convention: the name of the asset followed by “GRP”, separated by an underscore.

  4. Integrate
    Surrounding/related assets are referenced and matched to the model. This is how we can ensure that assets actually fit together.

  5. Add output
    Additional outputs, in this case a turntable.

  6. Publish
    Finally, run it through the tests and export it into an appropriate location.


Rigging

Description

  1. Setup
    A working directory is automatically created along with project-dependent environment variables used during publish.

  2. Reference
    Source geometry is referenced into the scene for rigging.

  3. Rig
    Bulk of an artist’s work; the rig is created.

  4. Format
    Internal layout is formatted according to convention.

    • Assembly named {asset}_GRP
    • pointcache_SET contains all cachable geometry
    • controls_SET contains all animatable controls

  5. More output
    Additional outputs, in this case a turntable.

  6. Validate
    Run it through the tests, fix problems.

  7. Publish
    Finally, share your work with others once tests all pass.


Look Development

Description

  1. Setup
    A working directory is automatically created along with project-dependent environment variables used during publish.

  2. Reference
    Source geometry is referenced into the scene for shading.

  3. Shade
    Bulk of an artist’s work; the model is shaded.

  4. Format
    Internal layout is formatted according to convention.

    • shaded_SET contains all shaded geometry

  5. Publish
    Finally, share your work with others once tests all pass.

Animation

Description

  1. Setup
    A working directory is automatically created along with project-dependent environment variables used during publish.

  2. Reference
    Source rigs are referenced into the scene for animation.

  3. Layout
    Rigs are laid out in the scene, according to storyboards/animatic.

  4. Animate
    Rigs are animated.

  5. Publish
    Finally, share your work with others once tests all pass. Note that no formatting is required; the formatting is inherited from the rigs.


Lighting

Description

  1. Setup
    A working directory is automatically created along with project-dependent environment variables used during publish.

  2. Reference
    Source pointcaches are referenced into the scene for lighting.

  3. Assign
    Shaders from look development are automatically deduced from source pointcaches and assigned from their latest versions.

  4. Light
    Scene is lit.

  5. Render
    Scene is rendered.

  6. Publish
    Not finished


Transposed Version in Hierarchy

As we talked about in the thread on publishing for lighting, I’ve made an attempt at moving the version under each Instance, as opposed to above it, where it was previously.

This means:

  • Instances can now be published and incremented individually, which then also means…
  • Instances no longer maintain a connection to each other

This connection is what has thus far been used to link, say, a Quicktime to a Rig, the Quicktime representing the rig in a format compatible with media players.

# Example
publish
▾ v001
  ▾ rig
    ▸ ben.ma
  ▾ review
    ▸ turntable.mov

The link was made by instances residing in a common version directory, which is now no longer the case.

publish
▾ rig
  ▾ ben
    ▸ v001
▾ review
  ▾ ben
    ▸ v001

The benefit of course being that we can now update individual instances in shots, such as an individual render layer or the animation of a character, without having to either publish everything at once or somehow link missing Instances to their closest available version.

Formalism

With this new approach, it’s easy to get lost in translation (I know I am), so I figured I’d provide some formal, high-level labelling of the object model and methodology currently employed in Magenta.

Content Orientation

The workflow as it exists today is content-oriented as opposed to role-oriented.

In role-orientation, tools are built upon the expertise present in an organisation, such as someone being an animator, rigger, modeler or lighter. Each role is fixed and its input/output is determined by what is commonly associated with that role. For example, a rigger isn’t expected nor supported to provide geometry, just as a lighter isn’t usually meant to deliver character rigs.

The disadvantage of role-orientation is that roles change but pipelines don’t. A pipeline is like a rail-road track. It makes delivering a hundred similar shots easy, but does either nothing or harm to the odd ones out.

Conversely, content-orientation means facilitating the types of content that can be created within production and allowing anyone to contribute any type of it, without being limited by the particular sub-pipeline built around their main expertise.

With this approach, a model is always a model no matter who publishes it and tools built around models work across the board. The same then applies to any other type of content.

Current types of content

  • model
  • rig
  • lookdev
  • pointcache
  • curves
  • review

Where each type is represented in Pyblish as a family. Types are currently a mixture of abstract content used by another artist and its physical representation on disk. For example, both pointcache and curves may be used by the lighter, even though they both refer to “animation”, which is ultimately what the lighter is looking for, regardless of representation.

Suggested types of content

  • model - Static points
  • rig - Animation interface to static points
  • puppet - Skeleton and relationships
  • animation - Dynamic points
  • layout - Content associated with a shot
  • look - Shader network
  • element - An individual image sequence (render layer)

And here’s their configuration with respect to which role typically produces the output given an input, along with a few proposed representations.

Result

Currently, each family is capable of outputting an arbitrary number and type of Instances.

▾ rig
  ▾ ben
    ▸ ben.mb
  ▾ review  
    ▸ turntable.mov
  ▾ metadata
    ▸ metadata.json

This means great flexibility, but also an awkward directory between family and file that most often ended up containing a single file.

With the above approach, families contain a single output in various representations.

▾ rig
  ▸ ben.mb 
  ▸ ben.mov
  ▸ ben.json

Where the representation is present solely in the suffix of the filename.

This way, the inputs and outputs of artists are the abstract notion of content, such as “animation”, while what they actually import and physically make use of are the content’s various representations.