Publishing renders (lighting)

In the context of Pyblish Magenta we’ve been discussing workflows for the lighting/rendering Task in the pipeline. To keep things light to read and easy to find, I’m raising it here as a separate topic.

To kickstart here are some workflow questions:

  1. What kind of naming conventions do you use for the output of your renders?
    See Output naming conventions

  2. How do you currently publish renders? (Do you even publish renders Bro?)
    See Publishing Renders

And to put it in perspective I want to continue with some propositions to see what everyone thinks is good or bad practice.

Output naming conventions

An output of a scene can have multiple cameras, layers and passes.

  • camera: The camera used to render from.

  • layer: The render layer used to render.

  • pass: A render pass belonging to the layer (rendered in the same go).

    • e.g. render elements in V-Ray or AOVs in Arnold.

A layer is rendered from a camera, and its passes are rendered along with that layer, so we already have a hierarchy. A proposition would be to lay out your renders as follows (option A):


Parsed to:


Or with a more descriptive filename (option B):


Parsed to:


Publishing renders

Rendering and getting those files checked for the next department could be seen as a two-step publish:

  • Publishing the lighting scene to start the render.
  • Publishing the lighting scene’s output to approve or deliver the render (commit) as “published”.

1. Start the render

The first step could check whether your scene’s naming conventions are correct as you submit the render to a queue. It could also check for common errors related to the contents of the lighting scene itself. Basically this is what the artist would do to “submit a render”.

This would be where Validators could step in for things like:

  • Are the correct frame ranges being rendered?
  • Is it the correct resolution?
  • Maybe check whether the ‘motion blur’ samples are within the correct range for a project. Or whether the motion blur frame sample offset is at the correct number.
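These pre-render checks are simple to express in code. Here is a minimal sketch in plain Python; the `scene_settings` dict is a hypothetical stand-in for whatever the DCC’s render globals report, and in practice such checks would live inside Pyblish Validator plug-ins:

```python
# Illustrative pre-render checks. "scene_settings" is a stand-in for
# whatever your DCC (e.g. Maya's render globals) reports.

def validate_frame_range(scene_settings, expected_start, expected_end):
    """Fail if the scene would render the wrong frame range."""
    actual = (scene_settings["start_frame"], scene_settings["end_frame"])
    if actual != (expected_start, expected_end):
        raise ValueError("Frame range %s-%s does not match expected %s-%s"
                         % (actual[0], actual[1], expected_start, expected_end))

def validate_resolution(scene_settings, expected=(1920, 1080)):
    """Fail if the output resolution deviates from the project default."""
    actual = (scene_settings["width"], scene_settings["height"])
    if actual != expected:
        raise ValueError("Resolution %sx%s does not match the project default"
                         % actual)
```

A motion-blur sample check would follow the same shape: read the setting, compare against the project’s allowed range, raise on mismatch.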

2. Deliver the render

The next step would be to confirm that the renders are correct, which can only be done once the output has been created. For example, check that no frame is suddenly completely black or corrupt. This is also the step where a render could go from a “Work” staging area to the “Published” contents folder for another department to pick up.

This is where one would validate anything post-render, for example:

  • Validate all frames that were queued are actually saved and rendered
  • Check for corrupt or “black” frames
  • Or maybe you have an ‘expected minimum size’ that you could validate.
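A sketch of what such a post-render check might look like, using file size as a cheap stand-in for detecting corrupt or “black” frames (the frame-naming pattern here is hypothetical; a real check might open each image and inspect pixel values):

```python
import os

def validate_frames(directory, prefix, start, end, ext=".exr", min_bytes=1024):
    """Check that every queued frame exists and is above a minimum size.

    Returns (missing, suspicious): frames that were never written, and
    frames small enough to suggest corruption or an all-black render.
    """
    missing, suspicious = [], []
    for frame in range(start, end + 1):
        path = os.path.join(directory, "%s.%04d%s" % (prefix, frame, ext))
        if not os.path.exists(path):
            missing.append(path)
        elif os.path.getsize(path) < min_bytes:
            suspicious.append(path)
    return missing, suspicious
```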

In many cases step 2 could be partially (or fully) automated as a dependency on the completion of step 1. For example, a post-render job (e.g. through dependencies with Thinkbox Deadline) could trigger all the necessary validations and the integration of those renders from “work” -> “published” once validations pass.

This would also allow a so-called ‘Slap Comp’ to be rendered after the frames are completed. This could be a ‘preset’ comp for the shot or project that the output goes through, letting the artist check whether his/her rendered output holds up after compositing without jumping through the hoops of setting up and rendering that comp themselves.

Great summary, Roy.

I’m eagerly awaiting input from @tokejepsen and @mkolar.

Great topic. I’ll write a summary of what challenges we’re facing here to see how it compares.


To start lightly: my rule of thumb for every single shot output is that it must reference its originating scene in the name, with any modifier added after that. So if the Maya scene is, renders produced by it will always be prj_sh010_light_v01_layer1.exr, prj_sh010_light_v01_layer2.exr.

If we need to rerender only layer2, then the Maya scene gets iterated and it’ll become prj_sh010_light_v02_layer2.exr.

To be completely honest, I’ve never rendered multiple cameras from the same scene, and we always do multichannel EXRs, so mostly we just have one modifier in the filename, which is the layer. But otherwise I’d go with your last option: prj_sh010_light_v01_cam1_layer1_channel1.exr. It is long, but it also makes it much easier to take out of context, for example when handing a shot to an external freelancer working from home.
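For illustration, here is a sketch of parsing that convention back into its tokens. It assumes none of the tokens themselves contain underscores, which this example satisfies but which isn’t guaranteed in general:

```python
import re

# Parse names like "prj_sh010_light_v01_cam1_layer1_channel1.exr".
# Each token is "anything but an underscore"; the version token is
# anchored on its "v" + digits shape to keep the match unambiguous.
PATTERN = re.compile(
    r"^(?P<project>[^_]+)_(?P<shot>[^_]+)_(?P<task>[^_]+)_"
    r"(?P<version>v\d+)_(?P<camera>[^_]+)_(?P<layer>[^_]+)_"
    r"(?P<channel>[^_.]+)\.(?P<ext>.+)$")

def parse_render_name(filename):
    """Return a dict of tokens, or None if the name doesn't conform."""
    match = PATTERN.match(filename)
    return match.groupdict() if match else None
```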

The thinking here is that even though layer1 could be generated from scene v02, its real origin is scene v01, so I want that tracked in case we need to recreate it precisely some day.

This is where we currently take a very simple approach, as described in another thread. We publish when submitting the render to the farm, which also creates an entry in the database with a version that is empty to start with and has a status of ‘on farm’. Once frames get rendered, this status changes to ‘render complete’, notifying the artist that he needs to check it. When he does (a process I’d like to keep on the human side, to be able to check for visual problems in frames too), he changes the status of the version to ‘approved’, which triggers the start of the compositing task and completes the lighting task. Keep in mind that he’s not publishing frames, because they’ve already been ‘pre-published’ for him.

That’s what we do technically. However, there is another aspect to this, which is CG light approvals. This is an area where I see tons of possible improvements. The problem we have is that our CG renders are practically never reviewable raw. So the process is that the lighter renders one frame of each layer (usually locally), does a quick slapcomp and submits this image for review, together with the Nuke script (which can be picked up by a compositor as a starting point) and a light rig that can be reused in other shots. That way the supervisor has a better idea of what’s going on there.

If this gets approved, the lighting task changes status to ‘to render’, so the artist knows he can send the full range to the farm. This is usually done once per sequence (a collection of shots in the same environment and lighting condition), and then all the shots in the sequence can be sent to the farm.

It’s a lot of back and forth, but it does save on rendering time quite a bit.

This is something I’ve been thinking about for a very long time. The problem we usually run into is that standardizing renders enough to be able to produce these slapcomps automatically tends to be… tricky, to say the least. But with a bit of determination it should be possible, and it would be great.

Hey guys, first reply for me.

I’m a lighting and compositing artist at Colorbleed. I had a brief discussion with @BigRoy about the render tokens. Since every shot will in a way have its own workspace, I think the following setup would give us less of a headache.

I think the margin of error will be smaller like this and browsing time will be limited. We won’t use the camera variable often, but it’s a safe bet to keep it in here like this.

This is what I would prefer as naming convention:


By the way, nice work on Pyblish so far!


Just to add to the conversation:

Had a thought about when a camera from a single scene actually becomes a requirement: stereoscopic rendering. We don’t have much experience with it (I think only one project so far), but it’s one that would require rendering multiple cameras from a single scene.

@mkolar Thanks for hopping in, great information. So is the output name based on the scene’s name, or are you using other data to format the output? (e.g. the active Task or something like that)

Just the scene name, but that is generated from the task information. If the task is in the pipeline (paths generated by the scripts, task launched from ftrack etc.), then the name for the scene and render gets formatted from scratch every time it gets published (integration step). If it’s an odd render outside of the pipeline (it happens), then it’s a guideline in the studio to use the scene name + the modifiers we’ve mentioned.

Yeah… never done that :slight_smile:

Good to know. That’s actually exactly what we’re doing now. It’s hard parsing what the latest version would be if someone steps outside of the “versioning” naming conventions…

We had a small discussion here: it does happen sometimes that we render multiple cameras from within the same Task, just not within the same scene. As such, we would have two different lighting scenes (named differently) within the lighting task’s folder. What’s nice here is that the output is named accordingly, so it gets differentiated in a way. For example, this happens if the same shot needs to be rendered from multiple angles.

Does that make sense?

In some scenarios we do something similar for multiple variations of a poster with different deliverables (e.g. different resolutions/aspect ratios, and in some cases different lighting/angles). An extreme example would be a recent image we did that ended up in 300+ variations (based on around 5–6 different renders/angles that were sliced into variations). Here we’d prefer keeping these close together as opposed to making 300+ different “shots”.

@marcus, any opinions on the above “workflows”?

Been there, done that. That also makes me think that taking the name of the scene as a top-level identifier in the hierarchy is a good idea.

The point here is that whenever working with “workarounds/hacks/sandbox files”, they should stay consistent in such a way that they can still be processed by the pipeline. Stepping “outside” of the pipeline often means it’s easier to work without the pipeline than with it… and we should definitely get to a point where working within the pipeline is easy and fast.

Proposition update

From the above combinations I’m looking at the following. If there are multiple cameras (or scenes), we want these variations to be identified and kept relatively separate, so they become a top-level identifier in the hierarchy.


If multiple cameras would be rendered from a single scene this could become:


The bundling by scene_camera would also mean stereoscopic projects would end up with something like:


I guess a Selector in Pyblish could identify the shot as having two active renderable cameras, thus collecting instance data like multiCamera=True. If it’s True, this could trigger a Validator (or the Extractor/Integrator) to ensure the camera name is included so the two cameras don’t overwrite each other’s file output.
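A rough sketch of that idea in plain Python; `instance` here is just a dict standing in for a Pyblish Instance, `renderable_cameras` would come from querying the DCC, and the `<Camera>` token check is a hypothetical example of what the Validator could enforce:

```python
# Selector/Collector side: record the renderable cameras on the instance.
def collect_cameras(instance, renderable_cameras):
    instance["cameras"] = list(renderable_cameras)
    instance["multiCamera"] = len(renderable_cameras) > 1
    return instance

# Validator side: if multiple cameras render, the output template must
# include a camera token, or the outputs would overwrite each other.
def validate_output_template(instance, template):
    if instance.get("multiCamera") and "<Camera>" not in template:
        raise ValueError("Multiple renderable cameras but no <Camera> "
                         "token; outputs would overwrite each other")
```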

Looking forward to hearing what @tokejepsen will bring to the table here. Great contributions everyone!

Yes, definitely great contributions, thanks for that.

Here’s an effort to tally up and put a name on the various qualifiers we’ve identified thus far:

Identifier: Examples

  • scene: v001.mb
  • camera: main, left, right, persp, face
  • version: v001, r03_351, 0001
  • topic: thedeal_seq01_1000, s079s_c49, hulk_ra2400_5600
  • wedge: tinted, grayscale, lowquality
  • renderLayer: bg, fg, heroModel, face
  • renderPass: ao, beauty, sss, motion
  • take: ?

A note on wedge: I saw it referred to in a Houdini context; not sure it applies across all software. Here’s the definition.

Wedge: Render multiple times with varying parameters

  • Reference

  • A: Does this apply to your scenario @BigRoy, about the “300+ variations”?

  • B: Are there any identifiers missing?

What goes into a name?

The naive directory structure for all of these identifiers might end up looking like this.

# / Root                                 / Version 

# / Scene                              / Topic

# / Camera   / Layer    / Pass           / Wedge 

# / Filename

At which point, ladies and gentlemen, we are 5 characters away from the 260-character limit for paths on Windows operating systems. Yes, there is a limit, and it is low. Any directory beyond this point becomes unbrowsable and unreadable.
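A pipeline could guard against this limit before integration rather than discovering it after the farm has rendered. A minimal sketch:

```python
WINDOWS_MAX_PATH = 260  # the classic MAX_PATH limit, counting the full path

def validate_path_length(path, limit=WINDOWS_MAX_PATH):
    """Fail early if an output path would exceed the Windows limit."""
    if len(path) >= limit:
        raise ValueError("Path is %d characters, at or over the %d limit"
                         % (len(path), limit))
```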

I think we can all agree that it’s TMI, too much information. So what can we do?

Ultimately the goal is to enable tools built on-top of whatever convention we choose, so we’ll need to find a method of encapsulating all relevant information, without getting bogged down in directories or ultra-long names.

Some options

  1. We could include everything always, as in the example above (obviously not).
  2. We could build the hierarchy dynamically, excluding whatever doesn’t change across renders, such as wedge.
  3. We could not rely on paths at all, storing all relevant information elsewhere, such as in a sidecar .json. In this case we still need a unique separator for the many renders that will happen, for which we could introduce build: an infinitely increasing number per render, irrespective of task or logic, just an identifier of which render comes after another, to keep paths from clashing.

An example of (3) could look like this.

▾ lighting
  ▾ v024
    ▸ b00033  # Build
    ▸ b00034
    ▾ b00035
      ▸ 0001.exr
      ▸ 0002.exr
      ▸ 0003.exr
      ▸ meta.json

For which meta.json could look like this.

{
    "scene": "myfile_v154_mottosso.mb",
    "camera": "main",
    "version": "v013",
    "topic": "thedeal_seq01_1000_lighting",
    "wedge": "grayscale",
    "renderLayer": "bg",
    "renderPass": "ao"
}

The problem with this is that it’s difficult to manually browse to, or even interpret what it could mean, without being rather tech-savvy and digging into the JSON, or alternatively having a dedicated tool to make this separation and visualise it more elegantly.

In short, (3) is great when you can rely on tools 100%, which is rarely if ever the case. (1) is great when you can never rely on a tool, which is safe, but clearly overzealous (and dangerously close to the Windows path-limit).
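As a sketch of how (3) might work in practice, writing and querying the sidecar is straightforward; the helper names here are hypothetical:

```python
import json
import os

def write_meta(build_dir, **identifiers):
    """Write the identifiers as a meta.json next to the frames."""
    with open(os.path.join(build_dir, "meta.json"), "w") as f:
        json.dump(identifiers, f, indent=4)

def find_builds(version_dir, **query):
    """Return build directories whose metadata matches the query,
    e.g. find_builds(d, camera="main", renderLayer="bg")."""
    matches = []
    for build in sorted(os.listdir(version_dir)):
        meta_path = os.path.join(version_dir, build, "meta.json")
        if not os.path.exists(meta_path):
            continue
        with open(meta_path) as f:
            meta = json.load(f)
        if all(meta.get(key) == value for key, value in query.items()):
            matches.append(build)
    return matches
```

The browsing problem remains, of course; this only helps the tool-driven side of the workflow.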

Dynamically allocated paths - publishing w/ template

Roy showed me that he typically enters a “template” of sorts into the Maya render editor and I take it this is a common thing to do?

If so, maybe we could leverage this and export the template alongside each render. That way, we could use the template to parse what each part of a path means.

Basically, it would facilitate (2).


I was also thinking, especially in the context of what @mkolar talked about in Shot and Task publishes: what if only a single render layer needs work and is incremented to the next version?

Basically, would it make sense to publish versions on a per-layer basis? Or more generally, to publish versions on a per Instance-basis?

▾ lighting
  ▸ heroCharacters
  ▾ foreground
    ▸ v001
    ▸ v002
  ▾ background
    ▸ v001
    ▸ v002
    ▾ v003
      ▾ mainCamera
        ▾ ambientOcclusion
          ▸ wedge1_0001.exr
          ▸ wedge1_0002.exr
          ▸ wedge1_0003.exr

It would mean a lot of duplicated content initially, as each layer would get v001, v002 etc. in unison. But eventually things may diverge into individual updates, and this might facilitate that.

I had never heard of Wedge before. It sounds/looks different. Probably @tomhankins has a better “feeling” for whether that’s spot on or not.

Yes, in Maya these are called Render Tokens. I never knew but it even has a <Version> token!

Anyway, parsing this could be tricky if you were to split solely by _ (underscore), since underscores could also be part of a token itself. A camera could be called main_left, or a scene may be called scene_v01, so ‘finding’ what token represents what data based on the filename alone might become problematic.

Combining it with a meta.json could of course avoid this issue for pipeline tools.

+1. Yes!

This makes a lot of sense to me when looking at renders. It happens a lot (as @mkolar also mentioned) that only a specific layer is incremented and rendered as a new version, yet it would still be important to easily trace back the file originally used to create it.

Per-instance versioning across all tasks

Looking at versions per instance for Tasks like animation or modeling would mean that a “playblast” instance’s version isn’t necessarily the same version number as the published scene’s contents, since the families are incremented separately. Would that be expected behavior?

If not, how do we decide what/when should be combined or separate as instance data/versioning?

Anyway, it could bring the rendered sequences to:

# Task publish

# Family /  Instance (Scene_Layer)       / Version

# Layer / filename (Topic_Layer_Pass)



Does this look ok? Is this too much?
Is it unique enough to hold our variations of how the output is generated?
Can we trace the file easily through the pipeline?

Passes for a layer are within the same folder. Is this preferred @tomhankins?
This would be true for @mkolar, since they use multichannel .exr that embeds these channels.

That would be bad. I don’t know, to be honest; I haven’t thought about it like this before. I suppose it wouldn’t be too difficult to include the name of another Instance as a sort of “parent”, and then consider this relationship during integration.

What’s more, we would need to start keeping track of which versions “belong” together and which are simply “latest”. In the simplest of cases latest always wins, but in practice it’s quite hazardous, as every move could cripple a working combination.

For example.

▾ instances
  ▸ heroCharacters
  ▾ foreground
    ▸ v001
    ▸ v002
▸ latest.json
▸ approved.json

Where the JSONs are instance/version pairs, updated during each integration.

{
  "heroCharacters": "v034",
  "foreground": "v031",
  "background": "v014"
}

Alternatively, we could hard-link each approved/latest Instance into a version-less directory.

▾ approved
  ▸ heroCharacters
  ▾ foreground
    ▸ wedge01_0001.exr
    ▸ wedge01_0002.exr
    ▸ wedge01_0003.exr

That could then be something others could reference directly so as to never have to manually update. Including a link to push/pull for completeness.
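A sketch of the hard-link idea, assuming the version and approved directories live on the same filesystem (a requirement for hard links) and that replacing a previous approval wholesale is acceptable:

```python
import os

def promote_to_approved(version_dir, approved_dir):
    """Hard-link every file of an approved version into a version-less
    directory, so downstream references never need manual updating."""
    if not os.path.isdir(approved_dir):
        os.makedirs(approved_dir)
    for name in os.listdir(version_dir):
        src = os.path.join(version_dir, name)
        dst = os.path.join(approved_dir, name)
        if os.path.exists(dst):
            os.remove(dst)  # drop the previously approved link
        os.link(src, dst)   # no data copied; both names share one file
```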

About the two-step publishing: considering the importance of validating lighting before it is sent off to render for hours, how about this.

  1. From lighting, e.g. inside Maya, a job is published. Like other publishes, a job is meant for sharing, except in this case the receiver is one or more computers.
  • The job would result in a .job file, a JSON-formatted series of instructions for the other computer(s), which would be in whichever language those computers speak, such as Deadline or Tractor.
  2. A job is either automatically picked up by other computers, e.g. dispatched via farm control software, or manually submitted afterwards by a human.
  • Like any task, it is started before it can be published, except in this case the worker is a computer, and as such it can’t make decisions about what to do in case validation fails. Instead, a report is generated and distributed to where it can be seen and addressed, such as an email message or a notification in Slack or ftrack.
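A sketch of what such a .job file could contain; the schema here is purely hypothetical, since the real instructions would depend on what the farm software needs:

```python
import json

def make_job(scene, start, end, output_template, priority=50):
    """Build a plain-data job description for a render farm worker.

    The farm-side worker would translate this into whichever language
    it speaks (Deadline, Tractor, ...).
    """
    return {
        "scene": scene,
        "frames": {"start": start, "end": end},
        "output": output_template,
        "priority": priority,
    }

def write_job(path, job):
    """Serialize the job to a .job file (JSON on disk)."""
    with open(path, "w") as f:
        json.dump(job, f, indent=4)
```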

The key points here are that (3) a job isn’t mandatory for rendering and (4) notifications fall beyond the scope of publishing, as does addressing any of the problems. This frees up space for independent tooling and workflow.

Finally, the end result is that renders end up in a user’s working directory, not in a published directory. The job was published, not the images.

▾ lighting
  ▸ publish
  ▾ work
    ▾ maya
      ▾ images
        ▾ heroCharacters
          ▸ v001
          ▾ v002
            ▾ mainCamera
              ▾ ao
                ▾ 0001.exr

Publishing the resulting image sequence is then an optional third step. This should encourage testing and hacking without interfering with or overflowing central information, and it means any image sequence can be published, even ones that haven’t gone through steps 1 and 2.

Simplifying and bridging these steps can then be left to workflow, best practices and tools.


That’s what we do when necessary, too. We’d save the scene as another take with a different name and render camera2 from there. Same task, multiple scenes and outputs.

Agreed, but this is a tricky one. The way I see it, looking at the artists I’ve met, it’s practically always preferable for them to work outside the pipeline, simply because they each have different ideas about what good naming, workflows etc. look like. Which is fine, but then nobody is able to find any files from other people. I have the feeling that if I gave people the choice, all of them would choose to work to a much looser pipeline. Easier to work with doesn’t always mean better and more manageable in the bigger picture.

Wedge in Houdini is great, but rarely used for final renders. It is based on the principle that you can vary parameters on something (anything) and render all the outputs from the same scene with minimal effort. Amazing for testing different settings on a simulation, rendering variations for a client to choose from, and such. Multiple camera angles would be very tedious to set up, for example, so it’s not suitable for that. Randomized parameters on a digital asset that creates a hairstyle, though, would be exactly the thing: you can get hundreds of styles from one scene.

I’ve never considered using this term for other things in rendering, and frankly don’t think I will; it’s not very self-explanatory. If we have more variations of the same thing, it’s just a new modifier on the output for us. But then, if people weren’t so used to ‘version’, I’d call those takes as well. (Mind that I very much agree with Isa A. Alsup in this old blog post.)

Uff, a bit long. The same thing in our system would look like this:

That doesn’t need to be a problem. We handle this by always incrementing the scene version, and its output inherits this. So you can end up with:




Some outputs might skip a version, but that doesn’t bother me at all; it just means that v02 of FG in my example would be identical to v01. The important thing is that I can trace it back if I want to recreate it.

Absolutely. In situations where we don’t combine channels into multichannel EXRs, we dump them into the same folder. Most compositing packages can show sequences as a single entry when browsing, so it’s really fast to import them.

So we’re talking two things here:

  1. versions-per-instance (like with the versions of a render layer)
  2. multiple families in a version of an instance (e.g. a playblast combined with the latest pointcache)

Versions per instance (1)

We’ve seen now that for (1) there’s definitely a need in production. It makes much more sense to sometimes increment only one of the extracted outputs, especially when the outputs can be large (caches) or take long to create (renders).

For example, this would allow you to increment only the character’s point-cache from animation, or queue only one render layer and publish it as an increment on its own. This also means that the full contents aren’t always published in bulk, and the instances will end up having their own, differing version numbers.

A solution (@mkolar mentioned something like this) is to make the versioning dependent on the name/version of the source file, so versions would be skipped when, say, publishing from v39 while the latest publish was done from v36. This ensures that when doing a new bulk update, the outputs are joined under the same version number. Another benefit is that it eases tracing back what the source was for a certain output version.

This could be a good option! Especially since it also ensures that work files are incremented for new publishes, guaranteeing that the source content used to create the output still exists in the work folder.
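A sketch of the version-inheritance idea: the publish simply reuses the version token found in the work-file name, so skipped numbers are expected and every output traces back to its source scene (the naming pattern is illustrative):

```python
import re

def source_version(scene_name):
    """Extract the numeric version from a work-file name like
    'prj_sh010_light_v039.mb'."""
    match = re.search(r"v(\d+)", scene_name)
    if not match:
        raise ValueError("No version token in %r" % scene_name)
    return int(match.group(1))

def publish_version(scene_name):
    # The publish inherits the work file's version verbatim; if the
    # last publish came from v036 and this one from v039, the publish
    # versions simply jump from v036 to v039.
    return "v%03d" % source_version(scene_name)
```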

Multiple families per version (2)

This is the other end of the spectrum where you closely couple together new published content under a single version.

When updating only certain instances through a new scene version, the playblast created from that scene could be inconsistent with the existing output (e.g. a table could have moved but not have been pointcached). This would mean there’s no one-to-one correspondence between the work file and the output versions (unless using the “version from work file” trick mentioned above).

Any ideas on other direct benefits gained from this?

The “final” render is solely based on which versions were used to create the latest/correct output from Compositing. Tracing back which are the latest used versions means tracing the “origins” of the output files. I don’t think Extraction defines what the final version is, because if it did, the latest would always be the final?

This would revert it back to working with a live master that automatically updates. Combining this with a versioned pipeline means you’ll end up having the downsides of both versioned (pull) and unversioned (live/master; push) pipelines?

Funny that you posted this here just before I posted mentioning you, while I completely missed your post, hehe. Thanks for bringing it up.
This makes a lot of sense to me from an Artist’s perspective!

Looking great. How would that output differ between the different scenes? Where would it differentiate itself, as opposed to overwriting?

Great. +1

I just realised I forgot to add the task token into the filename before; it should have been this:

Ideally it would be another task within the same shot, so it would be

If it really was in the same task, then we would use a different modifier instead of ‘fg’.

The way I see it, the modifier in the render can encompass more information than just the layer. I’d include a collection of variables and keep them as one token. Maybe this comes from me being used to Houdini and Mantra, where I use the name of the Mantra node to describe the render. Within this node you specify everything: samples, camera, resolution, objects, etc. So it’s a full package, and then I call it e.g. cam1-char1.

The only thing I keep changing is the formatting of this token. Using _ in the modifier is no good, because you can’t tell what’s a separate token and what belongs together. Normally I go with either mixedCase or dash-separated words.
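A tiny sketch of keeping modifiers underscore-free by joining their parts into mixedCase, so _ stays reserved as the separator between tokens (dash-joining would work the same way):

```python
def format_modifier(*words):
    """Join descriptive words into one underscore-free mixedCase token,
    e.g. ("cam1", "char1") -> "cam1Char1"."""
    # Strip any separators that would collide with the token separator.
    cleaned = [w.replace("_", "").replace("-", "") for w in words]
    return cleaned[0].lower() + "".join(w.capitalize() for w in cleaned[1:])
```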

so to simplify:

What’s going on over there? :smile:

I think there’s a flaw in this logic, in that it doesn’t scale. It assumes that each task is only ever worked on by a single artist, or that artists share scenes without any form of handover.

I think that this is the sort of requirement that forces people to work outside the pipeline, because they are limited not only in how they publish, but in how they work.

See, to me, working should be personal. I should be able to enter a studio and work on assets my way. As a freelancer, I have little interest in learning all the ins and outs of a company before I can actually contribute, and the company cares very little about this as well. I just want to come in and do good work; that’s what companies want too.

I also think it inhibits an artist’s ability to innovate. To come up with something new. Rather than a pipeline showing them the door and asking them to reach it, it grabs their arm and pulls it.

Yeah, I think you’re right: we need both the versions-per-instance and the higher-level publishes for playblasts.

I’m thinking we can start subdividing our /publish directory.

▾ lighting
  ▾ publish
    ▾ instances
      ▸ bg
      ▾ fg
        ▸ v001
        ▸ v002
    ▾ ?
      ▾ captures
        ▸ 2015-08-20_20-43  # Versions are irrelevant
        ▸ 2015-08-21_10-24

Where ? represents a level at which the full scene is considered, as opposed to individual instances. captures then contains playblasts of every contained instance at their current (latest) versions. In the case of lighting it doesn’t seem to make much sense, but in animation it might?

Oh, sorry, I wasn’t referring to anything being “final”, but rather that you might at some point have a working combination of versions, yet still want to output new versions, either to try and improve upon the working combination or to replace it due to some change. The key thing is that a working combination could be referred back to at any point as “the latest working combination”, a.k.a. the “approved combination”.

Ooor, upsides? Eh? :wink: Not saying either is better, just throwing it out there.

There can be more people working on the same task, but never at the same time. If they need to work on something simultaneously, it’s two tasks in our case. If a task is handed over to another person, they simply continue working from the last version. We’ve had plenty of shots handed over where an animator starts blocking, then we change the person on the task and someone else does the cleanup. No problems with handovers. On the contrary, with everyone working to the same guidelines, it’s easy to pick up files after other people.

Hmm. Will give this some thought. In general this makes sense, but I’m not sure I agree with this completely.

Hehe. From my observation, most of the people here (Prague) are used to working on their own, freelancing from home (it’s a tiny market). When we pull them into the studio, they are simply not used to collaborating with multiple people, and don’t realise that things such as naming Photoshop layers layer1 … layer234 don’t help anyone down the line. This comes with the feeling that complying with our standards slows them down. This is true, but the point is that it slows the person creating the work by 5 minutes and saves whoever picks up their work 15–30 minutes of just trying to figure out what is going on with their inputs.

Well, what can you do… work with it :slight_smile:

I’m referring to what you mentioned earlier, about artists preferring to work outside of the pipeline. I’m thinking that what you gain with this requirement is taken away when artists refuse to follow it; that the tools you build on top are built on promises that can’t or won’t be kept.

Fair enough, makes sense. :smile:

Take it with a grain of salt though; it’s not a reality so much as an ideal to strive for.