Publish separately VS pull newest version

May I have your opinion on this, please? I want to try implementing Pyblish to help validate this, but at the same time I think I need a better underlying structure first.

In the current pipeline I set up for my studio, when artists publish their asset, it gets saved/exported to a separate “hero” file, stored away from the version files. The hero file is what other departments pick up and reference further down the pipeline.

The problem I'm facing is that it's quite common for an artist to have a newer version that hasn't been published, which sometimes isn't detected until the final render.

Because of this I've thought about another approach: referencing the newest version without creating a separate file out of it. Creating a script to check all references for newer versions would be easy enough, but the potential problem is that sometimes the newest version may not be ready to be referenced yet.
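To illustrate the kind of check I mean, it could be something along these lines (a rough sketch only; it assumes our zero-padded _v### naming, so take the details with a grain of salt):

import os
import re
import glob

import maya.cmds as cmds

def outdated_references():
    """Return (current, newest) pairs for references that have a newer version on disk."""
    outdated = []
    for ref in cmds.file(query=True, reference=True) or []:
        path = cmds.referenceQuery(ref, filename=True, withoutCopyNumber=True)
        if not re.search(r"_v\d+", os.path.basename(path)):
            continue  # not a versioned file
        # Glob for siblings following the same _v### pattern
        siblings = sorted(glob.glob(re.sub(r"_v\d+", "_v*", path)))
        if siblings and siblings[-1] != path:
            outdated.append((path, siblings[-1]))
    return outdated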

What approach are you guys using?

I've also read somewhere about creating a symlink to the newest version. I think I experimented with it a little, but Maya didn't work well with the idea.

Thank you for your suggestion.


Hey @panupat, of course you may, glad you could join!

Having a good foundation is very important so it’s good that you think about it before diving any deeper.

Before we get started, I just need to get a better sense of how your current pipeline is set up. In general there are two methods of keeping artists up to date with the latest material, sometimes referred to as push and pull.

Pull

To pull means to produce a separate file/folder per publish, and to optionally notify artists to pick it up. Here, the receiving artist is “pulling” in material.

The providing artist has no control over which version the receiving artists eventually end up using, so the responsibility to keep up to date falls on them.

Push

Whereas to push means to overwrite an existing file/folder that artists are already using, guaranteeing that everyone is always using the same/latest version. Here, the providing artist is “pushing” out material.

In this case, the receiving artists aren't given a choice over which material to use; they are always using the latest - or in this case the current - version.

By your description, I can't quite tell whether you are currently pushing or pulling and looking to try the other. Is either of these two approaches similar to what you've got at the moment?

Hi @panupat. If I understood correctly, you're currently using something similar to (based on Marcus's naming) a Push system.

From my experience this eventually becomes very difficult to deal with if the asset is being used in many shots across the show. Of course it's nice that you can be almost certain everyone uses the latest version of an asset without having to keep an eye on it, but it also means you can never be sure whether a file's output will look the same the next time you open it. This is especially problematic with character rigs.

Let's say artist A is animating a scene with char_rig_hero.ma. He's almost done, with only a few tweaks left for tomorrow. He comes back in the morning only to find that things are somehow off in his file. It turns out the rigger removed some parameters to make the animator's life 'easier'. This would not have happened if the animator had been locked to a specific version once he started animating, and only updated explicitly after asking the rigger whether the changes might affect his existing animation.

My 2 cents would be to always go with versioned publishes. We're using a version_subversion (major, minor) system, so all work files are named name_of_the_file_v01_01_comment.ma. Once a file gets published we strip the subversion and comment, so it becomes name_of_the_file_v01.ma, and that is what other artists point to (see the sketch after the folder listings below).
The work folder might then look like this:

Work folder:
file_v01_01_blocking
file_v01_02_blocking
file_v01_03_blocking
file_v02_01_cleaningbody
file_v02_02_cleaningHands
file_v03_01
file_v03_02_stilWorkingOnThis

And the publish folder like this (the artist is not yet done with v03, so it isn't published):

Publish folder:
file_v01
file_v02
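To illustrate the naming, stripping the subversion and comment at publish time is just a bit of string handling. A rough sketch, assuming the convention above plus a .ma extension:

import re

def publish_name(work_name):
    """file_v01_03_blocking.ma -> file_v01.ma"""
    return re.sub(r"(_v\d+)_\d+(_[^.]+)?(?=\.)", r"\1", work_name)

print(publish_name("file_v01_03_blocking.ma"))  # file_v01.ma
print(publish_name("file_v03_01.ma"))           # file_v03.ma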

Hi @marcus, @mkolar. Thank you for chiming in. It’s been a long time since I last read about the push and pull idea. Thank you for a great reminder!

Now that I think about it carefully, my current system is a Push. Artists open the version scene directly (we don't have any check-out system in place).

{root}/ep08/seq01/shot0010/version/
    ep08_seq01_shot0010_block_v001.ma

And when they save the file via my script, it gets saved as the next version and they continue working in the new file.

I have a “local” publish in place for files that are deemed ready for the next step; these are stored in the same directory.

{root}/ep08/seq01/shot0010/version/
    ep08_seq01_shot0010_block_v002.ma
    ep08_seq01_shot0010_block_hero.ma
    ep08_seq01_shot0010_anim_v001.ma

When the shot gets its final publish, I remove the task name and copy the file over to another folder.

{root}/ep08/seq01/shot0010/hero/
    ep08_seq01_shot0010.ma

@mkolar's example of keeping version numbers on the published files gives me an idea. For my next implementation I think I'll go with the Pull method and, instead of publishing separate hero files, use a database (or YAML) to store which versions are ready to be referenced. Pyblish validation could be used at this step.

This way artists can keep working on newer versions and at the same time have multiple known-good versions exposed for others to reference into their scenes.
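Roughly what I have in mind for the YAML part, as a sketch only (uses PyYAML; the ready.yaml name and function names are made up):

import os
import yaml  # PyYAML

def flag_ready(registry, task, version):
    """Record that e.g. task "block", version "v002" is ready to be referenced."""
    data = {}
    if os.path.exists(registry):
        with open(registry) as f:
            data = yaml.safe_load(f) or {}
    data.setdefault(task, [])
    if version not in data[task]:
        data[task].append(version)
    with open(registry, "w") as f:
        yaml.safe_dump(data, f, default_flow_style=False)

def ready_versions(registry, task):
    """Return the flagged versions for a task, oldest to newest (zero-padded names assumed)."""
    if not os.path.exists(registry):
        return []
    with open(registry) as f:
        data = yaml.safe_load(f) or {}
    return sorted(data.get(task, []))

Artists would call something like flag_ready("{root}/ep08/seq01/shot0010/version/ready.yaml", "block", "v002"), and anyone referencing would pick from ready_versions().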

Do you think it's a good idea to learn a version control system? I've heard a lot about Perforce and Subversion, but as a self-taught TD I've never had a chance to use them.

No problem, glad to help.

To map our terminology towards each other, it sounds like development from the illustrations above is your /version directory, and published is your /hero. Does that sound about right?

In your case, where Pyblish comes in is right at the point where a file is moved from /version to /hero. An implementation of that might look something like this.

import os
import shutil

import pyblish.api as pyblish

class ConformShot(pyblish.Conformer):
    def process_instance(self, instance):

        # Get current filename
        source = instance.data("path")  # /version/ep08_seq01_shot0010_block_v002.ma
        base = os.path.basename(source)  # ep08_seq01_shot0010_block_v002.ma
        name, ext = os.path.splitext(base)

        # Compute new filename
        name = name.rsplit("_", 2)[0]  # ep08_seq01_shot0010
        base = name + ext  # ep08_seq01_shot0010.ma

        # Produce output destination
        dest = os.path.join(source, "..", "..", "hero", base)
        dest = os.path.realpath(dest)

        # Copy it
        shutil.copy(source, dest)

Which is, as you say, a push method as it overwrites the same output each time.

For a pull version, you could add a version number each time the file is about to be written, building on the example above.

# Compute new filename
name = name.rsplit("_", 2)[0] + "_v001"  # ep08_seq01_shot0010_v001
base = name + ext  # ep08_seq01_shot0010_v001.ma

In which case you increment the v001 each time.
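Finding which number to write next could be as simple as scanning the hero folder. A sketch, assuming the zero-padded _v### suffix above; the function name is made up:

import os
import re
import glob

def next_version_path(hero_dir, name, ext=".ma"):
    """Return hero_dir/name_vNNN.ext, where NNN is one above the highest existing version."""
    existing = glob.glob(os.path.join(hero_dir, name + "_v*" + ext))
    versions = [int(m.group(1)) for m in
                (re.search(r"_v(\d+)", os.path.basename(f)) for f in existing) if m]
    next_version = max(versions) + 1 if versions else 1
    return os.path.join(hero_dir, "%s_v%03d%s" % (name, next_version, ext))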

In a pull system, you could simply assert that the latest version available is always ready to be referenced. That way, there wouldn’t be a need for any external influences. Or how come you feel the need for a database?

You can validate either way. Whenever a file is about to be shared, you validate it. Whether it ends up overwriting or incrementing a version happens after it has been deemed valid. The line inbetween development and published in the illustrations above is meant to represent the step at which validation takes place. If validation fails, the file never reaches the other side.

A version control system (VCS) is useful in “push” systems. With pulling, the version control is in your naming convention - such as v001 - and you wouldn't normally need both.

When pushing, a VCS can act as your versioning in that a user “pushes” a new hero file, overwriting anything that already exists as usual, but in this case, the previous version of the file is stored internally within the VCS and can be reverted back to if needed.

As a side-note, VCSs like Subversion and Perforce differ from Git in that they both push towards a central repository that everyone references against. With a decentralised VCS like Git, you could potentially benefit from overwriting a hero file where the artist pulls the latest version on his own behalf, effectively having as much control over versions as in a pulling system - at the cost of having to store the entire project on your local hard drive.

Pushing with a VCS is very common in the game development industry, but less so in the commercial and film markets. There are giants who use this approach - ILM and Pixar come to mind. They use it because there are practical benefits, such as improved disk space use - a VCS can do smart things like deduplication - which is important when a production produces massive amounts of data each day. But they still have to work around the fact that, as @mkolar put it, the next time you open your file you might not get what you left it as, because files may have been updated without you knowing about it.

Pinging @davidmartinezanim as he has more experience with it than I.

At the end of the day, it’s a balancing act and you choose the method that fits your production the best. Some love pulling, whereas others love pushing.


Wow, thank you for your in-depth explanation, Marcus.

Yes, development and published sounds right. Are those the words used in larger studios too? I'd love to adapt myself to them.

In a pull system, you could simply assert that the latest version available is always ready to be referenced. That way, there wouldn't be a need for any external influences. Or how come you feel the need for a database?

You can validate either way. Whenever a file is about to be shared, you validate it. Whether it ends up overwriting or incrementing a version happens after it has been deemed valid. The line inbetween development and published in the illustrations above is meant to represent the step at which validation takes place. If validation fails, the file never reaches the other side.

Currently I haven't implemented any check-out system, so the artists work on and save their scenes directly on the server. This means the newest version may still be WIP and not ready to be referenced. That's why I came up with the idea of artists flagging the versions they consider ready. They could flag v003, v012 and v024, for example, while their current version may already be v027. Other artists will also know which versions they can fall back to in case the newest one has a problem.

About the check-out system, there's something I don't understand. From what I gather, when artists pull files from the repository they create a local copy on their own drive, right? They keep working and saving locally until it's time to publish back to the server as a new version. What I'm worried about is the path conversion. Since all the textures and so on will point to the artist's local drive while the file is checked out, how do you deal with this? Textures may be easy enough to deal with, but I can't imagine converting paths of FX, caches, Alembic and all the crazy stuff.

Again thank you for your great insight @marcus

I don't think it's so much about a local drive as about a separation of location, possibly even within the project. For example, one could publish one folder up:

# working folder (development)
project/assets/character/hero/work/hero_v001.ma
project/assets/character/hero/work/hero_v002.ma
project/assets/character/hero/work/hero_v003.ma
project/assets/character/hero/work/hero_v004.ma
project/assets/character/hero/work/hero_v005_mayaCrashedAndNowINeedToFixEverything.ma
project/assets/character/hero/work/hero_v006.ma
project/assets/character/hero/work/hero_v007.ma
# published folder
project/assets/character/hero/hero_v001.ma
project/assets/character/hero/hero_v007.ma

Of course the difference between where the development file resides and the published one can be much bigger than only going one folder up.

# working folder (development)
project/development/character/hero/hero_v001.ma
project/development/character/hero/hero_v002.ma
# published folder
project/assets/character/hero/hero_v002.ma

But as long as they remain within the project boundaries you should never have to convert paths right?

Does that clarify things?


Not sure I follow you here, didn’t you already flag which files were ready by putting them in your /hero directory?

When does a file go from /version to /hero, could you take me through the baby-steps?

Yes, you’re right, files can be copied locally (though that’s not always the case) in which case paths can potentially be changed around.

//server/project/mytexture.png
c:\local\mytexture.png

What some people do is map a common root drive.

//server/project/mytexture.png
x:\mytexture.png

For example, everyone maps X:/ to a local folder on their drive and checks files out onto it. That way, everyone references files on X:/, but everyone's X:/ is different. When it comes time to render, the render node can then check out everything it needs, and it does so to X:/.

But you should know that this is relatively complicated compared with traditional versioning, which is more common, works equally well with both push and pull methods, and has fewer moving parts. Versioning is hard either way, unfortunately.

They might, but they don't typically expose hard-drive contents to artists; rather they wrap it up in asset management software of sorts, in which case all artists see are the assets and not their parent directories. And at the end of the day, it doesn't really matter what you call it, so long as you stay consistent and understand what it means.

I’d say the key thing that separates the two is that one is shared (public), and the other one is not (private). Shared data is typically immutable; i.e. it isn’t changed once written.


@panupat I was wondering whether you’ve been able to take the information provided here and make some decisions on your own.

Would love to see this topic evolve into more useful snippets of information from everyone, since it's a good topic with so many different solutions!

I can add something about what I’ve learned since the last post, about paths.

Within a working file, you can reference a file like this.

//server/project/mytexture.png
c:\local\mytexture.png

These are absolute paths, making things immobile.

But in Maya, and hopefully others, you can also reference it like this.

$PROJECT/mytexture.png

Which then resolves to an absolute path, based on this environment variable. The great thing about this is that anyone launching Maya can set this variable beforehand to his/her local absolute path to the project, and the definition can remain the same in the Maya scene.

Machine A

$ set PROJECT=c:\hulk
$ maya

Machine B

$ export PROJECT=/projects/hulk
$ maya

For example, in The Deal, we’re all working through Dropbox which has a different root on each of our machines, e.g. c:\users\marcus\Dropbox. The environment variable PROJECTROOT is automatically set when entering the project, and referenced files then look like this.

$PROJECTROOT/assets/ben/modeling/publish/v001/model.ma

Meaning that wherever we open this file, the path will automatically resolve to wherever our Dropboxes are, never breaking any internal references.

This is absolutely essential in my eyes for any flexible and effective setup. We've been doing it for a long time and it works great. It can get a little tricky to keep an eye on all the paths to make sure they aren't absolute, but it's definitely worth it.

Well, I wouldn't call it essential; I've been in plenty of successful productions without it, and the above methods are of course equally valid.

But it’s good to know you’ve got good experiences with it. Do you know if other software also offer this?

Spontaneously I was thinking of commandeering the save function, and resolving absolute paths to environment variables where possible.

import pymel.core as pm

variables = {
    r"\\server\projects\hulk": "$PROJECT",
    r"\\server\projects\hulk\assets\Bruce": "$ASSET"
}

# Turn \\server\projects\hulk\mytexture.png
# into $PROJECT\mytexture.png
for node in pm.ls(type="file"):
    # File nodes store their path on the fileTextureName attribute
    path = node.fileTextureName.get()
    # Replace the longest matching prefix first, so $ASSET wins over $PROJECT
    for prefix in sorted(variables, key=len, reverse=True):
        path = path.replace(prefix, variables[prefix])
    node.fileTextureName.set(path)
    

What do you think about something like that?

Not sure you could always replace it within every save functionality. What happens upon export? Or if they use a custom exporter?

I guess it's a perfect place for a Validator. That way the artist learns how the relative paths work, and it can come with its own stable repair!
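A rough sketch of what that could look like, following the same old-style plug-in pattern as the Conformer earlier (the repair_instance hook and the //server/projects/hulk prefix are assumptions, not something from a real setup):

import pymel.core as pm
import pyblish.api as pyblish

PROJECT_ROOT = "//server/projects/hulk"  # stand-in for the real project root

class ValidateRelativePaths(pyblish.Validator):
    """Fail when any file node still points at an absolute project path."""

    def process_instance(self, instance):
        absolute = [node for node in pm.ls(type="file")
                    if node.fileTextureName.get().replace("\\", "/").startswith(PROJECT_ROOT)]
        assert not absolute, "Absolute paths found on: %s" % absolute

    def repair_instance(self, instance):
        # Swap the known absolute prefix for the environment variable
        for node in pm.ls(type="file"):
            path = node.fileTextureName.get().replace("\\", "/")
            node.fileTextureName.set(path.replace(PROJECT_ROOT, "$PROJECT"))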

I think Houdini works with this functionality as well. Not sure about Fusion (did a quick test in 6.4, didn't seem to work), but I do know Fusion has paths relative to the comp. This is similar to how Maya has paths relative to the workspace root directory (which works perfectly as well!), plus Maya makes paths automatically relative to the workspace's project root.

Houdini works like this out of the box. Nuke does as well, though it's a tiny bit more involved (it's super easy to make paths relative to the workspace, but it needs Python syntax in the path to pick up an environment variable).

I'm with BigRoy, validation might be a better option. There might of course be situations where absolute paths are desirable, and tweaking scene paths while saving, without telling the artist, might be a bit intrusive.


In our current setup we're running a dual tracking system, with both a Push file (a published file without a version number, similar to the hero file) and Pull files (file names that always carry a version number), where the pushed file is always identical to the newest pulled file. Artists usually just reference the Push file; however, there is always the option to use a version-numbered file as well (if the pushed file causes them problems). One issue with this setup is that there is currently no good way to store metadata in a Maya scene file (other than using a script node), so there is no record of which known-good versioned file they last used.

I could, in theory, write a callback to update and store such data as a separate file, or use a script node, but it seems overly heavy-handed.
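The callback part itself would be small - it's the bookkeeping around it that feels heavy. A sketch only (writes the currently referenced paths to a JSON sidecar next to the work file on every save; the .refs.json name is made up):

import os
import json

import maya.cmds as cmds

def record_reference_versions():
    """Write the currently referenced files next to the scene as a .refs.json sidecar."""
    scene = cmds.file(query=True, sceneName=True)
    if not scene:
        return  # unsaved scene
    references = cmds.file(query=True, reference=True) or []
    with open(os.path.splitext(scene)[0] + ".refs.json", "w") as f:
        json.dump(references, f, indent=4)

# Run after every save for the current session
cmds.scriptJob(event=["SceneSaved", record_reference_versions])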

Thanks for sharing!

How often would you say the pulled files, with versions, are used relative to the pushed ones without a version? It sounds like there is rarely any reason to go with versions when you have a choice, as you would have to handle updating your file whenever an update came out.

Most artists go with the Push file (haha, human nature I guess). However, the Pull files are there as a backup in case they need to revert to an earlier version. That happens sometimes, but it is indeed pretty rare.

We don't keep Pull files for certain types of data, though; for example, animated Alembic archives, OpenVDB or rendered sequences. For those types of file it is usually pretty binary (they are either Approved or Retake), so there isn't much ambiguity.

Some compers will use rendered elements from different takes, which we really discourage. We mostly let them handle that manually, as it is too difficult to handle these kinds of edge cases in the proper pipeline code.

Ah, that makes sense.

Have you considered a compromise between push and pull as described above, something like SVN/Perforce?

With SVN, you could have a versionless pushed file but, like with Git, update it from a versioned repository. That way, updates can happen globally or locally, and the file would “automatically update” wherever it was referenced, like in Maya.

That's what I had in mind as well. However, I need to think about how to track the known-good version of the published file for the particular working file the artist used.

That way the artist only needs to run a command like “Revert Back To Last Good Published Reference” to get back into good shape.

However this can get complex fast when the reference chain is long. To me this sounds like a job for a good dependency resolution system. (Pyblish-DAG maybe?)

Yes, quite complex.

One system I've seen handling such things is the “approved” versus “latest” variants of a version. In a nutshell, each published version would bump the “latest” pointer of a particular asset, whereas “approved” would only get updated by manually dialling in that the asset is approved.
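In code that idea boils down to something like this (a sketch; the dict and resolve function stand in for whatever database actually tracks the statuses):

# Hypothetical status record for one asset
versions = {
    "latest": "v027",    # bumped automatically on every publish
    "approved": "v024",  # only moved by an explicit sign-off
}

def resolve(versions, prefer_approved=True):
    """Return the version an artist should reference."""
    if prefer_approved and versions.get("approved"):
        return versions["approved"]
    return versions["latest"]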

@BigRoy and I have been working on such a system for a few months and I'd be happy to share it with you. In its current form, we have the foundation for how it could work, along with a prototype in the Magenta project.

Here is the worksheet, it’s quite in-depth and potentially cryptic. It builds on previous versions that you can also find linked in there.

The idea is to bundle it up and make the proper introductions here on the forums when the time is right (which is why it’s “private” currently), but if you are interested we might be able to do that sooner.