Workflow for handling paths within a scene

I’ve been trying to think through how to handle paths within a scene, especially maps (textures). This is Maya-related, but it really applies to all applications.

For example, say I have a texture path in a Maya scene. When publishing, I collect this path and the file(s) to integrate. When I extract the Maya scene, I don’t actually know where the texture path is supposed to end up, because that comes later in the integration phase. So I end up with a copy of the file(s) that isn’t actually referenced by the published Maya scene.

One possible solution would be to have the texture file at the integrated path to begin with, so you would have already published the texture before starting to use it in the Maya scene. It makes sense to do so, but it’s a cumbersome process to publish twice.

I’m curious how other people handle, or intend to handle, this?


We haven’t yet implemented texture handling in the publish, but in general I came to the conclusion that we simply need to know the final paths for integration before we do any extracting. So my solution was to prepare all the paths in the validation step (while also checking that the destinations are available and such). Every instance then ends up with an outputPath and a publishPath (could be called extractionPath and integrationPath). The integrator then simply copies from one to the other.
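A minimal sketch of that idea as pyblish plug-ins; the outputPath/publishPath names are from above, but the path rules themselves are made up:

import os
import shutil

from pyblish import api


class ValidateDestinations(api.InstancePlugin):
    order = api.ValidatorOrder

    def process(self, instance):
        # Decide both paths before anything is written to disk.
        # The actual rules here are hypothetical.
        name = instance.data["name"]
        workspace = instance.context.data["workspaceDir"]
        instance.data["outputPath"] = os.path.join(workspace, "stage", name + ".ma")
        instance.data["publishPath"] = os.path.join("/server/publish", name, "v001", name + ".ma")

        # Check that the destination is actually available
        assert not os.path.exists(instance.data["publishPath"]), "Version already published"


class Integrate(api.InstancePlugin):
    order = api.IntegratorOrder

    def process(self, instance):
        # Extractors wrote to outputPath; integration is a plain copy
        dst = instance.data["publishPath"]
        if not os.path.exists(os.path.dirname(dst)):
            os.makedirs(os.path.dirname(dst))
        shutil.copy(instance.data["outputPath"], dst)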

It also means that when needed I can change, for instance, texture paths in the extracted Maya scene using a post-extract mayapy script. That way I’m not altering the scene I’m working on. On the other hand, the published scene no longer matches the work scene.

So far we have only needed this when sending full scenes as FBX to a studio we worked with. We ran a ‘pack’ extractor which collected all the textures, saved them next to the extracted FBX and changed all paths in the resulting file to relative ones, so we could send a folder with the FBX and all the textures in one.

When we start dealing with textures I’ll most likely approach it similarly, but give the artist the option to preserve the changed texture paths within the scene they’re working on.

  • collect textures
  • validate texture paths (technically just preparing the final publish path based on the shot or asset)
  • extract: repath texture nodes to the published paths
  • extract scene
  • integrate scene
  • integrate textures

From that point on, all the scene textures point to the published files. A rough sketch of that ordering follows below.
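Illustrative ordering only: the offset is arbitrary, it just ensures the repath runs before the scene extractor, and instance.data["texturePaths"] is a hypothetical node-to-publish-path mapping prepared during validation:

from pyblish import api


class ExtractRepathTextures(api.InstancePlugin):
    # Offset guarantees this runs before ExtractScene below
    order = api.ExtractorOrder - 0.1

    def process(self, instance):
        from maya import cmds

        # Point every collected file node at its prepared publish path
        for node, path in instance.data.get("texturePaths", {}).items():
            self.log.info("Repathing %s -> %s" % (node, path))
            cmds.setAttr(node + ".fileTextureName", path, type="string")


class ExtractScene(api.InstancePlugin):
    order = api.ExtractorOrder  # runs after the repath above

    def process(self, instance):
        pass  # export the scene, now referencing published paths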

There are lots of issues with this. For instance, if extraction fails somewhere, you’ll end up with textures pointing to files that don’t exist yet… and so on.

A safer way would be to publish, as you said, in two steps: textures first, including the repathing, and then the scene as normal.

Thanks @mkolar for the info :smile:

Another solution I have been thinking about was to have a post-integration plugin that opens the extracted scenes and changes the paths. It requires the application to be able to work in a standalone fashion, which most can, but it’s a very complicated process compared to being in the scene already.

When exactly are textures being extracted? Is this for look development, modeling, or something else?

I would have imagined textures to be published before being used in Maya, such as from Photoshop. Maya wouldn’t have the ability to change them, and thus has no need to export them. It simply links to them.

So during extraction, the path specified in the Maya file nodes is kept, pointing to a globally available, already published texture.

Maybe I’m misunderstanding something.

I solved this recently, see here.

In a nutshell, extraction happens to a local temporary directory, and files are only moved during integration. That way, all files end up together, but won’t leak unless all prior plug-ins, including extractors, succeeded. I’m referring to it here as “atomicity”, which is database terminology.
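In rough strokes, and assuming a hypothetical “stagingDir” context key, it could look like this:

import os
import shutil
import tempfile

from pyblish import api


class Extract(api.ContextPlugin):
    order = api.ExtractorOrder

    def process(self, context):
        # Write to a throwaway directory; nothing public is touched yet
        stagingdir = tempfile.mkdtemp()
        context.data["stagingDir"] = stagingdir
        # ... write extracted files into stagingdir here ...


class Integrate(api.ContextPlugin):
    order = api.IntegratorOrder

    def process(self, context):
        # Refuse to touch the public location if anything failed earlier
        if any(result["error"] for result in context.data["results"]):
            raise RuntimeError("Publish had errors, not integrating")

        stagingdir = context.data["stagingDir"]
        for fname in os.listdir(stagingdir):
            shutil.move(os.path.join(stagingdir, fname), "/server/publish")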

That’s possible, especially since you can save Maya scenes as ASCII, but I haven’t seen this actually used in practice anywhere and frankly I’m sceptical.

Then there is Maya’s support for using environment variables in paths. I’ve experimented with this a little, and most native Maya nodes support it. I’d be curious to see some production use of it.

# example
$PROJECT_ROOT/assets/Hulk/modelDefault/v001/default.ma

You could then establish a PROJECT_ROOT environment variable, and others, that adapt to where an asset is being used. Such as having a low-res texture for animation, and a high-res one for rendering. Or simply avoiding the dependence on everyone having the same root, such as w:/ or /server.
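For reference, this is roughly what that looks like from Python; the variable name and paths are made up:

import os
from maya import cmds

# Established once, e.g. in a launcher or userSetup.py
os.environ["PROJECT_ROOT"] = "w:/projects/hulk"

# Native Maya nodes expand the variable when reading the file
fnode = cmds.createNode("file")
cmds.setAttr(fnode + ".fileTextureName",
             "$PROJECT_ROOT/assets/Hulk/modelDefault/v001/default.png",
             type="string")

# Moving the project now only means updating PROJECT_ROOT;
# the scene file itself never has to change.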

The downside is that not all applications support it, so you’ll likely end up with two systems.

Thanks for the input @marcus :smile:

This is for any step in the pipeline. Look development as well as shot-based texturing.

That is certainly the ideal workflow. The problem is more about having an easy enough pipeline for publishing and importing textures before Maya.

Then there are also the situations where people aren’t modifying the texture before Maya, like grabbing textures off the internet or getting maps supplied from elsewhere.
Currently, for this ad-hoc situation, I’ve just been extracting textures to a folder relative to the scene. This isn’t ideal, since published scenes end up referencing files that are still in the work area.

I think it still holds.

If textures were assets like any other - like models and rigs - can’t you avoid these problems altogether?

/hulk/assets/Bruce/modelDefault/v021/default.ma
/hulk/assets/Bruce/textureDefault/v005/default.png

It still entails publishing before Maya. Or am I missing something?

People often just reference a texture in Maya that was grabbed straight off the internet.

Yes, or at least the idea of publishing. Couldn’t you simply download those textures into a pre-defined directory that you know will not change? The idea of publishing the texture first is just that: it is given a fixed and permanent home that other assets may then reference. If you have that, then the problem of this thread is resolved.

Is the alternative to include all textures with every publish of every asset?

That is certainly an option.

My situation is that I would like to publish the texture to Ftrack, which then takes care of the placement of the files with its Locations.

I thought about a solution that would work both now and later, when people are publishing before Maya. If I restructure pyblish-ftrack and expose the integration methods for publishing to Ftrack, I could just have a validator so the paths get corrected before extraction. Not the most CVEI solution, though.

Have you considered publishing textures via the command-line? It could be a matter of (1) downloading those textures, (2) naming the files appropriately and (3) running a publish command.
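Sketching step (3) with pyblish’s Python API, wrapped in a script the artist never has to see; the plug-in path is hypothetical:

# publish_textures.py - run as: python publish_textures.py
import pyblish.api
import pyblish.util

# Hypothetical location of texture collect/validate/integrate plug-ins
pyblish.api.register_plugin_path("/pipeline/plugins/textures")

# Runs the full collect-validate-extract-integrate cycle over whatever
# the collectors find, e.g. appropriately named files in the cwd
pyblish.util.publish()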

I’ve considered something similar with the Ftrack hook: https://github.com/pyblish/pyblish-ftrack/pull/49

The problem is more the workflow, and giving artists the easiest workflow possible. Before they can even test their textures, while they are still working on them and the scene itself, they have to publish the textures.
This seems like a bad workflow to me, and a time-consuming one as well. Whereas if they were to keep iterating on their textures and scene before publishing anything, they could get a textured model out quicker.

I don’t dispute the need for pyblish-photoshop and publishing the textures from there. The issue is more that there are a number of applications people use to produce textures. With Maya being the publishing “hub”, there is “support” for every workflow they want.

Command-line publishing is an absolute no-go in any place I’ve ever worked. If I need to ask every new freelancer who comes to the studio, who has never opened anything more than Photoshop, to work in the command line, I won’t have anyone coming in very soon :slight_smile:

The point is that unless you have dedicated texturing people who are well separated from the modelling and lookdev teams (not the best idea imho), the texturing process is very often done by the same person who does lookdev or the model, or all of them. Also, it’s never as simple as making a texture and publishing it, but rather: work on the texture, load it into Maya, pre-render, tweak, re-export, re-render… repeat a million times a day.

Once the artist is happy, he is most likely looking at the texture applied to the model in Maya or whatever 3D app. That’s when he decides to publish the textures.

The most straightforward approach would be collecting all the textures that are applied to the model and publishing them to their final location. That’s where this problem comes in.

Yes, you could argue that a good workflow would be publishing them like this from the texture task, so they go to their final locations, and then loading them into shaders in lookdev by hand. However, in many workflows (ours included), when working with textured PBR materials, lookdev and texturing are one process that cannot be separated. So what I really want to do is publish the lookdev along with all the textures that flow into it, because up to that point most of them were unpublished WIP textures.

Here comes the loop: I need to know where they’re going so I can repath them in the lookdev file, otherwise I’ll publish the textures but the lookdev will still point to the WIP versions of them.

EDIT: answered at the same time as Toke, with a similar thing :slight_smile:

Ok, I think we’d better make a separation between the kinds of results we’re facilitating.

The workflow I’m suggesting is meant for projects where textures are produced in dedicated software, such as Photoshop or Mari. This is where textures would be downloaded to, tested, inspected and refined.

Downloading textures straight into Maya isn’t something I’m familiar with and honestly have trouble grasping, so kudos for making that work. :slight_smile:

What do you guys think of exposing the publishing methods of pyblish-ftrack?

To add to this conversation: path handling isn’t solely something that occurs during the look development and texturing process. The same questions arise with transferring cache files when publishing a particle cache or a fur (e.g. Yeti) cache. For us this involves extracting a Maya scene that houses the particles and transferring along the caches that were already made by the artist.

The best way I could think of was to do something close to this:

  • In Collectors, find all files related to the instance (e.g. caches, images) and store them on the instance.
  • In Validators check whether those files obey the rules required by your pipeline. Do they exist? Correct frame ranges/sizes? etc.
  • In Extractors extract your scene contents.
    • Instead of actually copying files here you could build a dictionary for the file transfers.
  • In Integrators you could use an abstract “Transfer” integrator that takes all files from the instance’s file data (e.g. instance.data["transfers"]) and transfers them to the destination path, only when no errors were raised by previous plug-ins, thus never resulting in an erroneous publish.
# pseudocode

import os
import shutil


class Collector:
    def process(self, instance):
        # get_files() is pipeline-specific: it gathers e.g. caches and images
        files = get_files(instance)
        instance.data["collectedFiles"] = files


class Validator:
    def process(self, instance):
        files = instance.data["collectedFiles"]
        for f in files:
            assert os.path.exists(f), "Missing file: %s" % f


class Extractor:
    def process(self, instance):
        # Build the source -> destination mapping, but don't copy anything yet
        files = instance.data["collectedFiles"]
        transfers = dict()
        for f in files:
            transfers[f] = get_destination_path(f)  # pipeline-specific
        instance.data["transfers"] = transfers


class Integrator:
    def process(self, instance):
        if has_errors(instance):
            raise RuntimeError("Instance has errors. Cancelling integration...")

        # Only now do files reach their final destination
        transfers = instance.data["transfers"]
        for source, destination in transfers.items():
            shutil.copy(source, destination)

Does that make any sense?

Do you then check in the validator whether the files are in the correct place?

Sorry, from your description I can’t see where the paths in Maya get changed to the final published destination?

Yes, but only for the source files, not the destination path.

The Maya file node paths get remapped during the extraction of the Maya scene, e.g. in the Maya ASCII extractor, using the transfers dictionary. Though it occurs to me now that this does mean the extractor knows the destination path, instead of solely the integrator knowing it. We currently have this solved with a function, called from the pipeline library, that defines the resulting path (to avoid the rule being implemented in the extractor itself).
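For illustration, such a shared function could look like this; the name, signature and template are all hypothetical:

import os


def get_publish_path(source, asset, family, version):
    """Return the final published location of `source`.

    Both the extractor (for remapping) and the integrator (for copying)
    call this, so the rule lives in one place. The template is made up;
    a real pipeline would derive it from project configuration.
    """
    versiondir = "/projects/hulk/publish/{asset}/{family}/v{version:03d}".format(
        asset=asset, family=family, version=version)
    return os.path.join(versiondir, os.path.basename(source))


# e.g. get_publish_path("w:/work/maps/diffuse.png", "Bruce", "texture", 5)
# -> "/projects/hulk/publish/Bruce/texture/v005/diffuse.png"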

Seems like this doesn’t actually solve the problem?

Damn, I thought you might have figured it out :smile: No, the problem is still present, in that the integrators aren’t handling the final destination of the paths.

My suggestion of exposing the publishing methods of pyblish-ftrack is pretty much the same as what you are doing.

To avoid leaking knowledge between extractors and integrators, consider extracting to an intermediate directory first, and then having your integrator simply move files from this location to the final destination.

That way, the extractors can be dumb, out-of-pipeline and independent, putting files in /tmp or the current Maya workspace and whatnot. If the location is stored in the context, then the integrator can be the one to figure out the final destination, without needing to interact with Maya, and move the files from there.

As a second step, in order to update the published Maya scene without affecting the current scene, you can search-and-replace the paths currently used in the scene with the new published paths.

I mocked up an example of this here.

from pyblish import api


class CollectModels(api.ContextPlugin):
    order = api.CollectorOrder

    def process(self, context):
        from maya import cmds
        instance = context.create_instance("wholeScene")
        instance[:] = cmds.ls()


class ExtractModels(api.InstancePlugin):
    order = api.ExtractorOrder

    def process(self, instance):
        import os
        from maya import cmds

        stagedir = os.path.join(instance.context.data["workspaceDir"], "stage")

        try:
            os.makedirs(stagedir)
        except OSError:
            pass

        fname = "{name}.ma".format(**instance.data)
        path = os.path.join(stagedir, fname)

        cmds.select(instance, replace=True)
        cmds.file(path, typ="mayaAscii", force=True, exportSelected=True)


class ExtractResources(api.InstancePlugin):
    order = api.ExtractorOrder

    def process(self, instance):
        import os
        import shutil
        from maya import cmds

        stagedir = os.path.join(instance.context.data["workspaceDir"], "stage")

        resources = instance.data.get("resources", {})
        for resource in cmds.ls(type="file"):
            # Normalise to forward slashes, matching what Maya writes
            # into the ASCII file, so the later search-and-replace matches
            src = cmds.getAttr(resource + ".fileTextureName").replace("\\", "/")
            assert os.path.isfile(src), src
            dirname, fname = os.path.split(src)
            dst = os.path.join(stagedir, fname)

            # Keep tabs on what got remapped
            shutil.copy(src, dst)
            resources[src] = dst.replace("\\", "/")

        instance.data["resources"] = resources


class IntegrateAssets(api.InstancePlugin):
    order = api.IntegratorOrder

    def process(self, instance):
        import os
        import shutil

        stagedir = os.path.join(instance.context.data["workspaceDir"], "stage")

        # Update .ma files with remapped resources
        for fname in os.listdir(stagedir):
            abspath = os.path.join(stagedir, fname)

            self.log.info("Looking at '%s'.." % abspath)
            if fname.endswith(".ma"):
                self.log.info("Updating..")

                new_file = list()
                with open(abspath) as f:
                    for line in f:
                        for src, dst in instance.data.get("resources", {}).items():
                            self.log.info("Replacing '%s' with '%s'" % (
                                src, dst))
                            line = line.replace(src, dst)
                        new_file.append(line)

                # Update file
                with open(abspath, "w") as f:
                    f.write("".join(new_file))

                self.log.info("Updated '%s'." % abspath)

        # Write to final location
        versiondir = os.path.join(
            instance.context.data["workspaceDir"], "v001")

        try:
            # Overwrite the version, remove me
            shutil.rmtree(versiondir)
        except OSError:
            pass

        shutil.copytree(stagedir, versiondir)


api.register_plugin(CollectModels)
api.register_plugin(ExtractModels)
api.register_plugin(ExtractResources)
api.register_plugin(IntegrateAssets)

# Setup scene
from maya import cmds
cmds.file(new=True, force=True)
fnode = cmds.createNode("file")
resource = "C:/Users/marcus/Desktop/temp.png"  # forward slashes, as Maya stores them
cmds.setAttr(fnode + ".fileTextureName", resource, type="string")

Either include the setup at the bottom, or take any scene containing a file node, and watch it get published to a final destination, with the published Maya scene automatically updated as it goes.

Main points to note:

  1. Files are extracted into a temporary directory first, called “stage”.
  2. The integrator knows nothing of extraction, only files and final destinations.
  3. The integrator updates paths given a mapping created during extraction.

This would also work for Alembic caches and other resources referenced into Maya.

Here’s an example integrator that does this in a production environment.

I think with your example the need for an intermediate directory could even be avoided. Instead of reading from a directory, the integrator could just read from a list of paths; the resulting behaviour would remain the same. (We use an intermediate staging directory as well, but it’s not a requirement for this to work.)

All the integrator needs to know is where to take files from.
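A minimal sketch of such an integrator, driven purely by pairs of paths; the “transfers” data is analogous to the pseudocode earlier in the thread, here assumed to be a list of (source, destination) tuples:

import os
import shutil

from pyblish import api


class IntegrateTransfers(api.InstancePlugin):
    order = api.IntegratorOrder

    def process(self, instance):
        # Earlier plug-ins stored pairs such as
        # [("/tmp/a.png", "/server/publish/v001/a.png"), ...]
        for source, destination in instance.data.get("transfers", []):
            dirname = os.path.dirname(destination)
            if not os.path.exists(dirname):
                os.makedirs(dirname)

            self.log.info("Copying '%s' -> '%s'" % (source, destination))
            shutil.copy(source, destination)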

Note that this is made more complex by:

  • Some plug-ins not storing paths as regular ASCII (but as encoded attributes) in the Maya ASCII file format.
  • Differences between the relative paths (stored in the ASCII attributes) and the full paths (needed for the file transfers).
  • Search/replace for “sequences” might be trickier, because they are not stored as their full paths but sometimes encoded as tex.<UDIM>.ext, tex.<f>.ext or tex%04d.exr; a sketch of expanding those tokens follows below.
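As a rough sketch, and only for illustration, expanding such tokens to the actual files on disk could look something like this (the token list is not exhaustive):

import glob
import os
import re


def expand_sequence(path):
    """Return the files on disk referred to by a tokenized sequence path."""
    dirname, pattern = os.path.split(path)

    # Replace common sequence tokens with a glob wildcard
    for token in ("<UDIM>", "<f>"):
        pattern = pattern.replace(token, "*")
    pattern = re.sub(r"%0\d+d", "*", pattern)  # printf-style, e.g. %04d

    return sorted(glob.glob(os.path.join(dirname, pattern)))


# expand_sequence("/textures/tex.<UDIM>.exr")
# -> ["/textures/tex.1001.exr", "/textures/tex.1002.exr", ...]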

Though in essence this point just brings us back to:

It’s basically a post-process that remaps the paths. Whether it uses a standalone version of the host application or does a search-and-replace is, I think, irrelevant. Of course, the post-process step could be as complex as you want to make it. Note that this remapping should be done before the files are transferred to their publish destination, otherwise a wrong version of the file (with the wrong mappings) is temporarily at that location.

Note that this only shows the transferring of files, but does not actually define when and where the paths are remapped for caches, textures or other linked resources in the actual extracted scene file.