Pyblish Magenta


Actually we would need three methods: parse, format and ls.

Parse would allow us to retrieve the identifiers from a path. It should retrieve as many identifiers as it can, as opposed to what lucidity does by default, which is to return the first match found. This would mean that from any file it would know what it is.

Format would use a dictionary of identifiers to compute a path. Preferably 100% of the paths retrieved and created should go through this interface, to ensure the file is where it should be.

Ls would take a dictionary of identifiers and return all available identifiers contained within it.

Would these three ensure that we’d be able to find where and what a file is?

These different methods could be implemented in different ways, e.g. schema vs schemaless. Yet the interface should be minimal, and it seems this would provide all the information needed to define file locations. If all our plug-ins used solely this API, the underlying system (schema vs schemaless) could be swapped out by reimplementing just these three methods.
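To make the idea concrete, here is a minimal sketch of the three methods, assuming a single token-based template. The template and token names are illustrative only, not Magenta's actual schema:

```python
import os
import re

# Illustrative template; a real setup would hold one per family.
TEMPLATE = "{root}/assets/{asset}/{task}/{version}"
TOKENS = TEMPLATE.split("/")

# Build a regex from the template so parse() can read identifiers back.
PATTERN = re.compile(TEMPLATE.format(
    root=r"(?P<root>.+)",
    asset=r"(?P<asset>[^/]+)",
    task=r"(?P<task>[^/]+)",
    version=r"(?P<version>v\d+)") + "$")


def parse(path):
    """Retrieve as many identifiers as possible from `path`."""
    match = PATTERN.match(path.replace("\\", "/"))
    return match.groupdict() if match else {}


def format(identifiers):
    """Compute a path from a complete set of identifiers."""
    return TEMPLATE.format(**identifiers)


def ls(identifiers):
    """Yield identifier dicts available one level below `identifiers`."""
    parts = []
    for token in TOKENS:
        key = token[1:-1] if token.startswith("{") else None
        if key is None:
            parts.append(token)  # static segment, e.g. "assets"
        elif key in identifiers:
            parts.append(identifiers[key])
        else:
            # First unknown token: list what exists on disk at this level.
            for entry in sorted(os.listdir("/".join(parts))):
                child = dict(identifiers)
                child[key] = entry
                yield child
            return
```

With this in place, a plug-in never concatenates paths by hand; it round-trips through parse and format, and discovers content with ls.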


Sounds like it, but it also sounds a little magical.

For example, where does the root come from? And what about directories in between the various data points, such as /publish between /model and /v001?

Perhaps a prototype could help convey its benefits and better articulate its potential?


Face Assignments

Found an issue with the above UUID approach in regards to face assignment.

Currently, (1) a mesh is assigned a UUID and (2) associated with a shading group. The information is stored like this:

  "shadingGroup uuid": {
    "members": ["mesh1 uuid", "mesh2 uuid"]
  }

The lookup then takes the form of:

matches = lsattrs({"uuid": "mesh1 uuid"})

But where are the faces? The UUID is assigned to the mesh node itself, not to the faces, so this information is lost.

The lookdev.json above will need an additional member: components.
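For reference, with components tracked per member the structure could look something like this (the member layout is a guess, shown for illustration only):

```json
{
  "shadingGroup uuid": {
    "members": [
      {"uuid": "mesh1 uuid", "components": ""},
      {"uuid": "mesh2 uuid", "components": "f[0:99]"}
    ]
  }
}
```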


That’s correct.

Since the component isn’t an attribute on the node, that would of course be a separate query. We do it like this (pseudocode):

# pseudocode: `lsattrs` is the UUID lookup helper from above, and
# `shading_group` is the Maya shading group resolved for this entry
from maya import cmds

for member in members:
    matches = lsattrs({"uuid": member["uuid"]})

    # If the member has components, append them to each node name,
    # e.g. "pSphere1" becomes "pSphere1.f[0:99]"
    components = member["components"]
    if components:
        matches = ["{0}.{1}".format(node, components) for node in matches]

    cmds.sets(matches, forceElement=shading_group)


I’m not following this thread, but saw you were having problems with face assignments here.

I would say that it is not just face assignments that’ll be a problem; there are other attributes you can be setting up in the lookdev file as well. What we usually do is import/reference the pointcaches and get the lookdev to use that pointcache, rather than trying to copy all the lookdev configuration onto the pointcache.


Good question.

This is also what somewhat confuses me, because in reality /publish is a different place than /work. So if you want to differentiate between these two locations, you have to identify that difference and thus add it to the identifier. Unless it’s hardcoded to choose one folder over the other, so that you would never access work and only ever access publish.

This same difference actually exists between our film and assets folders. There should be something that ‘identifies’ them as being different, otherwise the difference could never be made.

The opposite would be to iterate over the folders and guess that something with name or metadata X is actually your asset, no matter where it is. Yet since it wouldn’t know where to look, you’d always iterate over the full project. And even if you knew where the asset is, you’d still need to identify whether you’re accessing the work or publish folder, if you’re distinguishing between those.
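One way out of that would be to make the difference itself an identifier, so nothing is hardcoded to one folder over the other. A tiny sketch; the template and token names are my own, not Magenta’s actual schema:

```python
# The work/publish difference becomes just another identifier ("stage"),
# so plug-ins pass a different value instead of hardcoding a folder.
TEMPLATE = "{root}/assets/{asset}/{stage}/{task}/{version}"


def format(identifiers):
    """Compute a path from a complete set of identifiers."""
    return TEMPLATE.format(**identifiers)


published = format({"root": "/projects/thunderdome", "asset": "ben",
                    "stage": "publish", "task": "model", "version": "v001"})
work = format({"root": "/projects/thunderdome", "asset": "ben",
               "stage": "work", "task": "model", "version": "v001"})
```

The cost is that every caller now has to say which stage it means, but that is exactly the explicitness being asked for above.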

What’s your idea?


Thanks for sharing, it sounds like what we’re doing too, but let me just see if I get this straight.

In your case:

  1. Animator publishes pointcache (as an alembic?)
  2. Look development artist references this pointcache
  3. Look development artist publishes… his shaders, or including the meshes?
  4. The lighter imports/references… the shaders/meshes?


@tokejepsen Not entirely sure what you mean here. But we’re separating lookdev and lighting.

Lookdev is the designing of textures and shader setup for an asset. This would also hold custom attributes or displacement sets (like V-Ray attributes or V-Ray object properties nodes), and we’re exporting this data into our own format. Basically, within the lighting scene we would recreate the exact conditions.

Actually we’re already exporting this additional data if I’m correct.


The lighter assigns the pointcache to the meshes.

Would be great if you can recreate it exactly. It’s something I have been chasing for a long time, but I could never account for all the attributes and setup. That’s why I’ve always resorted to applying the pointcache to the lookdev mesh.


Ah yes. So what I assume you’re doing is you load in the lookdev mesh and apply the deformations of the mesh by applying the ‘pointcache’ onto that lookdev mesh. So the pointcache that is loaded is not a new mesh being created in the scene, but only the deformations are loaded onto the pre-existing mesh.

Are you using Alembics for this? Or another format?


Technically I reference in the Alembic pointcache and the lookdev mesh. I connect the Alembic deformation to the lookdev mesh by attributes, copy the transform values, and in some cases set up a live blendshape.

Never liked this workflow, but it has served me well on a couple of projects now. The good thing about it is that you can just replace the Alembic reference with a newer animation file, and everything is preserved.


You’re referencing the Alembic file directly? That’s great to hear; we jumped through some hoops wrapping the alembic cache in a Maya scene and referencing that instead, because of some bugs @BigRoy had noticed.


The only major issues are that you need to have the Alembic plugin loaded on startup, and again when you are pushing to a render farm.

A userSetup script that loads the plugin quietly seems to solve it for me.


It’s good that you mention that. Wouldn’t have even thought about that being an issue. I can see how that’s tricky with render farms, since you can’t control their environment.


I unfortunately only have experience with Deadline. With Deadline you can copy their Maya plugin and set up the PYTHONPATH. Previously I’ve used custom bat files for this.


Ah, you mean a local render farm. I thought you meant pushing it to an online renderer, e.g. Rebus Farm. In that case I don’t think you have control over the paths being loaded. Now that I think of it, that would even be an issue with our relative $PROJECTROOT paths.

Either way, you could always set up a ‘preprocess’ method for a remote render farm where you provide them with a specially made scene. Even though it’s not efficient, we’ve done it in some cases.


Haven’t dealt with remote render farms, but yeah, in those cases you could import the alembic file instead of referencing it.


I can’t imagine a remote rendering system not providing control over its environment; this can’t be true.

If we can control the environment within which a process is run, such as Maya, we can add both a userSetup to the PYTHONPATH and additional variables such as PROJECTROOT.

On a local farm, you must be able to do this as well?

Edit: Google brought me here, doesn’t this solve this problem?
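For what it’s worth, controlling that environment could be as small as the sketch below. The paths and variable names are examples only, not a real farm integration:

```python
import os


def farm_environment(project_root, base=None):
    """Return a copy of the environment extended with pipeline variables.

    `base` defaults to os.environ; pass a dict for testing.
    The hardcoded paths below are placeholders.
    """
    env = dict(os.environ if base is None else base)
    env["PROJECTROOT"] = project_root
    env["PYTHONPATH"] = os.pathsep.join(
        p for p in ("/shared/pipeline/pythonpath", env.get("PYTHONPATH")) if p)
    return env


# A userSetup.py found on that PYTHONPATH can then load the plugin
# quietly at startup:
#   from maya import cmds
#   cmds.loadPlugin("AbcImport", quiet=True)
#
# The render process is launched with the extended environment, e.g.:
#   subprocess.Popen(["Render", "-r", "file", "scene.ma"],
#                    env=farm_environment("/projects/thunderdome"))
```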


Yes, and we are doing it :)


Ok, good. Phew. :slight_smile: