Flow Control Question: Validating multiple times and comparing results

Hey All,

I am attempting to use Pyblish as a framework for validating rigs. One use case is to validate a rig by comparing it to historical versions of the same rig – ensuring no unexpected changes have occurred.

To do this I created the following five plugins (rough skeletons of a couple of them are sketched after the list):

collectRigVersions : ContextPlugin : api.CollectorOrder - 0.1
Looks in the context for a rig path and a list of source control version numbers to compare against, then extracts each version to a separate file. Calls context.create_instance for each rig file we write out and stores the rig path in the instance.

openScene : InstancePlugin : api.CollectorOrder
Opens the scene in Maya.

switchReference : InstancePlugin : api.CollectorOrder + 0.1
Updates the reference path of the rig within the Maya scene to the rig path stored in the instance data.

validateBoneCount : InstancePlugin : api.ValidatorOrder
Counts how many bones are in the rig and checks that they do not exceed the bone count limit. Stores this value in the instance data so it can be utilised later down the chain.

compareBoneCount : ContextPlugin : api.ValidatorOrder + 0.1
Cycles over all the instances in the context and compares their bone count data, if any is present.
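
For context, here is a rough sketch of what the collector and the reference switcher look like in practice. It is simplified: the versioned path format, the instance names and the "rigRN" reference node name are just placeholders.

import pyblish.api


class CollectRigVersions(pyblish.api.ContextPlugin):
    """Create one instance per rig version we want to compare against."""
    order = pyblish.api.CollectorOrder - 0.1

    def process(self, context):
        rig_path = context.data["rigPath"]

        for version in context.data["rigVersions"]:
            # In the real plug-in this path comes from extracting the given
            # version of the rig from source control into its own file
            versioned_path = "{0}.v{1}.ma".format(rig_path, version)

            instance = context.create_instance("rig_{0}".format(version))
            instance.data["rigPath"] = versioned_path


class SwitchReference(pyblish.api.InstancePlugin):
    """Re-point the rig reference at this instance's rig file."""
    order = pyblish.api.CollectorOrder + 0.1

    def process(self, instance):
        from maya import cmds

        # "rigRN" stands in for the actual reference node name
        cmds.file(instance.data["rigPath"], loadReference="rigRN")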

I then create a blank context and populate it with data that I know my collectRigVersions context plugin will be looking for.

import pyblish.api
import pyblish.util

# -- Populate a blank context with the data collectRigVersions expects
cntx = pyblish.api.Context()
cntx.data['rigPath'] = r"//path/to/a/rig.ma"
cntx.data['rigVersions'] = ['local', 3, 6, 14]
cntx.data['romPath'] = r'//path/to/an/animation.ma'

# -- Execute the publishing
result_cntx = pyblish.util.publish(context=cntx)

The resulting output of this is:

Collecting and Extracting Rig Files
Opening Scene 
Opening Scene 
Opening Scene 
Opening Scene 
Re-Referencing Rig
Re-Referencing Rig
Re-Referencing Rig
Re-Referencing Rig
Validating Bone Count 
Validating Bone Count 
Validating Bone Count 
Validating Bone Count
Comparing Results between all tests

This output makes sense, especially after having a look at the publish function, as it grabs all the plugins (sorted by their order value), cycles over each plugin and then cycles over each instance – meaning each instance moves forward through the collection order together. That certainly makes a lot of sense for a lot of scenarios!
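
For reference, that loop is roughly equivalent to the following simplified sketch (not the actual implementation – it ignores families, targets, logging and error handling):

import pyblish.api

context = pyblish.api.Context()  # the context being published

# Plug-ins form the outer loop and instances the inner one, so every
# instance moves through each order value together
for Plugin in sorted(pyblish.api.discover(), key=lambda p: p.order):
    if issubclass(Plugin, pyblish.api.ContextPlugin):
        Plugin().process(context)
    else:
        for instance in context:
            Plugin().process(instance)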

The downside is that it opens a scene four times, then updates the reference four times within the same scene (the last one to be opened), then validates the bone count four times against that last opened scene. This ultimately results in all of our tests being run multiple times over the same scene file. The desired ordering (in this scenario) would be something like this:

Collecting and Extracting Rig Files

Opening Scene 
Re-Referencing Rig 
Validating Bone Count 
Opening Scene 
Re-Referencing Rig 
Validating Bone Count
Opening Scene 
Re-Referencing Rig 
Validating Bone Count 
Validating Bone Count 
Opening Scene 
Re-Referencing Rig 
Validating Bone Count 
Validating Bone Count
Comparing Results between all tests

However, I am unsure how to achieve this with the ordering mechanism and the way the publishing loop works – do you have any ideas on how you might approach this? I noticed the thread relating to graph-based pipelines – is that in development at all? (https://github.com/pyblish/pyblish-base/issues/143)

Much appreciated.

Mike.

Hey Mike,

Welcome to the Pyblish community :smile:

I may not be understanding your workflow 100 %, but could you not reference all the rigs into one scene and compare them?

Hey Toke,

The scene I am opening is a Range-Of-Motion file which has a rig referenced in. I planned on opening that scene and switching the reference path to minimise the chances of prior tests/rigs breaking anything within the scene scope (essentially making sure I start with a clean slate for each rig version).

I could indeed remove the requirement of re-opening the scene, but I would still need to switch the reference, then have the subsequent tests run, before switching the reference path to the next rig, and so on. That would likely result in an output like…

Collecting and Extracting Rig Files
Opening Scene 
Re-Referencing Rig 
Re-Referencing Rig 
Re-Referencing Rig 
Re-Referencing Rig 
Validating Bone Count 
Validating Bone Count
Validating Bone Count 
Validating Bone Count 
Comparing Results between all tests

The main aim is to be able to validate on a per-instance basis, but also then validate the collective results for differences. In the example above I look only at bone count, but ideally I’d be looking at global motion of deformers in another test (to check whether functional rig behaviour has changed, etc.).

In practical terms, the goal is to have something in place where I can take a rig that has various edits, and ensure that the rig changes have not broken the results of pre-existing animation. Therefore the validation result should be the sum of individual tests that look specifically at a rig, plus a comparison of data between the same animation applied to multiple rigs.

To keep the overhead lower I want to avoid having to ‘apply’ the motion to the rig on-the-fly due to the possible complexities that may bleed into the tests.

I guess another approach would be to update the reference path within each validation plug-in itself; it would be slower, as I would be switching the reference on every test, but it would give the output I need (sketched after the listing below).

Collecting and Extracting Rig Files
Opening Scene 
Validating Bone Count (Re-Referencing Rig)
Validating Bone Count (Re-Referencing Rig)
Testing Global Animation (Re-Referencing Rig)
Testing Global Animation (Re-Referencing Rig)
Comparing Results between all tests
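
A minimal sketch of what such a validator could look like, assuming the bone count limit and the "rigRN" reference node name are placeholders:

from maya import cmds
import pyblish.api


class ValidateBoneCount(pyblish.api.InstancePlugin):
    """Switch to this instance's rig version, then validate its bone count."""
    order = pyblish.api.ValidatorOrder

    def process(self, instance):
        # Re-point the existing rig reference at this instance's rig file;
        # "rigRN" stands in for the actual reference node name
        cmds.file(instance.data["rigPath"], loadReference="rigRN")

        # Count the joints and stash the result for the comparison plug-in
        bone_count = len(cmds.ls(type="joint"))
        instance.data["boneCount"] = bone_count

        assert bone_count <= 256, "Rig exceeds the bone count limit"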

Hi @mikemalinowski, thanks for sharing this.

If I understand your situation, you’re looking to validate a new rig by comparing it to previous rigs – specifically to properties of those previous rigs, including but not limited to the bone count.

To accomplish this, have you considered including this information in the published rig?

For example, in your rig exporter, you could include the bone count as an additional side-car file, or write it into your asset database.

v001/
  rig.ma
  rig.metadata

rig.metadata

{
  "boneCount": 12
}

That way, you could simply generate the data from the rig currently being published and compare it to this already exported data. No need for additional Maya sessions, or loading and switching rigs around. It would also scale to any amount or type of data you would like to compare against.

from pyblish import api


class ValidateBoneCount(api.InstancePlugin):
    order = api.ValidatorOrder
    families = ["rig"]

    def process(self, instance):
        import my_pipeline  # your asset database / pipeline API

        # Compare against the latest published version of this asset
        asset = instance.data["assetName"]
        previous_version = my_pipeline.ls(asset)[-1]
        assert instance.data["boneCount"] == previous_version.data["boneCount"], (
            "Bone count differs from the previous published version")
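
The side-car itself could be written by the rig extractor. Here is a minimal sketch of that idea, assuming json and a hypothetical "publishDir" key in the instance data:

import os
import json

from maya import cmds
from pyblish import api


class ExtractRigMetadata(api.InstancePlugin):
    """Write a rig.metadata side-car next to the published rig."""
    order = api.ExtractorOrder
    families = ["rig"]

    def process(self, instance):
        # "publishDir" is a hypothetical key; use whatever your pipeline
        # stores the publish location under
        publish_dir = instance.data["publishDir"]

        metadata = {"boneCount": len(cmds.ls(type="joint"))}

        with open(os.path.join(publish_dir, "rig.metadata"), "w") as f:
            json.dump(metadata, f, indent=2)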

Alternatively, if getting the data by loading individual Maya sessions is inevitable, then I would suggest doing it separately from the ordering of plug-ins.

For example, you could dynamically generate a start-up script for the Maya session you launch that loads a previous rig and extracts the data you need from it.

Here’s an example of doing something along those lines, utilising subprocess to launch a non-interactive Maya session via mayapy using a dynamically generated Python script.
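
Sketching the idea (the mayapy location, rig path and output path below are all placeholders):

import json
import tempfile
import subprocess

MAYAPY = r"C:\Program Files\Autodesk\Maya2018\bin\mayapy.exe"  # adjust to your install

# Dynamically generated start-up script: open the rig and dump the data we need
script = """
import json
import maya.standalone
maya.standalone.initialize(name="python")

from maya import cmds
cmds.file(r"{rig_path}", open=True, force=True)

data = {{"boneCount": len(cmds.ls(type="joint"))}}
with open(r"{out_path}", "w") as f:
    json.dump(data, f)
""".format(rig_path=r"//path/to/a/rig.ma",
           out_path=r"//path/to/output.json")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as handle:
    handle.write(script)
    script_path = handle.name

# Run the non-interactive Maya session, then read the result back
subprocess.check_call([MAYAPY, script_path])
with open(r"//path/to/output.json") as f:
    print(json.load(f))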

You could in theory reference the ROM into a clean scene multiple times, and switch the rig on the different references of the ROM file.

Wouldn’t it be a problem to publish the data from previous rigs, when you want something else that you didn’t think of at the start?

Yes, my proposal is a long-term solution. For a short-term solution, I’d try mayapy with a dynamically generated start-up script.

Awesome, thanks for the suggestions guys, it’s very much appreciated!