Ok, so this question touches upon the subtle difference between registering plug-ins via paths and associating plug-ins with content via families, so I thought I’d take the opportunity to properly pinpoint each one and highlight how and where each fits into “the Pyblish way”.
In short, families are to registration what a surgical knife is to a hammer. You need both to build a… I mean to operate on a… moving on!
Registering Plug-ins
Pyblish provides two primary mechanisms for specifying which plug-ins to take into account during publishing.
- Via the environment variable PYBLISHPLUGINPATH
- Via pyblish.api.register_plugin_path()
Both accomplish the same goal of making one or more directories of plug-ins available to pyblish.api.discover() during publishing - one statically defined in the application environment and one dynamically set at run-time.
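As a minimal sketch of the second mechanism (the directory path here is an example only), registration and discovery look roughly like this.

import pyblish.api

# Example path only; in practice this points to one of your plug-in directories.
pyblish.api.register_plugin_path("/pipeline/pyblish_plugins")

# discover() scans every registered directory, alongside anything
# found via the PYBLISHPLUGINPATH environment variable.
for Plugin in pyblish.api.discover():
    print(Plugin.__name__)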
The original design intent of plug-in registration is to separate plug-ins that have little or no relation, such as those specific to Project A or personal to Artist B.
Typical use looks something like this.
$ set PYBLISHPLUGINPATH=<path to your plug-ins>
$ launch app
$ # do work
The registered plug-ins henceforth represent the ecosystem within which they are expected to work together towards a common goal. For example, one collector may knowingly gather information for a subsequent extractor, both of which operate towards a compatible integrator.
One may define more than one ecosystem and activate each where appropriate. Common examples include per-project, per-task and per-artist. This is where registration shines.
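Switching between ecosystems at run-time could then look something like the sketch below; the project names and paths are hypothetical, and it assumes pyblish.api.deregister_all_paths() for clearing previous registrations.

import pyblish.api

# Hypothetical per-project ecosystems; the paths are examples only.
ECOSYSTEMS = {
    "alpha": ["/pipeline/global/plugins", "/projects/alpha/plugins"],
    "beta": ["/pipeline/global/plugins", "/projects/beta/plugins"],
}


def activate(project):
    # Start from a clean slate, then register this project's plug-ins.
    pyblish.api.deregister_all_paths()
    for path in ECOSYSTEMS[project]:
        pyblish.api.register_plugin_path(path)


activate("alpha")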
Families
In contrast to the “heavy-handed”, global control offered by registration, families offer a finer level of control over your data.
With families, you are able to associate a subset of plug-ins with a subset of content following a given pattern. Families are synonymous with an interface or contract in programming terms.
For example, in a system with 100 plug-ins, three of them may apply to a given asset such as a ShotCamera - Collector, ValidateCameraFrustrum and ExtractCamera. Other plug-ins fade silently into the background, as they don’t apply. In pyblish-qml, there is a mechanism in place which hides incompatible plug-ins from view, such that only plug-ins compatible with an active instance are shown.
This enables a responsive environment in which “content is king”.
In a typical environment, a technical director specifies this contract, whereas the artist adheres to it. For example, a contract may read “all models are prefixed model_” or “rigs contain this special node with our in-house metadata”. Should any subset of data happen to adhere to any of these contracts, an instance is created and the corresponding plug-ins are associated with it.
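As a sketch of how the first of those contracts could be expressed as a plug-in (the class name, family and prefix check are hypothetical), a family-bound validator might read as follows.

import pyblish.api


class ValidateModelPrefix(pyblish.api.InstancePlugin):
    """Hypothetical contract: every node of a model is prefixed "model_"."""
    order = pyblish.api.ValidatorOrder
    families = ["model"]  # only instances of this family are processed

    def process(self, instance):
        invalid = [node for node in instance
                   if not node.startswith("model_")]
        assert not invalid, "These nodes break the contract: %s" % invalid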
Example
Nothing hits home better than an example.
In this example, I’ll start from a heavy-handed contract (read family) and work my way towards specifics.
- The entire scene is implicitly identified by the mere act of publishing; the family is scene
- A character rig is identified and associated with a subset of plug-ins; the family is rig
- A character cache is identified and supersedes the rig; the family is animation and plug-ins of the rig family are made optional
- A camera is identified with settings different from those in ftrack for this shot; an instance of family cameraDelta is identified
The following plug-ins may run as-is, but are intended as pseudo-code for human consumption.
CollectScene.py
import os

import pyblish.api


class CollectScene(pyblish.api.ContextPlugin):
    order = pyblish.api.CollectorOrder

    def process(self, context):
        instance = context.create_instance("Scene")

        # Assumes an external pipeline having initialised these values;
        # the family matches the implicit "scene" family from the list above.
        instance.data.update({
            "family": "scene",
            "user": os.getenv("PIPELINE_USER"),
            "task": os.getenv("PIPELINE_TASK")
        })
The pyblish.api.Context may optionally itself be considered the scene. That works too, but the benefit of making the scene an instance lies in the ability to give it a family and thus associate a series of plug-ins with it for validation, extraction, etc.
CollectAssets.py
import pyblish.api

from maya import cmds


class CollectAssets(pyblish.api.ContextPlugin):
    order = pyblish.api.CollectorOrder

    def process(self, context):
        # Consider top-level nodes suffixed "_AST"
        # to be pipeline convention for publishable content.
        for asset in cmds.ls("*_AST", assemblies=True, objectsOnly=True):
            instance = context.create_instance(asset)

            # Assume assets maintain user-defined attributes containing
            # their relevant metadata, such as `family`.
            for attr in cmds.listAttr(asset, userDefined=True) or []:
                instance.data[attr] = cmds.getAttr(asset + "." + attr)

            # (Naively) assume all content under this transform
            # group to make up the entirety of this instance.
            instance[:] = cmds.listRelatives(asset, allDescendents=True) or []
Upon launching the GUI, the user is presented with the fruits of this configuration: a single instance checked by default and additional instances optionally available.
Given the available options, it’s likely this is an animator working on a character animation, or not. To Pyblish it doesn’t matter. Pyblish is oriented around getting content out of software and leaves task and environment management at the mercy of the surrounding infrastructure.
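For completeness, here is a sketch of how a plug-in further down the line could latch onto the family collected above. The class name and the metadata check are hypothetical, but families and optional are the standard attributes for binding a plug-in to content and letting the artist toggle it.

import pyblish.api


class ValidateRigMetadata(pyblish.api.InstancePlugin):
    """Hypothetical check that a rig carries our in-house metadata node."""
    order = pyblish.api.ValidatorOrder
    families = ["rig"]  # skipped for scene, animation and cameraDelta instances
    optional = True     # may be unticked by the artist in the GUI

    def process(self, instance):
        metadata = [node for node in instance if node.endswith("_metadata")]
        assert metadata, "%s is missing its metadata node" % instance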
Final note
The notion of families may seem alien at first, or worse, “optional” and somehow not important to your specific use-case. But they are essential to Pyblish for the same reason Pyblish exists in the first place - to keep bad data from escaping your pipeline.
Giving content an identity is what enables plug-ins, and you the developer, to make clear-cut assumptions about what is moving on from the messy workspace of an artist and into the open world of shared data. Pyblish is designed to heavily guard these gates and encourages you the developer to ask permission rather than forgiveness when it comes to the safety and integrity of your data.
It’s safe to say that Pyblish is less about automation than it is about safety.