Registering plug-ins as files has many benefits, such as parallel development, loose coupling and individual versioning through version control software such as Git.
On the other hand, in an environment with hundreds of validators, management and bulk customisation can become more difficult. Those are the concerns this approach is meant to address.
Currently, the standard workflow for getting a written plug-in processed by Pyblish is to register its file, either via the environment variable
PYBLISHPLUGINPATH or at run-time through Pyblish's API.
Here's an exploration of what things might look like if we instead wrote multiple plug-ins per file and managed them like we would any regular class.
It also features two new concepts: search and customisation.
Searching is based on tags. Tags are plain-text keywords added to a plug-in.
tags = ["geometry", "measurement"]
A search engine then reads these keywords and compares them with a search string provided by the user.
validators = pyblish.search("film character geometry")
Searching is either "inclusive" or "exclusive".
Inclusive means any plug-in tagged with any keyword provided via the search string is included, whereas exclusive means only plug-ins matching all of the provided keywords are included.
A potential user interface would likely provide both.
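To make the inclusive/exclusive distinction concrete, here is a minimal sketch of tag-based searching. Nothing here exists in Pyblish today; the `search` function, the plug-in classes and their tags are all assumptions for illustration.

```python
# A minimal sketch of tag-based search; all names are illustrative.

class ValidateHeight:
    tags = ["geometry", "measurement"]

class ValidateNaming:
    tags = ["naming", "rig"]

def search(query, plugins, exclusive=False):
    """Return plug-ins whose tags match the keywords in `query`.

    Inclusive (default): a plug-in matches if it carries *any* keyword.
    Exclusive: a plug-in matches only if it carries *all* keywords.
    """
    keywords = set(query.split())
    if exclusive:
        return [p for p in plugins if keywords <= set(p.tags)]
    return [p for p in plugins if keywords & set(p.tags)]
```

With these example tags, `search("geometry rig", [ValidateHeight, ValidateNaming])` returns both plug-ins inclusively, but neither of them exclusively.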
The process illustrated in the two top-most gists includes a "pre-processing" stage, where plug-ins are discovered and then customised before being registered with Pyblish.
A direct advantage of this is that the pre-processing stage can be expanded upon to take input from any arbitrary source, such as a user interface or external file.
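A sketch of what such a pre-processing stage could look like: plug-ins are discovered from an arbitrary list, customised from an override table (which could just as well come from a GUI or an external file), and then registered. `register_plugin` here is a stand-in for Pyblish's actual registration call, and all other names are illustrative.

```python
# Sketch of a pre-processing stage: discover, customise, register.

class ValidateHeight:
    options = {"height": 2.0}

registered = []

def register_plugin(plugin):
    # Stand-in for Pyblish's real registration mechanism.
    registered.append(plugin)

def preprocess(plugins, overrides):
    """Customise each plug-in from an arbitrary source, then register it."""
    for plugin in plugins:
        plugin.options.update(overrides.get(plugin.__name__, {}))
        register_plugin(plugin)

preprocess([ValidateHeight], {"ValidateHeight": {"height": 0.54}})
```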
For example, in the second gist, plug-ins define an additional attribute,
options. Options is a dictionary of members, each of which is evaluated by the plug-in at run-time. Take the
ValidateHeight plug-in: it validates the height of an
Instance according to the value of
options["height"]. That value can then be adjusted either directly..
ValidateHeight.options["height"] = 0.54
ValidateHeight.options["tolerance"] = 0.1
Or externally, such as from an external file..
import json  # the file format here is assumed to be JSON
with open("external_file") as f:
    ValidateHeight.options.update(json.load(f))
The external file can then be generated either by hand..
Or by a GUI, that provides knobs for each available option.
A full configuration can then look like this.
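As an illustration of what that might look like (the exact shape is an assumption, and the names and values are made up), such a configuration could be a mapping from plug-in name to options, applied in a single pass:

```python
# A hypothetical "full configuration", keyed by plug-in name.

class ValidateHeight:
    options = {"height": 2.0, "tolerance": 0.0}

configuration = {
    "ValidateHeight": {"height": 0.54, "tolerance": 0.1},
    "ValidateNamingConvention": {"pattern": "^[a-z_]+$"},
}

plugins = {"ValidateHeight": ValidateHeight}

for name, options in configuration.items():
    if name in plugins:  # unknown plug-ins are simply skipped
        plugins[name].options.update(options)
```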
This approach reverses the role of how plug-ins are implemented and used today; instead of plug-ins being tailored to a specific cause, they are made general and customisable.
For example, rather than building a
ValidateHeightEquals2 that cannot be modified, you build
ValidateHeight with the ability to be configured.
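A configurable ValidateHeight could be sketched like this, loosely following the shape of a Pyblish plug-in. The options attribute is the proposal's addition, and the instance is represented as a plain dict for illustration.

```python
# A generic, configurable validator; configuration replaces hard-coding.

class ValidateHeight:
    options = {"height": 2.0, "tolerance": 0.1}

    def process(self, instance):
        expected = self.options["height"]
        tolerance = self.options["tolerance"]
        height = instance["height"]
        assert abs(height - expected) <= tolerance, (
            "height was %.2f, expected %.2f" % (height, expected))
```

The same class then validates a 2-unit prop or a 0.54-unit character, depending entirely on how it is configured.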
Batch configuration files can be provided that customise a large number of plug-ins in one go, and different configurations can be provided per project or asset. This opens the door for plug-ins to scale into the thousands and become generic enough to only ever have to be implemented once.
It also reverses how validated data is gathered. Currently, either a validator gathers its own information and validates it, or a collector provides data designed for a particular validator. Instead, the gathered data is generalised - as though it were a file format - and validators then adapt to it, rather than the other way around.
This means collection is the only part of validation that remains coupled to a particular host and a particular asset; the challenge then is to build "parsers" that "map" a given set of information into something consumable by generic validators.
For example, once a mapping has been built for, say, getting UV overlap information into the respective
Instance, it will never have to be built again and applies to anything everywhere.
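The parser/mapping idea could be sketched as follows: a host-specific collector translates native data into a generic key, which a host-agnostic validator then consumes. Every name here is hypothetical; a real collector would query the host application.

```python
# Host-specific "parser" mapping native data into a generic schema.

def collect_uv_overlap(native_mesh):
    """Map host-native data onto the generic "uv_overlap" key."""
    return {"uv_overlap": native_mesh.get("overlappingUVs", 0)}

class ValidateUVOverlap:
    """Knows only the generic schema, never the host."""
    def process(self, instance):
        assert instance["uv_overlap"] == 0, "Overlapping UVs found"
```

Once such a mapping exists, the validator itself needs no knowledge of where the data came from.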