Per-instance settings naturally give us more control, and with a good GUI the user wouldn't even feel the difference.
I’m not sure there is currently a straightforward way to make this work in pyblish. Disabling an instance means it isn’t processed by any plugin, while disabling a plugin means it doesn’t run on any instance. Unless I missed something of course, but it might be worth looking into supporting this in pyblish-base.
Say plugins extractABC, extractOBJ and extractMA all operate on the model family.
I want to extract ben as ABC and OBJ, while extracting bob as ABC and MA.
If I disable the OBJ extractor, no instance will export as OBJ, but it would be nice (if supported by a good UI) to be able to extract all models with one plugin and only a subset of them with another.
In the current “ftrack UI world” I imagine it being up to the OBJ extractor plugin to define a “soft” option to turn it on or off per instance. Makes sense?
That’s an interesting approach, dynamically assigning families post-collection.
I was thinking you could have the plugins operate on instances with a certain data member, similar to how pyblish-ftrack and pyblish-deadline work. The extractOBJ plugin would query instance.data["settings"]["extractOBJ"], and extract only if it evaluates to True.
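For illustration, a minimal sketch of that data-member approach (the "settings" key and the plugin skeleton are assumptions here, not existing pyblish-ftrack/pyblish-deadline code):

import pyblish.api

class ExtractOBJ(pyblish.api.InstancePlugin):
    order = pyblish.api.ExtractorOrder
    families = ["model"]

    def process(self, instance):
        # Hypothetical per-instance switch, collected earlier, e.g.
        # instance.data["settings"] = {"extractOBJ": True}
        if not instance.data.get("settings", {}).get("extractOBJ"):
            return  # nothing to do for this instance

        self.log.info("Extracting %s as OBJ..." % instance)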
Although I think assigning multiple families dynamically would be more Pyblishic (?).
The only difference between the two that I see is that with families you iterate through fewer instances, because they get excluded/included right away, whereas with another data member you still have to check for it in each plugin.
Other than that, they’re practically the same. The main family is always the one that shows in the UI anyway.
Another question on this topic: how would you go about persisting a plugin being made active/inactive by the user? As far as I understand, it is a runtime thing triggered by user interaction, and to produce “stable” results it should be persisted into the scene. Would you typically save this as metadata somewhere in the scene?
I don’t think anyone has done this before. Saving the metadata in the scene would definitely be a good way forward.
This would obviously be a per-host thing, but could definitely become standardized across all GUIs.
I personally don’t see this as an essential feature, and saving something into the scene during publishing sounds like the kind of thing that creates hidden data, which is not very nice.
However, if we’re talking about saving anything into the scene, it might be worth considering doing it on a node that’s clearly marked, so the user can see it and interact with it.
Such a node could easily be made for Maya, Houdini, Nuke (it could just be a null with custom attributes, to avoid extra dependencies)… practically anything. As a user I could look at it and see in its Attribute Editor all the parameters (or at least a JSON blob) saved from publishing. I’d know that as long as it’s in the scene, my next publish will use those settings, and if I decide to delete it, publishing will just fall back to the defaults.
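As a rough Maya-flavoured sketch of such a node (the node name, attribute name and settings keys are all made up for illustration):

import json
import maya.cmds as cmds

# A plain, clearly-named empty transform; no plugin dependency needed.
node = cmds.group(empty=True, name="publishSettings")

# Store the publish parameters as a JSON string the user can inspect
# in the Attribute Editor.
cmds.addAttr(node, longName="settings", dataType="string")
cmds.setAttr(node + ".settings",
             json.dumps({"extractOBJ": True, "extractMA": False}),
             type="string")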
At the moment, any instance for which the check didn’t result in True would still appear green in the GUI. I’ve been thinking for some time now about including an option to “signal” a result other than success (i.e. nothing) or failure (i.e. raising an exception).
Something along these lines.
...
def process(self, instance):
    if instance.data["forAnyReason"]:
        self.ignore()
That could then provide the GUI with a hint that “Aha, it didn’t raise an exception, but whatever it did shouldn’t count as success.”
I got this from working with events in Qt, which have both an ignore() and an accept().
As an aside, accept() in their case stops an event from propagating further. We could likely do this too, where an instance would not pass on to subsequent plug-ins. The thing I don’t like about that is that we become more dependent on the order of things, and I’d prefer to keep the floor open for multiprocessing down the road.
Ok, this makes sense. I’m just a little wary as to how practical it is.
Let’s keep it in mind for now. What I want are real use cases from the production floor with pros/cons against the only other option. If it turns out that it simplifies things in a majority of cases, then that would be enough reason to build support for it.
We also need to think about how to actually represent this visually; it might require modifying the current check-this, check-that approach.
Permanently active and inactive plug-ins are based on what’s registered and what the plug-in itself defaults to. pyblish-lite maintains state internally in between resets - so that they remain as they are when a user resets - and odds are pyblish-qml will too. Neither persists them across sessions at the moment, but I don’t see why not.
I think the important thing is to not make it surprising, and to retain some power to the developer in terms of what the defaults are. For example, it might not be a good idea to store it globally in a file of sorts.
I agree you would need the data to be as visible to the user as possible. Just a word of caution though: I’m not a fan of custom nodes, as in dagNodes in Maya, as these quickly become tedious to manage with a farm, and when you don’t have access to the right files but only to the scene file. So vanilla host workflows all the way :)
It sounds like it’s not a big deal at the moment, so I won’t look into it deeper. As with persisting instances being active and their settings, I will probably leave this to end-users to develop as an add-on.
E.g. as the last integration step you could have an integrator that serialises all this information into the scene (however you want), and in the collectors you could deserialise it and apply the settings back onto the instances/context/plugins.
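Something along these lines, as a sketch; scene_write() and scene_read() are placeholders for whatever host-specific storage you pick (a string attribute on a Maya node, a Nuke knob, etc.):

import json
import pyblish.api

class IntegrateSettings(pyblish.api.ContextPlugin):
    """Serialise per-instance settings into the scene, last."""
    order = pyblish.api.IntegratorOrder + 0.5

    def process(self, context):
        settings = {
            instance.data["name"]: instance.data.get("settings", {})
            for instance in context
        }
        scene_write("publishSettings", json.dumps(settings))  # placeholder

class CollectSettings(pyblish.api.ContextPlugin):
    """Deserialise stored settings back onto instances, after collection."""
    order = pyblish.api.CollectorOrder + 0.5

    def process(self, context):
        stored = json.loads(scene_read("publishSettings") or "{}")  # placeholder
        for instance in context:
            if instance.data["name"] in stored:
                instance.data["settings"] = stored[instance.data["name"]]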
I’ve already added support in the UI to reflect any collected instance.data["options"] or instance.data["publish"].
About this, there is one thing that I personally consider a big no-no, and that I’d prefer Pyblish (or at least CVEI) wasn’t doing, which is to make modifications during publishing.
In general with CVEI, publishing should (1) have no side effects, (2) be reentrant, and (3) treat the data being published as entirely immutable.
I wrote about this elsewhere, but it should have a more prominent place.
So, in this case, I would make the GUI responsible for making these changes to the scene as the user interacts with it. That way, the act of publishing follows the above “rules”, whereas the GUI (the user, really) is what makes the changes.
Great feedback! Our ftrack API comes with an event framework that could possibly be leveraged for this. Or, perhaps even better, we could use pyblish callbacks, as that would allow us to stay in the pyblish realm. The benefit being that we leave it up to the users to manage.
pyblish-qml triggers callbacks in this way whenever the user toggles anything. The developer is then meant to install their own callback for a bespoke response.
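For example, assuming pyblish-qml’s “instanceToggled” signal (check your version for the exact name and arguments, hence the **kwargs):

import pyblish.api

def on_instance_toggled(instance=None, new_value=None, **kwargs):
    # Bespoke response goes here; e.g. persist the new state to the scene.
    print("%s was toggled to %s" % (instance, new_value))

pyblish.api.register_callback("instanceToggled", on_instance_toggled)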
Great, that seems to map to what I have in mind for toggling and instance-settings changes. I’m a bit undecided at the moment on which way to choose, since many of our other tools depend on our events, while the pyblish community is used to callbacks. Leaning towards pyblish callbacks, since we are in the pyblish landscape now.
Does pyblish support multiple families per instance? I find the source code and documentation a bit contradictory…
Here in plugins_by_family there is only one family: source
The data member family is the ‘primary’ one (and it’s the one that shows in the pyblish-qml UI), while families is a list that can contain as many as you want.
For example, when we collect Maya renderlayers, we assign them both ass.render and deadline.render; one does a local .ass export, while the other sends it to the farm.
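As a minimal sketch of that pattern (plugin and instance names are made up; the renderlayer-gathering logic is elided):

import pyblish.api

class CollectRenderlayers(pyblish.api.ContextPlugin):
    order = pyblish.api.CollectorOrder

    def process(self, context):
        instance = context.create_instance("renderlayer1")
        instance.data["family"] = "ass.render"  # primary, shown in the UI
        instance.data["families"] = ["ass.render", "deadline.render"]

# A plugin listing either family will then match this instance.
class SubmitToDeadline(pyblish.api.InstancePlugin):
    order = pyblish.api.IntegratorOrder
    families = ["deadline.render"]

    def process(self, instance):
        self.log.info("Submitting %s to the farm..." % instance)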