Integration report and things I've learned so far

So I’ve got Pyblish initially integrated into our pipeline. Here are a few things I’ve discovered/implemented. I’m interested in hearing any thoughts or feedback, since I’m keen to follow the intended usage as closely as possible.

  • I’ve combined several collection plugins to minimize the number of collectors that appear in the UI, since I got feedback from users that it felt like too much. So I have a general system-level one, which collects things like user and date, and then a general application-level one, which collects things like workspace and Maya units.
  • At the end of the publish I have an integrator which writes a json file of the dictionary (minus “results”), and saves it basically as the metadata for the publish. It’s super easy for anything to add to that dictionary, and it then becomes accessible as publish metadata. Yay!
  • I’ve wrapped the UI launcher in order to control which plugins are loaded, so that different departments/publish types can use different collectors. I know this can be accomplished with families to some extent, but I’m not sure they would fully support the required logic. Two examples:
    • Modelling want to prep and publish a super clean, flat scene, so they don’t have any sets or attributes in the scene which identify it as a modelling scene, and basically want to run the publish on assemblies. So that collector is very broad and would overlap with more specific collectors for other asset types. (This could also be fixed by adjusting the modelling workflow but I’m not ready for that battle yet.)
    • Puppets and other assets have a node which identifies them as an asset, but they also contain other assets. A generalized asset collector would return them all, but when publishing puppets, I never want to publish their child assets, so unless there’s a place for this sort of logic, controlling which collectors are loaded seemed like the most straightforward solution.

The clutter of the plugins for end-users has been raised in several places, and we have been discussing a redesign of the UI to simplify it: Simplifying the GUI

Cool:) Have you had any use for it yet?

Yup, the very same reason why our modeling plugins are separated out:)

How have your users generally taken to Pyblish? Was there an existing publishing framework that you are replacing?

THIS. As Toke mentioned, this keeps popping up and I couldn’t agree more. To me, the UI is currently the Achilles heel of the project. Pyblish-lite has an extra tab that only shows instances, but that, on the other hand, is not enough information :). Please add any ideas you have for the UI to the GUI thread that Toke mentioned. Hopefully someone will get to it soon.

Can’t recall, but was it mentioned what other information should be on that “simplified” page? Just to be sure this information isn’t lost somewhere along the way.

Other than that, thanks @morganloomis for sharing your experience so far. Remaining open about development like this helps us all gain insights to improve our own code, and Pyblish itself.

Sweet beans, thanks for sharing your insights so far.

I think it all sounds completely natural and in-line so far. I could share some thoughts, but I’d stress that they would mostly be suggestions to an already well defined workflow.

Plug-ins are generally intended to be as small as possible and to work together, rather than be merged, so the fact that this had to happen sounds to me like a fault in the GUI and something that should be fixed. As @tokejepsen and @mkolar mentioned, we’ve stumbled upon this problem before.

In addition to the forum post on the topic, there is also this.

At the end of the publish I have an integrator which writes a json file of the dictionary

Good idea. :slight_smile: I’d include the result dictionaries as well, for future auditing or just archival purposes. They capture absolutely everything about a publish, including timestamps and performance counters. You can serialise them to a JSON-compatible dictionary using format_result().

I’ve wrapped the UI launcher in order to control which plugins are loaded,

This may be the only real deviation from Pyblish standards and practices.

The overall design intent of Pyblish is to discover what data is available, and offer the user an option to publish it. As in, if there is a model (the simplest example), a model instance would appear. If that model instance supports extraction to one or more formats and representations - e.g. one .obj, one .ma and one .maProxy - then (optional) extractors would appear to give users the option of publishing it to those.

The same should apply to any and all types of data.

Families are meant to solve this by having your collector identify a model for what it is and then provide support for it via additional plug-ins.

I think the most successful method of solving this gracefully is having a generic collector run across your scene, and having your assets make an effort to identify themselves. That is, the artist or technical director could “tag” various parts of their scene to say what they are supposed to be. The generic collector then reads these tags and applies families as appropriate.

These tags could even be the families themselves. (Both plug-ins and instances may have more than one family.)

Napoleon is an example of this in action.

It also has an example Maya project.

I know @BigRoy wrote a visual tool to help TDs make these tags, out of a selection of available tags studio-wide. That might be similar to what you already have with paths, except this way the specification is associated with the asset itself, rather than with a particular running session of Maya. Maybe you could share some pictures of yours, Roy?

Puppets and other assets have a node which identifies them as an asset, but they also contain other assets.

I’ve run into this problem before, and I think the most graceful solution I’ve seen has been to look at their hierarchy. Is the asset a parent or child? If child, either collect an instance that is unchecked by default, or don’t collect at all.

That way you’ll be able to nest assets arbitrarily whilst still maintaining control.
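The parent-or-child check above could look something like this. The DAG-style `|`-separated paths are an assumption about how the assets are addressed; the point is only that an asset nested under another asset gets skipped (or could be collected unchecked instead):

```python
def top_level_assets(asset_paths):
    """Keep only assets that are not nested under another asset.

    ``asset_paths`` are DAG-style paths like "|root|child". Any asset
    whose ancestry contains another asset in the list is treated as a
    child and skipped (alternatively: collected with publish=False).
    """
    roots = []
    for path in asset_paths:
        ancestors = path.rsplit("|", 1)[0]  # everything above this node
        nested = any(
            other != path and
            (ancestors == other or ancestors.startswith(other + "|"))
            for other in asset_paths
        )
        if nested:
            continue  # child asset: don't publish with the parent
        roots.append(path)
    return roots
```

With this in the generic asset collector, publishing a puppet never drags its child assets along, yet the same children publish fine from their own scenes.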

We had to resort to the same solution of only registering plugins for particular task types. We did try working the way you’re suggesting, but it became too complex for both artists and me to maintain. It is very likely that we just didn’t try hard enough to accommodate this way of working, though. Maybe on the next project :slight_smile:

Oh no, don’t get me wrong. Registering particular plug-ins based on task/project is all well and good. It’s the registering of plug-ins during a working session where families may be a more flexible alternative. Registration is generally intended as a static mechanism, whereas families are their dynamic equivalent, meant to respond to more transient data.
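To make the static/dynamic distinction concrete, here is a toy version of the family matching that Pyblish performs internally between plug-ins and instances (the dict shapes and the `"*"` wildcard convention are simplifications for illustration):

```python
def compatible_plugins(plugins, instance_families):
    """Which plug-ins apply to an instance, based on family overlap?

    Each plug-in declares the families it supports ("*" meaning any);
    only plug-ins overlapping the instance's families would run. This
    is the dynamic filter that responds to whatever was collected,
    without touching plug-in registration at all.
    """
    matched = []
    for plugin in plugins:
        supported = plugin["families"]
        if "*" in supported or set(supported) & set(instance_families):
            matched.append(plugin["name"])
    return matched
```

Registration decides what *could* ever run for a project/task; family matching decides what *does* run for the data actually found in the scene.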

The way I understood it, his artists are working on a project, within a scene, but wish to publish their work in a particular way and thus alter the PYBLISHPLUGINPATH accordingly. In doing so, I think both they and @morganloomis are missing out on some of the benefits of families.

Instead, I’d expect the scene to be captured into one or more instances, and the artist to either pick which instance to publish, or be offered one or more extractors capable of exporting in different ways; e.g. one “super clean”.

Still early days, but it’s easily accessible through the pipeline API, so it will be the standard way to get the user, etc.

There are a few different tools and paradigms for publishing currently, I’m trying to unify things a bit by filling the gaps first and then slowly eating everything. :slight_smile:

I totally get this, but in this environment, at the moment, that would result in a lot of extra plugins being loaded, plus potentially extra validators to make sure the wrong combinations of things weren’t getting published together, all just to set a context that every artist already knows right before they hit publish, which is “I want to publish a model” and so forth.

Ok, I see how this could work. I was thinking that families didn’t apply to collectors, but if you have an early collector, it could set which other collectors will run? Even so, with the legacy workflows I’m dealing with at the moment, that would mean a lot of logic in that collector to work out what is supposed to be published; it still seems more straightforward to launch the UI already in a certain mode.

Have you considered being able to set the family within the UI?

Sure, I think we’re saying the same things, just in different order of operation. If I get you correctly, you’re on “Workflow B”.

Workflow A

$ set myproject/mytask
$ maya
$ # publish things

Workflow B

$ maya
$ set myproject/mytask
$ # publish things

yeah, pretty much.

Ok, that’s cool.

In my experience, Workflow B isn’t unheard of, but there are some long-term benefits to Workflow A, with the advantage of full process isolation. For example, some software, such as Maya, can have trouble unloading plug-ins at run-time. So if your task and/or project depends on dynamically assigning plug-ins, you might find yourself out of luck.

Another example: Maya, Nuke and Houdini, to name a few, maintain a single Python interpreter session throughout their lifespan, and cleaning up after a prior assignment of project/task can be a challenge. It also limits your ability to make modifications freely, without having to consider the cost of keeping track of what should/must be cleaned up in the event of a change.


I totally understand Workflow A, but I think Workflow B can still be valid, especially in smaller studios, and also when a task involves publishing multiple things. If I want to get super granular with publishing, for example, a rigger in the course of working on a character might publish a skeleton, a model variant, skinClusters, a range-of-motion animation and then different versions of the final puppet. Granted, that could all be handled with collectors and families, and I should explore that a bit more.


Even though we’re using the ‘A’ option, it is absolutely essential for us to have the ‘B’ option on top of it. Unfortunately, I simply didn’t have enough dev time on this to make the switches nice and smooth; however, it’s the single most requested feature from artists (apart from the UI changes).


About that, this might be helpful for you too!

About the Workflow A vs B discussion: being able to switch “task” during a session, without having to restart e.g. Maya, is something that we’ve heard from our clients and that we will try to solve in our new ftrack Connect plugins.

Sorry, it wasn’t to say that it can’t be done, just that it has a price tag. Sometimes that tag can be difficult to spot until it’s too late and you’re out of pipeline currency.
