Sorry for another newbie question, I haven’t found an answer to this in my searching yet. I’ve done a publish where one extractor worked and another one failed. Pyblish went ahead and ran the integrator for the successful instance. How should I get the whole thing to come to a screeching halt instead?
No problem @morganloomis, newbie questions are my favourite!
By default, validation is the only stage with the power to disrupt publishing.
The idea is that nothing should ever exit an application without having been fully tested. This includes testing that it can actually be extracted; e.g. does your user have write permissions? Is there enough space? Is the host configured appropriately?
You can customise this by supplying a custom “test”, though it’s rare; doing so would prevent you from sharing your plug-ins with anyone who has a different test, and from using plug-ins from anyone else unless they also have the same test. See the default test for details.
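For reference, the default test boils down to something like this (paraphrased from pyblish.logic; the exact source may differ slightly):

def default_test(**vars):
    """Determine whether publishing should continue.

    Arguments:
        nextOrder (float): Order of the next plug-in about to run
        ordersWithError (list): Orders at which an error has occurred

    """
    # Once past validation (order >= 2), refuse to continue if an
    # error occurred during collection or validation (order < 2).
    if vars["nextOrder"] >= 2:
        for order in vars["ordersWithError"]:
            if order < 2:
                return "failed validation"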
The failure was a bug, so it’s not actually a situation which needs to be supported. Was just curious about standards, thanks for clarifying.
We’ve actually been having some “crashes” here and there during extraction, where publishing continued and proceeded to integrate half-complete instances. This has actually been more cumbersome than halting altogether, since we were left with invalid published content; basically errors that we could not have caught during Validation.
As such we’re looking into adding a test that stops it from continuing with Integration if anything before it failed, like Extraction. Is the idea with registering tests that there’s always one single test? In that case, registering a new test overrides the default behaviour and we’d have to re-implement the default behaviour in our own test? Or can we have a list of tests and just add an additional test alongside the default one?
Another way of doing this might be a revert plug-in post integration that reverts the integration if anything failed. This way even a failed integration could get cleaned up, which might eventually be even safer. Nevertheless, this would mean transferring even half-complete instances, which for large caches can take some time, even though we could have known beforehand that the data is unusable.
Any thoughts?
Stopping is not the answer.
Once data hits the disk, it will need to be cleaned up regardless. Integration is the place to do that, even if it means not storing it publicly. In the case of an incomplete publish, I would either facilitate a retry/resume behavior, or handle deletion of this data there such that the user can manually try again as though nothing happened.
Was this what you meant by “reverting”?
You can always call the existing test in your test, but I must stress that implementing a custom test at all will make your plug-ins incompatible with all other plug-ins and vice-versa.
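For completeness, a custom test that defers to the existing one might look something like this (a minimal sketch; it assumes pyblish.logic.default_test and pyblish.api.register_test, and the caveat above still applies):

import pyblish.api
import pyblish.logic


def stop_before_integration(**vars):
    # Defer to the default behaviour first (stop on validation errors).
    reason = pyblish.logic.default_test(**vars)
    if reason:
        return reason

    # Additionally refuse to enter integration if *anything*
    # before it errored, e.g. during extraction.
    if vars["nextOrder"] >= pyblish.api.IntegratorOrder:
        if vars["ordersWithError"]:
            return "failed prior to integration"


pyblish.api.register_test(stop_before_integration)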
I was thinking I would check for errors reported by the Pyblish signals, and add an entry to the context if anything failed post-validation.
Then it would be up to all my plug-ins to decide whether to check for this context entry and fail if necessary (but not stop the process). That way I could have a clean-up plug-in that runs at the end if the failed entry is in the context.
I think that’s what you’re suggesting?
edit:
Or even better, just check the results in the context rather than using the signals. Don’t know why I didn’t think of that first!
Yes, either way sounds reasonable.
Sorry for not replying sooner here. What I meant by “revert” is to ensure no “incomplete” data would get published.
Or even better, just check the results in the context rather than using the signals. Don’t know why I didn’t think of that first!
And yes, this sounds like a feasible solution in the “integrator” and we’ll test with this.
We went that route for now and it seems to work as expected. Here’s how I went about it.
import pyblish.api


class Integrator(pyblish.api.InstancePlugin):
    """Integrator base class.

    The Integrator will raise a RuntimeError whenever the plug-in is
    processed and previous plug-ins have produced error results for this
    instance. This means integration will only take place if, up to this
    point in time, all of publishing was successful and had no errors.

    Note:
        When subclassing from this Integrator, ensure to call this class'
        process method, e.g. for your CustomIntegrator class:
            super(CustomIntegrator, self).process(instance)

    """

    order = pyblish.api.IntegratorOrder

    def process(self, instance):
        """Raise an error if any errors occurred for this instance
        before this plug-in.

        This does not integrate the file; that is up to the
        subclassed plug-in.

        """
        context = instance.context
        results = context.data["results"]

        # Collect the instances that have errored up to this point.
        # Note: the loop variable must not be named `instance`, or it
        # would shadow the instance currently being processed.
        errored_instances = []
        for result in results:
            if result["error"] is not None and result["instance"] is not None:
                errored_instances.append(result["instance"])

        if instance in errored_instances:
            raise RuntimeError("Skipping because of errors being present for"
                               " this instance before Integration: "
                               "{0}".format(instance))
We’re now using that as our Integrator base class and inherit our Integrators from it.
Whenever we subclass it we ensure the process method starts with:
class FileIntegrator(Integrator):
    def process(self, instance):
        super(FileIntegrator, self).process(instance)

        # Integrate your files here as usual
That’s pretty much what I’ve done as well.
Although I wonder if the order = pyblish.api.IntegratorOrder should be something like order = pyblish.api.IntegratorOrder + 1?
Presumably the clean-up should happen last, after everything has had a chance to run? In which case it seems like it’s another section added to the already existing CVEI.
Edit: But of course you’re putting it in the base class, so it can’t really happen like that for you.
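For what it’s worth, a clean-up plug-in along those lines might look something like this (a sketch only; it reuses the same results check as the base class above, and “cleanupPaths” is a hypothetical key that earlier plug-ins would fill with staged or partial output paths):

import shutil

import pyblish.api


class CleanUpFailedPublish(pyblish.api.ContextPlugin):
    """Clean up after integration whenever anything errored."""

    # Run after all integrators have had their chance.
    order = pyblish.api.IntegratorOrder + 1

    def process(self, context):
        errored = any(result["error"] is not None
                      for result in context.data["results"])
        if not errored:
            return

        # Remove whatever partial output earlier plug-ins recorded.
        for path in context.data.get("cleanupPaths", []):
            self.log.info("Removing %s" % path)
            shutil.rmtree(path)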
I wanted to create my own base classes but decided not to, based on the suggestion from Marcus in one of the other posts.
Edit: Base classing was fine, I just didn’t follow the post correctly.
Subclassing the api.ContextPlugin etc. and producing your own base classes is definitely ok! I’d imagine it’s a good fit for pipeline-wide features, like creating a temp directory across plug-ins and things.
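For example, a pipeline-wide base class along those lines could be as simple as this (a sketch; the “stagingDir” data key and the tempdir handling are illustrative, not an established convention):

import tempfile

import pyblish.api


class Extractor(pyblish.api.InstancePlugin):
    """Extractor base class sharing a staging directory per instance."""

    order = pyblish.api.ExtractorOrder

    def staging_dir(self, instance):
        # Re-use a single temp directory for all extractors
        # processing the same instance.
        directory = instance.data.get("stagingDir")
        if not directory:
            directory = tempfile.mkdtemp(prefix="pyblish_")
            instance.data["stagingDir"] = directory
        return directory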
Fair enough, I may well do that then.
I was going off this post: [wontfix] Derived plug-in won't load packages from original plug-ins file
That’s about subclassing a specific plug-in, which is still bad. For technical reasons, as mentioned in that thread, but also because it couples plug-ins together which is an anti-pattern.
Subclassing the api.ContextPlugin and api.InstancePlugin to produce your own Integrator or Collector etc. to use whilst building those plug-ins is ok.
Thank you for clearing that up for me.
Just to make it extra clear, if you ever find yourself needing to import my_plugin, think of this.