Connection forcibly closed by remote host

I think I have narrowed the search down to an extractor that is saving the scene. The saving takes an oddly long time for the size of the scene (~8 MB), which freezes Maya.

Still sounds very curious. Do keep posting updates as you find them.

You could also try this.

for x in range(10):
  print ftrack.Task(ftrack_data['Task']['id'])

It should print the task 10 times, but it’s possible that after 7-8 it chokes, which could indicate that the connection isn’t closed automatically and that you might have to clean it up by hand somehow.

E.g.

for x in range(10):
  task = ftrack.Task(ftrack_data['Task']['id'])
  print task
  task.close_connection()

I disabled the scene saving extractor, and added this to the ftrack validator:

        for x in range(10):
            self.log.info(ftrack.Task(ftrack_data['Task']['id']))

That worked fine and published successfully, so it’s definitely the scene saving extractor. I don’t get why this would happen, but I think I might work around the problem by having a scene-saved validator instead of an extractor.

Ah, I think it may add to the confusion that the GUI currently is showing you the wrong plug-in as being processed. It’s showing you the previous item as the current item. Is that why you thought it was the validator at first? It’s a known bug that I haven’t found a clean solution for yet…

Could you post the extractor?

Yeah:)

Could you post the extractor?

import maya
import pyblish.api


@pyblish.api.log
class ExtractSceneSave(pyblish.api.Extractor):
    """Save the current Maya scene to disk."""

    order = pyblish.api.Extractor.order
    families = ['scene']
    label = 'Scene Save'

    def process(self, instance):

        maya.cmds.file(s=True)

Sorry about that, I know it’s confusing…

What, is that the extractor causing the issue? :open_mouth: I don’t understand how it can cause any harm.

Shot in the dark, but sometimes sockets take a moment to close. You could try adding time.sleep(5) just before you save the file; it will give sockets some time to close and might tell us whether that’s the case or not.

One more thing, it’s possible that the ftrack API is trying to automatically close a connection during garbage collection.

If that’s the case, then it could be that as soon as the plug-in is finished, Python will go ahead and garbage collect all variables within it. It could be that ftrack listens for that and tries to close the connection. If so, that would explain why your innocent-looking extractor is getting the blame for an error caused by a post-garbage-collection trigger.

If you can dig up the Task class in the API, look for a __del__ method on it. That would be where something like this could happen.
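To illustrate the idea, here is a minimal, self-contained sketch. The `Connection` and `Task` classes below are hypothetical stand-ins, not the real ftrack API; the point is only that a `__del__` method fires during garbage collection, which is exactly the moment a plug-in’s locals go away:

```python
# Hypothetical stand-ins (NOT the ftrack API) showing how __del__ can
# close a connection as a side effect of garbage collection.
import gc

log = []


class Connection(object):
    def close(self):
        log.append("closed")


class Task(object):
    """Stand-in for an API object that owns a connection."""

    def __init__(self):
        self.connection = Connection()

    def __del__(self):
        # Runs when the object is garbage collected, e.g. right after a
        # plug-in's process() returns and its local variables go away.
        self.connection.close()


def process():
    task = Task()  # local; collected as soon as process() returns
    return "done"


process()
gc.collect()  # force collection so the effect is deterministic here
print(log)    # → ['closed']
```

If ftrack does something like this, the crash would surface in whichever plug-in happens to run while the collection happens, not in the code that actually owns the connection.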

Can’t find any __del__ method on the task or its parent class, only a delete method on its parent class:

    def delete(self):
        data = {'entityType':self._type,'entityId':self.getId(),'type':'delete'}
        response = self.xmlServer.action('set',data)

        return response

Ok, then that’s probably not the case.

If you can find a way to reproduce it with your series of plug-ins, but using the demo account, I could maybe have a closer look.

Might be a special case. The reason I don’t have the scene save as a validator is to work around a problem in selecting the render layers, where I iterate over the render layers and set them to current to get values from them.

I’ll just revisit the select_renderlayers plugin, so it doesn’t modify the scene.

Ah, ok. It does sound like a bug, either in ftrack or Pyblish, so if you do one day stumble upon a solution, it would be wonderful if you could post back here.

Just as another note: if the saving of the scene is in a repair method, it still crashes, but I’m guessing that would be expected, as it’s practically the same as processing the instance.

Are you getting the same error message when it crashes this way?

Sorry, couldn’t reproduce the crash, so ignore the crashing without publishing comment.

Hi all - let me know if I can provide any help here.

For reference:

  • This looks like the current (old) ftrack API being used, which uses the xmlrpclib in Python. That library does have a few __del__ methods that attempt to auto close the socket connection I believe.
  • I can’t see a call in the plugin code posted to ftrack.setup(). This means that the event hub was not connected by this plugin code (I assume you are loading the standard ftrack Maya plugin as well which is starting the event server connection).
  • In the ftrack API there are two distinct connections to the server. One is for the xmlrpc transport of the main API functions, the other is for the event system. It is a little unclear from the error posted which connection is causing the issue. Perhaps try to isolate it by completely disabling the event system for your code run: ftrack.EVENT_HUB.disconnect()?

This looks like the current (old) ftrack API being used, which uses the xmlrpclib in Python. That library does have a few __del__ methods that attempt to auto close the socket connection I believe.

Yup, it’s the old ftrack API.

I can’t see a call in the plugin code posted to ftrack.setup(). This means that the event hub was not connected by this plugin code (I assume you are loading the standard ftrack Maya plugin as well which is starting the event server connection).

Yeah, loading the ftrack Maya plugin.

Disconnecting the EVENT_HUB still causes the crash, meaning that the problem is elsewhere?

Thanks for helping out, @martin.

@tokejepsen, does it still crash when publishing via pyblish.util.publish()?

Successful publish with this code. Unsuccessful with the UI.

Ok, then it must have something to do with IPC. Pyblish is also using xmlrpclib for that.

@tokejepsen I think the only way to get to the bottom of it is with a minimal example, ideally using the ftrack demo account.

I can take a look if you get this set up; you should be able to use all of your live plug-ins, given none of them are too personal.

On your end, you can run pyblish.api.deregister_all_paths(), and load in only a single plug-in at a time, either via pyblish.api.register_plugin(plugin) or from a directory, until you’ve found which single plug-in is responsible, and from there start chopping it down until the problem disappears.
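To make that bisection loop concrete, here is a rough sketch with stand-in plug-ins. The plug-in names and the run_single helper are hypothetical, not the pyblish API; in the real workflow each step would deregister everything, register a single plug-in, and publish:

```python
# Sketch of the "one plug-in at a time" bisection idea. All names here
# are hypothetical stand-ins, not the real pyblish registry.
plugins = {
    "collect_scene": lambda: "ok",
    "validate_ftrack": lambda: "ok",
    "extract_scene_save": lambda: "crash",  # pretend this one misbehaves
}


def run_single(name):
    """Stand-in for registering and publishing a single plug-in."""
    return plugins[name]()


# Try one plug-in at a time until one fails.
culprit = next(name for name in plugins if run_single(name) == "crash")
print(culprit)  # → extract_scene_save
```

Once a single plug-in reproduces the crash on its own, you can start deleting lines from it until the problem disappears.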

Likely also related to: