Extraction Queue

So this is related to long extractions like big point caches, simulations and renders, which have been discussed in Publishing renders (lighting) and Pyblish Magenta.

The basic idea is to have an option to put the extraction and integration in a queue, so the user doesn't have to keep the host open and wait around. After validation, the instances should in theory be able to extract and integrate without any user interaction (this might not always work in practice, in which case some extra validation would be needed).
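As a rough sketch of what "putting it in a queue" could mean inside a host, here is a minimal background worker. Everything below is hypothetical; ExtractionQueue and extract_and_integrate are not existing Pyblish API, just an illustration of the idea, written in Python 2 to match the examples further down the thread.

import Queue
import threading

class ExtractionQueue(object):
    """Run queued extraction/integration jobs one at a time in the background."""

    def __init__(self):
        self._jobs = Queue.Queue()
        worker = threading.Thread(target=self._run)
        worker.daemon = True  # don't keep the host alive on shutdown
        worker.start()

    def put(self, job):
        """Queue a callable that extracts and integrates one validated instance."""
        self._jobs.put(job)

    def _run(self):
        while True:
            job = self._jobs.get()  # blocks until a job is available
            try:
                job()
            except Exception as exc:
                print("Extraction job failed: %s" % exc)

# Usage, once validation has passed:
# queue = ExtractionQueue()
# queue.put(lambda: extract_and_integrate(instance))  # hypothetical helper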

This feature would overlap with current solutions like event plugins in Deadline, but I know that @mkolar and I are heavily customising these to work within our pipelines.
This is not meant to replace a tool like Deadline as a render/processing management tool, but to take the publishing responsibility away from render farms and work more universally. The initial idea is to have render farms emit signals to the extraction queue about success or failure, which would be their integration with Pyblish.

This would also facilitate long local extractions, where the user could put them in the queue and get on with their work.

I know @marcus is looking towards having Pyblish run in the system tray, so this is where the extraction queue could sit and inform the user about updates.


As discussed on Gitter, this is a great initiative @tokejepsen.

This touches on two previous topics; I'm including them here for completeness.

Workflow-wise, we'll need to make the GUI work with multiple hosts simultaneously; otherwise you wouldn't be able to publish while awaiting another publish. Graphically, I'm thinking of an "overview page" where each connected host is reflected and can visualise whether it is currently working or not. Technically, this would involve storing the publishing progress within each host so that it could later be retrieved when switching between them in the GUI.
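As a loose sketch of that per-host state idea, assuming a simple in-memory store keyed by host port; none of these names exist in Pyblish QML, they only illustrate what the overview page would read from.

HOSTS = {}  # port -> {"state": "idle" or "publishing", "progress": 0.0-1.0}

def update_host(port, state, progress=0.0):
    """Record what a connected host is currently doing."""
    HOSTS[port] = {"state": state, "progress": progress}

def overview():
    """Print one row per connected host, as the overview page might."""
    for port, info in sorted(HOSTS.items()):
        print("Host %d: %s (%d%%)" % (port, info["state"], info["progress"] * 100))

update_host(9001, "publishing", 0.4)
update_host(9002, "idle")
overview()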

Signalling

The signalling mechanism is already in place and is what is currently being used to simply show the GUI. This signalling occurs over TCP/IP as RPC via the xmlrpc standard Python library. It currently supports a few signals (commands, really):

  • show
  • hide
  • close
  • kill
  • heartbeat
  • find_available_port

And you can connect to it and emit these signals from any computer on your network like this:

import xmlrpclib  # Python 2 standard library XML-RPC client

# Connect to the Pyblish QML instance running on this machine
proxy = xmlrpclib.ServerProxy("http://127.0.0.1:9090")
proxy.show(9001)  # Port number of the first connected host

Replace 127.0.0.1 with any IP address on your network to gain access to its running instance of Pyblish QML. It always runs on port 9090. If this port is also forwarded externally, you could potentially control it from anywhere over the internet (as could everyone else).
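To illustrate the mechanism only (this is not the actual Pyblish QML implementation), a listener like that can be put together with the Python 2 standard library; the Signals class and its behaviour here are made up.

from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2 standard library

class Signals(object):
    """Hypothetical stand-in for the commands listed above."""
    def show(self, port):
        print("Showing GUI for host on port %d" % port)
        return True

    def heartbeat(self, port):
        return True

server = SimpleXMLRPCServer(("127.0.0.1", 9090), logRequests=False, allow_none=True)
server.register_instance(Signals())
server.serve_forever()  # the client example above can now call proxy.show(9001)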

This is how you could implement a Deadline plug-in to "signal" completion/failure of a process:

# Not yet implemented
proxy.send_status({
  "plugin": "ExtractAbc",
  "instance": "ben01",
  "status": "success"
})

Receiving signals

This part is trickier and something I'll have to think closely about. It implies background and distributed processing, in that the same mechanism could be used to run multiple plug-ins of any kind, even within the same host. Get one, get all.
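To make the direction concrete, here is a rough sketch of what the receiving end of the hypothetical send_status signal from the Deadline example above might look like. Again, nothing here is implemented; it only shows statuses being queued up and handled in the background.

import Queue
import threading

incoming = Queue.Queue()

def send_status(status):
    """Called over XML-RPC by e.g. a render farm; just queue it for later."""
    incoming.put(status)
    return True

def process():
    while True:
        status = incoming.get()
        # This is where the GUI/tray could be updated, or further plug-ins
        # triggered, e.g. integration once "ExtractAbc" reports "success".
        print("%(plugin)s on %(instance)s: %(status)s" % status)

worker = threading.Thread(target=process)
worker.daemon = True
worker.start()

# server.register_function(send_status)  # on an XML-RPC server like the one above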

The tray feature Toke mentioned is indeed planned and will replace the need for PYBLISH_QML_CONSOLE to get a dedicated terminal for QML output. That output will instead end up in its own window, which you can access via a tray icon.

Although initially intended to facilitate debugging, it follows naturally that you should be able to do more here, such as show status messages, which is what we're talking about currently.
