Allowing pyblish to communicate with the ftrack database. This should eventually include creating and updating versions, adding components to existing versions, pulling task data from ftrack into the context so it is available to other packages, and potentially running various validations based on ftrack data.
Current Functionality
With the initial commit, the ftrack extension depends on some custom libraries that contain mostly convenience functions for easier data formatting from ftrack. These will be added to the repo once I manage to clean them up a little bit. This means that out of the box the package won’t work. Until it does, I’ll keep it in my fork so we don’t confuse people who might be interested in it.
We can however discuss approaches to take when developing this. I’d love to hear from anyone using ftrack, as their needs might be completely different from ours and it would be great to build something universally usable. Input from @lars especially would be greatly appreciated, as he wrote in another topic that Realise is already publishing everything to ftrack using pyblish. Comparing the approaches would be a great addition to the discussion.
Hey Martin. Cheers, Bjorn has already given me access to the repo and enabled it on our server. I just haven’t found time to dig into it yet. From a quick read through the docs it’s a bit confusing, but I’m sure once I start trying it out, I’ll get it quickly.
Yeah, the docs are still in progress. For most cases it is just simple queries and then plain old dictionary access. The API takes care of automatic caching, populating missing data under the hood etc:
>>> project = session.query('Project where name is "foo"')[0]
>>> print project['name']
foo
>>> project['name'] = 'bar'
>>> session.commit()
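(For anyone trying the snippet above: it assumes a session already exists. A minimal setup sketch, assuming your server URL and credentials are exposed through the usual FTRACK_SERVER, FTRACK_API_USER and FTRACK_API_KEY environment variables:)
>>> import ftrack_api
>>> # With no arguments the Session picks up FTRACK_SERVER, FTRACK_API_USER
>>> # and FTRACK_API_KEY from the environment; otherwise pass server_url,
>>> # api_user and api_key explicitly.
>>> session = ftrack_api.Session()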
Once you get stuck in, let us know your feedback on the ftrack forums/support list.
Uff. I’d rather keep this simply about publishing. Creating is a whole different beast.
Missing folders for things being published are fine to handle, but I’m not sure that actually adding things to the database is a good idea.
EDIT:
Let me add to that. Something that handles these things would be lovely eventually, but I feel it’s not really within pyblish’s scope. Ftrack is currently missing a good, robust system for dealing with these things outside the webUI, but this problem also involves some things that you and I discussed before. Primarily dealing with working files on top of published files. Some sort of file hub where artists can go through available work files, create new ones, work with minor versions, and potentially create new entities. A one-stop shop for people to work from. The PSYOP pipeline that was posted here some time ago is a brilliant reference for what I think a good file handling backbone should look like.
A tiny bit of OT, since we’ve already touched on it a bit:
Actually I feel that as much as I prefer ftrack’s webUI and philosophy on the database front to shotgun’s (and that’s why we are using it), it is light-years behind shotgun toolkit in terms of integration with pretty much anything. And in contrast to the shotgun community, where there was lots of shared development from the early beta stages, with ftrack it seems like there is next to none. Your event server and the recent ‘actions’ initiative are bright exceptions to that.
Ok. We are currently about to start a Hiero publishing pipeline that uses Pyblish for creating shots. It works well, and the plugin isn’t very complicated, so having it outside the scope of pyblish-ftrack is fine by me.
Thanks, guess it’s just about encouraging it when Ftrack doesn’t.
Interesting. To be honest I didn’t think of that, because ftrack connect’s hiero integration seems to work well enough, and seeing what is coming with the next release, I didn’t even consider doing it outside of that. For publishing .hrox files I was thinking of eventually just using the nuke extension. But then again, we don’t have hiero; we are starting with nuke studio now.
I’d love to. I will be crazy busy for another month though. No dev time at all. Until then I’ll try to see the differences between the two. I have a feeling that NS is closer to nuke than to hiero, at least from the look of the preferences and the feel so far.
That’s right, the question is more about how the nuke and hiero parts play together in NS, because you can regularly jump between compositing and conforming on the fly and the file menu stays the same. So for example when a supervisor opens a nuke script from NS because he just wants to tweak one or two grade nodes (without sending it back to the artist), he should open it from NS, change the script, publish and be done. So technically he is working the nuke way, but 2 minutes later he might be publishing the .hrox file as a new conform version, so working the hiero way.
Anyways. Not looking for problems here, just thinking out loud.
Oh you got problems mister! Just kidding. Sounds like a wicked workflow there, and an interesting integration problem.
Does that mean that they also share a Python interpreter? I mean, do variables declared in the Nuke Studio Python terminal exist in Hiero and Nuke after switching?
Actually probably very similarly. They are dependent on their instances, and those should be different between the two environments. Nuke publishes write nodes, whereas the conform workspace would have instances such as shots and sequences, extracted as plates, EDLs, audio etc. However, I have a feeling that I will only be looking at publishing the actual nuke studio scene files, rather than their output, because exporting and conforming things from its UI is what it does best. So we might want to stick with it.
I would like to propose a restructure of plugins to only collect and integrate.
The primary motivation for this is so we can add components for integration in the extraction stage. Currently we have to have an instance at the collection stage for a component to get the proper data for integration, like asset and assetversion.
Similarly this would also simplify what is required to publish a component.
The main question is how to structure the components to get all the data needed. My suggestion would be for each component to carry all the data necessary to publish it.
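To make the idea concrete, here is a rough sketch of an extraction-stage plugin that appends a component to its instance. The InstancePlugin superclass and ExtractorOrder are standard pyblish-base; the "ftrackComponents" key and the per-component fields are only illustrative assumptions, not a settled structure:
import pyblish.api


class ExtractReview(pyblish.api.InstancePlugin):
    """Sketch: add a component during extraction for later integration."""

    order = pyblish.api.ExtractorOrder

    def process(self, instance):
        # Each component carries everything the integrator needs,
        # so nothing ftrack-specific has to be collected up front.
        components = instance.data.get("ftrackComponents", [])
        components.append({
            "name": "review",                    # component name in ftrack
            "path": "/path/to/review.mov",       # file to register
            "assetName": instance.data["name"],  # asset the version belongs to
            "assetType": "render",               # asset type short name
        })
        instance.data["ftrackComponents"] = components
The integrator would then just loop over instance.data["ftrackComponents"] and create (or reuse) the asset, assetversion and components in one place, which is what would let pyblish-ftrack itself provide only collection and integration plugins.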