Actually we would need three methods: parse, format and ls.
Parse would allow us to retrieve the identifiers from a path. It should retrieve as many identifiers as it can, as opposed to what lucidity does by default, which is to return the first match found. This would mean that, given any file, it would know what that file is.
Format would use a dictionary of identifiers to calculate a path. Preferably 100% of the paths retrieved and created should go through this interface, to ensure the file ends up where it should be.
Ls would take a dictionary of identifiers and return all available identifiers contained within it.
Would these three ensure that we'd be able to find where and what a file is?
These methods could be implemented in different ways, e.g. schema vs schemaless, yet the interface should stay minimal, and it seems this would provide all the information needed to define file locations. If all our plug-ins used solely this API, the underlying system (schema vs schemaless) could be swapped by reimplementing only these three methods.
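To make the idea concrete, here is a minimal sketch of what such an interface could look like with a single-pattern backend. The method names follow the proposal above; the pattern, the regex and the example identifiers are all hypothetical, and a real implementation (schema or schemaless) would sit behind the same three signatures.

```python
import re

# Hypothetical path template; a schema-based backend would hold many of these.
PATTERN = "{project}/{asset}/{task}/v{version}"
REGEX = re.compile(
    r"(?P<project>[^/]+)/(?P<asset>[^/]+)/(?P<task>[^/]+)/v(?P<version>\d+)"
)

def parse(path):
    """Retrieve as many identifiers as possible from a path."""
    match = REGEX.search(path)
    return match.groupdict() if match else {}

def format(identifiers):  # shadows the builtin, mirroring the proposed name
    """Calculate a path from a dictionary of identifiers."""
    return PATTERN.format(**identifiers)

def ls(identifiers):
    """List identifiers available below the given set (backend-specific)."""
    raise NotImplementedError("requires a schema or filesystem backend")
```

With this, parse("thedeal/hero/model/v001") round-trips through format, which is the property the plug-ins would rely on regardless of the backend.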
Sounds like it, but it also sounds a little magical.
For example, where does the root come from? And what about directories in between the various data points, such as /publish in between /model and /v001?
Perhaps a prototype could help convey its benefits and better articulate its potential?
Since the component isn't an attribute on the node, that would of course be a separate query. We do it like this (pseudocode):
for member in members:
    matches = lsattrs({"uuid": member["uuid"]})

    # If the member has a component, append it to each node name
    components = member["components"]
    if components:
        matches = ["{0}.{1}".format(node, components) for node in matches]

    cmds.sets(matches, forceElement=shading_group)
I'm not following this thread, but saw you were having problems with face assignments here.
I would say that it is not just face assignments that'll be a problem; there are other attributes you can be setting up in the lookdev file. What we usually do is import/reference the pointcaches and get the lookdev to use that point cache, rather than trying to copy all the lookdev configuration to the point cache.
This is also what somewhat confuses me, because in reality /publish is a different place than /work. So if you'd want to differentiate between these two locations, you would have to identify this difference and thus add it to the identifier. Unless it's hardcoded to choose one folder over the other, so you would never access work and only access publish.
This same difference actually exists between our film and assets folders. There should be something that "identifies" them as being different, otherwise the distinction could never be made.
The opposite would be to iterate over the folders and guess that something with name or metadata X is your asset, no matter where it is. Yet since it wouldn't know where to look, you'd always iterate over a full project. And even if you knew where the asset is, you'd still need to identify whether you're accessing the work or the publish folder, if you differentiate between those.
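One way to resolve this, sketched below, is to promote the work/publish distinction to an explicit identifier rather than hardcoding it. The folder layout and key names here are illustrative assumptions, not the actual project structure.

```python
import re

# Hypothetical layout: .../{asset}/{task}/{location}/v{version}/...
# "location" makes work vs publish an identifier instead of a hardcoded choice.
REGEX = re.compile(
    r"(?P<asset>[^/]+)/(?P<task>[^/]+)/(?P<location>work|publish)/v(?P<version>\d+)"
)

def parse(path):
    """Retrieve identifiers, including the work/publish location."""
    match = REGEX.search(path)
    return match.groupdict() if match else {}
```

Because "location" comes back as a key, format() could later rebuild either variant of the path from the same dictionary, and nothing needs to guess which folder it is looking at.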
@tokejepsen Not entirely sure what you mean here. But we're separating lookdev and lighting.
Lookdev is the designing of textures and shader setup for an asset. This would also hold custom attributes or displacement sets (like vray attributes or vray object properties nodes), and we're exporting this data into our own format. Basically, within the lighting scene we would recreate the exact conditions.
Actually we're already exporting this additional data, if I'm correct.
Would be great if you can recreate it exactly. It's something I have been chasing for a long time, but could never account for all the attributes and setup. That's why I've always resorted to applying the point cache to the lookdev mesh.
Ah yes. So what I assume you're doing is: you load in the lookdev mesh and apply the deformations of the mesh by applying the "pointcache" onto that lookdev mesh. So the pointcache that is loaded is not a new mesh being created in the scene; only the deformations are loaded onto the pre-existing mesh.
Are you using Alembics for this? Or another format?
Technically I reference in the Alembic point cache and the lookdev mesh. I connect the Alembic deformation to the lookdev mesh by attributes, copy the transform values and in some cases set up a live blendshape.
Never liked this workflow, but it has served me well on a couple of projects now. The good thing about it, is you can just replace the Alembic reference with a newer animation file, and everything is preserved.
You're referencing the Alembic file directly? That's great to hear; we jumped through some hoops wrapping the Alembic cache with a Maya scene and referencing that instead, because of some bugs @BigRoy had noticed.
It's good that you mention that. Wouldn't have even thought about that being an issue. I can see how that's tricky with renderfarms, since you can't control their userSetup.py.
I unfortunately only have experience with Deadline. With Deadline you can copy their Maya plugin and set up the PYTHONPATH. Previously I've used custom bat files for this.
Ah, you mean a local renderfarm. I thought you meant pushing it to an online renderer, e.g. Rebus Farm. In that case I don't think you have control over the paths being loaded. Now that I think of it, that would even be an issue with our relative $PROJECTROOT paths.
Either way, you could always set up a "preprocess" method for a remote renderfarm where you provide them with a "specially made scene". Even though it's not efficient, we've done it in some cases.
I can't imagine a remote rendering system not providing control over its environment; that can't be true.
If we can control the environment within which a process is run, such as Maya, we can add both a userSetup to the PYTHONPATH and additional variables such as PROJECTROOT.
On a local farm, you must be able to do this as well?
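As a minimal sketch of that idea: launch the child process (maya, a render job, anything) with an augmented copy of the environment. The paths and variable values below are made up for illustration; only the mechanism (inject PYTHONPATH and PROJECTROOT before spawning) reflects what is being discussed.

```python
import os
import subprocess
import sys

# Copy the current environment and inject pipeline variables.
env = os.environ.copy()
env["PYTHONPATH"] = os.pathsep.join(
    ["/pipeline/usersetup", env.get("PYTHONPATH", "")]  # hypothetical path
)
env["PROJECTROOT"] = "/projects/thedeal"  # hypothetical project root

# The child process sees the injected variable; here we use a Python
# interpreter as a stand-in for the maya/render executable.
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['PROJECTROOT'])"],
    env=env,
)
print(out.decode().strip())  # → /projects/thedeal
```

On a local farm the same trick works as long as the farm software lets you pass an environment (Deadline does, via its plugin configuration); with a remote service you indeed depend on them exposing such a hook.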
Edit: Google brought me here, doesn't this solve this problem?