Pyblish Magenta

I just had a closer look at your fork, and it looks like it’s already merged with mine and is looking good! That’s perfect. I’m just a bit confused about how you can still be 17 commits ahead… but anyway, make it a PR asap and we’ll get going; just move the todo directory out of there for now to keep things tidy. You’ll need to push an update to your fork without it, and then PR.

Deleted my fork and reforked it. Cleaned up the changes, put them in and did two commits. Pull request is open. I’ll try to keep it cleaner from now on.

Maybe I’ll just open a dev branch on my fork so I can still sync between two locations using the GitHub repository, merge with master when it’s working, and use that for a pull request. Another benefit could be that people can track my progress even if it’s not clean and fully functional yet.

Sweet! Welcome aboard @Mahmoodreza_Aarabi!

Feel free to join the conversation about global progress of Pyblish Magenta (also outside of rigging).

Will try to pick it up somewhere tomorrow morning. (Won’t have time later during the day)

Ok, so a new Ben model has been published. It has more subdivisions and only a little bit more detail. I also corrected its height to 180 cm in Maya units.

Disable Smooth Preview on publish

I’ve noticed that it published with smooth preview enabled. This slows down loading the models into other scenes, and is most often not what you’d want. So I’ll be disabling smooth preview in the model extractor (hopefully later today).
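
For reference, a minimal sketch of what that could look like in the extractor; the attribute is Maya’s displaySmoothMesh, but where exactly this ends up in the extractor is still to be decided.

from maya import cmds

# Sketch only: turn off smooth mesh preview before export. The real extractor
# would limit this to the meshes in the instance rather than the whole scene.
for mesh in cmds.ls(type="mesh", long=True):
    cmds.setAttr(mesh + ".displaySmoothMesh", 0)  # 0 = no smooth preview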

Merged and feeling good.

About Mahmood doing the rigging, it made me realise that we are missing something quite important for this project, which is how to know in which directory to start.

Publishing is one thing, it assumes you’ve already gotten started which provides a baseline. But now, we’ll need to explicitly tell Mahmood where and how to create a directory for Ben’s rig, and I felt the same way when I got going with concept art.

What we need is some way to specify up-front what we’re working on such that we can build tools to accommodate for this.

In this case:

project: thedeal
asset: ben
family: rig

That should be enough information for a tool to figure out, alongside the schemas you’ve set up @BigRoy, where to set the current working directory.

Any suggestions about how to approach this?

Perfect, I believe @Mahmoodreza_Aarabi is getting started today. Fingers crossed.

Sounds like a good place to make this kind of decision.

Here’s an interface I have in mind for us.

$ cd Dropbox/Pyblish
$ dash thedeal/ben/rig

Which will:

  1. Set environment variables in the current session.
  2. Create the appropriate directories.
  3. cd into them.

Environment variables

PROJECTROOT=%cd%/thedeal
ASSET=ben
FAMILY=rig

Which then finally assumes you launch Maya and Fusion from the command-line, from the same session.

I can try and throw together a minimal implementation (based on this) to work with the schema provided by the project, in this case assumed to be located under database/schema.yaml. Until discovering otherwise, the syntax will be assumed to be project/asset/family.

The utility itself will be a .py script, as those are executable like any other. It will be located at the root of Dropbox/Pyblish and will assume access to pyblish_magenta on the PYTHONPATH, which in turn provides access to Lucidity.
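
To make this concrete, here’s a rough sketch of what such a dash.py could boil down to. The directory layout and shell handling here are assumptions; the real script would resolve paths via schema.yaml and Lucidity.

import os
import subprocess
import sys

def dash(address):
    """Sketch only: enter a working context for a project/asset/family address."""
    project, asset, family = address.split("/")
    root = os.path.join(os.getcwd(), project)

    env = os.environ.copy()
    env["PROJECTROOT"] = root
    env["ASSET"] = asset
    env["FAMILY"] = family

    # Placeholder layout; the real directories would come from schema.yaml
    dev_dir = os.path.join(root, "dev", family, asset)
    if not os.path.isdir(dev_dir):
        os.makedirs(dev_dir)

    # Spawn a subshell with the new environment; exiting it restores the
    # previous session
    shell = env.get("COMSPEC") or "/bin/bash"
    subprocess.call(shell, cwd=dev_dir, env=env)

if __name__ == "__main__":
    dash(sys.argv[1])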

Looking at schema.yaml, it looks like we’re mapping the dev portion of each path: project maps to root and asset to asset, but what is container?

Maybe we need to add a description to each variable, such as:

# Variables:
#   {@root}: Absolute path of project root
#   {asset}: Name of current asset
#   {container}: ?

paths:
  conceptArt.dev:
    pattern: '{@root}/dev/conceptArt/{container}/{asset}'
# Variables:
#   {@root}: Absolute path of project root
#   {asset}: Name of current asset
#   {container}: The parent category of the asset, eg.
#                - prop or character for an asset
#                - sequence for a shot

Does that help?

Also note that {@root} references the {root} template at the top (using a regex pattern to allow an absolute path for the root). So the actual variable (or key?) that’s parsed will be named root.
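
For illustration, that could look something like this; a hypothetical excerpt only, the real schema.yaml may differ.

root: '{root:.+}'    # custom regex so the value may be an absolute path, slashes included

paths:
  model.dev:
    pattern: '{@root}/dev/model/{container}/{asset}'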

To format one of the paths you would need the data in a dictionary, eg:

data = {'root': 'Dropbox/Pyblish/thedeal/',
        'container': 'character',
        'asset': 'ben'}
path = schema.get('model.dev').format(data)

I personally hate going command-line. Especially cmd.exe makes me cry a lot. :wink:
We’ve talked about this before and I feel we should give it a go as you explained, let’s do this!
You clearly have experience in that area and I think we should use that to our advantage.

I assume we can access the similar dash functionality from within Python to build tools on, correct? (Or actually, that’s the whole point?)

Yeah, that helps. Should we maybe call it category instead? More importantly, how do we derive category from project, asset and family? I’m tinkering with dash now and was having some trouble with this; it’s hard-coded to character for now.

It’s awful, I agree. But let me show you what it gives us and then we can talk about how to make something better.

Actually, character is part of the asset description. Ben will never be anything other than a character. Ben is Ben because he is character/Ben.

Any asset or shot must always have a container.

category could also be used, but might be confusing for shots, since there it resolves to the sequence eg. sq010_intro.

Maybe this makes more sense:

# Variables:
#   {root}: Absolute path of project root
#   {asset.name}: Name of current asset
#   {asset.container}: Container of the current asset (eg. prop or character for asset, or the sequence for a shot)

Though that would mean (with lucidity) that you’d format the paths like so:

data = {'root': 'Dropbox/Pyblish/thedeal/',
        'asset': {'container': 'character',
                  'name': 'ben'}
       }
path = schema.get('model.dev').format(data)

Sure, that’s fine.

In that case, to figure out that ben is to be placed in a container called characters, we’ll need a separate source for this information somewhere.

How about an assets.yaml per project?

ben: character

From character we can derive a container called characters.

Sounds like we’d be building an actual database then, correct?
Or at least now we’re building something that won’t be re-used outside of thedeal, isn’t it?

For clarity, we’re making a file with all assets related to a project. Just so we don’t confuse it with an actual database like SQL or MongoDB.

In that case, yes, that’s what I had in mind. It will be defined per-project, and will resemble the list of assets I posted early on.

Here’s what it looks like at the moment.

ben:
    container: characters
jerry:
    container: characters

Whereas dash will only allow work to be started on an asset already defined here.

$ dash thedeal/notexist/rig
Asset "notexist" is not defined

Exactly. Let’s keep it as barebones as that and see if we need to expand later on.

Shouldn’t the dash.py be able to access pyblish_magenta, and thus lucidity as well? If so, some of your path resolving could be simplified by using what’s already been implemented there?

You’re looking at the code on Dropbox right now?

I’m not familiar enough with Lucidity to make optimisations, but I’d like us to have a look at how to simplify things once it’s up and running. I’m almost done, will post a tutorial here so we can talk about what to do next.

Yes.

No worries, sounds good.

Ok, looks like it works.

$ cd c:\users\marcus\Dropbox\Pyblish
$ dash thedeal/ben/rig

Following this command, the current development directory will have been entered and the project, asset and family are visible in the window title.

It will have applied a number of environment variables.

PROJECTSROOT: Absolute path to where projects are located
PROJECTROOT: Absolute path to current project
DEVELOPMENTDIR: Absolute path to current development directory
PROJECT: Name of current project
ASSET: Name of current asset
FAMILY: Family of current asset

These may then be used in plug-ins.
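
For example, a collector-style plug-in could pick them up like this; a sketch only, assuming the current pyblish API, with plug-in and data key names made up.

import os
import pyblish.api

class CollectCurrentContext(pyblish.api.ContextPlugin):
    """Sketch: store the dash environment on the context for later plug-ins."""
    order = pyblish.api.CollectorOrder

    def process(self, context):
        context.data["project"] = os.environ.get("PROJECT")
        context.data["asset"] = os.environ.get("ASSET")
        context.data["family"] = os.environ.get("FAMILY")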

Directory Creation

In addition to environment variables, dash also creates the appropriate directories based on schema.yaml.

$ dash thedeal/jerry/lookdev
Create new development directory for thedeal/jerry/lookdev? [Y/n]:

The prerequisites are:

  1. The project must exist
  2. The asset must be pre-defined in assets.yaml
  3. The family must be available in schema.yaml

Error handling

Here’s how it handles the user typing in the wrong thing.

marcus@ALIENFX>dash badsyntax
# Invalid syntax, the format is project/asset/family
marcus@ALIENFX>dash noproject/ben/rig
# Project "noproject" not found
marcus@ALIENFX>dash thedeal/notexist/rig
# Asset "notexist" not defined
# Available assets:
#  - ben
#  - jerry
marcus@ALIENFX>dash thedeal/ben/notexist
# No pattern defined for family "notexist"
marcus@ALIENFX>exit

exit puts you back at where you were before the context change, including the previous environment.

Integration

Finally, when running Maya from the same session, the directories have already been created and the project already set. This means that when it comes time to save, the save dialog automatically opens at the correct location for new files to end up.

Integration is handled by Magenta via userSetup.py. It looks like this.

import os
from maya import mel

development_dir = os.environ.get("DEVELOPMENTDIR")

if development_dir:
    print("Setting Magenta development directory to: %s" % development_dir)
    # MEL treats backslashes as escapes, so normalise to forward slashes
    mel.eval('setProject "%s"' % development_dir.replace("\\", "/"))

Maintenance

From a developer point of view, this means:

  1. Pre-defining assets.yaml
  2. Keeping it up to date as new assets are added to a project
  3. Pre-defining families in schema.yaml
  4. Putting Maya on the PATH

Give it a try and let me know how it feels.

Hello guys,
Today I started reading your conversation.
I’m happy that maybe I can help.
I like Pyblish, and thanks to Marcus for calling on me to work on such projects.

Anyway,
I will rig these models for animation, and I’ll try to do it the best way I can. But there are some problems: as you know, rigging is a technical issue and modeling is not, at least not like rigging. I wonder how Pyblish can check problems in a model, like extra edge flow, or triangle faces, or …
But I know that it is possible to check all the nodes and behaviours of a rig one by one with validators.
Another question I have is about the size of the character: why real size? I think a fake size is much better than real size, for example 1/3 of real size. It is not a problem for rigging, but I think it will change rendering issues in real projects. (Just wondering.)

Another problem I have: when I take the model from the published asset for rigging, is the model approved? Model, UVs, textures, or may it change later?
Changing a model that isn’t approved will create redundancy and waste time in rigging. (Back to checking the model in Pyblish with validators.)

Mmmmm,
and after I finish the rig, where should I put it for the animators?
Should it be published with Pyblish’s rig-related plug-ins?
I’d like to develop those too.
As I said to Marcus, I’ll try to spend 1-2 hours per day on this work.
I hope it opens new doors to problem solving with Pyblish.
(Sorry for my weak English.)
Good luck, men

Welcome to the party @Mahmoodreza_Aarabi.

The advantage of using a real size is that there are fewer questions about what the size is; it can be assumed to be real. I think real is preferred even in rendering, as renderers can then assume what size you work with, as opposed to making guesses. There are also advantages to scaling things, but so far we haven’t encountered any in this project. When we do, we should talk about which size is better.

It would be better if you thought of every asset as being “in progress”. It will change and you will most likely have to change your rig. That’s just the nature of this project, we can’t set too many rules because we need flexibility during development.

I think it’s safe to assume that the storyboards will not change, and the concept art will remain the same.

That’s a good point, we should include textures into the project. What are your thoughts on this @BigRoy?

When we’re finished, publishing will automatically put the rig in the right place, so you won’t have to worry about that. I think it will be under /assets/characters/ben/rig or something similar.


Seeing as there are three of us now, it’s more important that we coordinate with each other. For quick questions and answers, let’s use Gitter.

For sharing work and for discussion, let’s post here.

Sweet. Works fine here.

Not that it’s really important at this stage, but definitely as we progress: how would you manage this from within an application (not command-line)? And could you easily process a list of assets?

I think it’ll be too hard to perform a rig check to see whether your rig is 100% up to professional standards, but there’s definitely some good stuff that can be validated. Off the top of my head, some ideas (a rough sketch of one such check follows the list):

  • Validate no unused nodes are present in the exported content
  • Validate naming conventions of controls (and possibly other nodes)
  • Validate controls are zero’d by default (translate, rotate, scale and maybe even shear?)
  • Validate joints are hidden (and possibly other rig helper/utility nodes)
  • Validate all character geometry is bound/constrained to the rig (so at least all meshes move if rig is fully moved?)
  • Validate a character set is present for all controls? (useful if you use that)
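
As an example, the “controls are zeroed” check might boil down to something like this; a sketch only, where the control naming convention and plug-in base class are assumptions.

import pyblish.api
from maya import cmds

class ValidateControlsZeroed(pyblish.api.InstancePlugin):
    """Sketch: flag controls whose transform values are not at their defaults."""
    order = pyblish.api.ValidatorOrder
    families = ["rig"]

    def process(self, instance):
        invalid = []
        # Assumes controls are recognisable by an "_ctrl" suffix
        for node in cmds.ls(instance, type="transform"):
            if not node.endswith("_ctrl"):
                continue
            for attr, default in (("translate", 0), ("rotate", 0), ("scale", 1)):
                values = cmds.getAttr("%s.%s" % (node, attr))[0]
                if any(abs(value - default) > 1e-5 for value in values):
                    invalid.append(node)
                    break
        assert not invalid, "Controls not zeroed: %s" % invalid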

I don’t think it’ll be a big issue for rendering. Shader/lighting setups will have to be tweaked for the size, but I don’t think it’s something that couldn’t work. We normally do use a 1/10th scale in Maya. A quick Google search revealed there are still many topics about Maya messing up when changing units (eg. to meters) or working with huge scenery that extends far into the distance. So keeping a smaller scale is generally safer.

Maya’s dynamics (and most lighting/rendering) solutions can be easily tweaked for scale, but some really require a ‘real’ scale to function normally. One example is Yeti (fur simulation), which has no global scale setting that also influences the size of the simulation.

One isn’t necessarily more important than the other to fix. It’s a valid question which doesn’t have a singular answer.

This is the number one problem in any pipeline rigging with Maya. Maya’s rigging workflow is destructive, meaning there is no live reference to the original geometry that can be changed in any way without breaking the rig. Thus a model change does mean redundancy and ‘somewhat of a waste of time’.

Unfortunately any real (artistic) project undergoes changes back and forth; it’s an iterative process. 9 out of 10 times, even if something is considered perfect as a model, it will still get some tweaks along the way, possibly breaking the rig. Usually this doesn’t mean the rig has to be rebuilt, only that meshes have to be swapped and updated (which can be time-consuming in Maya without a custom toolset for it).

All I can say for now is that I’ve published a model with subdivisions so a rig can be prototyped. One thing I know for sure though is that the UVs will still need work.

Yes, Pyblish should be used to ready your models for other people.

To get rigging published we have to implement a Selector for it. The Selector for the model instance can be used as a reference and changed towards one that works well for rigging. The first step would be to define the rules on what will be selected from your scene; a rough sketch of what such a Selector could look like follows the questions below.

  • Do we have a single top group that needs to be exported? Or extra nodes outside of that?
  • What exactly do we need to extract from the scene? What nodes/connections?
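
For reference, such a Selector could start out along these lines; a sketch only, where the group naming and membership rules are assumptions rather than agreed conventions.

import pyblish.api
from maya import cmds

class SelectRig(pyblish.api.ContextPlugin):
    """Sketch: collect a rig instance from a single top-level group."""
    order = pyblish.api.CollectorOrder

    def process(self, context):
        # Assumes the rig lives under one assembly named e.g. "ben_rig_GRP"
        for group in cmds.ls("*_rig_GRP", assemblies=True, long=True):
            name = group.rsplit("|", 1)[-1]
            instance = context.create_instance(name)
            instance.data["family"] = "rig"
            members = cmds.listRelatives(group, allDescendents=True,
                                         fullPath=True) or []
            instance[:] = [group] + members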

Exactly, these are the questions we’ll answer as rigging progresses. It’ll also highlight what can be shared between the Model and Rig selectors.

@BigRoy, what do you think about setting the Maya project to wherever the development directory is?

I know you talked about how you currently set the project to the top-level project root, and not to where you work. In this case, it would simplify the work involved for Dash to set up a default directory in which to save files, as it opens the save-as dialog with the current project as the initial directory.

How else could we handle this?