Pyblish Magenta

Sure, let’s do versioning just like that. Let’s not worry about what Shotgun does just yet, we haven’t really reached any complexity yet.

Ideally, I would like for us to discover why certain things are better or worse than others. Let’s implement your Lucidity into the Selector and Conformer and move on with modeling.

I’ll handle the implementation of Lucidity into the concept art plug-ins and pop in a model; are you able to handle rigging? The model will be as simplistic as in the drawings, and the necessary poses are all there too.

Let’s assume published files are read-only. If anything is to be published more than once, it will have to be assigned a unique name. An incremented version could work I think.

Looking at your fork, could you open up your Issues section? There are some things we probably shouldn’t build into Magenta this early.

Also noticed you added a /database directory to the project, what are your thoughts?

The schema.yaml makes reference to @root, but how is @root determined? For example, it could come from an environment variable PROJECTROOT, or be assumed to be ~/Dropbox/Pyblish/thedeal.

Ok, here are some problems I encountered attempting to implement the Lucidity fork.

  1. Lucidity can’t be bundled, because it imports itself globally (import lucidity). This breaks encapsulation of the Magenta Extension, as the user will have to install it first.
  2. Lucidity can’t be installed via pip; I’m getting UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 1: invalid start byte
  3. Schema.from_yaml assumes a file path, which makes it difficult to separate a project from Magenta itself. It should be Schema.from_dict instead.

I made a pull-request with these limitations in mind.

However, I can’t get the schema to work, I’m getting this.

Path 'C:\\Users\\marcus\\Dropbox\\Pyblish\\thedeal\\dev\\conceptArt\\characters\\ben\\ben_model.PNG'
 did not match any of the supplied template patterns.

What did I do wrong? :open_mouth:

Sounds good.

Done.

Down the line I want to save per-asset metadata there, plus store the project’s configuration, like the project’s schema.

The root is just the top folder of the project. With lucidity, at the moment, any path should be parseable/formattable to define a full path. The way I’ve set up the root pattern is with a regex that matches any top path (like multiple folders).

I’m assuming that when inside the project one would know the project’s root folder, but you won’t have to know it yet, since it’s parseable from any of the file locations that are in the schema. In our case it would parse the root to e.g. E:/Dropbox/Pyblish/thedeal.
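
A rough sketch of what that could look like (the pattern and the custom root expression below are made up for illustration, not the actual schema):

import lucidity

# Hypothetical template: "root" gets a custom expression so it can span
# multiple folders; the remaining placeholders use Lucidity's default.
template = lucidity.Template(
    "conceptArt.dev",
    "{root:.+}/dev/conceptArt/{category}/{asset}/{filename}.{ext}"
)

data = template.parse(
    "E:/Dropbox/Pyblish/thedeal/dev/conceptArt/characters/ben/ben_model.png"
)
print(data["root"])  # -> "E:/Dropbox/Pyblish/thedeal"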

Hmm. Is it allowed to be bundled with the current license? Any tips on how to refactor it?

I don’t have any experience with making things pip-ready. What’s the usual workflow?

No worries, that’s a small change that I could wrap up soon.

I think paths must have forward slashes? Does a quick sanitize work there?
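
For example, something along these lines before handing the path to the schema (untested, just a guess at the culprit):

# Hypothetical sanitize: Lucidity patterns in the schema use forward
# slashes, whereas Windows/Maya may hand us backslashes.
path = r"C:\Users\marcus\Dropbox\Pyblish\thedeal\dev\conceptArt\characters\ben\ben_model.PNG"
path = path.replace("\\", "/")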

The problem I encountered was finding the schema to begin with, since it’s in the current project. A bit of a chicken-and-egg problem - you must have the schema to figure out the project, and must know the project to figure out the schema.

I assumed the existence of an environment variable called PROJECTROOT in the pull-request. How would you, given the current project and Magenta, rather do this?

Here’s the plug-in.

import os
import shutil

import pyblish.api
import pyblish_magenta.schema


@pyblish.api.log
class ConformConceptArt(pyblish.api.Conformer):
    """Conform concept art to final destination

    .. note:: The final destination is overwritten
        for each publish.

    """

    families = ["conceptArt"]

    def process(self, instance):
        # in  = thedeal/dev/conceptArt/characters/ben/ben_model.png
        # out = thedeal/asset/model/characters/ben/conceptArt
        input_path = instance.data("path")

        self.log.info("Conforming %s" % input_path)
        self.log.info("Assuming environment variable: PROJECTROOT")

        assert "PROJECTROOT" in os.environ, "Missing environment variable \"PROJECTROOT\""

        schema = pyblish_magenta.schema.load()
        data, template = schema.parse(input_path)

        self.log.info("Schema successfully parsed")
        new_name = template.name.replace('dev', 'asset')
        asset_template = schema.get_template(new_name)
        output_path = asset_template.format(data)
        self.log.info("Output path successfully generated: %s" % output_path)

        self.log.info("Conforming %s to %s" %
                      (instance, "..." + output_path[-35:]))

        if not os.path.exists(output_path):
            os.makedirs(output_path)

        try:
            shutil.copy(input_path, output_path)
        except Exception:
            raise pyblish.api.ConformError("Could not conform %s" % instance)
        else:
            self.log.info("Successfully conformed %s!" % instance)

Here, pyblish_magenta.schema handles Lucidity and the /database directory of the project.

import os
import lucidity


def load():
    """Load the project schema

    The project root is determined via the PROJECTROOT environment
    variable, and the schema is assumed to be located within a
    /database subdirectory of that project.

    """

    project = os.environ["PROJECTROOT"]
    abspath = os.path.join(project, "database", "schema.yaml")
    return lucidity.Schema.from_yaml(abspath)

Yeah, it’s Apache, it’s similar to MIT.

About refactoring, yes, remove all imports that start with lucidity and make them relative instead.

# Before
import lucidity.error

# After
import error

# Or
from . import error

If we bundle it, pip install won’t be necessary. But it’s not broken, I made a mistake.

$ pip install https://github.com/BigRoy/lucidity.git

It should have been this, which does work.

$ pip install git+https://github.com/BigRoy/lucidity.git

I assumed we’d have something that finds the root of the project from any given path: basically iterating upwards through directories until it finds the root folder that contains the database folder (possibly with a required stub file in there?). This way any path inside the project should be a valid starting point. (Though this does restrict setups that might require disjointed project folders?)
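
A rough sketch of that upward walk (the database marker is per the suggestion above; everything else is made up):

import os


def find_root(path):
    """Walk upwards from `path` until a folder containing /database is found"""
    path = os.path.abspath(path)
    while True:
        if os.path.isdir(os.path.join(path, "database")):
            return path

        parent = os.path.dirname(path)
        if parent == path:  # Hit the drive/filesystem root without a match
            raise ValueError("Could not find project root from: %s" % path)

        path = parent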

I’ve had some really annoying experiences with environment variables (especially managing it on multiple computers).

We can do it either way, I just need something for ConformConceptArt.

The trick with environment variables on multiple computers is to make them the responsibility of a “wrapper” or “launcher”, like the one we use in the Magenta example projects.

start_maya.bat

set PYTHONPATH=%~dp0..\..\pyblish_magenta\pythonpath;%PYTHONPATH%
maya

This is where you would figure out the project, and store it. For example, via Python.

start_maya.py

import os
import subprocess

def find_root():
  ...

os.environ["PROJECTROOT"] = find_root()
subprocess.Popen("maya")

But, I think we should store this in the Context. It’s a very good place for it, along with anything else global.

import pyblish.api
import pyblish_magenta

class SelectProjectRoot(pyblish.api.Selector):
    def process(self, context):
        context.set_data("projectRoot", pyblish_magenta.find_root())

Then we can use that in the other plug-ins. Thoughts?

Let’s give it a go!

I refactored my fork of lucidity so that all imports are relative. How do I relative-import the __init__.py file? I need it to retrieve the parse and format methods from there. (I got something working now, but it seems weird.)

Also implemented a Schema.from_dict() method.


By the way, I had a quick go at a draft (previs?) stage for the model of Ben. I’ve published it through Pyblish using lucidity (just a quick test) with my current fork of pyblish-magenta. Hope to find some time soon to clean up the code some more.

Note: Currently it’s published without versioning!

(Also I love the deprecation warning in the UI!)

:slight_smile: I honestly expected the first reaction to it would be something like the opposite, but I’m glad you like it.

I suppose the trick is, you don’t. Is it that you have stored shared content in there? In that case, you can refactor that content out into its own module, like shared.py or an existing module, and then relatively import that.
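
For example, something along these lines (shared.py and the layout below are only illustrative of the idea):

# Hypothetical layout after the refactor:
#
#   lucidity/
#       __init__.py    <- only re-exports, no shared logic
#       shared.py      <- parse/format helpers now live here
#       template.py    <- any submodule that needs them
#
# lucidity/__init__.py
from .shared import parse, format

# lucidity/template.py (or any other submodule)
from .shared import parse, format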

You can test whether bundling works by putting Lucidity in a subdirectory that is also a package, and doing something like:

from subdir import lucidity

Importing might work, but make sure to also test instantiating a few objects and calling a few methods. The error I got was from lucidity.error not being available.
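
For instance, a quick smoke test along these lines (the subdirectory name and paths are arbitrary):

from subdir import lucidity

# Exercise a template; parsing forces the package-internal imports
# (such as lucidity.error) to resolve from within the bundle.
template = lucidity.Template("geo", "{project}/asset/{asset}/geo/{filename}.{ext}")
print(template.parse("thedeal/asset/ben/geo/ben.ma"))

# A non-matching path should raise lucidity.error.ParseError,
# the very module that failed to resolve before.
try:
    template.parse("not/a/matching/path")
except lucidity.error.ParseError:
    print("lucidity.error resolved correctly")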

Excellent beans!

Awesome, I noticed the updates on my Dropbox here too. Looking fine!

About your recent work to publish this thing, could you try and push only the changes you’ve made to the main repo so we can get to some sort of middle ground? At the moment it looks like your fork and the main repo are quite separate.

One thing you could do is checkout the master branch of the main repo on to your local repo, and “cherry pick” the change you’ve just done. From there, you can make a pull-request that only includes your recent change.

Ok! Having a look at the model, a few really good questions arise.

  • What should be the scale in Maya-units?
  • Where should the center of the character be?
  • The development files have versions, do we version up the same files when working on the same asset?
  • When publishing a playblast for review, what formats should we use?
  • …where should the playblasts end up?

I’ve taken the liberty of playblasting the model you just created like this.

└── ben
    └── geo
        ├── ben_shaded.mov
        ├── ben_shaded.gif
        ├── ben_wireframe.mov
        ├── ben_wireframe.gif
        └── ben.ma

On the premise that each file represents the same asset, same publish, but with different representations.

Here they are, in the flesh.


I included the wireframe because the limbs might need some divisions later on during rigging. :smile:

Validators

The fun stuff! Here are some things I noticed that might need validators. At the moment, he’s roughly 25 units tall. That’s centimeters by default. Should we aim for human-sized, 180cm? Also at the moment, he’s a bit off center. Would it make sense to put him straight on the X?

  • Validate Character Height
  • Validate Character Origin

Extractors

How about an extractor for mov and one for gif?

I’ll chime in here. I think you’re starting to micromanage way beyond reasonable scope with these validators. In real production you wouldn’t be able to deal with trying to validate such things, because you can never be sure whether the character shouldn’t actually be that big or small. You can check for file settings, for instance, but I can’t imagine making a validator per character just to make sure they are the right size. That would turn into the most expensive pipeline ever, because you’d probably need a person just to write validators like this.

About the off-center position, I’d be careful with that too. The furthest you might go is to check whether the geo is within a reasonable distance from the centre (meaning not 30 meters off), but minor positioning like this is very dependent on the design itself, so the modeler should take care of that.

I recommend sticking to Maya’s default. We’ve had some difficulties with Maya becoming unstable and acting weird when working with bigger scenes in meters. I think the default is cm.
Pretty much all models end up exactly touching the floor, unless there’s a specific reason to have them centered differently.

I agree with @mkolar here. Currently it does contain a check that nothing is more than 1e5 units away from the center, just as a sanity check.

I guess another sanity check could be put in place to ensure that something isn’t positioned totally off-center in the scene, or is at least somewhat close to the center? It really depends on the type of object, I guess. And if it only checks for being that far off center, is it that useful?

I guess we definitely could. This is a really quick draft publish (just to see if a model gets from A to B).

Sounds perfect. Let’s get working on having that automated through Extractors!
Should we make the turntable gifs relatively small in size? (Like 256x256, so they’re usable as thumbnails for something in a browser? Or what would you like to do with them? Or maybe smaller?)

How about a 512x512 .mov for quick reviewing purposes and a 128x128 .gif for thumbnails?

Also getting both shaded + wireframe in two different formats might be overkill. Especially if there’ll be a lot of publishes. I love that it comes with something that is a simple preview of the scene. Sweet! :smile:

We’ve never really playblasted published models, but I think it’s a good time to start!
For shots they go into their respective renders folder, but for models I’m not that sure yet.
I think next to their corresponding geo file could be fine?


I pushed a new update to my forked repository of pyblish-magenta. Currently all plug-ins (not in the todo folder) should be using the new DI methods for process and repair and won’t have the deprecation warning icon. Plus some other small changes.

We probably have a very different idea of “real”, and as you aren’t leaving much room for discussion, let’s move on.

@BigRoy, how about a validate_unit.py to ensure a correct setting in Maya? unit could be collected into the Context, such that the validator could be used anywhere a unit of cm is preferable, like Fusion.
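
A rough sketch of what that pair could look like in Maya (the plug-in names, data keys and tolerated values are just suggestions):

import pyblish.api
from maya import cmds


class SelectUnits(pyblish.api.Selector):
    """Collect the current Maya units into the Context"""

    def process(self, context):
        context.set_data("linearUnit", cmds.currentUnit(query=True, linear=True))
        context.set_data("angularUnit", cmds.currentUnit(query=True, angle=True))
        context.set_data("timeUnit", cmds.currentUnit(query=True, time=True))


class ValidateUnits(pyblish.api.Validator):
    """Ensure the collected linear unit is centimeters"""

    def process(self, context):
        linear = context.data("linearUnit")
        assert linear == "cm", "Linear unit must be cm, not %s" % linear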

The gifs are for use in here, such that we can keep each other, and everyone else, updated with our progress. 256 might be a bit small and doesn’t fit nicely with posts…

How about this?

And let’s keep them under 1 MB, considering people will have to download all of them.

The quicktimes, on the other hand, are so that we can look at an asset without opening Maya or whatever other software was used. They don’t need a weight or size limit, I think. Whatever gets the job done.

Yeah, should be ok. The next thing we will run into however is how to merge that with versions.

geo
├── ben_shaded_v001.gif
├── ben_shaded_v002.gif
├── ben_v001.mb
└── ben_v002.mb

How about something like this?

geo
├── v001
└── v002
    ├── ben_shaded.gif
    └── ben.mb
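
If we go with that, a rough sketch of how it might look as a Lucidity template (the pattern and placeholder names below are made up, not the actual schema):

import lucidity

# Hypothetical template for the versioned layout proposed above
template = lucidity.Template(
    "model.asset",
    "{root:.+}/asset/model/{category}/{asset}/geo/v{version}/{filename}.{ext}"
)

print(template.format({
    "root": "E:/Dropbox/Pyblish/thedeal",
    "category": "characters",
    "asset": "ben",
    "version": "001",
    "filename": "ben_shaded",
    "ext": "gif",
}))
# -> E:/Dropbox/Pyblish/thedeal/asset/model/characters/ben/geo/v001/ben_shaded.gif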

Could you get it into the main repo so I can run it here? Don’t include the todo directory though, keep that in your fork.

My bad. Instead of ‘real’, I should have said ‘small to medium’ sized. I was just trying to point out that you planned to aim at small studios with 0-1 developers. Those probably won’t have the resources to write such specific validators.

No problem @mkolar!

Regardless of what is real, this did happen. There now is an invalid asset in the current project that could have been prevented with a validator.

What I would like for this project, and for Magenta as a whole really, is for us to discover the requirements based on actual experience, as opposed to considering what is “the best way” or what would work “everywhere else”.

It’s okay to generalise, but I think it’s better to specialise first, generalise second. Otherwise I think we’ll spend too much time thinking and too little time doing.

About these validators, at the end of the day, anything can be validated. The only question that really matters is whether the cost of preventing a problem is greater than the cost of dealing with it afterwards. You’ve got to stop thinking as an individual, and start thinking as a team.

For example, consider the cost of Roy adding a locator to his scene, calling it height and placing it at the highest point of the character. A validator could then derive the height of the asset by explicitly looking for height and querying its y value. If height isn’t available, then validation fails regardless. In our case, it’s safe to assume that the height of Ben isn’t going to change, so the cost would be a one-off. In return, we can assume that the height will always be valid for the remainder of the project, including the next character, Jerry.

That’s quite an all right return on investment if you ask me.
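
As a sketch, that validator could look something like this (the locator name height is per the proposal above; the target height, tolerance and family are made up):

import pyblish.api
from maya import cmds


class ValidateCharacterHeight(pyblish.api.Validator):
    """Ensure the character roughly matches the intended height

    Assumes the modeler placed a locator named "height" at the
    highest point of the character.

    """

    families = ["model"]

    def process(self, instance):
        assert cmds.objExists("height"), (
            "Missing \"height\" locator; place one at the "
            "highest point of the character")

        height = cmds.getAttr("height.translateY")
        expected = 180.0  # cm; made-up target for Ben
        assert abs(height - expected) < 5.0, (
            "Character is %.1f cm tall, expected roughly %.1f cm"
            % (height, expected))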

Done. It’s in my fork of the repository; will push upstream soon (see below). Currently one Selector plus Validators that handle time, linear and angular units are in my last push to my repository. I could separate those to make each individual one simpler?

Note: I offset the selector by +0.1 and gave it specific families. I could imagine the data about e.g. FPS is not that important when doing look development, so it could skip collecting it there? Seems to work fine. Let me know what you think about this.

Your proposed formats seem fine; let’s give it a go. Would you like to use your maya-capture scripts for the Extractor? If so, want to give that a go? Or do we strip the one from Pyblish Napoleon?

Let’s start with a simple one and expand as we define our needs along the way.

Looks good to me. Would love to know what others think about this? Any preferences or comments on how to do versioning?

How would I exclude that folder from a push to that repo if it’s in my own commits? (Any simple way?) I’ll do a clean-up over this weekend but would love to know the strategy with git there. :wink: Thanks in advance.

I think the point is more or less whether the validator is abstract enough (for now) to validate without getting data from a database (e.g. the character must be 1.80 meters). In our studio’s workflow we iterate over models as we progress (both in size and shape), so validating a height would be very hard to do. I understand the difference between working on a character and accidentally making him 18 centimeters or 180 meters. But you wouldn’t know whether the character is supposed to be a mouse or a giant without setting limitations on your workflow. I’d love to hear how you’d solve this scenario.

It’s not so much an issue of writing the validator, but more of deciding when it’s wrong, or how you’re validating it without having the user push in some sort of information about the size of the character. :wink:

Excellent, looks good. There are some things that would be very well suited for a discussion over a pull-request, so we can mark out code and keep it in the GitHub history; the commit itself is also well suited for a pull-request on its own.

Rather than reshuffling your local fork, what I would suggest is this.

  1. Keep your local repo, but rename it
  2. Delete your GitHub fork
  3. Re-fork
  4. Put your last changes into the new fork
  5. Pull-request

Then use your local renamed repo as reference and start pull-requesting. It won’t be many requests until we’ve done a complete merge, and that way you can more easily separate your todo directory too.

Could do, I can implement that, no problem. It’s the .gif I’m not sure how to manage, as I’ve never created any via scripting/command-line before. But I’ll have a look.
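
One route, assuming ffmpeg is available on the PATH (the filenames and sizes below are just placeholders):

import subprocess

# Convert an existing playblast .mov into a small looping .gif
subprocess.check_call([
    "ffmpeg",
    "-y",                          # Overwrite existing output
    "-i", "ben_shaded.mov",        # Input playblast
    "-vf", "fps=12,scale=128:-1",  # 12 fps, 128 px wide, keep aspect ratio
    "ben_shaded.gif",
])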

Validating height

I can understand your line of thinking @mkolar and @BigRoy, but if you give me a chance, I’ll show you how this can work across the board. Me explaining things in general terms isn’t going to get us anywhere.

In summary, I aim for:

  1. Anyone to install Magenta and start using it with no coding
  2. Have Magenta be applicable to any production matching the requirements we nailed down at the beginning of this thread. (See “Narrow focus”)

And yes, I can see this work with a validator for height.

@Mahmoodreza_Aarabi is putting on the rigger’s hat and is getting to work with Ben this weekend. So basically, he’s got dibs. :smile:

The idea is that we’ll push out a rig, like we did the model, and find things about it to use or write validators for. When we’re done, it should be near impossible to publish a rig that doesn’t meet the exact criteria of the project, and any rigger should be getting enough information out of the plug-ins that he won’t need to go to anyone for help with non-technical issues.

@BigRoy, I think the first showstopper he will encounter is the low subdivisions of the arms; could you push out an updated version so that he can get started?