If you ever get given the task of integrating Houdini into a pipeline, one of the first things you’ll probably look at is render farm support. Many commercial farms come with their own submission tools, but sometimes you want to add your own customisations, usually on a per-ROP basis.
For example, you may want to add a button to each ROP that submits a job to the farm, like I did for my open source Deadline submitter.
So let’s say that you want to add render farm submission parameters for all the ROP nodes in Houdini. You have two choices. You can either:
make your own new versions of the ROPs as new HDAs, or
modify the UI of each new ROP instance every time you create it.
In part 1 of this article, we'll look at the pros and cons of both. Advance notice... I'm going to argue that HDAs should be avoided for this purpose!
In later parts we'll get into the details of hooking our Python code into Houdini, customising nodes, and other areas that will hopefully be of interest.
I should mention that I'm going to assume that the core pipeline code is stored externally in a Python library, and not within the HDAs. For example, a farm submission button on the HDA might call the "submit" function of the external pipeline library. (Generally speaking... storing a significant amount of code in an HDA can be problematic for a number of reasons I won't go into now. I'll save that for a different article.) This only really matters when we come to talk about maintenance and versioning.
There may be other assumptions that I'm unaware I'm making about the wider ecosystem, so feel free to request clarity in the comments section if something doesn't make sense.
Okay... let's look into the first option for customising built-in Houdini nodes for your pipeline: HDAs.
Option 1:
Making your own HDA versions of built-in ROPs
This approach would involve either duplicating a built-in HDA, or making an HDA from a Subnet with the built-in HDA inside.
Initially this sounds good; it’s perhaps the most obvious choice, and it seems simple. However, there are complications! Let’s just focus on supporting the Mantra ROP with our customisations as an example.
Issue 1: Naming
What do we call our new custom Mantra HDA that we’re going to create? Houdini has conventions for handling this. The built-in Mantra node is called “ifd”. We can either add a version number to the name, e.g. “ifd::1.0”, add a namespace, “custom::ifd”, or potentially do both: “custom::ifd::1.0”.
Let’s take a look at each of these three approaches:
a) Adding a version number: “ifd::1.0”
As soon as you create an HDA with a version number, Houdini will use this node by default as a direct replacement for its own, because the built-in “ifd” node doesn’t have a version number specified. If you create new Mantra nodes in your scene via the Tab menu (or even using the “Auto Create ROP” in the Render View), Houdini will automatically use your new custom Mantra HDA.
Seems great, right? Yes, this is by far the most convenient approach, but the short term gain comes with a potential future risk. What happens if Side Effects release an update to the Mantra node called “ifd::2.0” like they did with the Bake Texture ROP? I suppose you could create “ifd::2.0.1” as a patch on SideFX’s new version? But now you’re playing version-leapfrog with SideFX, and it’s going to confuse a lot of people about which one they should be using.
Okay, so this wouldn’t happen very often, but it’s not a robust long term approach so I’d avoid it.
b) Adding a namespace: “custom::ifd”
Creating a new HDA with the same name, but in a different namespace is kind of like saying “this thing is the same, but different”. That can be quite handy. The downside is that Houdini almost treats it like a completely different node type and by default it shows up as a new entry in the Tab menu.
How do we sort that out? Easy, we can just hide the original one from the Tab menu. You can do this in one of two ways:
Create a file called “OPcustomize” in a $HOUDINI_PATH location and put this inside:
ophide Driver ifd
Put this into your 123.py:
hou.ropNodeTypeCategory().nodeType("ifd").setHidden(True)
There's also another way of doing this which I recently stumbled across. In Houdini's preferences, there's a setting that, by default, tells Houdini to show only the latest/preferred version of an HDA. However, it will still show multiple entries in the Tab menu if they have different namespaces, which is exactly what we're trying to avoid.
But... the "Show Single Operator from the Preferred Namespace" setting will only ever show a single HDA in the Tab menu, and deduces which one to show using the concept of a namespace hierarchy.
You can set the namespace hierarchy using the HOUDINI_OPNAMESPACE_HIERARCHY environment variable. If you set it to "foo bar" then it will prefer HDAs that have a namespace called "foo" before HDAs with a namespace called "bar". All other HDAs for that node type will be hidden from the Tab menu.
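For example, assuming your studio namespace is "custom", you could put something like this in your houdini.env (or wherever you manage environment variables for Houdini):

HOUDINI_OPNAMESPACE_HIERARCHY = "custom"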
c) Add namespace and version: “custom::ifd::1.0”
Having a version number is definitely useful in managing which HDA people should use while still allowing for older HDAs (and legacy behaviour) to exist in people's Houdini scenes.
Including the namespace as well means you no longer have to worry about clashing with SideFX’s own version numbers.
So if we were using HDAs, this would be my preferred naming option.
Issue 2: Creation of duplicate HDAs
If you try to create a duplicate of the Mantra node, you'll notice that it doesn't quite work. You get some of the parameters, but not all of them. Sometimes this happens because a startup script that relies on the HDA's type name isn't being run.
In the case of the Mantra node, I suspect it's actually because that HDA is defined inside a C++ library, and there are other mechanisms at play which aren't available when we copy the HDA definition into our own OTL file. It's also possible that other functionality on the node would be missing, even if we went through the laborious process of adding the missing parameters ourselves.
In this situation, the only option open to us is to put the Mantra node inside a subnet, promote all the Mantra node's parameters, and make an HDA out of that. In most cases you should find that works, but it's not ideal. We now have the overhead of having double the number of parameters, and there could be other unexpected issues with this additional complexity further down the line.
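To make that a bit more concrete, here's a rough, hypothetical sketch in Python of how you might wire that up. The node names, HDA path, and type name are assumptions; the folder layout isn't preserved; and special parameter types (ramps, multiparms, etc.) would need extra handling in a real tool:

import hou

out = hou.node("/out")
subnet = out.createNode("subnet", "custom_mantra")
mantra = subnet.createNode("ifd", "mantra1")

# Append Mantra's parameters to the subnet's interface, skipping any names
# that clash with the subnet's own built-in parameters (execute, etc.).
group = subnet.parmTemplateGroup()
for template in mantra.parmTemplateGroup().entriesWithoutFolders():
    if group.find(template.name()) is None:
        group.append(template)
subnet.setParmTemplateGroup(group)

# Channel-reference each inner Mantra parameter to its promoted counterpart.
for parm in mantra.parms():
    template = parm.parmTemplate()
    if subnet.parm(parm.name()) is None or isinstance(template, hou.ButtonParmTemplate):
        continue  # not promoted, or a button that can't be referenced
    if isinstance(template, hou.StringParmTemplate):
        parm.set('`chs("../%s")`' % parm.name())
    else:
        parm.setExpression('ch("../%s")' % parm.name(),
                           language=hou.exprLanguage.Hscript)

# Turn the subnet into an HDA using the namespaced, versioned type name.
subnet.createDigitalAsset(
    name="custom::ifd::1.0",
    hda_file_name=hou.expandString("$HOME/otls/custom_ifd_1.0.hda"),
    description="Custom Mantra",
)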
One other thing worth considering. When writing a farm submission tool in Houdini, there's usually a step in our code to analyse the dependencies between ROPs to determine which order they should be run in on the render farm. Now that our Mantra node is contained as a child node within our HDA, we need to know how to handle this in our code. Do we call "render()" on our new HDA or on the Mantra node inside? Both should work, but we would need to define a clear convention for our pipeline.
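As a rough illustration (the function and node names here are made up), the dependency walk itself is straightforward; the awkward part is deciding what to call render() on:

import hou

def rop_dependency_order(rop):
    # Depth-first walk of a ROP's input connections, returning nodes in the
    # order they would need to execute (dependencies first).
    order = []

    def visit(node):
        for upstream in node.inputs():
            if upstream is not None and upstream not in order:
                visit(upstream)
        if node not in order:
            order.append(node)

    visit(rop)
    return order

# e.g. render each node in turn, or turn each into a farm task:
# for node in rop_dependency_order(hou.node("/out/custom_mantra1")):
#     node.render()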
One other niggle... if you try doing this, you'll see that the Render View pane won't recognise your custom Mantra HDA as a render node. It will show the Mantra node inside instead. In practice it doesn't make much difference, but it's not ideal from a UX point of view.
Issue 3: Script Support
A lot of Python scripts that deal with HDAs search for them by name, so they’ll now have to handle namespaces and version numbers. It’s not too hard, but it’s a bit of extra faff that people need to remember to deal with.
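For example, instead of comparing full type names directly, a script can split the name into its components and only compare the core name (a small sketch):

import hou

def is_mantra(node):
    # nameComponents() splits a type name like "custom::ifd::1.0" into
    # (scope, namespace, core name, version).
    scope, namespace, core_name, version = node.type().nameComponents()
    return core_name == "ifd"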
Issue 4: Coping With Future Changes
At Framestore, we support a number of different Houdini versions, each of which can be running on different projects at any one time. Sometimes (rarely) we allow different versions of Houdini to be run on the same show if there are very specific requirements, e.g. one shot requires a particular tool or feature.
If the Mantra ROP has any changes, however small, between Houdini versions, we’ll need to create a new version of our customised Mantra ROP with all the parameters we need. Since it’s not necessarily going to be obvious if there’s a new parameter or other changes, the safest thing to do would be to just make a new version of our Mantra ROP for each release of Houdini.
Chances are, you could get away with making new ROPs for every minor release (i.e. 17.5, 18.0, 18.5, etc) rather than having to worry about the patch releases. But it’s not out of the question that something may change in the interface in a patch release in order to fix a bug. So maybe we have to deal with patch releases too.
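If nothing else, a small helper to diff the built-in Mantra interface against our custom HDA would at least tell us when SideFX have added or renamed parameters. This is a hypothetical sketch; it assumes our HDA is installed and named "custom::ifd::1.0":

import hou

def new_builtin_parm_names(custom_type_name="custom::ifd::1.0"):
    # Parameters that exist on the built-in Mantra ROP but not on our HDA.
    category = hou.ropNodeTypeCategory()
    builtin = hou.nodeType(category, "ifd")
    custom = hou.nodeType(category, custom_type_name)
    builtin_names = {t.name() for t in builtin.parmTemplateGroup().entriesWithoutFolders()}
    custom_names = {t.name() for t in custom.parmTemplateGroup().entriesWithoutFolders()}
    return sorted(builtin_names - custom_names)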
Now scale that up by the number of ROPs you want to support, and that’s a ton of work. It’s not something I’d want to take on as a manual task. Not only would it be laborious, but it would also be horrendously error prone to do it by hand.
Ah, but wait… why not make a build system to do it?
It's not an insignificant amount of work to set this up, but it's definitely preferable to doing it by hand. We could write a script that adds the parameters and use a “make” build system to generate our HDAs for each version of Houdini. That seems like a good solution, but it's not quite the end of the story.
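As a sketch of the idea (the paths and naming here are assumptions), the build script could be run once per Houdini version with hython, writing the HDA to a version-specific location:

# build_hdas.py -- run as:  hython build_hdas.py  (once per Houdini version)
import hou

major, minor, _ = hou.applicationVersion()
hda_path = "otls/houdini%d.%d/custom_ifd.hda" % (major, minor)

subnet = hou.node("/out").createNode("subnet")
subnet.createNode("ifd")
# ...promote the Mantra parameters and add our farm submission parameters
# here, as in the earlier sketch...
subnet.createDigitalAsset(
    name="custom::ifd::1.0",
    hda_file_name=hda_path,
    description="Custom Mantra (built for Houdini %d.%d)" % (major, minor),
)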
Issue 5: Legacy HDA Clutter
What happens when our pipeline code needs to change, and the UI needs to be tweaked to support the changes? This can be a common occurrence, and is usually the result of a new feature being added.
We would have to change our HDA build script to support the new code changes. For old scenes, we'll still need to keep the old HDAs around so that they keep working the same way (a key principle of pipeline maintenance), so the code base will also need to remain backward compatible to support them.
(Tip: in case we ever want to regenerate old HDAs, it can be useful to tag the Git repo to record which version of the build script made them.)
All this quickly becomes less desirable when you start thinking about the complications of managing all the different HDAs across different versions of Houdini. Every time something changes in your custom UI, you need to update and deploy all the HDAs again across all versions of Houdini. After a while, you could end up with a lot of old legacy HDAs that aren't being used any more, but need to be kept around in case you restore an old project at some point. Not only can all these versions clutter up the Asset Bar (see image below), but they can also have a noticeable impact on Houdini startup times.
Okay! So clearly there are a lot of things to be aware of when looking at using HDAs to customise the built-in Houdini nodes.
So let's look at the second option that we mentioned at the beginning.
Option 2:
Modify the UI of each new ROP instance
With this method, each time a new Mantra ROP is created, we get Houdini to automatically run some code that adds parameters to the new node. We would also be able to modify and hide existing Mantra parameters if we wanted. The code operates exactly as if we had made the changes manually via the "Edit Parameter Interface..." menu.
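To give a flavour of what that code might look like, here's a hedged sketch. The parameter names and the "my_pipeline" module are made up for illustration, and we'll look at how to get Houdini to call it in the next part:

import hou

def add_farm_parms(rop):
    # Add a "Farm Submission" folder of spare parameters to a new ROP,
    # just as if we'd built it via "Edit Parameter Interface...".
    group = rop.parmTemplateGroup()
    folder = hou.FolderParmTemplate("farm_folder", "Farm Submission")
    folder.addParmTemplate(
        hou.IntParmTemplate("farm_chunk_size", "Frames Per Task", 1,
                            default_value=(5,)))
    folder.addParmTemplate(
        hou.StringParmTemplate("farm_pool", "Pool", 1,
                               default_value=("general",)))
    folder.addParmTemplate(
        hou.ButtonParmTemplate(
            "farm_submit", "Submit to Farm",
            script_callback="__import__('my_pipeline').submit(kwargs['node'])",
            script_callback_language=hou.scriptLanguage.Python))
    group.append(folder)
    rop.setParmTemplateGroup(group)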
This method means that as your code develops over time, the UI for any new ROPs that are created from that point forward will also change. Any ROPs that were previously created will be untouched.
So first the good news... Doing it this way removes most of the issues we just discussed with regards to HDAs. There's no hassle with legacy HDAs hanging around. No complicated build and deployment systems to manage.
So what are the downsides?
Issue 1: Backward Compatibility
As with the previous option, we still have to make sure our code is backward compatible and can support previously created ROP instances, but that’s rarely a challenge. Most UI changes tend to be additions, so older nodes usually carry on working without any intervention.
If you want a 100% watertight way to guard against any issues, you can add a version tag to your ROP node when you create the interface. That then gives you the ability to detect older versions of the customised UI and provide an upgrade path if you need to.
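A minimal sketch of that idea, using the node's user data (the key name and versioning scheme are assumptions):

UI_VERSION = "1.2"  # bump whenever the customised UI changes

def tag_ui_version(rop):
    # Called at creation time; user data is saved with the hip file.
    rop.setUserData("custom_ui_version", UI_VERSION)

def needs_upgrade(rop):
    return rop.userData("custom_ui_version") != UI_VERSION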
Issue 2: Creation Speed
If your script has a lot of complex changes to make to the node's UI, it can potentially take a few seconds to run, particularly on the first invocation. If it's less than a second or two, it shouldn't be a major issue for your artists. If it's longer than that, it could start to get annoying.
One way of improving node creation time is to use Houdini's preset mechanism, which can potentially speed up node creation by a factor of around five. All you need is a way to automatically generate your presets. There are various ways to do that which avoid the issues we had with generating HDAs, and I'll cover them in a future post.
It's worth saying that I'm not currently using presets for this speed-up. Even though my UI adjustments are fairly complex, I've made sure they run as fast as possible, so at the moment it's not necessary. But I may look into it in the future as a "quality of life" improvement for artists.
Summary
Using this second option, where the user interface is defined solely by the version of the code running at node creation time, is highly convenient. You don’t have to worry about your code having dependencies on other files (HDAs), and if there is ever a problem, you’re able to update the UI as needed.
I'm not saying that you shouldn't go down the route of using custom HDAs if you feel there are particular benefits for you. Just be aware of the issues and make sure you're happy managing the complexity and additional maintenance.
Assuming you’re good with Option 2, we are now presented with a new problem. What’s the best way to hook into Houdini to call our Python code to do it? We'll discuss that in the next part of the article.