BW Tools is now open source!

General / 18 October 2021

Hi all,

I am happy to announce that I joined Epic Games Quixel as a Tech Artist. As such, I am releasing my Designer plugin BWTools open source, along with a new update. I have put considerable effort into these tools, so I am very happy they are now more accessible to folk.

So what's new in this version? The most notable change is a new plugin, BW Framer, designed to help with adjusting your frames, along with some improvements to the existing layout tools. There is also new documentation on a dedicated site, no longer tucked away in my ArtStation blog. Please take a look through it if you plan to use the new version. Aside from this, the whole toolset has been rewritten to make the code cleaner for this open source release. If you're interested in learning Designer's API, check out the code on my GitHub.

Here's a more detailed breakdown of the changes and a link to the download:

https://www.artstation.com/a/848543

General changes

  • Replaced the top toolbar icon with a menu bar to access the tools. It now lives next to the Help menu.
  • Moved all graph related tools to the graph view toolbar, which can be toggled on and off.
  • It is no longer possible to run the tools on graph types currently unsupported by the Designer API. This was previously causing a crash with the latest release.
  • Added more detailed tooltips.

New plugin BW Framer

BW Framer quickly reframes a selection of nodes and provides default settings for the frame. No more manually adjusting frames after moving your nodes.

BW Layout Tool Changes

  • Considerable performance increase when running the layout tool: on average a 66x speedup (102 seconds down to 1.6 seconds on my test graph).
  • The mainline feature (which pushed nodes back) is now toggleable.
  • Added multiple layout styles.
  • Added a snap to grid option.
  • Added option to run BW Straighten Connection after running the layout.
  • Several bug fixes in layout behavior (the default behavior may be a little different in this release).

BW Optimize Changes

  • Removed features which are now natively handled by Designer.
  • Fixed a bug which would incorrectly identify some nodes as duplicates.

BW Straighten Connection Changes

BW PBR Reference Changes

  • Is now opened from the new menu bar.
  • Removed custom color support in order to simplify the code for this release. The feature was barely used.

I want to thank everyone who purchased the tools; your support is very much appreciated. If you purchased the tools in the last month, please DM me.

Flood Fill To Corners - Development - Part 3

General / 11 March 2021

Welcome to part 3 of my development blog! This is a continuation of the last two posts, so make sure to read them if you haven't yet!
Part 1
Part 2


In the last post we discussed using a circular kernel and explored different sampling patterns. We had moved away from using a stride value and instead defined a maximum search radius and scaled the sample points with that. However, textures with a large variation in shape size will have problems as the search radius increases. In fact, we lose smaller shapes entirely, due to the radius becoming larger than the shape itself.

So what if we could scale this radius value per shape? In case you are not aware, the flood fill to bounding box size node actually outputs a normalized size for each shape. Exactly what we need.

The values in the map are normalized to uv space, meaning if we multiply them by the texture size, they will actually give us the bounding box size of each shape in pixels! 

If we were to then scale the search radius by this bounding box size, our kernel would then respect each shape individually.

Of course, the bounding box size is going to be different for the x and y axes, so let's take the larger of the two, max(x, y), as a start. This is also the default on the flood fill to bounding box node.
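To make the scaling concrete, here is a tiny Python sketch (plain Python standing in for the pixel processor math; the function name and parameters are my own invention, not Designer API):

```python
def search_radius_px(user_radius, bbox_size_uv, texture_size):
    """Scale a user-set radius by the per-shape bounding box size.

    bbox_size_uv is the (x, y) pair sampled from the flood fill to
    bounding box size output, normalized to UV space, so multiplying
    by the texture size converts it to pixels.
    """
    bbox_px = max(bbox_size_uv) * texture_size  # max(x, y), in pixels
    return user_radius * bbox_px

# A shape a quarter of a 2048 texture wide, with the radius set to 0.5:
radius = search_radius_px(0.5, (0.25, 0.125), 2048)  # 256.0 pixels
```

The same multiply happens per pixel in the pixel processor, so every shape gets its own effective radius.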

Left is without dynamic scaling, right is with. 

A big improvement already! However, a lot of the shapes are still becoming very small or disappearing again. This is due to the bounding box not accurately representing the shape in question.

Diagonal thin shapes are the worst offenders. Notice how poorly the box fits around the shape.

The flood fill to bounding box node has two other outputs, the x and y values respectively. Maybe we can get a more accurate bounding box by averaging the two, rather than taking the max. The image below shows a comparison between max(x, y) and (x + y) / 2, with green denoting how much we have improved.

Another good step forward but we are not quite there yet. Some shapes are still disappearing. After consulting with my lead again, he suggested I look into calculating the largest circle that fits inside the shape. In fact, the distance node can be used to generate this value. If we set a value of 256 on the distance node, it produces a full 0-1 gradient the width of a texture, regardless of the size. In the image below, we have a single line of pixels on the left and a distance node set to 256.

This is incredibly useful. If we were to distance our shapes, the brightest pixel in each shape would actually represent the largest radius that fits in that shape, in UV space. (I have made the image brighter so it is easier to see.)

If we know the largest radius in a shape, that is the same as saying we know the largest fitting circle.

That looks like a pretty good maximum search radius for our sample points to me! But, how do we get access to that value? It's not like we can simply plug the distance node into our pixel processor as the search radius is calculated per pixel. In the image below, our radius value would be wrong if we were currently calculating the red pixel.

What we need is that radius value dilated within each shape. Perhaps we can use the flood fill to grayscale node? Again, I have brightened the image to make it easier to see, but this is the flood fill to grayscale blended with the distance on max(lighten).

This was close, but you can see the flood fill to grayscale is not finding the brightest pixel and therefore dilating the wrong value. This is because the brightest pixel could be in any arbitrary position in the shape, and most certainly not the one the flood fill to grayscale is choosing.

Perhaps we can isolate the brightest pixel and dilate from that position instead? I thought the easiest way might be to find the local maximum value, another neat idea from my lead (poor guy has to listen to all my Designer problems).

We won't go through the details of local maximum here because it didn't work out very well. But at a high level, it works similarly to the blur we implemented, except rather than averaging the kernel, we check to see if any of the neighbors are brighter than the center pixel. This means it will only output a white pixel at the very peaks.

Regardless, as you can see in the image below, some cells are missing and I was not able to consistently produce a pixel at the peak of each shape.

I think this was largely due to the shapes potentially having more than one pixel with the brightest value, like a rectangle for example.

So, what if we just create our own dilation node instead? We could take a 3x3 kernel again and simply output whatever is the brightest value, checking the shape IDs as usual.

Do this enough times and we will have dilated the brightest value! This is with 50 something passes.

But how many passes is enough to fully dilate an image? Well again, this is resolution dependent, and different for each shape. Since Designer doesn't support loops, our only option would be to assume the worst case and dilate the same number of times as the texture size.

Honestly, after a couple of hundred passes, the performance was so bad I just stopped.

Ok, so this won't work. It was Ole Groenbaek I was complaining to this time, and together we went through the flood fill node looking for inspiration. In there we found a neat trick for cheaply dilating the value we need.

Say we have a pixel whose value we want to calculate. Rather than looking at the neighboring pixels, we instead look at a pixel exactly half the texture size away. If that pixel is brighter, we take its value; otherwise we ignore it.

We then repeat this process, except this time we look half way between the first two.

And repeat, each time looking exactly halfway between the last, until we have filled out a full line.

What we are doing here is actually stepping down the texture resolution each time, meaning we only need a maximum of 12 passes to fill a full 4k image. If we do this process on our image, checking the cell ID as usual, we cut down the number of passes needed tremendously. In this example, there are 11 passes (to fill a 2k image) and instead of doing it up-down, we do it for each diagonal axis (I found this worked better with the typical cell shapes).
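The halving trick is easiest to see in one dimension. Below is a minimal Python sketch of the idea (a list standing in for a pixel row; in the real node this is all pixel processor math): each pass compares a pixel against one exactly `step` pixels away, with the usual same-shape check, and halves the step each time, so a row of n pixels fully dilates in log2(n) passes instead of n.

```python
def dilate_max(values, shape_ids):
    """Dilate the per-shape maximum using halving offsets.

    A row of n pixels needs only log2(n) passes: the first pass looks
    n/2 pixels away, the next n/4, and so on down to 1.
    """
    n = len(values)
    out = list(values)
    step = n // 2
    while step >= 1:
        nxt = list(out)
        for i in range(n):
            for j in (i - step, i + step):
                # only take values from pixels inside the same shape
                if 0 <= j < n and shape_ids[j] == shape_ids[i]:
                    nxt[i] = max(nxt[i], out[j])
        out = nxt
        step //= 2
    return out

# Two shapes in one row; each shape's brightest value fills that shape
print(dilate_max([9, 0, 0, 0, 0, 0, 0, 5],
                 [0, 0, 0, 0, 1, 1, 1, 1]))  # [9, 9, 9, 9, 5, 5, 5, 5]
```

Eight pixels take only three passes (steps of 4, 2, then 1), and the shape ID check keeps each shape's maximum from leaking into its neighbor.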

You may notice that some cells still have a gradient at the corners, however. For this reason, I repeat the process a second time, just to be sure all values are fully dilated.

What we have actually done here is half implement a version of the jump flooding algorithm, something Igor Elovikov pointed out to me after I released the node. Jump flooding is exactly the process described above, but rather than looking at 1 pixel at a time, we look at 8.

This offers all the same benefits, is more accurate, and is cheaper, since we end up doing even fewer passes in total. My reason for not using jump flooding is simply that I didn't know about it, nor did it occur to me that we could do the same process with 8 neighbors at a time.

I wanted to draw attention to this though, as it is absolutely the way to go, should you implement anything like this in the future.

With this, we have our largest fitting circle radius which we can scale our kernel by. 

Even with my best efforts to reduce performance cost, the node was still ranging from 20ms (on the lowest settings) up to 70ms (on highest) on my GTX 980 card, so I didn't want to have this feature on by default.

In fact, it does such a good job of preserving the shape, that some just looked strange.

Perhaps we should add a slider to let you transition between bounding box and best circle fit scaling. As it turns out, this visually feels like you're pruning smaller stones or not.

As a final optimization, when the slider is 0, there is a switch to disable the best circle fit calculation.

With these changes, our kernel search radius now depends on an input map.

This means the kernel is actually calculated per pixel, which in turn means we can manipulate it to produce some very interesting shapes. What you see below is the search radius being scaled by an additional variation input map (Which is animated for the gif).

This was another feature I had overlooked until Stan Brown suggested it to me! A good thing too, since it was trivial to implement, taking only an additional multiply, and gives the node much more flexibility.


That concludes this series about the development of the node. I hope it was interesting to read and you are left with a good understanding of how the node works. As I mentioned in the first blog, I am sure there are better ways to create something like this. But regardless, it was a very enjoyable experience for me and so I wanted to share that with the community.

Thanks for reading everyone!

Flood Fill To Corners - Development - Part 2

General / 09 March 2021

In the previous post, we had noticed how the corners were not very smooth and somewhat resembled the shape of the kernel. Maybe sampling in a circle rather than a grid might result in better rounding of the corners. If you haven't read part 1, I would recommend checking that out first as this is a direct continuation of that!

Circular Kernels

Another issue from our previous iteration, aside from the pinching, was how unintuitive the stride parameter was. It is quite difficult to mentally map what a given stride value means when it changes depending on the kernel size and the output size of the texture. So perhaps we could define a maximum search radius relative to some constant value, like UV space, and then linearly scale our samples inside it. A circular star shape of 8 sample points in concentric rings seems like a sensible place to start.

This allows us to adjust the kernel size independently of the sample count. In other words, we are free to add or remove more of these rings should we want more resolution without changing the overall size of the kernel.
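Generating such a kernel is straightforward. Here is a rough Python sketch of the concentric-ring pattern (my own reconstruction, not the node's actual code):

```python
import math

def ring_samples(rings, points_per_ring=8, max_radius=1.0):
    """Concentric rings of sample points inside a fixed max radius.

    Adding rings increases resolution without changing the overall
    kernel size, since every ring is scaled inside max_radius.
    """
    samples = []
    for r in range(1, rings + 1):
        radius = max_radius * r / rings
        for p in range(points_per_ring):
            angle = 2.0 * math.pi * p / points_per_ring
            samples.append((radius * math.cos(angle),
                            radius * math.sin(angle)))
    return samples
```

Three rings give 24 sample offsets, all inside the unit radius; bumping `rings` adds resolution without growing the kernel.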

This confirmed our suspicion and resulted in much smoother rounding. The image below shows the error from a perfect circle for the grid kernel and the circular kernel; more white means more error.

When viewing the sampling pattern on a graph, another problem becomes very apparent: just how sparse the sampling becomes the further out we get.

Perhaps we should pick random points to sample within this radius instead? I have had trouble in the past with the random nodes inside Designer not being very random. So instead of generating random sample points with those nodes, I precomputed the sample positions and brought them in as float2 values instead. Here is an example of what that could look like.

This, as it turns out, had a few benefits I did not expect, the first of which was performance. Using float2 nodes is cheaper than calculating new coordinates at runtime. It also decoupled the node from the random seed slider, meaning the node's output remains the same regardless of the random seed value. If you have followed any of my previous work, you might know I am a big fan of this!

Perhaps the most useful benefit was having the sample positions available outside of Designer. Now we could paste them into a tool like Desmos to visualize exactly what the kernel looks like (This is how I create these graph images).

This was particularly helpful in debugging, as it is quite difficult to see what has gone wrong when creating these pixel processors through code.

Anyway, the results were quite promising!

However, this introduced another rather awkward problem: the random points are... well, random. Every time we iterate and rebuild the node, the results are different. Some results are very wonky or lopsided. Indeed, it was a challenge just trying to generate this example image due to the randomness!

What we need is a more uniformly distributed pattern, something like a Poisson distribution. To be exact, it's Poisson disk sampling we need, an algorithm that generates randomized points which are guaranteed to remain a minimum distance apart. The details of this algorithm are out of scope for this blog, but here is a nice breakdown should you want some further reading. Regardless, this is an example of the kernel with a Poisson distribution.

Now this is looking much better!

Since we are in Python, generating these points was very easy. In fact, I found an implementation on GitHub already, so there was no need to even write anything myself!

Here is a comparison between different builds of random sampling vs Poisson sampling. While the Poisson sampling is still random, the results are much more accurate and less variable between builds.
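For reference, the simplest possible form of Poisson disk sampling is plain dart throwing: keep proposing random points and reject any that land too close to an accepted one. This is not the implementation I used (real generators like Bridson's algorithm are far faster), but the sketch shows the guarantee that matters, a minimum distance between every pair of samples:

```python
import math
import random

def poisson_disk(n, min_dist, seed=7, max_tries=20000):
    """Naive dart-throwing Poisson disk sampling inside the unit circle."""
    rng = random.Random(seed)
    points = []
    tries = 0
    while len(points) < n and tries < max_tries:
        tries += 1
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y > 1.0:
            continue  # outside the circular kernel
        # reject any dart closer than min_dist to an accepted point
        if all(math.hypot(x - px, y - py) >= min_dist for px, py in points):
            points.append((x, y))
    return points

pts = poisson_disk(50, 0.2)
```

Because the seed is fixed, this also plays nicely with the precomputed float2 approach: the same points come out every time.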

Up until this point, I had largely been testing on a 2k image with a fairly dense distribution of shapes. We still have a lot of artifacts when dealing with larger shapes though. I did a lot of experimenting with smoothing these results (which we talk about below), but never found an elegant solution to this problem. As far as I know, there just isn't a way around the issue with this sampling method. So I decided the best approach was to expose a series of sample counts to choose from, ranging from 100, suited to smaller shapes, up to 1000 to handle large images.

With that said, I did receive great feedback post release with potential solutions involving jittering the kernel. This proved very fruitful and did indeed solve the resolution issue; however, I ultimately decided not to release an update, as I felt the final results were a step back from what I had before. Perhaps this is a topic for a future post, should there be enough interest.

Anyway, I showed my lead this progress and he suggested trying a phyllotaxis sampling pattern. This is an approximation of a pattern common in nature, such as in sunflower seeds. The argument being: if we have to have artifacts from low sample counts, at least phyllotaxis is a visually pleasing artifact.
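Phyllotaxis is also trivial to generate: place point i at the golden angle times i, with the radius growing as the square root of i so every sample covers roughly equal area. A small sketch (my own, using the same unit-radius convention as before):

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees

def phyllotaxis(n, max_radius=1.0):
    """Deterministic sunflower-seed sample pattern.

    No randomness at all, so rebuilding the node always produces the
    exact same kernel.
    """
    pts = []
    for i in range(n):
        r = max_radius * math.sqrt((i + 0.5) / n)  # equal area per sample
        theta = i * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Unlike the Poisson points, two calls with the same sample count always return identical positions, which is exactly the build-to-build stability we wanted.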

The results are cleaner, but only barely. The biggest benefit, however, is that the sample points are not random, so every time we build the node, the pattern remains the same.

Phyllotaxis sampling it is then! And this is what the final release of the node is using.

Smoothing and Blurs

I mentioned a moment ago that I attempted to smooth out all these artifacts, as an alternative to simply increasing the sample count. So what are these smoothing techniques? Well, I tried all the typical nodes like blur HQ, non uniform blur, and the suite of warps, but ultimately they all have issues with blurring across shape boundaries or losing the core shape. What we need is a custom blur that respects the individual shapes.

In fact, the example blur we described right at the start of this blog series is the implementation I went with. That is, looking at the 8 neighbors and averaging the result, but with a check first to see if the neighboring pixel is in the same shape. Exactly the same method as we used to calculate the corners. So we can try stacking a few of these, and by a few, I mean a lot. It really did take this many to even approach a smooth result.

This of course ballooned performance again.

Instead, let us try increasing the stride so we sample a little further out than the immediate neighbor. As it turns out, a stride of 4 was a very good compromise, and we were able to cut this 400 pass monster down by a factor of 100. Then, to eliminate any artifacts left over from the larger stride value, a final immediate neighbor pass is done.

There is an interesting question with these blurs which might not be immediately obvious. If we sample a pixel outside of the shape, what value should we use?

My first thought was to simply pass a value of 0, but as you can see from the gif below, that ends up completely losing the original shape. Perhaps using the same value as the one we are calculating makes sense, the center pixel in the kernel? Or just ignoring the value entirely and only averaging those that are actually within the shape? These two ended up looking even worse!

The magic number turned out to be the center pixel * 0.9, which makes sense if you think about it. A slightly darker value is an approximation of the next value in the gradient, almost as though the gradient just continued to extend past the shape.
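Putting the pieces together, one pass of the shape-aware blur might look like this in plain Python (a reconstruction of the idea, not the actual pixel processor network; `falloff=0.9` is the magic number from above):

```python
def shape_blur(img, ids, stride=1, falloff=0.9):
    """One pass of a shape-aware 3x3 blur.

    Neighbors outside the shape contribute center * falloff, which
    approximates the gradient continuing past the shape edge.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            center = img[y][x]
            total, count = 0.0, 0
            for dy in (-stride, 0, stride):
                for dx in (-stride, 0, stride):
                    ny, nx = y + dy, x + dx
                    inside = (0 <= ny < h and 0 <= nx < w
                              and ids[ny][nx] == ids[y][x])
                    total += img[ny][nx] if inside else center * falloff
                    count += 1
            out[y][x] = total / count
    return out
```

Stacking passes with `stride=4` and finishing with one `stride=1` pass reproduces the compromise described above.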

So, this is the majority of the node complete now. However, we still have the issue of varying shape sizes to solve. In the next post, we will discuss how we scale the kernel radius per shape and prevent smaller shapes disappearing altogether.


See you there!

Part 3

Flood Fill To Corners - Development - Part 1

General / 07 March 2021

I will be starting a blog series on the development of the flood fill to corner node I recently released.

I learnt an awful lot making this, and a number of folk reached out asking how it works, so this blog will hopefully address that. This won't be a step by step tutorial, and I plan to show very little in terms of actual implementation, since doing so would balloon this blog into a giant mess. Instead, it will go over the high level ideas and the problems I encountered, rather than all the specific details.

This series is aimed at folk looking to learn more about the pixel processor. Those of you already well versed in mathematics and image processing will likely cringe at my shoddy implementation and unoptimized workflow, but for the rest of us, I hope it makes for an interesting read!

I plan to write this more or less in chronological order so you can get a feel for how it developed over time, but I will reshuffle some stuff in order to maintain a better flow.

How To Identify Edges

If you're new to image processing, a very common way of reasoning about an image is through a kernel matrix. A kernel is like a grid which you place over each pixel in an image to do calculations in.

Below we have an image with a shape in it. A simple kernel might look like this, where the center cell is the pixel we are trying to calculate and the remaining cells are the 8 immediate neighboring pixels.

For each pixel in the kernel, we would run some math operations to calculate a final value for the center pixel. We could, for example, add up all the pixels in the kernel and divide by the kernel size to give us an average value of all the pixels in this 3x3 grid. Do that for all pixels in the image and we have implemented a simple blur.
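That averaging blur, written out in plain Python (a list-of-lists image standing in for the texture; real pixel processors do this for every pixel in parallel on the GPU):

```python
def box_blur(img):
    """3x3 mean blur: average each pixel with its 8 neighbors.

    Edge pixels simply average over the neighbors that exist.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A single bright pixel gets spread across its 3x3 neighborhood
blurred = box_blur([[0, 0, 0], [0, 9, 0], [0, 0, 0]])
```

Repeating this pass is what "stacking" blurs means later in the series.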

You can visually think of the kernel looping over each pixel in our image, doing whatever calculation we want. Of course, our kernel could be any size, and not necessarily looking at immediate neighbors either.

In reality, all pixels are calculated at the same time, rather than one at a time like this, but I think the illustration helps to visualize what is happening.

So using this, how could we reason about where corners are in a given shape? One approach, and the one I took, is to count how many neighboring pixels land inside the same shape. Do this for all pixels and now you know where the corners are in your image, pretty straightforward!

This is a very simple example, but with a bigger kernel and a larger image, the result gives us a gradient describing to what extent each pixel is a corner. With this gradient, we can apply various thresholds, levels, and tweaks to produce the outputs we want.

This is more or less how the whole node works. There are a ton of implementation details to solve but counting pixels inside the same shape is the high level approach. 

Kernel Size and Python

Most image processing techniques require you to repeat a calculation a set number of times or across a given range of pixels. In programming, this is achieved through loops. However, the pixel processor has one major limitation: it doesn't support loops.

So for this reason, I create everything in the pixel processors with Python and the API. We won't be going through that process here, since it's way out of scope for this blog. But to give you an idea, I wrote a plugin with a set of functions for creating and connecting nodes.

This way, I can easily iterate and use all of the Python toolkit, including loops. On the Designer side, the plugin simply deletes everything inside the selected pixel processor node and rebuilds the node network for me.

Recall, the idea was to count all the pixels inside the kernel that reside in the same shape. However, what would happen if we had two different shapes close together? We may inadvertently sample pixels from the other shape and mess up our count. We need a way to identify each shape independently.

The flood fill node has a 'Display advanced Parameters and Output' button, which you can use along with the flood fill to index node to output a unique number per shape.

Then for each pixel in our kernel, we can check if the shape indices match. If they do, output a value of 1 otherwise output 0. Add that all up, divide by the kernel size and there is our value. This is what that could look like inside designer.
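In plain Python, the whole corner measure boils down to a few lines (again, a sketch of the idea, not the generated node network):

```python
def corner_gradient(ids, radius=1):
    """Fraction of kernel pixels sharing the center pixel's shape ID.

    Interior pixels score near 1, while edges and corners score lower,
    giving a gradient you can threshold into a corner mask.
    """
    h, w = len(ids), len(ids[0])
    out = [[0.0] * w for _ in range(h)]
    size = (2 * radius + 1) ** 2
    for y in range(h):
        for x in range(w):
            same = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and ids[ny][nx] == ids[y][x]:
                        same += 1
            out[y][x] = same / size
    return out
```

For a 3x3 region of one shape, the center scores 9/9 while a corner scores only 4/9, which is exactly the edge-versus-interior gradient we are after.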

We also alluded to the need for a big kernel in order to produce a sufficient gradient, so let's consider that for a moment. For a pixel to describe how close it is to an edge, that edge pixel needs to have been included in the calculation.

That's potentially a very big kernel. In the image above, the shape covers nearly all of the texture, and if we wanted the gradient to reach close to the middle of the shape, the kernel would need to be at least half the texture size. If this image was 2k, the kernel size could potentially be 1024x1024 or more. That's 4,398,046,511,104 total samples, by the way (1024^2 pixels in the kernel per pixel in the image), which is obviously ridiculous.
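The arithmetic behind that number, for the curious:

```python
kernel = 1024 ** 2   # samples per pixel with a 1024x1024 kernel
image = 2048 ** 2    # pixels in a 2k image
total = kernel * image
print(total)  # 4398046511104
```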

However, at this point I just wanted to test if the idea would work. So I took a typical use case and tried a few different kernel sizes.

This somewhat confirmed the idea, but there are some glaring issues, the most obvious being performance. Even with a 65x65 kernel, this was 500+ ms on my machine and took several minutes to create the node.

Perhaps we do not need so many samples to produce a good enough result. We could try sampling in a star shape instead, to see what difference that would make. This would reduce our sample count by over half.

The results are comparable and the performance was hugely improved. Even the mask output is perhaps a little nicer.

However, with the large gaps now in the kernel, there are some slight artifacts in the gradient, something we will talk more about in another post.

To address the issue of needing different kernel sizes for different shape sizes, my first thought was to define a stride between each neighbor, so we could adjust the kernel to best fit the texture at hand.

This certainly worked, but very low sample counts produced prominent banding as the kernel size increased.

Furthermore, textures featuring large variation in shape size are handled poorly. Smaller shapes are quickly lost as the kernel grows to cover the entire shape. Another topic we will talk a lot about in the following posts.

You may have noticed the corners are not very round and somewhat reflect the shape of the kernel. Notice here how squared off the corners are and how the star kernel has this slight pinch to it?

Well, this is what the next part will be about. Check it out here:
Part 2

Lets Talk Color Pickers - Part 3

General / 16 December 2020

Well, it's been over 2 years (!) since I last posted about color pickers. If you didn't see it, I developed a node that provides a procedural way of producing color variation from a single color picker. Take a look here. I also gave a talk on the specific inner workings of the node, and Ubisoft was kind enough to let me release the video to the public. So if you're interested in how the node was put together, watch it here.

Since then, I have gotten some great feedback and it has seen some extensive real world testing. Finally, I found some free time to address the points raised.

So what has changed? As a brief overview the node now has:

> Supports harmonic color schemes <
> Sample counts and seed values have been decoupled from one another <
> Seed values now range from 0 - 50 <
> Added hue variation amount slider <

The node has been split into 4 dedicated nodes: one to handle value variation, another hue, the original node (named basic), and a new one which supports harmonic color schemes.

The harmonic version lets you choose between 5 different color harmony schemes and will generate variation in those colors.

Of course, the above is an extreme example to highlight the different modes, but you can keep the colors as subtle as you need.

All the old settings are still available. You can choose to preserve saturation and add random hue variation on top, as before. But sample count and seed values have been decoupled for all nodes, meaning you will no longer lose the value variation you were happy with when changing the hue's seed value, and vice versa.

Seed values are now integers ranging from 0 - 50, which hopefully brings the slider more in line with what it should be doing. Previously, there were a total of 100 possible increment values, which made the slider feel too sensitive and difficult to cycle through. There should still be plenty of possible seeds to pick from, but the slider won't be so difficult to use anymore.

In addition to this, I have documented the node with examples to show precisely what it does and how to make the most of it. Be sure to check out the update on my store!

  

Lets Do A Mentorship!

General / 14 September 2020

Hey all!

I have joined forces with Jeremy and crew to offer mentorships in all things material art! Check it out on the new and swish https://www.dinustyempire.com/mentorships/ben-wilson website =D

Over the last few years, I've dipped into making breakdowns, video tutorials and providing feedback to folk, and while I really enjoy this, it's hard to provide anything meaningful through these short interactions. So my hope with these mentorships is to provide dedicated one on one time with artists, to really dive into their work and spend quality time with them.

The goal is to focus on material art and the Substance tools, generally aimed at students or junior level artists (although this is not a requirement), and provide feedback, guidance and support. The artists I envision getting the most out of the mentorship are those looking to improve their portfolio or really solidify their understanding of Substance; whether that is simple feedback or portfolio reviews, through to developing a clean work methodology and even programming with the API.

We know sometimes people just want a session or two of dedicated feedback rather than a full mentorship, so we decided to provide an additional offering: this is what the crash course is for.


Of course there are many fantastic artists offering mentorships now, so I hope these offerings don't get lost in the wind. But either way, I want to open up my time as another option for artists to improve.


I hope to work with you soon!

GDD - Insert Topic #4

General / 27 August 2020

Hey all!

I returned to chat with Alex Beddows about materials and the industry, this time along with the amazing Josh Lynch and James Lucas. If you have an hour, check it out!


bwTools v 1.3 release - New plugin - PBR Color Chart

General / 30 April 2020

Hey folks!

I have released version 1.3 of bwTools and it includes a new plugin! A pbr color chart. People who have already purchased the tools can get the new update for free on my store https://www.artstation.com/marketplace/p/ewNd/bwtools-substance-designer-plugin 

This update is only compatible with Designer version 2020.1.1+ !!!

This plugin is a convenient PBR color chart built directly into Designer. It provides various PBR values, based on DONTNOD's chart, to quickly and easily reference without needing to have a downloaded color chart open in Windows. https://seblagarde.wordpress.com/2014/04/14/dontnod-physically-based-rendering-chart-for-unreal-engine-4/

It is always displayed on top of the Designer viewport, making it easy to color pick from.

Swatches are selectable and draggable, and hide the UI to allow for easy comparison with your texture. Simply click and drag over the 2d view to compare values directly.

Up to 10 custom colors are supported. You can edit the colors and names as needed.

Full documentation can be found here: https://www.artstation.com/benwilson/blog/lAPv/bwtools-documentation

Thanks for the support and stay safe!

bwTools - Documentation

General / 24 March 2020


Hi all!

Today is the first release of a group of plugins I have been working on to help organize and lay out our Substance Designer graph networks. In conjunction with this comes the release of bwTools, a Substance Designer plugin consisting of the tools I have been working on, which will provide a platform for me to share any future work. As a bonus, my previous optimize graph plugin is bundled with it. This post will serve as documentation; for an overview, see the links below.

You can find an overview video here:

https://www.youtube.com/watch?v=4Ckh0mgwYcA 

store page:

https://www.artstation.com/benwilson/store/ewNd/bwtools-substance-designer-plugin 

Please contact me for support through artstation or email!


Release Notes:

Version 1.3

- New plugin added: PBR Color Chart

  • A convenient PBR color chart built directly into Designer
  • Based on DONTNOD's Unreal Engine 4 PBR values
  • The color chart remains on top of Designer to make color picking easy
  • Color swatches are selectable and draggable, and hide the UI to allow easy comparison with your texture
  • Supports up to 10 custom swatches

Version 1.2

- Fixed bug where some people were not able to load the plugin

Version 1.1

- Updated the plugin for Substance Designer 2020.1.1 onwards. This update is not backwards compatible and will not work with Designer 2019!

- Fixed tooltips on toolbar icons


================================================

Installation

If you have a previous version installed, you need to delete the old bwTools folder inside your user plugins directory, C:\Users\<username>\Documents\Allegorithmic\Substance Designer\python\sduserplugins, and restart Designer.

Open Substance Designer and navigate to Tools > Plugin Manager...

Click Install at the bottom

Navigate to your bwTools.sdplugin file and click Open


================================================

================================================

bwTools

bwTools consists of two parts: a toolbar at the top of the Substance Designer application, and the various plugins which make up the tools.

All settings for individual plugins currently installed can be found in the settings window here.


================================================

================================================

Layout Graph

This tool is designed to help speed up the laborious task of neatly arranging your nodes. It is best used as a helper tool when laying out your graph in your personal style, and to make the most of it, it helps to understand how it wants to lay out your node selection.

Node placement behavior

Nodes are always placed behind their outputs and always in line with the output which produces the longest straight line.

Nodes have a concept of height, which means they will correctly stack on each other.

However, be aware that due to a limitation with the Substance Designer API, the plugin assumes all inputs/outputs are visible. In the example below, the top two nodes have many hidden inputs/outputs, leaving artificial gaps.
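To make the stacking and height behavior above concrete, here is a minimal plain-Python sketch (not the plugin's actual code — the real tool uses the Designer API, and the size constants here are invented for illustration). A node's height is estimated from its slot count, since the API only reports totals, and a column of inputs is centered on its shared output:

```python
# Assumed sizing constants, purely illustrative.
NODE_BASE_HEIGHT = 96.0   # base height of a node
SLOT_SPACING = 21.3       # extra height per slot beyond the base
SPACER = 32.0             # vertical gap between stacked nodes

def estimated_height(input_count: int, output_count: int) -> float:
    """Estimate a node's height, assuming every slot is visible
    (the API limitation described above)."""
    slots = max(input_count, output_count, 1)
    return NODE_BASE_HEIGHT + SLOT_SPACING * max(0, slots - 4)

def stack_inputs(output_y: float, heights: list[float], spacer: float = SPACER) -> list[float]:
    """Return the y center of each input node, stacked without
    overlapping and centered on their shared output's y position."""
    total = sum(heights) + spacer * (len(heights) - 1)
    y = output_y - total / 2.0
    centers = []
    for h in heights:
        centers.append(y + h / 2.0)
        y += h + spacer
    return centers
```

Two 96-unit-tall inputs of a node at y = 0 would land at y = -64 and y = 64, leaving a 32-unit gap between them.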


Nodes which start a chain are called root nodes. These are left untouched, and all nodes feeding into them will be arranged accordingly. Therefore, it is important to provide enough space for the network to expand if there are multiple root nodes.


The Mainline Concept

The tool wants to find a mainline through your selection and provide space for other parts of the network to feed into it. It typically assumes the network with the longest chain is the mainline.

If it considers the branches equally important, it will simply place them evenly.

This makes it possible to influence the layout based on your selection.

Other nodes are inserted into the mainline relative to their input position.

However, the tool will generally favor the middle node chain when possible.
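The "longest chain wins" heuristic can be sketched in a few lines of plain Python (again illustrative, not the plugin's code — the graph here is modeled as a simple `{node: [input nodes]}` dict, and ties fall to input order rather than the middle-chain preference mentioned above):

```python
def chain_depth(graph, node, _cache=None):
    """Length of the longest input chain starting at `node`."""
    if _cache is None:
        _cache = {}
    if node not in _cache:
        inputs = graph.get(node, [])
        _cache[node] = 1 + max((chain_depth(graph, i, _cache) for i in inputs), default=0)
    return _cache[node]

def mainline(graph, output_node):
    """Walk back from the output, always following the deepest input chain."""
    path = [output_node]
    node = output_node
    while graph.get(node):
        node = max(graph[node], key=lambda n: chain_depth(graph, n))
        path.append(node)
    return path
```

For a blend node with one bare input and one two-node chain, the two-node chain becomes the mainline, which matches the behavior described above.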


The Network Cluster Concept

Groups of related nodes form network clusters, and these are what feed into the mainline. They are also given space and positioned such that they never overlap with the mainline. The darker frames below are network clusters.


Looping Networks

Nodes that loop back into the network at various points form looping networks. These are often very complicated and can sometimes stretch the entire length of the graph. 


While the tool will successfully handle these types of networks, it is often better to lay out your graph more contextually, using the tool as an aid to speed up the process. Taking the network above, you can run the tool on smaller network clusters instead (shown in the darker frames) and position them more contextually, to fit your personal style.


Settings

Hotkey

You can define your hotkey here. Requires a Substance Designer restart

Node Width 

Sets the width of each node; generally you can leave this untouched.

Spacer

Sets the distance between each node

Selection Count Warning

The plugin's processing time grows exponentially with selection size, meaning large selections can take a very long time to compute. A warning is displayed before running the plugin if the number of selected nodes exceeds this threshold.

Consider Split Nodes For Mainline

There are two styles for laying out the network. If Consider Split Nodes For Mainline is on, the algorithm will reason about split nodes first, generally preferring to use them as mainlines. This results in visually larger encasing loops.

If Consider Split Nodes For Mainline is off, split nodes are given the same priority as everything else. The visual result here is more defined node clusters and grouping. Unless there are a lot of complicated looping networks in your selection, there may be no difference between the settings.


================================================

================================================

Straighten Connection

This tool creates dot nodes from a given node to each of its connected outputs, which helps reduce visual clutter and improve readability.

Dot nodes will be chained together.

It works on multiple selections too, which is handy for cleaning up the entire graph at the end of a working session.

There is also a tool to remove dot nodes connected to your selected node, found in the toolbar.
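The placement and chaining logic can be sketched with plain data (the real tool creates Designer dot/passthrough nodes through the API; the names and the `distance_from_input` default below are made up for illustration). One dot node is planned per connected output, each one offset from its target and chained to the previous dot so the wire stays straight:

```python
def plan_dot_chain(target_xs, distance_from_input=64.0):
    """Plan one dot node per connected output (sorted by x position),
    placed `distance_from_input` to the left of its target and chained
    to the previous dot node (the first dot connects to the source)."""
    dots = []
    prev = "source"
    for i, tx in enumerate(sorted(target_xs)):
        dot = {"id": f"dot{i}", "x": tx - distance_from_input, "from": prev}
        dots.append(dot)
        prev = dot["id"]  # chain the next dot onto this one
    return dots
```

So a node feeding two outputs at x = 200 and x = 300 gets two chained dots at x = 136 and x = 236, mirroring the chained behavior described above.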

Settings

Straighten Selected Hotkey

You can define your hotkey here. Requires a Substance Designer restart

Remove Connected Dot Nodes Hotkey

You can define your hotkey here. Requires a Substance Designer restart

Distance From Input

Defines how far from the input of each node to place the dot node


================================================

================================================

PBR Color Chart

This is a convenient PBR color chart built directly into Designer. It provides various PBR values, based on the DONTNOD chart, for quick and easy reference without needing to keep a color chart open in Windows. https://seblagarde.wordpress.com/2014/04/14/dontnod-physically-based-rendering-chart-for-unreal-engine-4/


It is always displayed on top of the Designer viewport, making it easy to color pick from.


Swatches are selectable and draggable, and dragging hides the UI to allow for easy comparison with your texture. Simply click and drag over the 2D view to compare values directly.


Up to 10 custom colors are supported. You can edit the colors and names as needed.



================================================

================================================

Optimize Graph

This tool can be used to optimize various parts of your graph.

Remove Duplicate

Composite Nodes - Evaluate Input Chain

If this is on, the plugin will also identify chains of nodes which are identical and remove them.

The plugin has some rules for what it regards as a duplicate:

Settings must be identical for a node to be considered a duplicate.

A node with an exposed parameter is never considered a duplicate.
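Those two rules amount to a dedup key over a node's definition and settings. A minimal sketch, assuming a simple dict model of a node (the field names `definition`, `settings`, and `exposed` are invented, not the plugin's or the Designer API's):

```python
def duplicate_key(node):
    """Key two nodes will share only if they are duplicates under the
    rules above; None means the node is never eligible for removal."""
    if node.get("exposed"):
        return None  # an exposed parameter opts the node out entirely
    return (node["definition"], tuple(sorted(node["settings"].items())))

def find_duplicates(nodes):
    """Return the names of nodes that duplicate an earlier node."""
    seen, removable = {}, []
    for name, node in nodes.items():
        key = duplicate_key(node)
        if key is None:
            continue
        if key in seen:
            removable.append(name)  # keep the first occurrence
        else:
            seen[key] = name
    return removable
```

Two blur nodes with identical settings collapse to one; the same node with a different intensity, or with an exposed parameter, is left alone.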

Uniform Color Nodes - Output Size

Reduces all selected uniform color nodes to 16x16, the optimal output size for such nodes in Designer. Connected outputs are automatically set to Relative To Parent.


Blend Nodes - Ignore Alpha

Any selected blend node which only has grayscale inputs will have its alpha blending mode set to Ignore Alpha.

Note: This setting requires a recompute of the graph, so is disabled by default. 


Graph Layout Plugin - Part 3

General / 27 February 2020

Hey folks!


It's been a while once again since my last post. Life got in the way again, quite literally this time, as my son was born just after New Year's! He's super cute and very demanding! I did have the occasional spot of time to continue working on this plugin though, so here's the update.


I took a step back a little and developed a tool to manage my plugins (spoiler!). I often wanted to toggle settings with a nice little UI when troubleshooting, so I bit the bullet and built this thing. The nice part now is that the UI dynamically builds itself based on a .json file. That way I can easily add modules or settings without having to rewrite any UI. From here on, it's named bwTools!
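The JSON-driven idea looks roughly like this (a hedged sketch only — the keys, setting names, and widget types below are invented, and the real plugin turns these into Qt widgets):

```python
import json

# Hypothetical settings description; each leaf declares its widget type
# and current value, so adding a module is pure data, not UI code.
SETTINGS_JSON = """
{
  "Layout Graph": {
    "Hotkey": {"widget": "string", "value": "C"},
    "Node Width": {"widget": "int", "value": 96}
  },
  "Straighten Connection": {
    "Distance From Input": {"widget": "int", "value": 64}
  }
}
"""

def build_widgets(settings_text):
    """Flatten the JSON into (module, setting, widget_type) tuples that
    generic UI code can walk to instantiate real widgets."""
    data = json.loads(settings_text)
    return [(module, name, spec["widget"])
            for module, settings in data.items()
            for name, spec in settings.items()]
```

The payoff is exactly what the post describes: dropping a new module into the JSON makes it appear in the settings window with no UI rewrite.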


But back to the main topic... I did a sort of re-write once again. While struggling to iron out bugs, my logic was becoming very specific. So I went back and tried to simplify how I reason about the graph. Now, instead of making a best guess about what a lane through a graph should be, the plugin tries to figure this out itself. This results in much more structured and understandable layouts. I am still doing everything in passes though, as this lets me make some assumptions about the state of the chain, so it was more or less only the logic surrounding what a lane structure is that changed.

Briefly, I first order the nodes into a proper hierarchy, then consider any chains that are deep as more likely to be on the mainline. Sprinkle in some awkward setups, such as chains equal in length, chains connecting to numerous parts of the graph, or chains belonging on a different lane entirely, and you have a logic nightmare to solve!
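One plausible reading of that "proper hierarchy" pass is assigning every node a column equal to its longest path from the output, so a node always sits behind everything it feeds, even when it is shared by several chains. A minimal sketch under that assumption, with the graph modeled as `{node: [input nodes]}`:

```python
def assign_columns(graph, output_node):
    """Assign each node a column index: 0 for the output, and for every
    other node the length of the LONGEST path from the output, so shared
    nodes are pushed fully behind all of their consumers."""
    columns = {}

    def visit(node, depth):
        # Only revisit when we found a deeper path than before.
        if depth > columns.get(node, -1):
            columns[node] = depth
            for inp in graph.get(node, []):
                visit(inp, depth + 1)

    visit(output_node, 0)
    return columns
```

A node feeding both the output directly and an intermediate node ends up two columns back, behind the intermediate node, rather than flickering between the two positions.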

You can see in the .gif below, however, that the blend nodes no longer always assume the middle input is the mainline. The other inputs also respect their location and shift accordingly.


So if it does consider all the branches of a given node to be equally important, it will now divide them evenly instead of trying to make lanes.


With this comes proper support for any number of inputs.


Height has also been implemented, but with some drawbacks, unfortunately. There is no way of knowing how many input and output slots are actually visible, and a lot of nodes have visibleif statements set up. The only information the API provides is the total number =( Until this gets added, I just have to reason about a node assuming all slots are visible.


Another big challenge came with nodes that output multiple times to the same node. So now I try to make an educated guess by weighting all the inputs/outputs relative to each other. At the very least, it gives me a single value to work with, making it much easier to position nodes consistently.
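One way that weighting could work (an assumption on my part, not the post's actual formula): collapse the slot indices a node connects into down to a single normalized position, 0 for the top slot and 1 for the bottom, so the layout can treat the multi-connection pair as one relationship.

```python
def connection_weight(slot_indices, slot_count):
    """Average the connected slot indices and normalize over the node's
    slots: 0.0 = top slot, 1.0 = bottom slot, 0.5 = centered."""
    if slot_count <= 1:
        return 0.5  # a single-slot node has no vertical preference
    return sum(slot_indices) / len(slot_indices) / (slot_count - 1)
```

A node plugged into the top and bottom slots of a three-input node averages out to 0.5, so it is positioned as if it fed the middle, which gives the consistent placement described above.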


So I still have an awful lot of weird bugs and behavior to figure out. I am trying to keep the logic as node-agnostic as possible though, and the results are starting to feel a lot more natural, I think.


I'm sure there are some tech artists facepalming at me struggling through this, but we all start somewhere! It is now comfortable with most graphs, but I hope to solve this problem fully someday.

Thanks for reading