Alignment Node - Development - Part 4

General / 20 March 2022

In the last part we finished off creating our step function, and now we will implement the rest of the node. The outputs of the step functions are images containing pointers to the farthest pixel in a given direction.

This is in fact everything we need to calculate alignment points. For every pixel, we can now find the relevant corner points and form a transformation matrix from them.

The first step, however, was to convert this information into uv positions, much the same as the flood fill to position node. This is done by adding the current pixel position to the pointer vector.
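
To make that concrete, here is a tiny Python sketch of the conversion (illustrative only; the actual work happens inside the pixel processor):

```python
# A pointer stored at a pixel is an offset in uv space; adding the pixel's
# own uv position turns it into an absolute uv position.
def pointer_to_uv(pixel_uv, pointer):
    return (pixel_uv[0] + pointer[0], pixel_uv[1] + pointer[1])

# e.g. a pixel at (0.25, 0.5) whose pointer says the farthest pixel is
# (0.1, 0.05) away resolves to the uv position (0.35, 0.55)
print(pointer_to_uv((0.25, 0.5), (0.1, 0.05)))
```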

So now I have two maps which output the uv positions of the four corresponding points, stored in R,G and B,A respectively.

There is a setup here that dilates these uv positions through the shape by sampling the position at the center of each shape. However, I will gloss over this since it's not actually needed and not important for this blog. I added it before figuring out the counter clockwise issue! I left it in the source file though, should you be interested in taking a look.

With these four points, it is then possible to align an axis grid by taking the midpoints between two of them. I ended up exposing a variety of different alignment modes, which essentially came down to picking which two points to align an axis to.

And here is an image showing the alignment points applied to a real example. 

Now that we have two points to form an alignment transformation matrix from, let's look at how that actually happens. For this we need to understand how transformation matrices work; more specifically, a 2D transformation matrix, the one you find on the transform 2d node.

In the transform 2d node, the values for X and Y correspond to the coordinates of two vectors which will define a grid. These are called basis vectors.

So, X1, X2 are the x, y components defining what the X basis vector should look like, and Y1, Y2 the x, y components of the Y basis vector. These describe what the X and Y lines of a new grid should look like. You can think about it like rotating, squishing and squashing space itself into this new grid defined by these basis vectors.
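
As a quick illustration, here is the same idea written with numpy (the values are made up; this is just to show what the basis vectors do):

```python
import numpy as np

x_basis = np.array([1.0, 0.5])   # X1, X2 - an example sheared X axis
y_basis = np.array([0.0, 1.0])   # Y1, Y2 - the Y axis left unchanged

# the basis vectors become the columns of a 2x2 matrix
M = np.column_stack([x_basis, y_basis])

point = np.array([0.5, 0.5])
print(M @ point)  # where (0.5, 0.5) ends up once space is reshaped by the basis vectors
```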

Because we have two alignment points, we already have what we need to calculate at least one of the axes! We can use the delta between the two alignment points as the X basis vector.

But what about the Y axis? For that we can use a cross product. The cross product will give you the vector perpendicular to two input vectors in 3d space. 

So, if we were to assume our texture is in 3d space, where the X axis is already known and the Z Axis looks directly towards the view, then the cross product between those two vectors would give us Y. 

And here is the implementation for that. (1) We first construct the X axis and normalize it so the vector's length is 1. (2) Then convert that to a 3d vector by appending a z component. We set the z component to 0 because we actually want this vector to rest on the texture plane. (3) Then take the cross product with a vector that points along the Z axis, which gives us the Y axis. (4) Since X and Y now both rest on the texture plane, we no longer need the z component, so that is removed and both vectors are stored as the basis vectors of a 4x4 matrix. (5) Finally, the uv positions are multiplied by this matrix to perform the actual alignment.
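
For reference, here is the same math as a small numpy sketch. This is not the Designer function graph itself, and it uses a 2x2 matrix rather than the 4x4 the node stores, but the steps are the same:

```python
import numpy as np

def alignment_matrix(p0, p1):
    # (1) X axis from the delta between the two alignment points, normalized
    x_axis = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    x_axis /= np.linalg.norm(x_axis)

    # (2) lift to 3d with z = 0 so the vector rests on the texture plane
    x_axis3 = np.array([x_axis[0], x_axis[1], 0.0])

    # (3) cross the view direction (Z) with X to get Y
    y_axis3 = np.cross(np.array([0.0, 0.0, 1.0]), x_axis3)

    # (4) drop z again and store both as the basis vectors of the matrix
    return np.column_stack([x_axis3[:2], y_axis3[:2]])

def align_uv(uv, p0, p1):
    # (5) multiply the uv position by the matrix to perform the alignment
    return alignment_matrix(p0, p1) @ np.asarray(uv, dtype=float)
```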

Also of note, there is a whole section here dealing with rotating the uv coordinates before they are aligned, which was added to provide some angle parameters to the user. I will leave you to explore that on your own too =)

However, a small gotcha if you are trying to implement this yourself: while Designer does come with a transform position node, it makes some assumptions about your uvs, so I needed to reimplement my own version to do the matrix multiplication.

And that was the last piece of the puzzle. When we input an image, it will align the uvs to each shape. Here is an image showing an example.

So this concludes everything I wanted to discuss about the node implementation, but before we close out I did want to briefly talk about why all this doesn't really work!

Firstly, there is the limitation to convex shapes. We alluded to this before; the fact that I am using the bounding box center to determine the distance means some shapes will fail to find a valid point.

While this might not be a deal breaker (after all the use case is for convex stones), it does mean the setup is not 100% robust.

The biggest problem however is simple: this method is unable to find appropriate alignment points for a lot of shapes. This stems from the bad assumption that corners are going to be in the 45 degree directions (or whichever directions you pick), and the farthest point might not necessarily be in a good corner either. The node is particularly sensitive to shapes that sit on the 45 degree angles.

In the field of computer vision, there is an image processing concept called image moments and this is used to calculate things like weight, centroid and shape orientation. The second order moment specifically is used to find the orientation and can be thought of as finding the axis of minimum energy required to spin the shape. This I think would be a much more robust way of approaching the problem!

I recommend watching this wonderful lecture on the topic here https://youtu.be/ZPQiKXqHYrM

For now though, I am done with this problem and may pick it up again in the future. I will be putting the source files on my store (free) should you want to dig into the node further. Even though this didn't pan out as I wanted, I do hope it was interesting to you nonetheless.

Before closing out, I would like to specifically thank Igor Elovikov https://www.artstation.com/elovikov who was the one who really reverse engineered the flood fill node and was kind enough to share his findings with me. I definitely wouldn't have figured it out on my own! And of course Stan Brown https://www.artstation.com/stanbrown who got me hooked on the topic and let me use his textures for testing!


If you would like to take a look at the source files, they are available on my store https://www.artstation.com/a/14364831

Thanks for reading!

Alignment Node - Development - Part 3

General / 20 March 2022

So where were we? To give a little recap, we needed to calculate the distance from the center to any pixel so that as we are stepping around the shape edges, we can use that value to find an appropriate corner point.

The issue right now is we can only deduce the farthest point, not the farthest point in a particular direction. We need a way to calculate that too.

My idea to solve this was to calculate a direction map and then compare a desired direction against that texture. This could then be used to modify the distance map. 

Let's look at how that works. It is possible to calculate the direction a gradient is sloping by finding the delta between neighbouring pixels. The delta is the name given to describe the change in a value between two points. So, if we plot a line on a graph, we could pick any two points on that line and say the delta in x (the change in x value) between those two points is the difference between the x value of point 2 and point 1. The same goes for the y axis.

So we could look at two neighbouring pixels, one in x and one in y, and use the values from the distance map to calculate the delta.

If we then plot that on a new graph and draw a line from the origin to the point, it gives us the direction we need. The vector (dx, dy) is the direction the distance map is sloping.

So doing this for all pixels, we would end up with a texture which describes the direction of the gradient at every point in the shape.

Then in order to compare the vectors with a chosen direction, we must normalize the result, so every vector now has a length of 1. 

Taking the dot product is then comparing how close every direction is to the chosen one. Where the vectors point in the same direction, the output is 1. As they become perpendicular, the output approaches 0, and where they point in opposite directions, it approaches -1.

Here is the implementation for that. It is split across two pixel processor nodes. 
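
If it helps, the same math written as a rough Python/numpy sketch looks like this (assuming `dist` is the distance map as a 2d array; the real version lives in the two pixel processors above):

```python
import numpy as np

def direction_map(dist):
    # delta to the neighbouring pixel in x and in y gives the slope direction
    dx = np.roll(dist, -1, axis=1) - dist
    dy = np.roll(dist, -1, axis=0) - dist
    length = np.sqrt(dx * dx + dy * dy)
    length[length == 0.0] = 1.0           # avoid dividing by zero in flat areas
    return dx / length, dy / length       # normalized, so every vector has length 1

def compare_direction(dx, dy, chosen):
    # dot product against the chosen direction: 1 aligned, 0 perpendicular, -1 opposite
    chosen = np.asarray(chosen, dtype=float)
    chosen = chosen / np.linalg.norm(chosen)
    return dx * chosen[0] + dy * chosen[1]
```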

Now I can multiply my distance map by this directional map and use that to find the farthest point in the step function instead. 

This is done four times in four directions. 

Now lets look at the step function. We already saw how this works for the flood fill node in part 1, and this will be almost identical. We will store pointers to neighbouring pixels until finding an edge, then start spinning around it.

The only difference is, instead of storing the vector to a reference pixel, we will store the vector to the farthest pixel. This is as simple as storing the one with the largest distance value, taken from the map we just calculated.

So, looking at an overview, we have some initial setup of values (the first pass in the step function), then successive jumps with each jump updating its pointer to a new farthest pixel. 
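
The "keep the farthest" rule itself is tiny. A hedged sketch, assuming `weighted_dist` is the distance map already multiplied by the directional map:

```python
def pick_pointer(current_ptr, candidate_ptr, pixel_pos, weighted_dist):
    """Each pointer is a (dx, dy) offset from pixel_pos; keep whichever one
    points at the pixel with the larger weighted distance value."""
    def value(ptr):
        # assumes the offsets stay inside the texture for simplicity
        return weighted_dist[pixel_pos[1] + ptr[1], pixel_pos[0] + ptr[0]]
    return candidate_ptr if value(candidate_ptr) > value(current_ptr) else current_ptr
```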

And here are the passes of successive jumps. In my testing I found 8 passes to be too few to resolve fully and 16 to be just fine. However, I added more steps just in case and set up some switches on blend nodes to toggle between them if needed.

You may notice too that I have 2 nodes for the initial setup of values, with one going into 3 step functions and the other only into 1. 

The reason for this is that in that particular direction, the values were not converging correctly on some shapes. Notice how on this shape there is a gap in the right hand corner.

To be honest, it is still unclear to me why this is happening, but I did notice it was only occurring when calculating the direction to the bottom right. This led me to suspect it was something to do with stepping around the edges in a clockwise pattern. So this second node is stepping counter clockwise instead. Indeed it fixes the issue, but if anyone has insight into why exactly we must step counter clockwise when calculating the bottom right direction, please let me know!

For now we will end part 3 and return in the next post to see what the step function outputs and how it was used.
https://www.artstation.com/blogs/benwilson/nZ67/alignment-node-development-part-4


Alignment Node - Development - Part 2

General / 19 March 2022

In the last part we looked at the flood fill node and how it works. In this part, we will see how this applies to the alignment node and start discussing the details.

The goal is to find some points which can be used to form an alignment matrix. So, the idea I had was to more or less do the same thing as the flood fill. Step around the edges, asking if this reference pixel is a 'corner' or not. Ideally, we would end up with an output that looks something like this.

The first idea I had was to use an if statement very similar to the one in the flood fill, where it finds a reference pixel: is the pixel farther up and right? Is the pixel farther down and left? And so on...

However, this turned out to not work well. Sometimes the node would never converge to a single pixel. I think this is because my if statement was set up with an AND condition: if farther up AND farther right. Take this pixel for example; it would fail because while its reference pixel might well be farther to the right, it's not both farther right and farther up. Essentially I was finding local minima and never converging fully.

You might think that you could stagger the if statement instead. Something like: if farther right, then check if it's farther up. The issue here however is you end up biasing the result to a particular axis.

I tried a variety of these conditions and never found one that worked. In the case of flood fill, I think it works because it's only interested in finding one pixel and it doesn't matter exactly where, so they are able to construct an if statement which is fully defined. In this case though, we need a particular corner, in a particular direction, and that is harder to define.

So instead I decided to use the distance from the center as a way of weighting the question. The idea is this: Always pick the pixel which is farther away from the center, but only if it is pointing in a particular direction.

Now, before we discuss how that is implemented, there is an interesting question. What is the center of the shape? The centroid may be the obvious one, but we don't immediately have that information. And in fact, to calculate that we would need to use a concept called image moments. We will talk more about this at the end of the series. 

The next logical one might be the center of the bounding box. We do in fact have that information to hand already; it comes with the flood fill node. However, both of these come with other problems; it's possible the center actually lies outside of the shape.

Another approach may be to take the center of a best fit circle. This is something I covered in my last blog series https://www.artstation.com/blogs/benwilson/2WpV/flood-fill-to-corners-development-part-3

The issue here is that it's quite a bit more expensive to compute and may not be very 'central' for some shapes.

In the end, I decided to use the bounding box center for simplicity. I wanted to get the node up and running quickly to see if it would even work. After all, my use case was a typical convex shape anyway.


So let's look at how to calculate the distance from the center. There are 3 steps I took for this:

1. The first is to dilate the flood fill to position output, which, if you didn't know, gives you the uv position of the bounding box center for each shape.

2. The second is a pre-pass to try and prune floating pixels which were causing issues in the step functions. Here you can see how the pre-pass identifies and removes floating pixels.

This is done by looking at each neighbour pixel, counting them up and outputting black if the pixel is isolated. So if a pixel is surrounded by black, except for one pixel, then I assume it to be associated with a shape, but isolated.

There are 4 passes to this, because pruning one pixel might produce another isolated one (in the case of a line), and I reasoned that four would probably be enough. This is quite hacky, but I want to emphasize again that I am not worrying about making the node fail-safe and robust at this point. I needed to prove the theory would work first, which, as it turns out, it didn't =)

3. This last part is very simple: take the distance between the center point and the current pixel. The farther we are from the center, the larger the distance value will be.

However, this introduces an interesting error when applied to the full tiling texture. When we are close to the border, the center point might actually be on the other end of the texture, leading to artificially large distance values. 

To solve this, we must offset the texture by one full tile and then compute the distance, taking whichever is the smallest. Do this for all 8 directions and it will fix the tiling.
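
In code form, the tiling-safe distance is just a min over the 9 candidate positions. A small sketch, assuming both positions are in 0-1 uv space:

```python
import itertools

def tiling_distance(pixel_uv, center_uv):
    best = float("inf")
    # try the center in this tile and in the 8 neighbouring tiles,
    # keeping whichever distance is smallest
    for ox, oy in itertools.product((-1.0, 0.0, 1.0), repeat=2):
        dx = pixel_uv[0] - (center_uv[0] + ox)
        dy = pixel_uv[1] - (center_uv[1] + oy)
        best = min(best, (dx * dx + dy * dy) ** 0.5)
    return best
```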

This wraps up part 2. In part 3 https://www.artstation.com/blogs/benwilson/opoN/alignment-node-development-part-3 we will continue directly from here and discuss how to use this newly computed distance map. See you soon.

BW Tools is now open source!

General / 18 October 2021

Hi all,

I am happy to announce that I joined Epic Games Quixel as a Tech Artist. As such, I am releasing my Designer Plugin BWTools open source, along with a new update. I have put considerable effort into these tools, so am very happy they are now more accessible to folk.

So what's new in this version? The most notable change is a new plugin, BW Framer, designed to help with adjusting your frames, along with some improvements to the existing layout tools. There is also new documentation on a dedicated site rather than tucked away in my artstation blog. Please take a look through if you plan to use the new version. Aside from this, the whole toolset has been rewritten to make the code cleaner for this open source release. If you're interested in learning Designer's API, check out the code on my GitHub.

Here's a more detailed breakdown of the changes and a link to the download

https://www.artstation.com/a/848543

General changes

  • Replaced the top toolbar icon with a menu bar to access the tools. It now lives next to the Help menu.
  • Moved all graph related tools to the graph view toolbar, which can be toggled on and off.
  • It is no longer possible to run the tools on graph types currently unsupported by the Designer API. This was previously causing a crash with the latest release.
  • Added more detailed tooltips.

New plugin BW Framer

Reframe a selection of nodes quickly and provide default settings for the frame. No more manually adjusting frames after moving your nodes.

BW Layout Tool Changes

  • Considerable performance increase when running the layout tool. On average a 66x increase in performance. (102 seconds down to 1.6 seconds to run the tool on my test graph)
  • The mainline feature (which pushed nodes back) is now toggleable.
  • Added multiple layout styles.
  • Added a snap to grid option.
  • Added option to run BW Straighten Connection after running the layout.
  • Several bug fixes in layout behavior (the default behavior may be a little different in this release).

BW Optimize Changes

  • Removed features which are now natively handled by Designer.
  • Fixed a bug which would incorrectly identify some nodes as duplicates.

BW Straighten Connection Changes

BW PBR Reference Changes

  • Is now opened from the new menu bar.
  • Removed custom color support in order to simplify the code for this release. The feature was barely used.

I want to thank everyone who purchased the tools; your support is very much appreciated. If you purchased the tools in the last month, please dm me.

Flood Fill To Corners - Development - Part 3

General / 11 March 2021

Welcome to part 3 of my development blog! This is a continuation of the last two posts, so make sure to read them if you haven't yet!
Part 1
Part 2


In the last post we discussed using a circular kernel and explored different sampling patterns. We had moved away from using a stride value and instead defined a maximum search radius and scaled the sample points with that. However, textures with large variation in shape size will have problems as the search radius increases. In fact, we lose smaller shapes entirely, due to the radius becoming larger than the shape itself.

So what if we could scale this radius value per shape? In case you are not aware, the flood fill to bounding box size node actually outputs a normalized size for each shape. Exactly what we need.

The values in the map are normalized to uv space, meaning if we multiply them by the texture size, they will actually give us the bounding box size of each shape in pixels! 

If we were to then scale the search radius by this bounding box size, our kernel would then respect each shape individually.

Of course, the bounding box size is going to be different for the x and y axes, so let's take the larger of the two, max(x, y), as a start. This is also the default on the flood fill to bounding box node.

Left is without dynamic scaling, right is with. 

A big improvement already! However, a lot of the shapes are still becoming very small or disappearing again. This is due to the bounding box not accurately representing the shape in question.

Thin diagonal shapes are the worst offenders. Notice how poorly the box fits around the shape.

The flood fill to bounding box node has two other outputs, the values for x and y respectively. Maybe we can get a more accurate bounding box by averaging the two, rather than taking the max. The image below shows a comparison between max(x, y) and (x + y) / 2, with green denoting how much we have improved.

Another good step forward but we are not quite there yet. Some shapes are still disappearing. I consulted with my lead again, and he suggested I look into calculating the largest circle that fits inside the shape. In fact, the distance node can be used to generate this value. If we set a value of 256 on the distance node, it produces a full 0-1 gradient the width of the texture, regardless of its size. In the image below, we have a single line of pixels on the left and a distance node set to 256.

This is incredibly useful. If we were to run the distance node on our shapes, then the brightest pixel in each shape would actually represent the largest radius that fits in that shape, in uv space. (I have made the image brighter so it is easier to see).

If we know the largest radius in a shape, that is the same as saying we know the largest fitting circle.

That looks like a pretty good maximum search radius for our sample points to me! But how do we get access to that value? It's not like we can simply plug the distance node into our pixel processor, as the distance value is different at every pixel. In the image below, our radius value would be wrong if we were currently calculating the red pixel.

What we need is that radius value dilated within each shape. Perhaps we can use the flood fill to grayscale node? Again, I have brightened the image to make it easier to see, but this is the flood fill to grayscale blended with the distance on max (lighten).

This was close, but you can see the flood fill to grayscale is not finding the brightest pixel and therefore dilating the wrong value. This is because the brightest pixel could be in any arbitrary position in the shape, and most certainly not the one the flood fill to grayscale is choosing.

Perhaps we can isolate the brightest pixel and dilate from that position instead? I thought maybe the easiest way would be to find the local maximum value, another neat idea from my lead. (poor guy has to listen to all my designer problems).

We won't go through the details of local maximum here because it didn't work out very well. But at a high level, it works similarly to the blur we implemented, except rather than averaging the kernel, we check to see if any of the neighbors are brighter than the center pixel. This means it will only output a white pixel at the very peaks.

Regardless, as you can see in the image below, some cells are missing and I was not able to consistently produce a pixel at the peak of each shape.

I think this was largely due to the shapes potentially having more than one pixel with the brightest value. Like a rectangle for example.

So, what if we just create our own dilation node instead? We could take a 3x3 kernel again and simply output whatever is the brightest value. Of course, checking the shape IDs as usual.

Do this enough times and we will have dilated the brightest value! This is with 50 something passes.

But how many passes is enough to fully dilate an image? Well again, this is resolution dependent, and different for each shape. Since Designer doesn't support loops, our only option would be to assume the worst case and dilate the same number of times as the texture size.

Honestly, after a couple of hundred passes, the performance was so bad I just stopped.

Ok, so this won't work. It was Ole Groenbaek I was complaining to this time, and together we went through the flood fill node looking for inspiration. In there we found a neat trick for cheaply dilating the value we need.

Say we have our pixel whose value we want to calculate. Rather than looking at the neighboring pixels, we instead look at a pixel exactly half the texture size away. If that pixel is brighter, we take its value; otherwise we ignore it.

We then repeat this process, except this time we look half way between the first two.

And repeat, each time looking exactly halfway between the last two, until we have filled out a full line.

What we are doing here is actually stepping down the texture resolution each time, meaning we only need to do a maximum of 12 passes should we want to fill a full 4k image. If we do this process on our image, checking the cell ID as usual, we have cut down the number of passes needed tremendously. In this example, there are 11 passes (to fill a 2k image) and instead of doing it up-down, we do it for each diagonal axis. (This was because I found it worked better with the typical cell shapes).
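
Here is a rough numpy sketch of that halving-step dilation, assuming `value` is the distance image and `index` is the flood fill to index output. Only one axis is shown; the real setup runs the same process along the diagonals.

```python
import numpy as np

def dilate_max(value, index):
    out = value.copy()
    step = value.shape[1] // 2
    while step >= 1:
        # look at the pixel `step` texels away (in both directions); if it is
        # brighter and belongs to the same shape, take its value
        for s in (step, -step):
            shifted = np.roll(out, s, axis=1)
            shifted_index = np.roll(index, s, axis=1)
            same_shape = shifted_index == index
            out = np.where(same_shape & (shifted > out), shifted, out)
        step //= 2    # halve the distance each pass: log2(size) passes in total
    return out
```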

You may notice that some cells still have a gradient at the corners however. So for this reason, I repeat this process a second time, just to be sure all values are fully dilated.

What we have actually done here is sort of half implement a version of the jump flooding algorithm, something Igor Elovikov pointed out to me after releasing the node. Jump flooding is exactly the process described above, but rather than looking at 1 pixel at a time, we look at 8.

This offers all the same benefits, is more accurate and it is cheaper since we end up doing even fewer passes in total. My reason for not using jump flood is simply because I didn't know about it, nor did it occur to me we could do the same process with 8 neighbors at a time.

I wanted to draw attention to this though, as it is absolutely the way to go, should you implement anything like this in the future.

With this, we have our largest fitting circle radius which we can scale our kernel by. 

Even with my best efforts to reduce performance cost, the node was still ranging from 20ms (on the lowest settings) up to 70ms (on highest) on my GTX 980 card, so I didn't want to have this feature on by default.

In fact, it does such a good job of preserving the shape that some shapes just looked strange.

Perhaps we should add a slider to let you transition between bounding box and best circle fit scaling, which, as it turns out, visually feels like you're pruning smaller stones or not.

As a final optimization, when the slider is 0, there is a switch to disable the best circle fit calculation.

With these changes, our kernel search radius now depends on an input map.

This means the kernel is actually calculated per pixel, which in turn means we can manipulate it to produce some very interesting shapes. What you see below is the search radius being scaled by an additional variation input map (Which is animated for the gif).

This was another feature I had overlooked until Stan Brown suggested it to me! A good thing too since it was trivial to implement, only taking an additional multiply, and gives rise to much more flexibility in the node.


That concludes this series about the development of the node. I hope it was interesting to read and you are left with a good understanding of how the node works. As I mentioned in the first blog, I am sure there are better ways to create something like this. But regardless, it was a very enjoyable experience for me and so I wanted to share that with the community.

Thanks for reading everyone!

Flood Fill To Corners - Development - Part 2

General / 09 March 2021

In the previous post, we had noticed how the corners were not very smooth and somewhat resembled the shape of the kernel. Maybe sampling in a circle rather than a grid might result in better rounding of the corners. If you haven't read part 1, I would recommend checking that out first as this is a direct continuation of that!

Circular Kernels

Another issue from our previous iteration, aside from the pinching, was how unintuitive the stride parameter was. It is quite difficult to mentally map what a given stride value means when it changes depending on the kernel size and output size of the texture. So, perhaps we could define a maximum search radius relative to some constant value, like uv space, and then linearly scale our samples inside it. A circular star shape of 8 sample points in concentric rings seems like a sensible place to start.

This allows us to adjust the kernel size independently of the sample count. In other words, we are free to add or remove more of these rings should we want more resolution without changing the overall size of the kernel.
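
Generating that kind of pattern is straightforward. A small sketch of what the concentric ring samples might look like (the counts and spacing here are illustrative, not the released node's exact values):

```python
import math

def ring_samples(max_radius, rings, points_per_ring=8):
    samples = []
    for r in range(1, rings + 1):
        radius = max_radius * r / rings              # rings scale linearly inside the max radius
        for i in range(points_per_ring):
            angle = 2.0 * math.pi * i / points_per_ring
            samples.append((radius * math.cos(angle), radius * math.sin(angle)))
    return samples
```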

This confirmed our suspicion and resulted in a much smoother rounding. The image below shows the error from a perfect circle between the grid kernel and the circular kernel. More white means more error.

When viewing the sampling pattern on a graph, another problem becomes very apparent: just how sparse the sampling becomes the further out we get.

Perhaps we should pick random points to sample within this radius instead? I have had trouble in the past with the random nodes inside Designer being not very random. So instead of generating random sample points with those nodes, I precomputed the sample positions and brought them in as float2 values instead. Here is an example of what that looks like.

As it turns out, this had a few benefits I did not expect, the first of which was performance. Using float2 nodes is cheaper than calculating new coordinates at runtime. It also decoupled the node from the random seed slider. Meaning, the node's output will remain the same regardless of the random seed value, and if you have followed any of my previous work, you might know I am a big fan of this!

Perhaps the most useful benefit was having the sample positions available outside of Designer. Now we could paste them into a tool like Desmos to visualize exactly what the kernel looks like (this is how I created these graph images).

This was particularly helpful in debugging, as it is quite difficult to see what has gone wrong when creating these pixel processors through code.

Anyway, the results were quite promising!

However, this introduced another rather awkward problem: the random points are... well, random. Every time we iterate and rebuild the node, the results are different. Some results are very wonky or lopsided. Indeed, it was a challenge just trying to generate this example image due to the randomness!

What we need is a more uniformly distributed pattern, something like a Poisson distribution. To be exact, it's Poisson disk sampling we need, which is an algorithm to generate randomized points that are guaranteed to remain a minimum distance apart. The details of this algorithm are out of scope for this blog, but here is a nice breakdown should you want some further reading. Regardless, this is an example of the kernel with a Poisson distribution.

Now this is looking much better!

Since we are in Python, generating these points was very easy. In fact, I found an implementation of it on GitHub already, so there was no need to even write anything myself!

Here is a comparison between different builds of random sampling vs Poisson sampling. While the Poisson sampling is still random, the results are much more accurate and less variable between builds.

Up until this point, I had largely been testing on a 2k image and a fairly dense distribution of shapes. We still have a lot of artifacts when dealing with larger shapes though. I did do a lot of experimenting with smoothing these results (which we talk about below), but never did find an elegant solution to this problem. As far as I know, there just isn't a way around the issue with this sampling method. So, I decided the best approach was to expose a series of resolutions or sample counts to choose from. Ranging from 100, which is suited for these smaller shapes, up to 1000, to handle the large images.

With that said, I did receive great feedback post release with potential solutions involving jittering the kernel. This proved very fruitful and did indeed solve the resolution issue; however, I ultimately decided not to release an update as I felt the final results were a step back from what I had before. Perhaps this is a topic for a future post, should there be enough interest.

Anyway, I showed my lead this progress and he suggested trying out a phyllotaxis sampling pattern. This is an approximation of a common pattern in nature, such as that found in sunflower seeds. The argument being: if we have to have artifacts from low sample counts, at least phyllotaxis is a visually pleasing artifact.
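
For the curious, a phyllotaxis pattern is very little code. A sketch using the golden angle (the released node's exact radius scaling may differ):

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))

def phyllotaxis_samples(count, max_radius):
    samples = []
    for i in range(count):
        radius = max_radius * math.sqrt(i / count)   # sqrt keeps the point density even
        angle = i * GOLDEN_ANGLE
        samples.append((radius * math.cos(angle), radius * math.sin(angle)))
    return samples
```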

The results are cleaner, but only barely. The biggest benefit however is the sample points are not random, so every time we build the node, the pattern remains the same.

Phyllotaxis sampling it is then! And this is what the final release of the node is using.

Smoothing and Blurs

I mentioned a moment ago that I attempted to smooth out all these artifacts we have been getting, as an alternative to simply increasing the sample count. So what are these smoothing techniques? Well I tried all the typical nodes like blur HQ, non uniform blur, the suite of warps, but ultimately they all have issues relating to blurring across shape boundaries or losing the core shape. What we need is a custom blur that respects the individual shapes.

In fact, the example blur we described right at the start of this blog series is the implementation I went with. That is, looking at the 8 neighbors and averaging the result. However, we do a check first to see if the neighboring pixel is in the same shape; exactly the same method we used to calculate the corners. So we can try stacking a few of these, and by a few, I mean a lot. I really did need this many to even approach a smooth result.

This of course ballooned the performance cost again.

Instead, let us try increasing the stride so we sample a little further out than the immediate neighbor. As it turns out, a stride of 4 was a very good compromise and we are able to cut this 400 pass monster by a factor of 100. Then to eliminate any artifacts left over from the larger stride value, a final immediate neighbor pass is done.

There is an interesting question with these blurs which might not be immediately obvious. If we sample a pixel outside of the shape, what value should we use?

My first thought was to simply pass a value of 0, but as you can see from the .gif below, that ends up completely losing the original shape. Perhaps using the same value as the one we are calculating makes sense, the center pixel in the kernel? Or just ignoring the value entirely and only averaging those that are actually within the shape. These two ended up looking even worse!

The magic number turned out to be the center pixel * 0.9, which makes sense if you think about it. A slightly darker value is an approximation of the next value in the gradient, almost as though the gradient just continued to extend past the shape.
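
Putting the pieces together, a hedged numpy sketch of one pass of this shape-aware blur (with `index` being the flood fill to index output) might look like this:

```python
import numpy as np

def shape_blur_pass(value, index, stride=1):
    total = np.zeros_like(value, dtype=float)
    count = 0
    for oy in (-stride, 0, stride):
        for ox in (-stride, 0, stride):
            if ox == 0 and oy == 0:
                continue
            neighbour = np.roll(np.roll(value, oy, axis=0), ox, axis=1)
            neighbour_id = np.roll(np.roll(index, oy, axis=0), ox, axis=1)
            same_shape = neighbour_id == index
            # neighbours outside the shape contribute center * 0.9 instead
            total += np.where(same_shape, neighbour, value * 0.9)
            count += 1
    return total / count
```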

So, this is the majority of the node complete now. However, we still have the issue of varying shape sizes to solve. In the next post, we will discuss how we scale the kernel radius per shape and prevent the issue of smaller shapes disappearing altogether.


See you there!

Part 3

Flood Fill To Corners - Development - Part 1

General / 07 March 2021

I will be starting a blog series on the development of the flood fill to corner node I recently released.

I learnt an awful lot making this, and a number of folk reached out asking how it works, so this blog will hopefully address that. This won't be a step-by-step tutorial and I plan to show very little in terms of actual implementation, since doing so would balloon this blog into a giant mess. Instead, it will go over the high level ideas, the specific details and the problems I encountered.

This series is aimed at folk looking to learn more about the pixel processor. Those of you already well versed in mathematics and image processing will likely cringe at my shoddy implementation and unoptimized workflow, but for the rest of us, I hope it makes for an interesting read!

I plan to write this more or less in chronological order so you can get a feel for how it developed over time, but I will reshuffle some stuff in order to maintain a better flow.

How To Identify Edges

If you're new to image processing, a very common method of reasoning about an image is through a kernel matrix. A kernel is like a grid which you place over each pixel in an image to do calculations in.

Below we have an image with a shape in it. A simple kernel might look like this, where the center cell is the pixel we are trying to calculate and the remaining cells are the 8 immediate neighboring pixels.

For each pixel in the kernel, we would run some math operations on it to calculate a final value for the center pixel. We could, for example, add up all the pixels in the kernel and divide that by the kernel size to give us an average value of all the pixels in this 3x3 grid. Do that for all pixels in the image and we have implemented a simple blur.
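
To make the kernel idea concrete, here is that 3x3 box blur as a minimal numpy sketch (the node of course does this inside a pixel processor):

```python
import numpy as np

def box_blur_3x3(image):
    total = np.zeros_like(image, dtype=float)
    # shift the image so every pixel sees each of its 9 kernel cells in turn
    for oy in (-1, 0, 1):
        for ox in (-1, 0, 1):
            total += np.roll(np.roll(image, oy, axis=0), ox, axis=1)
    return total / 9.0   # average of the 3x3 neighbourhood
```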

You can visually think of the kernel looping over each pixel in our image, doing whatever calculation we want. Of course, our kernel could be any size and not necessarily looking at immediate neighbors either.

In reality, all pixels are calculated at the same time, rather than doing it one at a time like this, but I think the illustration can help to visualize what is happening.

So using this, how could we reason about where corners are in a given shape? One approach, and the one I took, could be to count up how many neighboring pixels land inside the same shape. Do this for all pixels and now you know where the corners are in your image, pretty straightforward!

This is a very simple example, but with a bigger kernel and a larger image, the result gives us a gradient describing to what extent each pixel is a corner. With this gradient, we can apply various thresholds, levels, and tweaks to produce the outputs we want.

This is more or less how the whole node works. There are a ton of implementation details to solve but counting pixels inside the same shape is the high level approach. 

Kernel Size and Python

Most image processing techniques require you to repeat a calculation a set number of times or across a given range of pixels. In programming this is achieved through the use of loops. However, the pixel processor has one major limitation: it doesn't support loops.

So for this reason, I create everything in the pixel processors with Python and the API. We won't be going through that process here, since it's way out of scope for this blog. But to give you an idea, I wrote a plugin with a set of functions for creating and connecting nodes.

This way, I can easily iterate and use all of the Python toolkit, including loops. On the Designer side, the plugin simply deletes everything inside the selected pixel processor node and rebuilds the node network for me.

Recall, the idea was to count all the pixels inside the kernel that reside in a shape. However, what would happen if we had two different shapes close together? We may inadvertently sample pixels from the other shape and mess up our count. We need a way to identify each shape independently.

The flood fill node has a 'Display advanced Parameters and Output' button, which you can use along with the flood fill to index node to output a unique number per shape.

Then for each pixel in our kernel, we can check if the shape indices match. If they do, output a value of 1, otherwise output 0. Add that all up, divide by the kernel size and there is our value. This is what that could look like inside Designer.
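
The same check as a rough numpy sketch, assuming `index` is the flood fill to index output and `offsets` are the kernel sample positions in pixels:

```python
import numpy as np

def corner_gradient(index, offsets):
    matches = np.zeros(index.shape, dtype=float)
    for ox, oy in offsets:
        neighbour_id = np.roll(np.roll(index, oy, axis=0), ox, axis=1)
        # 1 where the neighbour belongs to the same shape, 0 otherwise
        matches += (neighbour_id == index).astype(float)
    return matches / len(offsets)    # divide by the kernel size

# e.g. the 8 immediate neighbours of a 3x3 kernel
offsets_3x3 = [(ox, oy) for ox in (-1, 0, 1) for oy in (-1, 0, 1) if (ox, oy) != (0, 0)]
```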

We also alluded to the need for a big kernel in order to produce a sufficient gradient, so let's consider that for a moment. In order for a pixel to describe how close it is to an edge, that edge pixel needs to have been included in the calculation.

That's potentially a pretty big kernel. In the image above, the shape covers nearly all of the texture, and if we wanted the gradient to reach close to the middle of the shape, the kernel would need to be at least half the texture size. If this image was 2k, the kernel size could potentially be 1024x1024 or more. That's 4,398,046,511,104 total samples by the way (1024^2 pixels in the kernel per pixel in the image), which is obviously ridiculous.

However, at this point I just wanted to test if the idea would work. So I took a typical use case and tried a few different kernel sizes.

This somewhat confirmed the idea, but there are some glaring issues. The most obvious being performance. Even with a 65x65 kernel, this was 500+ ms on my machine and took several minutes to create the node. 

Perhaps we do not need so many samples to produce a good enough result. We could try sampling in a star shape instead, to see what difference that would make. This would reduce our sample count by over half.

The results are comparable and the performance was hugely improved. Even the mask output is perhaps a little nicer.

However, with the large gaps in the kernel now, there are some slight artifacts in the gradient, something we will talk more about in another post.

To address the issue of needing different kernel sizes for different shape sizes, my first thought was to define a stride between each neighbor so we could adjust the kernel to best fit the texture at hand.

This certainly worked, but had the issue of very low sample counts producing prominent banding as the kernel size increases.

Furthermore, textures that feature large variation in shape size are handled poorly. Smaller shapes are quickly lost as the kernel covers the entire shape. Another topic we will talk a lot about in the following posts.

You may have noticed the corners are not very round and somewhat reflect the shape of the kernel. Notice here how squared off the corners are and how the star kernel has this slight pinch to it?

Well this is what the next part will be about, check it out here:
Part 2

Lets Talk Color Pickers - Part 3

General / 16 December 2020

Well, it's been over 2 years (!) since I last posted about color pickers. If you didn't see it, I developed a node to provide a procedural way of producing color variation given a single color picker. Take a look here. I did give a talk on the specific inner workings of the node and Ubisoft was kind enough to let me release that video to the public. So if you're interested in how that node was put together, watch it here.

Since then, I have gotten some great feedback and it has seen some extensive real world testing. Finally, I found some free time to address the points raised.

So what has changed? As a brief overview the node now has:

> Supports harmonic color schemes <
> Sample counts and seed values have been decoupled from one another <
> Seed values range from 0 - 50 <
> Added hue variation amount slider <

The node has been split into 4 dedicated nodes, one to handle value variation, another hue, the original node (named basic) and a new one which supports harmonic color schemes.

The harmonic version lets you choose between 5 different color harmony schemes and will generate variation in those colors.

Of course, the above is an extreme example to highlight the different modes, but you can keep the colors as subtle as you need.

All the old settings are still available. You can choose to preserve saturation and add random hue variation on top as before. But sample count and seed values have been decoupled for all nodes. Meaning, you will no longer lose the value variation you were happy with when changing the hue's seed value, and vice versa.

Seed values are now integers ranging from 0 - 50, which hopefully brings it more in line with what the slider should be doing. Previously, there were a total of 100 possible increment values, which made the slider feel too sensitive and difficult to cycle through. There should still be plenty of possible seeds to pick from, but it won't be so difficult using the slider anymore.

In addition to this, I have documented the node with examples to show precisely what it does and how to make the most of it. Be sure to check out the update on my store!

  

Lets Do A Mentorship!

General / 14 September 2020

Hey all!

I have joined forces with Jeremy and crew to offer mentorships in all things material art! Check it out on the new and swish https://www.dinustyempire.com/mentorships/ben-wilson website =D

Over the last few years, I've dipped into making breakdowns, video tutorials and providing feedback to folk, and while I really enjoy this, it's hard to provide anything meaningful through these short interactions. So my hope with these mentorships is to provide dedicated one-on-one time with artists to really dive into their work and spend quality time with them.

The goal is to focus on material art and the Substance tools, generally aimed at students or junior level artists (although this is not a requirement), and provide feedback, guidance and support. The type of artist I envision getting the most out of the mentorship is one looking to improve their portfolio or really solidify their understanding of Substance; whether this is simple feedback or portfolio reviews, through to developing a clean work methodology and even programming with the API.

We decided to provide an additional offering outside of a full mentorship, as we know sometimes people just want a session or two of dedicated feedback without paying for a full mentorship. So this is what the crash course is for.


Of course there are many fantastic artists offering mentorships now, so I hope these offerings don't get lost in the wind. But either way, I want to open up my time as another option for artists to improve.


I hope to work with you soon!

GDD - Insert Topic #4

General / 27 August 2020

Hey all!

I returned to chat with Alex Beddows about materials and the industry, this time along with the amazing Josh Lynch and James Lucas. If you have an hour, check it out!