Brash & Plucky

by: Chema Guerra

Diffraction vs. Multi-resolution

I’ve been working lately on glare/bloom/fringe and other post-processing effects in Maverick Render. Some of these inherit from our lovely ArionFX plug-in for Adobe Photoshop and After Effects.

One complaint about ArionFX, and also about Maverick, is (was, because this post is about a successful fix) that Glare/Bloom diverge in shape and power when the input image is rendered at a different resolution, even if the Glare/Bloom parameters stay the same.

There are some relatively unobvious reasons for this. Basically, the challenges are:

  • Hard challenge: Diffraction is a frequency-analysis effect. For a render, this happens in the discrete realm (pixels). The size (number of pixels) of the images involved changes which frequencies show up in the Fourier Transform, and how.
  • Hard challenge: Anti-aliasing of neighboring pixels (more prevalent at low resolutions) averages their power and dims the overall Glare/Bloom overlay. This can pose a real problem for thin geometries such as lightbulb filaments.
  • Easy challenge: As illustrated in some of my previous posts, the FT itself has some properties that relate its scale and power to the scale and power of the aperture/obstacle of the lens iris. These, of course, must be compensated for (see the sketch after this list).
  • Medium challenge: Changes in aspect ratio, or in padding in the image buffers (such as the padding between the IPR size in the UI vs. the canvas size), must be taken into account as well.
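
As a toy illustration of the kind of compensation the “easy challenge” above refers to (a minimal sketch with assumed names and normalization, not Maverick’s actual code): the DC term of a DFT is the plain sum of all pixel values, so an aperture rasterized to cover s times more pixels per axis yields DFT amplitudes roughly s^2 times larger, while the pattern itself shrinks by 1/s. A resolution-invariant glare kernel has to undo both effects.

    /* Sketch with assumed names; not Maverick's actual code. */
    typedef struct { float scale; float gain; } ft_comp_t;

    ft_comp_t ft_compensation(float res, float ref_res)
    {
        ft_comp_t c;
        float s = res / ref_res; /* relative rasterization scale of the aperture */
        c.scale = s;             /* re-expand the pattern, which shrank by 1/s   */
        c.gain  = 1.0f / (s * s);/* undo the ~s^2 growth in DFT amplitude        */
        return c;
    }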

The upcoming release of Maverick will address these issues.

Here’s a small video with a sequence of Maverick post-processing effects, all rendered while alternating landscape and portrait aspect ratios between 512 and 2048. The video is cropped as landscape to be easier on the eyes. As can be seen, at lower resolutions there’s always some power divergence and a little bit of blur, but those are unavoidable to some extent.

Youtube 4K version: youtu.be/4K8F8vP76…

Glare/Bloom versatility in Maverick

Below are a bunch of glare patterns produced with the (much improved) bloom/glare toolchain in Maverick Render.

These tests have been conducted by @Vloba with the soon-to-be-released Maverick Studio 2022.5 and they make use of some of our procedural maps for aperture.

These images showcase the versatility of the new system.

Bessel functions

Bessel functions (from Wikipedia)

Iterative implementation
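
The implementation itself is only shown as a screenshot in the original post, so here is a comparable sketch (my reconstruction, not the original code): it iteratively accumulates the terms of the power series J_n(x) = SUM[m>=0] (-1)^m/(m!*(m+n)!) * (x/2)^(2m+n).

    #include <math.h>

    /* Hypothetical reconstruction: iterative summation of the power series of
     * J_n(x). Each term is derived from the previous one via their ratio. */
    double bessel_jn_series(int n, double x)
    {
        double term = pow(0.5 * x, n) / tgamma(n + 1.0); /* m = 0 term */
        double sum  = term;
        for (int m = 1; m < 64; ++m)
        {
            term *= -(0.25 * x * x) / (m * (m + n));     /* term ratio */
            sum  += term;
            if (fabs(term) < 1e-16 * fabs(sum)) break;   /* converged  */
        }
        return sum;
    }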

I came up with the above implementation, with which I plotted these.

J0-4 in [0..2pi]

2*abs(J1-4(x)/x) in [0..2pi]

Iterative methods are a terrible choice for many use cases because of error accumulation, jitter, the very presence of a loop, etc. In this case, the method also happens to be numerically weak and way too easy to break:

J0-4 in [0..8pi]

Deus ex machina

After some unsuccessful googling I was about to work on my own Taylor-series approximation, or at least study the implementations proposed in Numerical Recipes. But I learned to my surprise :o) that <math.h> provides j0/j1/jn/y0/y1/yn, and even CUDA provides j0f/j1f/jnf/y0f/y1f/ynf.

I checked whether the C/CUDA implementations match my iterative implementation, and they do.
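
The check itself can be as simple as this (a sketch; bessel_jn_series is the hypothetical reconstruction from above, and jn comes straight from <math.h> on POSIX systems):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Compare the series evaluation against the C runtime on [0..2pi]. */
        for (double x = 0.0; x <= 2.0 * M_PI; x += 0.25)
            printf("x=%5.2f  series=%+.12f  math.h=%+.12f\n",
                   x, bessel_jn_series(1, x), jn(1, x));
        return 0;
    }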

For reasons I will discuss in a future post, I am after j1 only, so… my job here is done.

Fourier Transform of a unit pulse

One might need (as is the case for me now) to compute an FT without actually computing the FT; i.e., to find the explicit formulation for the resulting FT of a certain 2D signal.

In particular, as part of some optimizations in Maverick Render I wish to create a map that looks like the diffraction pattern of a polygonal aperture shape, but without computing the FT of said aperture.

From a previous post

Looking at this video from my previous post, intuition says that each of the streaks of a diffraction pattern is “kind of independent from” or “overlaps onto” the rest and could be modeled on its own from a 1D FT. This intuition is incorrect in the general case. But let’s focus on the one case where this reasoning is correct: the diffraction pattern of a square shape.

Separability of the FT

The Fourier Transform is separable in both directions:

  • The 2D analysis formula can be written as a 1D analysis in the x direction followed by a 1D analysis in the y direction.
  • The 2D synthesis formula can be written as a 1D synthesis in the u direction followed by a 1D synthesis in the v direction (see the formula below).
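
For a WxH buffer, the (discrete) analysis formula reads:

    F(u,v) = SUM[y=0..H-1] ( SUM[x=0..W-1] g(x,y)*e^(-i*2*pi*u*x/W) ) * e^(-i*2*pi*v*y/H)

i.e., a 1D transform of every row, followed by a 1D transform of every column of the result.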

So it seems natural to focus on the 1D case first and then combine 2x 1D transforms for the square. The cross section of a square is (one cycle of) a 1D rectangular pulse.

Fourier Transform of a 1D unit pulse

From Wikipedia

A unit pulse of amplitude A and width T is a function g(t) that yields A for t IN [-T/2..+T/2], and 0 everywhere else. Plugging this g(t) into the definition of the Fourier Transform, we obtain this:
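
    G(f) = INTEGRAL[-T/2..+T/2] A*e^(-i*2*pi*f*t) dt
         = A*(e^(+i*pi*f*T) - e^(-i*pi*f*T)) / (i*2*pi*f)
         = A*sin(pi*f*T)/(pi*f)
         = A*T*sin(pi*f*T)/(pi*f*T)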

So the FT of a unit pulse is the real-valued function A*T*sinc(f*T). The imaginary component in this case cancels out.

The sinc function

The sin(pi*f*T)/(pi*f*T) part of the solution is such a fundamental building block in the field of signal processing that it got a name of its own: The venerable sinc function, or alternatively the sampling function.

From Wikipedia: The normalized sinc function is the Fourier Transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal.

The sinc function can be expressed in unnormalized form (sin(x)/x, whose integral over the real line is pi) or in normalized form (sin(pi*x)/(pi*x), whose integral over the real line is 1).

In both cases the expression is indeterminate at x=0, but you can use L’Hôpital’s rule to show that sinc(0)=1.
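
In the unnormalized case:

    lim[x->0] sin(x)/x = lim[x->0] cos(x)/1 = 1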

From Wikipedia

Fourier Transform of a rectangular shape

Since the FT is separable as mentioned above, the 2D FT of a square shape must be the 1D FT of a unit pulse in the x direction, followed by another 1D FT in the y direction.

Note that this would work for rectangles too, by using a different T value for each axis.
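
The piece of code in question is shown as an image in the original post; a minimal reconstruction of the idea (assumed names, not the original snippet) could be:

    #include <math.h>

    /* Normalized sinc, handling the removable singularity at 0. */
    static float sinc_norm(float x)
    {
        return (x == 0.0f) ? 1.0f : sinf((float)M_PI * x) / ((float)M_PI * x);
    }

    /* Closed-form magnitude of the FT of an axis-aligned rectangle of
     * amplitude A and sides Tx x Ty, evaluated at frequency (u,v).
     * No FFT is involved at any point. */
    float rect_ft_mag(float u, float v, float A, float Tx, float Ty)
    {
        return fabsf(A * Tx * Ty * sinc_norm(u * Tx) * sinc_norm(v * Ty));
    }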

The top-right image is the actual FT of the square shape on the top-left. The bottom-right image is just the above piece of code. The bottom-left image is the profile along the x direction of the above code at y=0, which is proportional to abs(sinc).

The top-right and bottom-right images match exactly, except for:

  • Some numerical drift.
  • Unlike the above code, the true FT is periodic and the streaks wrap around. But this happens to be a desirable property in my use case.

So: success!

Can this method be generalized to n-sided polygons?

One might hope that the FT of n-sided polygons, or even inf-sided polygons (circles), could be constructed similarly by composing rotated FTs of square functions somehow. But the truth is that things get a little bit more complicated than that.

In particular, the other case of interest for me in my application besides squares is circles. We’ll get into those in a future post.

The double-slit experiment

In the basic version of the double-slit experiment, a laser beam illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen.

The pattern formed on the screen is the (constructive/destructive) sum of two rectangular interference patterns; one per slit. The profile of each pattern happens to be a sinc function.
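
This follows from the linearity and shift properties of the FT. Two slits of width T centered at -d/2 and +d/2 transform to:

    G(f) = A*T*sinc(f*T) * (e^(+i*pi*f*d) + e^(-i*pi*f*d)) = 2*A*T*sinc(f*T)*cos(pi*f*d)

so the observed intensity is proportional to sinc^2(f*T)*cos^2(pi*f*d): the sinc envelope of each individual slit, modulated by the cosine interference term.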

In this image below, I have dimmed the slits vertically using a gaussian in order to generate an image that more closely resembles classic pictures of the experiment.

Playing with the Fourier Transform (II)

This post is an extension of a post I wrote many years ago called Playing with the Fourier Transform (I).

The Fourier Transform has become an often-used tool in my toolbox.

I will not be unveiling anything new here, but wanted to post visualizations I just made of some properties of the Fourier Transform that everyone in the field of Computer Graphics (or anyone dealing with the FT) should be familiar with. The videos below come straight out of Maverick’s Unit Testing system.

In the videos:

  • The input signal (top-left) is in white.
  • The real part of the FT (bottom-left) is in red (-) and green (+).
  • The imaginary part of the FT (bottom-right) is in red (-) and green (+).
  • The magnitude of the FT (top-right) is in faux color.

Changing the amplitude of the input signal

When the amplitude of the input signal is multiplied up or down, the magnitude of the FT is equally multiplied up or down.
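
In symbols, this is just the linearity of the FT:

    k*g(t)  <-->  k*G(f)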

Shifting the input signal

Shifting the input signal causes changes in phase in the (re,im) components of the FT, but the FT magnitude stays invariant.
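
In symbols, this is the shift theorem:

    g(t-t0)  <-->  e^(-i*2*pi*f*t0)*G(f)

and since |e^(-i*2*pi*f*t0)| = 1, the magnitude |G(f)| is untouched.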

Note that the small fluctuations in the FT magnitude below are due to anti-aliasing in the input signal and numerical drift in the computations. If sampling sizes and numerical precision were increased, the top-right quadrant would look completely frozen.

Sampling in larger/smaller buffers

Sampling the input signal and then computing the FT in larger buffers (e.g., twice as large as in the video below) produces exactly the same FT, occupying the central part of the buffer and leaving padding space towards the edges before wrapping around.

The opposite happens if the buffers become smaller. The FT stays the same size (which will make it wrap around earlier on all 4 edges).

Note that these videos may appear to be the same size in your browser, because this blog’s layout will dock media content to the width of the page.

Rotating the input signal

When the input signal is rotated, the FT rotates in sync.

Scaling the input signal

When the input signal is scaled up or down, the overall shape of the FT gets scaled inversely. The overall power also changes, proportionally to the scale.
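
In symbols, this is the scaling theorem. In 1D:

    g(t/s)  <-->  |s|*G(s*f)

and in 2D the amplitude factor becomes s^2, which is why both the shape and the overall power of the FT change when the input signal is scaled.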

FT of a polygonal shape

The Fourier Transform of an n-sided polygon casts n streaks. Because the magnitude of the FT is centrally symmetric, when n is even the streaks from opposite (parallel) edges overlap in pairs, so only n/2 distinct streaks are apparent. When n goes to infinity, the FT becomes that of a circle: the well-known Airy pattern.

The periodicity of the FT is visible when the streaks wrap around the edges of the image.

FT of a gaussian

While not strikingly obvious in the video below, the Fourier Transform of a 2D gaussian function is another 2D gaussian. Note how the transform has no imaginary component in this case.
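
In fact, the gaussian is a fixed point of the FT:

    e^(-pi*t^2)  <-->  e^(-pi*f^2)

and, more generally, a gaussian of standard deviation sigma transforms into a gaussian of standard deviation 1/(2*pi*sigma). Because the input is real and even, the transform is real and even too, so the imaginary part is 0.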

[EDIT] Throughout this post I loosely used the term FT to actually mean DFT.

Youtube versions of the above videos

www.youtube.com/shorts/Y9…

www.youtube.com/shorts/gS…

www.youtube.com/shorts/hd…

www.youtube.com/shorts/JH…

www.youtube.com/shorts/aA…

Lattice-Boltzmann Method (LBM)

A couple of weekends ago I was tinkering with fluid dynamics and implemented a few compute shaders in Unity. Among the ones I implemented, my favorite is the Lattice-Boltzmann Method featured here.

This early implementation allows drawing SDF entities in the simulation space that behave as blockers, inflows, or outflows. I intend to expand on the features, but most likely at a slow pace, because this is a no-rush side project. But I definitely wish to post now and then about my progress.

I also hope to find some spare time soon to write a quick article on how to implement an LBM cellular automata from a light-on-maths perspective.

Playing with the Fourier Transform (I) - 2016-07-28

[EDIT] This post was migrated from my blog from 2011…

[EDIT] This post was extended on 2022.09.

The beauty and power of the Fourier Transform never cease to amaze me. And since several effects in ArionFX are based on it, I have had to play with it a lot in recent times.

As explained in a previous post, diffraction patterns (e.g., the glare simulated in ArionFX for Photoshop) come from the Fourier Transform of the lens aperture. I will use the FT of an aperture mask for visualization in this post.

I will use pow-2 square sizes (my FT implementation is an FFT). Let’s start with this Aperture x Obstacle map, output directly from ArionFX for Photoshop v3.5.0.

Aperture x Obstacle mask, rasterized @ 512×512

The Fourier Transform of this image, rasterized at that size in said 512px buffer, is the following.

FT of the mask above

The faux-color is done on the magnitude of each complex number in the FT. All the FT images in this post are normalized equally, and offset to look “centered” around the mid-pixel.

Such diffraction patterns and some heavy convolution magic are what ArionFX uses to compute glare on HDR images:

Resulting glare in ArionFX

Now, let’s focus on what happens to the FT (frequency space) when one does certain operations on the source data (image space). Or, in this example: what happens to the diffraction pattern when one plays with the rasterized aperture mask.

Note that we’re speaking of the Discrete Fourier Transform, so sampling (rasterization, pixelization, anti-aliasing, …) issues are mostly ignored.

Rotation about the center

A rotation of the source buffer about its center doesn’t change the frequencies present in the data; only their orientation. So a rotation in the source data rotates the FT in the exact same way.

As we will see next, this property holds true regardless of the center of rotation, because the magnitude of the FT is invariant with respect to translations.

Rotation about the center

Translation (with wrap-around)

Frequencies arise from the relative position of values in the data field, and not from their absolute position in said field. For this reason, shifting (wrap-around included) the source data does not affect the magnitude of the corresponding Fourier Transform in any way.

Invariance to translation

Let’s recall that the idea behind the FT is that “any periodic function can be rewritten as a weighted sum of sines and cosines of different frequencies”. Periodic being the keyword there.

Repetition (tiling)

Tiling the data buffer NxM times (e.g., 2×2 in the example below) produces the same FT, but with frequencies “exploded” every NxM cells, canceling out everywhere else.

This is because no new frequencies are introduced, since we are transforming the same source data. However, the source data is now NxM times smaller relative to the data buffer size (i.e., the fundamental frequencies become N and M times higher along each axis).

Exploded frequencies on tiling

Data scaling

Normalization and sampling issues aside, scaling the data within the source buffer scales the FT inversely.

This is because encoding smaller data produces higher fundamental frequencies, while encoding a larger version of the same data produces lower fundamental frequencies.

Inverse effect on scaling

In the particular case of glare (e.g., ArionFX) this means that the diffraction pattern becomes blurrier if the iris is sampled small. Or, in other words, for a given iris, the sharpest diffraction pattern possible is achieved when the iris is sampled as large as the data buffer itself.

Hosek & Wilkie sky model - 2015-03-03

[EDIT] This post was migrated from my blog from 2011…

This is an old comparison between the Preetham and Hosek & Wilkie sky/sun models.

Preetham et al. sky model

Hosek & Wilkie sky model

Diaphragm and f-stop - 2014-08-17

[EDIT] This post was migrated from my blog from 2011…

This is just another image taken from the Maverick Unit Testing system. The chart displays different polygonal diaphragms at different f-stop values. Increasing the f-stop number by one stop (i.e., multiplying it by sqrt(2)) halves the surface light can pass through.

Polygonal diaphragms and f-stop
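
In numbers: for focal length f and f-number N, the aperture diameter is D = f/N, so the area light can pass through is

    area = pi*(f/(2*N))^2  ∝  1/N^2

which is why multiplying N by sqrt(2) (one stop) halves the area, and doubling N quarters it.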

Glare patterns - 2014-08-14

[EDIT] This post was migrated from my blog from 2011…

Glare in photography is due to Fraunhofer diffraction as light from distant objects passes through the camera diaphragm.

There is a magical connection between Fraunhofer diffraction (physics) and the Fourier Transform (math). As a matter of fact, the intensity of the Fraunhofer diffraction pattern of a certain aperture is given by the squared modulus of the Fourier Transform of said aperture.
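
In symbols, for an aperture function a(x,y):

    I(u,v) ∝ |FT{a}(u,v)|^2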

Assuming a clean and unobstructed camera, the aperture is the diaphragm shape. Here are the diffraction patterns that correspond to some basic straight-blade (polygonal) diaphragms.

Glare patterns

Interestingly, the Fourier Transform produces one infinite decaying streak perpendicular to each polygon edge: a full line through the center, i.e., two opposite radial arms. When the number of edges is even, the streaks from opposite (parallel) edges overlap in pairs. That is why a hexagonal diaphragm produces 6 visible arms, and a heptagonal diaphragm produces 14.

The leftmost pattern happens to be the Airy disk. The Airy disk is a limit case where the number of polygon edges/streaks is infinite.

The examples above were generated at 256x256. The visual definition of the pattern naturally depends on the resolution of the buffers involved in the computation of the Fourier Transform. However, note that the FT has an infinite range. This means that for ideal polygonal shapes, the streaks are infinitely long.

In the practical case, buffers are far from infinite, and you hit one property of the Fourier Transform that is often nothing but an annoyance: the FT is cyclic. The image below depicts what happens when one pumps up the intensity of one of the glare patterns obtained above: the (infinite) streaks, which wrap around the (finite) FT buffer, become evident.

Cyclic glare pattern

Bonus: Here’s some real-life glare I screengrabbed this evening at the European Athletics Championships.

Real-life glare

Point-Spread Functions & Convolutions - 2014-07-14

[EDIT] This post was migrated from my blog from 2011…

One might explain what a convolution is in many ways. However, in the field of image processing, there is an informal and very intuitive way to understand convolutions through the concept of Point-Spread Functions and their inverses.

A PSF is an arbitrary description of the way in which a point spreads its energy around itself in 2D space.

Classic PSFs: 1D box, 2D box, 1D Gaussian, 2D Gaussian

Although this is not a mandatory requirement, the integral of a PSF usually equals 1, so no energy is gained or lost in the process. The above image does not match this requirement for the sake of visualization; the PSFs on the right column have been un-normalized for better visibility. On the other hand, the range of a PSF (how far away from the source point energy is allowed to reach) can be infinite. However, in most practical uses the range is finite, and usually as short as possible.

So, a PSF(x,y) is a function f:R^2->R or, in the case of images, a finite/discrete real-valued 2D matrix. For example, PSF(x,y)=0.2 means that the point P=(a,b) sends 20% of its energy to point Q=(a+x,b+y).
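
For instance, a 3x3 box PSF spreads a point’s energy uniformly over itself and its 8 immediate neighbors:

    PSF = 1/9 * | 1 1 1 |
                | 1 1 1 |
                | 1 1 1 |

i.e., PSF(x,y)=1/9 for x,y IN [-1..+1], and 0 everywhere else.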

If we apply the above PSFs to all the pixels in an image, this is what we get:

Classic PSFs applied to an image

WARNING: Do not confuse this with a convolution; we’re not there yet.

The inverse of a PSF (let’s use the term IPSF for now) is a description of what amount of energy a point receives from the points around itself in 2D space.

So, an IPSF(x,y) is also a function f:R^2->R or, in the case of images, a finite/discrete real-valued 2D matrix. For example, IPSF(x,y)=0.2 means that the point Q=(a,b) receives 20% of the energy from point P=(a+x,b+y).

From here it follows that a PSF and the corresponding IPSF are mirror images of each other through the origin (point-symmetric):

IPSF(-x,-y) = PSF(x,y)

If P=(a,b) spreads energy to Q=(a+x,b+y), then Q=(a’,b’) gathers energy from P=(a’-x,b’-y).

Finally: a convolution is the result of applying the same IPSF to all the pixels of an image. Note that IPSF matrices are more commonly known as convolution kernels, or filter kernels.

Conveniently enough, the PSFs displayed above are all symmetric with respect to their own center. As a matter of fact, for most popular convolution kernels (e.g., 1D/2D box blur, 1D/2D Gaussian blur, …) the PSF and the IPSF are identical. This makes the processes of spreading and gathering energy equivalent in the cases presented above, but this is not true of other (more exotic) kernels.

In the case of image convolutions, kernels are usually square matrices of dimensions DxD, where D=(R+1+R) and R is generally known as the radius of the kernel. This way, kernels have a central pixel. For instance, a kernel of R=3 (where each pixel is affected by neighbors never farther than 3px away) would be a 7x7 matrix.

The convolution is a fundamental operation in Digital Image Processing, and most image filters (e.g., Gaussian Blur in Photoshop) are based on convolutions in one way or another.

Naive algorithm: A convolution is an operation that takes two discrete real-valued matrices (i.e., a luminance image and a convolution kernel) and slides the center of the kernel over each pixel in the image. At each pixel, the kernel is multiplied point-wise with all the pixels it covers, and the sum of those products replaces the original pixel value. Since this operation modifies pixels on the go, an auxiliary buffer is necessary.
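
In code, the naive algorithm is a couple of nested loops (a minimal sketch, assuming clamp-to-edge boundary handling, which is just one of several possible choices):

    /* Naive convolution: O(W*H*w*h). img is the W*H luminance input, kernel is
     * a w*h matrix centered at (w/2, h/2), and out is the auxiliary buffer. */
    void convolve_naive(const float* img, int W, int H,
                        const float* kernel, int w, int h,
                        float* out)
    {
        for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
        {
            float sum = 0.0f;
            for (int ky = 0; ky < h; ++ky)
            for (int kx = 0; kx < w; ++kx)
            {
                int sx = x + kx - w / 2;
                int sy = y + ky - h / 2;
                /* clamp-to-edge */
                sx = sx < 0 ? 0 : (sx >= W ? W - 1 : sx);
                sy = sy < 0 ? 0 : (sy >= H ? H - 1 : sy);
                sum += img[sy * W + sx] * kernel[ky * w + kx];
            }
            out[y * W + x] = sum;
        }
    }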

Let’s assume that the resolution of the image is WxH pixels, and that the convolution kernel is a matrix of dimensions wxh. The convolution needs to run through WxH pixels and, at each pixel, perform and add wxh products. This costs O(W*H*w*h) operations; with everything of size n, that is O(n^4) = Terrible.

As a matter of fact, the convolution of even a small kernel with a large image can take an eternity (literally) to compute using this naive solution. Fortunately, there is some mathematical trickery that we can take advantage of. More on fast convolutions in a future post.

Bonus remark: A particularly nice example of a PSF is glare, which comes from the Fraunhofer diffraction of the camera/eye aperture. Below you can see what happens when a glare PSF is applied to an HDR image. The actual implementation convolves the IPSF (the point-symmetric mirror of the glare PSF) with the source HDR image.

Glare applied to an Arion render

Typical glare PSF for a 6-blade iris

Random-walk SSS

Arion, predecessor of Maverick Render, was already doing “random-walk” volume and SSS rendering long before the term random-walk even became trendy.

This is a graph plotted by some simulation code I prototyped way back during the R&D phase (circa 2014). Each wavelength (simplified to RGB in the plot) traverses the media with different stochastic statistics.

Index-Of-Refraction - 2011-10-15

[EDIT] This post was migrated from my blog from 2011…

This all below is stuff that kids study in school. :)

Below is a visualization of the behavior of a ray of light as it hits a dielectric interface.

Some key phenomena which show up in the video are:

  • The Fresnel term (reflection vs. refraction).
  • The Index of Refraction.
  • The critical angle.
  • Total Internal Reflection (TIR).
  • As nd increases, the Index of Refraction becomes higher, and so does the Fresnel term, which defines the proportion between reflected and refracted light. The critical angle becomes smaller too, so there is more Total Internal Reflection.

When a ray of light hits an interface (assuming an ideal surface), all incident light must be either reflected or refracted. The Fresnel term (controlled by nd) tells how much light is reflected at a given incident angle. All the light that is not reflected is refracted, so both amounts (reflection and refraction) always add up to the total amount of incident light.

The Fresnel term approaches 1 at grazing angles (all light is reflected and nothing is refracted, regardless of nd) and is low at perpendicular angles (the smaller the nd, the less light is reflected and the more is refracted).

As a rule of thumb:

  • The lower the nd, the lower the IOR, and the more transparent the surface (more glass/liquid-like).
  • The higher the nd, the higher the IOR, and the more reflective the surface (more metallic/mirror-like).

For example:

  • Void: nd=1.
  • Air: nd=1.0003 (virtually 1).
  • Water: nd=1.33.
  • Glass: nd=1.51.
  • Diamond: nd=2.5.
  • Metals: nd=20+. (Simplification that ignores Complex IOR).
  • Ideal mirror: nd=infinity.

When a ray of light enters a medium with an nd lower than that of the previous medium, there is an angle at which the Fresnel term becomes 1 and beyond which light won’t refract anymore. This angle is called the critical angle, and beyond it the surface behaves like a perfect mirror, reflecting back all incident light. This effect is called Total Internal Reflection (TIR).
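
A minimal sketch of both quantities (using Schlick’s well-known approximation for the Fresnel term, which is not necessarily what the video above uses):

    #include <math.h>

    /* Critical angle (from the normal) when going from index n1 to a lower
     * index n2; TIR occurs for incidence angles beyond this value. */
    double critical_angle(double n1, double n2)
    {
        return asin(n2 / n1); /* only defined for n1 > n2 */
    }

    /* Schlick's approximation of the Fresnel reflectance at an incidence
     * angle whose cosine is cos_theta (measured from the normal). */
    double fresnel_schlick(double n1, double n2, double cos_theta)
    {
        double r0 = (n1 - n2) / (n1 + n2);
        r0 *= r0;
        return r0 + (1.0 - r0) * pow(1.0 - cos_theta, 5.0);
    }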

The cosine lobe - 2011-10-12

[EDIT] This post was migrated from my blog from 2011…

Testing image/video uploads.