Our world is alive with waves – from the gentle ripples on a pond to the invisible carriers of sound and light. When these waves meet, they don’t simply pass by; they interact in a mesmerizing dance of reinforcement and cancellation, a phenomenon known as interference. This principle is fundamental, shaping how we perceive sound, how light creates vibrant colors, and how technologies like radar and medical imaging function. Our interactive simulation provides a window into this world, leveraging the immense processing capability of your computer’s Graphics Processing Unit (GPU) to visualize the intricate patterns born from the superposition of waves emanating from multiple point sources.
At the core of this simulation lies the principle of superposition, which elegantly states that when two or more waves traverse the same medium, the net displacement at any given point is simply the sum of the individual displacements each wave would have produced on its own. To model this for each point (x, y)
on our simulated water surface, we consider the contribution from every individual wave source. A common representation for a circular wave spreading from a point source is given by the equation:
$$E(r, t) = \frac{A}{\sqrt{r}}\,\cos\!\left(2\pi f\,t' + \phi_0\right)$$
Here, r is the distance from the source to the point (x, y), and the 1/√r term models how the wave’s amplitude diminishes as it spreads. The amplitude A is the wave’s initial strength at the source. The frequency f dictates how rapidly the wave oscillates, directly determining its wavelength (λ = waveSpeed / frequency). The crucial term t' represents the effective time: because a wave takes time (timeDelay = r / waveSpeed) to travel, the wave’s state at point (x, y) at the current `currentTime` actually reflects its state at the source at an earlier moment, t' = currentTime − timeDelay. The `initialPhase` (φ₀ in the equation) lets us define the wave’s starting point in its cycle, a critical factor in controlling how waves align, or fail to align, when they meet. The cos(...) function itself captures the wave’s essential up-and-down harmonic motion.
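Superposition then reduces to summing one such term per source. Writing rᵢ for the distance from source i to the point (x, y) and v for the wave speed, the total displacement computed for each pixel is:

$$D(x, y, t) = \sum_{i=1}^{N} \frac{A_i}{\sqrt{r_i}}\,\cos\!\left(2\pi f_i \Bigl(t - \frac{r_i}{v}\Bigr) + \phi_{0,i}\right)$$

Constructive interference appears wherever these terms arrive in phase, and destructive interference wherever they arrive half a cycle apart.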
To bring these interacting waves to life visually, we must perform this calculation for every single pixel on our display grid, sum the contributions from all sources for each pixel, and then translate that final displacement into a color. For a reasonably sized display and several sources, this quickly amounts to millions of calculations per frame – a task that would overwhelm a traditional CPU approach.
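To put rough numbers on it: at an illustrative 800 × 600 resolution there are 480,000 pixels, so five active sources already mean 2.4 million wave-equation evaluations per frame, or about 144 million per second at 60 frames per second.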
This is where the GPU, through WebGL (Web Graphics Library), becomes our computational engine. Unlike the CPU, which excels at complex sequential tasks, the GPU is a master of parallelism, possessing hundreds or thousands of simpler cores that can perform the same operation on vast amounts of data simultaneously. This is perfect for our wave simulation, as the calculation for each pixel is independent.
The GPU-Accelerated Procedure unfolds as follows:
- Initialization & Data Transfer: First, your computer’s main processor (CPU) prepares the stage. It initializes the WebGL environment on an offscreen canvas (in p5.js, this is
createGraphics(..., WEBGL)
). It then sends the GPU the current state of our wave world: thecurrentTime
, the overallwaveSpeed
, the properties of every point source (its position, frequency, amplitude, and initial phase), and themaxTotalAmplitude
which defines the range for our color visualization. The selected color map is also prepared by the CPU, often as a small 1D “texture” (a lookup table or LUT) containing 256 pre-calculated colors, and this LUT texture is also sent to the GPU. - Shader Programs Take Over: The CPU then instructs the GPU to run small, highly optimized programs called shaders. We use two main types:
  - A Vertex Shader is responsible for the geometry we draw. For our full-screen wave field, we simply tell it to draw a large rectangle (a quad) that covers the entire offscreen WebGL canvas.
  - The Fragment Shader is the real workhorse: the GPU executes this program in parallel for every single pixel of that rectangle.
- Per-Pixel Physics on the GPU: Inside the fragment shader, each GPU core working on a pixel performs the complete physics calculation (see the fragment shader in the sketch after this list):
  - It determines the pixel’s corresponding real-world (wx, wy) coordinates.
  - It then iterates through the data for all active wave sources (received as “uniform” variables from the CPU).
  - For each source, it calculates the distance r, the `timeDelay`, and the effective time t', then applies the wave equation (including the 1/√r decay) to find the displacement contribution of that single source.
  - After calculating the contributions from all sources, it sums them up (superposition) to get the `totalDisplacement` at that specific pixel’s location.
- Color Mapping on the GPU: The calculated `totalDisplacement` is then normalized against `maxTotalAmplitude` into a 0.0–1.0 range. This normalized value is used as a coordinate to look up the appropriate color in the `colorLUT` texture sent by the CPU; if a simple grayscale map is chosen, the normalized value itself becomes the color intensity. This color lookup is also incredibly fast on the GPU.
- Output to Offscreen Buffer: The fragment shader outputs the final calculated color for its pixel, which is written into the offscreen WebGL canvas (`glGraphics`).
- Display to Screen: Finally, back on the CPU, the main `draw()` loop takes this complete, GPU-rendered image from the `glGraphics` buffer and draws it onto the visible 2D canvas on your screen. Any additional UI elements, such as source markers or informational text, are then drawn on top by the CPU using standard 2D drawing commands.
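To make the pipeline concrete, here is a minimal, self-contained p5.js sketch of the procedure above (assuming a recent p5.js version), with the shaders embedded as GLSL strings. Everything specific in it is an illustrative assumption rather than the simulation’s actual code: the uniform names (`uCurrentTime`, `uSourcePos`, `uSourceParams`, `uColorLUT`), the eight-source cap, the clamp at small r, the hard-coded two-source setup, and the blue-white-red LUT.

```javascript
// Minimal sketch of the CPU/GPU pipeline described above (p5.js + WebGL).
let glGraphics, waveShader, colorLUT;

// Vertex shader: p5 supplies a unit quad for rect(); stretch it to cover
// the whole canvas in clip space (the standard p5.js full-screen pattern).
const VERT = `
attribute vec3 aPosition;
void main() {
  vec4 pos = vec4(aPosition, 1.0);
  pos.xy = pos.xy * 2.0 - 1.0;
  gl_Position = pos;
}`;

// Fragment shader: runs once per pixel, in parallel, on the GPU.
const FRAG = `
precision highp float;
#define MAX_SOURCES 8
uniform float uCurrentTime;
uniform float uWaveSpeed;
uniform float uMaxTotalAmplitude;
uniform int   uNumSources;
uniform vec2  uSourcePos[MAX_SOURCES];    // source positions (here: pixels)
uniform vec3  uSourceParams[MAX_SOURCES]; // x: frequency, y: amplitude, z: phase
uniform sampler2D uColorLUT;              // 256x1 color lookup table

void main() {
  vec2 wxy = gl_FragCoord.xy;             // this pixel's world coordinates
  float totalDisplacement = 0.0;
  for (int i = 0; i < MAX_SOURCES; i++) {
    if (i >= uNumSources) break;
    float r = max(distance(wxy, uSourcePos[i]), 1.0); // avoid blow-up as r -> 0
    float tPrime = uCurrentTime - r / uWaveSpeed;     // effective (delayed) time
    if (tPrime > 0.0) {                               // has the wavefront arrived?
      totalDisplacement += (uSourceParams[i].y / sqrt(r)) *
        cos(6.2831853 * uSourceParams[i].x * tPrime + uSourceParams[i].z);
    }
  }
  // Normalize [-max, +max] to [0, 1] and fetch the color from the LUT texture.
  float norm = clamp(0.5 + 0.5 * totalDisplacement / uMaxTotalAmplitude, 0.0, 1.0);
  gl_FragColor = texture2D(uColorLUT, vec2(norm, 0.5));
}`;

function setup() {
  createCanvas(800, 600);                            // visible 2D canvas
  glGraphics = createGraphics(width, height, WEBGL); // offscreen GPU canvas
  glGraphics.noStroke();
  waveShader = glGraphics.createShader(VERT, FRAG);
  colorLUT = makeDivergingLUT();
}

function draw() {
  glGraphics.shader(waveShader);
  // Data transfer: push the wave world's state to the GPU as uniforms.
  waveShader.setUniform('uCurrentTime', millis() / 1000.0);
  waveShader.setUniform('uWaveSpeed', 120.0);
  waveShader.setUniform('uMaxTotalAmplitude', 2.0);
  waveShader.setUniform('uNumSources', 2);
  waveShader.setUniform('uSourcePos', [250, 300, 550, 300]); // two sources
  waveShader.setUniform('uSourceParams', [1.0, 1.0, 0.0,     // f, A, phase
                                          1.0, 1.0, 0.0]);
  waveShader.setUniform('uColorLUT', colorLUT);
  glGraphics.rect(0, 0, width, height); // full-screen quad -> fragment shader runs
  image(glGraphics, 0, 0);              // blit the GPU result to the visible canvas
}

// CPU-side LUT: a simple 256x1 blue-to-white-to-red diverging color map.
function makeDivergingLUT() {
  const img = createImage(256, 1);
  img.loadPixels();
  for (let i = 0; i < 256; i++) {
    const t = i / 255;
    img.pixels[4 * i + 0] = Math.round(255 * Math.min(1, 2 * t));        // red ramps up
    img.pixels[4 * i + 1] = Math.round(255 * (1 - Math.abs(2 * t - 1))); // peaks at mid
    img.pixels[4 * i + 2] = Math.round(255 * Math.min(1, 2 * (1 - t)));  // blue ramps down
    img.pixels[4 * i + 3] = 255;                                         // opaque
  }
  img.updatePixels();
  return img;
}
```

Packing each source’s frequency, amplitude, and phase into one vec3 keeps the uniform count low; the real sketch may organize its source data differently, but the division of labor is the same: the CPU prepares state once per frame, and the GPU evaluates the physics once per pixel.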
By offloading the repetitive, per-pixel physics and coloring calculations to the GPU’s parallel architecture, we can simulate and visualize complex interference patterns in real-time. You can directly observe constructive interference, where waves align to create larger amplitudes (seen as brighter or more intense colors), and destructive interference, where waves cancel out, leading to calmer regions or nodal lines (often appearing neutral or white in a diverging color map). Experimenting with source frequency (and thus wavelength), spacing, and initial phases (especially with the Phased Array Tool to achieve beamforming) allows for a direct and intuitive exploration of wave behavior that would be computationally prohibitive for the CPU alone at interactive frame rates. This simulation transforms complex physical equations into a dynamic visual playground, offering insights into the fundamental ways waves shape our universe.
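A closing note on the Phased Array Tool: in the standard beamforming picture, a hypothetical line of equally spaced sources (spacing d) steers its combined beam by an angle θ from the array’s perpendicular when neighboring sources differ by an initial-phase step of

$$\Delta\phi_0 = \frac{2\pi d \sin\theta}{\lambda}$$

with the sign of the step selecting the steering side. Each source’s phase offset compensates the extra path length d sin θ, so the wavefronts add constructively along the chosen direction, which is exactly the alignment you can reproduce by adjusting initial phases in the simulation.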