The purpose of this page is to give an overview of how noise functions in this family work and the various tradeoffs that can be made in implementation choices. How to use noise to create and modify content is a huge topic...BLAH, BLAH add links.
WebGL demo: http://glsl.heroku.com/e#7967.0
Top to bottom: value noise, gradient noise and simplex noise. Right to left: single sample, 3-octave fBm, 3-octave turbulence. Cells are purposefully aligned.
This family of noise functions is an incredibly useful tool for creating and modifying content. According to CG industry lore, it was informally observed in the 90s that "90% of 3D rendering time is spent in shading, and 90% of that time is spent computing Perlin (gradient) noise". Regardless of the truth of this observation, this family of noise functions is certainly one of the most important techniques not only in procedurally generated content but in CG as a whole. Increases in CPU speed and the relatively recent addition of GPU computation allow for runtime evaluation of the cheaper of these methods in realtime graphics.
Attempting to give any detailed descriptions of how to "use" noise functions to create or modify content is well beyond the scope of any short description. The goal here is to outline some basics of core generation techniques and to provide links to more detailed information in specific areas of interest.
For this discussion, we'll assume that noise accepts floating point input for a sample coordinate and returns a floating point value (usually on either [0,1] or [-1,1]). Sketches of 2D implementations are provided to (hopefully) aid understanding.
Noise functions are evaluated in some number of dimensions (typically 1, 2, 3 or 4). This is simply to say that you provide some input coordinate and noise returns the corresponding fixed value at that position, just like any other multi-dimensional function. From a signal processing perspective this family can be described as an attempt to approximate band-pass filtering of white noise. Perhaps a simpler description would be that they are attempts at coherent pseudo-random number generators (PRNGs).
Regular PRNGs attempt to create a fixed sequence of values (from some initial state data, frequently termed the 'seed') that appear to be statistically independent. White noise can be created from a PRNG as in the following sketch (in 2D):
float eval(float x, float y) {
  long seed = mix(x, y);    // hash the coordinates into a seed
  prng.setSeed(seed);
  return prng.nextFloat();
}
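The mix() above is whatever coordinate hash the implementation chooses. As one hypothetical realization (the class name, constants and structure here are assumptions, not part of the original sketch), fold the bit patterns of the two coordinates together and scramble with a SplitMix64-style finalizer:

```java
public class CoordHash {
    // Hypothetical mix(): combine the raw bits of both coordinates into one
    // 64-bit value, then scramble so nearby inputs give unrelated outputs.
    public static long mix(float x, float y) {
        long h = ((long) Float.floatToIntBits(x) << 32)
               ^ (Float.floatToIntBits(y) & 0xFFFFFFFFL);
        h ^= h >>> 33;                 // finalizer: spread entropy
        h *= 0xFF51AFD7ED558CCDL;      // odd multiplier (bijective on 64 bits)
        h ^= h >>> 33;
        h *= 0xC4CEB9FE1A85EC53L;
        h ^= h >>> 33;
        return h;
    }
}
```

Because every step is a bijection on 64 bits, distinct coordinate pairs always hash to distinct seeds, and the function is deterministic, which is what "fixed value at that position" requires.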
Unfortunately, raw white noise is of very little use. If you were to create a 2D texture from white noise, regardless of how you walked through the 'noise' function the result would look virtually identical: like an old broadcast TV tuned to a channel without a signal. What's really needed are random values that are coherent, which roughly says that sample points far apart appear to be independent (like PRNG values), while sample points close to one another vary continuously (or smoothly, in less formal speak).
Value noise
Value noise is one of the original attempts at this style of noise generation. It is very often miscalled Perlin noise. Evaluation is very cheap, but it is burdened with serious defects and is very poor at approximating band-pass filtering. Quality can be improved, but even the most basic improvements make it more expensive than gradient noise. So a general guideline for this technique is to only use a very cheap version, and only when some existing content can be minorly modified by one or two evaluations.
Value noise is computed by forming a regular grid, computing random values at each vertex and blending the values to produce a result. Sketch in 2D:
float eval(float x, float y) {
  int ix = (int)Math.floor(x);
  int iy = (int)Math.floor(y);
  float dx = x - ix;
  float dy = y - iy;
  float r00 = mix(ix,   iy);      // random value at each cell vertex
  float r10 = mix(ix+1, iy);
  float r01 = mix(ix,   iy+1);
  float r11 = mix(ix+1, iy+1);
  dx = weight(dx);                // ease the blend weights
  dy = weight(dy);
  float xb = lerp(dx, r00, r10);  // blend along x, bottom then top edge
  float xt = lerp(dx, r01, r11);
  return lerp(dy, xb, xt);        // blend along y for the final result
}
So to compute value noise in 'n' dimensions, the work required is related to 2^n (1D = line segment or 2 vertices, 2D = square or 4 verts, 3D = cube and 8, etc). The problems with value noise stem from the fact that at each evaluation point, the result depends only on blended data interior to the cell it is within. As a result, sample points that are close to one another but in different cells do not vary continuously, producing very obvious defects along cell boundaries. Early attempts to fix this major problem included visiting further-away cells and using more complex blending functions, which drastically increase complexity. The introduction of gradient noise made these solutions obsolete.
However, value noise is far from useless. Its very cheap computational cost can make it a good choice when many noise samples are required, and it is what you will most often find used in "demoscene" style shaders.
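The demo caption above mentions fBm (fractional Brownian motion) and turbulence, the most common ways several cheap noise evaluations are combined into one result. A sketch, assuming a base noise on [-1,1] and the conventional gain of 0.5 and frequency doubling per octave (these defaults are assumptions, not values from the text):

```java
import java.util.function.DoubleBinaryOperator;

public class Fbm {
    // fBm: sum octaves of the base noise at doubling frequency and
    // halving amplitude. 'noise' is any 2D noise function.
    public static double fbm(DoubleBinaryOperator noise,
                             double x, double y, int octaves) {
        double sum = 0.0, amp = 0.5, freq = 1.0;
        for (int i = 0; i < octaves; i++) {
            sum += amp * noise.applyAsDouble(x * freq, y * freq);
            amp *= 0.5;
            freq *= 2.0;
        }
        return sum;
    }

    // Turbulence: the same sum, but over the absolute value of each
    // octave, giving the characteristic creased look.
    public static double turbulence(DoubleBinaryOperator noise,
                                    double x, double y, int octaves) {
        double sum = 0.0, amp = 0.5, freq = 1.0;
        for (int i = 0; i < octaves; i++) {
            sum += amp * Math.abs(noise.applyAsDouble(x * freq, y * freq));
            amp *= 0.5;
            freq *= 2.0;
        }
        return sum;
    }
}
```

A "3-sample" column in the demo corresponds to calling either function with octaves = 3.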
Perlin gradient noise
Created in 1983 by Ken Perlin, this Oscar-winning technique is a clever way to minorly modify value noise and drastically improve the output quality. Usually when one is (correctly) calling a noise function "Perlin" noise, this is the technique being discussed. The clever addition is to choose a vector associated with each vertex (the gradient vector), then to calculate the vector from the vertex to the sample point. The dot product between these two vectors gives a weight used to modify the random value at each vertex. It was quickly noted that this last step is not really useful and that the dot product itself is a more than sufficiently random value (dropping one multiply). Next the dot product results at the vertices are interpolated to generate a final result. Although the dot product drastically reduces defects along the cell boundaries, it introduces a new defect: the result will always approach zero as the sample point approaches one of the cell vertices.
Notice that, like value noise, the output depends entirely on the evaluation of a single cell and has the same complexity in the number of dimensions. The difference here is that the random vectors help to smooth out values across neighboring cells, much in the same way that Gouraud shading improves over flat shading. Sketch in 2D:
float eval(float x, float y) {
int ix = (int)Math.floor(x);
int iy = (int)Math.floor(y);
x -= ix;
y -= iy;
int h00 = mix(ix, iy);
int h10 = mix(ix+1, iy);
int h01 = mix(ix, iy+1);
int h11 = mix(ix+1, iy+1);
float r00 = dotRandVect(h00, x, y);
float r10 = dotRandVect(h10, x-1, y);
float r01 = dotRandVect(h01, x, y-1);
float r11 = dotRandVect(h11, x-1, y-1);
x = weight(x);
y = weight(y);
float xb = lerp(x, r00, r10);
float xt = lerp(x, r01, r11);
return lerp(y, xb, xt);
}
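The helpers assumed by the sketch can be filled in as follows. lerp and weight are standard; this particular dotRandVect() is one hypothetical realization of the small gradient set described under the variants below (the class name and gradient ordering are assumptions):

```java
public class GradHelpers {
    // Linear interpolation between a and b by t.
    public static float lerp(float t, float a, float b) {
        return a + t * (b - a);
    }

    // The C2 ease curve 6t^5 - 15t^4 + 10t^3, factored for evaluation.
    public static float weight(float t) {
        return t * t * t * (t * (t * 6 - 15) + 10);
    }

    // Pick one of the 8 gradients (+/-1,0), (0,+/-1), (+/-1,+/-1) from the
    // low hash bits and dot it with the vertex-to-sample offset (dx,dy).
    public static float dotRandVect(int hash, float dx, float dy) {
        switch (hash & 7) {
            case 0:  return  dx;        // ( 1, 0)
            case 1:  return -dx;        // (-1, 0)
            case 2:  return  dy;        // ( 0, 1)
            case 3:  return -dy;        // ( 0,-1)
            case 4:  return  dx + dy;   // ( 1, 1)
            case 5:  return  dx - dy;   // ( 1,-1)
            case 6:  return -dx + dy;   // (-1, 1)
            default: return -dx - dy;   // (-1,-1)
        }
    }
}
```

Note how the gradient dot products cost no multiplies at all with this set, which is exactly the saving the small-vector variant below describes.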
Note that there have been numerous improvements made to gradient noise over the years, so some references may be referring to older versions. And, of course, authors may make minor tweaks (for better or worse) to their specific implementation.
Variants of note:
- Originally the vectors were randomly generated unit vectors. Creating these on the fly is rather expensive. In days of yore a precomputed table of random vectors was an option (less so today given memory access overhead). The reduced number of random vectors introduces some very minor directional defects. Perlin later noted that using a small set of vectors (all the permutations of vector components of zero and +/-one, but not all zero) drastically reduces computational cost. Specifically this drops one multiply per dimension per vertex (2x4 in 2D, 3x8 in 3D). It significantly increases directional defects, however (SEE: Defects below). Some GPU implementations use a more mathematically complex selection to address this issue.
- Two ease functions: Perlin uses a weight function which he terms either ease or s-curve. The original function was 3t^2 - 2t^3. This function has a continuous first derivative, but its second derivative is not continuous, which produces visible artifacts when the noise's derivative is used (e.g. for bump mapping). It was later replaced by the more expensive 6t^5 - 15t^4 + 10t^3, which is C2 continuous.
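The endpoint behavior of the two ease curves can be checked numerically. Both are 0 and 1 at the endpoints with zero first derivative there, but only the quintic also has a zero second derivative at the endpoints (the second derivatives below are taken by hand):

```java
public class Ease {
    public static double cubic(double t)   { return t * t * (3 - 2 * t); }              // 3t^2 - 2t^3
    public static double quintic(double t) { return t * t * t * (t * (6 * t - 15) + 10); } // 6t^5 - 15t^4 + 10t^3

    // Second derivatives:
    public static double cubic2(double t)   { return 6 - 12 * t; }                       // 6 - 12t
    public static double quintic2(double t) { return t * (t * (120 * t - 180) + 60); }   // 120t^3 - 180t^2 + 60t
}
```

cubic2 is 6 at t = 0, so adjacent cells disagree on the second derivative across a boundary; quintic2 vanishes at both endpoints, which is what makes the quintic curve C2 across cells.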
Yeah...add tons of stuff here
Perlin simplex noise
In 2002 Ken Perlin introduced a new noise function that is a drastic change in direction. The purpose was to create a function which could be cheaply implemented in hardware and which addresses some of the defects of gradient noise. Although designed for hardware, it is also a better fit for modern CPU and GPU architectures.
The first major change is how cells are formed. Instead of breaking space up regularly, the input is skewed onto a simplex grid (SEE: Stefan Gustavson's paper for details). This drops the number of vertices needed from 2^n to (n+1), where 'n' is the number of dimensions.
The second major change is that instead of calculating values at each vertex and blending the results, the final result is a summation of contributions from each vertex. This shortens the dependency chain and can increase throughput. For example, in 2D value and gradient noise one might first blend in "X" along the top edge, then the bottom (these two are independent), then take those results and blend in "Y" to get a final result. In 2D simplex noise, the contributions from the three vertices are independently computed and summed to produce the result.
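The skew step can be sketched in 2D using the F2/G2 constants from Gustavson's paper (the class and method names here are assumptions). Skewing maps the triangular simplex grid onto an axis-aligned square grid, where finding the containing cell is just a floor; unskewing maps back:

```java
public class Skew2D {
    // 2D skew/unskew constants: F2 = (sqrt(3)-1)/2, G2 = (3-sqrt(3))/6.
    static final double F2 = 0.5 * (Math.sqrt(3.0) - 1.0);
    static final double G2 = (3.0 - Math.sqrt(3.0)) / 6.0;

    // Skew (x,y) so the simplex grid becomes a square grid.
    public static double[] skew(double x, double y) {
        double s = (x + y) * F2;
        return new double[] { x + s, y + s };
    }

    // Inverse transform back to simplex space.
    public static double[] unskew(double u, double v) {
        double t = (u + v) * G2;
        return new double[] { u - t, v - t };
    }
}
```

The round trip recovers the original coordinate because F2 = (1 + 2*F2) * G2 by construction.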
As a rule of thumb, if you need noise (of this variety) in three or four dimensions, then simplex noise is the way to go.
Noise is one of those areas where science and art collide. As such, the various listed defects only really have meaning if they have a negative impact on the desired result.
- hash function:
- gradient vector selection:
- aligned cell structure:
The cheapest way to attempt to hide these defects is to ensure that the grid structures of multiple noise evaluations are not aligned with one another. BLAH, BLAH
Isotropic and anisotropic
Isotropic is math-speak for uniform in all directions, and anisotropic is, well, not: the thing in question isn't uniform in all directions. The goal of all the above noise functions is to be isotropic. All, however, have directional defects which make this not quite true. Getting anisotropic results from isotropic noise simply involves applying a non-uniform scale factor when sampling.
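As a minimal sketch of that non-uniform scaling (the scale factors, class name and stand-in noise function are all arbitrary assumptions):

```java
import java.util.function.DoubleBinaryOperator;

public class Aniso {
    // Stretch the sampling coordinates: with these factors, features come
    // out 4x longer along x than along y. 'noise' stands in for any of the
    // 2D noise functions sketched above.
    public static double anisoNoise(DoubleBinaryOperator noise,
                                    double x, double y) {
        return noise.applyAsDouble(x * 0.25, y * 1.0); // non-uniform scale
    }
}
```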
The sketches above are for noise functions without a period. It is commonly desirable for noise to be periodic, or in other words to wrap at specific boundaries. Well, there's good news and there's bad news. The bad news is that most "methods" of making noise periodic are very expensive and don't really work (SEE: Matt Zucker's FAQ above for an example). The good news is that it's simple to do cheaply, assuming that wrapping at integer boundaries, and in particular power-of-two boundaries, is an acceptable limitation. A minor modification to the vertex computation allows this: masking in the case of power-of-two periods and "faking" an integer modulo in other cases. This requires modifying the base noise function (special cases, dynamic code generation, etc.).
Another option is to use a noise function of (potentially) higher dimension than desired and to "walk" that space in such a way that you reach the same coordinate at boundary points. This latter approach happens somewhat naturally if computation is performed at runtime on the GPU. As an example, to apply noise to a sphere (or any other 3D object), one simply samples a 3D noise function at a scaled and/or translated coordinate of the object's surface (or a 4D function if the noise is to be animated in time).
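The vertex modification can be sketched as follows (the class and function names are hypothetical). Only the lattice (vertex) coordinates are wrapped before hashing; the blend weights are untouched, so the noise tiles exactly at the chosen period:

```java
public class PeriodicLattice {
    // Power-of-two period: a mask replaces the modulo entirely.
    // Works for negative i too, since & keeps the low bits.
    public static int wrapVertex(int i, int period) {
        return i & (period - 1);     // valid only when period is a power of two
    }

    // General period: a true floored modulo, since Java's % can
    // return negative values for negative operands.
    public static int wrapVertexGeneral(int i, int period) {
        int m = i % period;
        return m < 0 ? m + period : m;
    }
}
```

In the value noise sketch above, this would mean hashing wrapVertex(ix, period) and wrapVertex(ix+1, period) instead of ix and ix+1, and likewise for iy.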
Noise functions tend to be expensive, as many calls are usually required to create a specific effect. As such, speed is pretty important when computed at runtime. Given the nature of noise, it is a very good candidate for running on the GPU...blah, blah
Pre-computation vs. runtime
Other noise functions
There are many other noise functions, many of which are too complex to be evaluated at runtime, but may have game usage for pre-generated content. BLAH BLAH:
- Anisotropic noise
- Gabor noise: not the same family, but can generate similar results.
- Sparse convolution noise: realtime variants potentially reasonable on the GPU
- Wavelet noise