Technology

A popular, albeit shallow, explanation of how these images are produced:

First, to make an image, I build the 3D "scene" using mathematical functions to shape the landscape, clouds, and the like. There are a lot of funky functions to choose from: 'random' fractals that are good for terrain and clouds, periodic functions like sine or my 'hexlattice', complicated stuff like my erosion fractal, etc. You can see Pandromeda's example of how functions are manipulated in MojoWorld's ProUI 1 (in that example, blocks represent the functions, and connections between blocks represent the composition of functions; for instance, the outputs of two Texture Color functions are connected to be Multiplied together — see the sketch below). The scene is then "rendered" (using mostly my Volumetrics Renderer software) to see how it looks, parameters and functions are tweaked, the scene is "rendered" again, etc. until I like it; then I adjust the color gamma a little and put the image on the website. Sounds pretty simple 2 so far, eh? But look at what the software (which I develop) has to do:
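To give a flavour of what "composing functions" means here, below is a minimal C++ sketch in the same spirit. All names in it are my illustrative inventions for this page, not MojoWorld's or Volumetrics' actual API: two texture-like functions have their outputs wired into a Multiply block.

```cpp
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };

// A scene function maps a position to a value (here, a single grey level).
using ScalarFn = std::function<double(const Vec3&)>;

// A "Multiply" block: composes two functions by multiplying their outputs,
// just as two Texture Color outputs might be wired together in the node UI.
ScalarFn multiply(ScalarFn a, ScalarFn b) {
    return [a, b](const Vec3& p) { return a(p) * b(p); };
}

int main() {
    // Two simple "texture" functions standing in for Texture Color blocks.
    ScalarFn stripes = [](const Vec3& p) { return 0.5 + 0.5 * std::sin(10.0 * p.x); };
    ScalarFn ramp    = [](const Vec3& p) { return p.z; };

    ScalarFn combined = multiply(stripes, ramp);
    double v = combined({1.0, 2.0, 3.0});  // evaluate the whole graph at a point
    (void)v;
    return 0;
}
```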

The Volumetrics Renderer itself doesn't really care where the functions come from or what they look like. The renderer's job is to simulate the interaction of light with the 3D scene and produce the image as seen by a virtual camera. For realistic images it is very important that the interactions are accurate and capture the whole range of natural phenomena. It is also very important that the renderer achieves a good result in as few calculations as possible, because it really has a lot of calculations to do: a good image can take many hours on an ordinary personal computer.

The renderer traces "virtual" light rays through the scene, computing collisions of light with surfaces (and reflections off water, etc.) and the scattering of light by clouds.

For cloudy/airy things, when light is inside a cloud volume, the renderer 'asks' the cloud-defining functions for the optical properties (density, light absorption and scattering coefficients, glow, etc.) of the cloud (or air or whatever) at that place (the values are different in different places, see 3). The renderer then computes how the light is affected at that place with such parameters, as in the sketch below.
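Here is a hedged C++ sketch of that 'asking'. The struct, the function names, and the simple Beer-Lambert attenuation step are my assumptions for the demo, not the actual Volumetrics internals:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Optical properties of the medium at one point.
struct MediumSample {
    double density;     // amount of "stuff" here
    double absorption;  // absorption coefficient per unit density
    double scattering;  // scattering coefficient per unit density
    double glow;        // self-emission (fireballs, etc.)
};

// The cloud-defining function: different values at different places.
// A real scene would plug fractal functions in here.
MediumSample sampleCloud(const Vec3& p) {
    double d = std::exp(-(p.x * p.x + p.y * p.y + p.z * p.z));  // toy blob
    return { d, 0.1, 0.9, 0.0 };
}

// How much light survives one short step ds through the medium
// (Beer-Lambert law: extinction = absorption + out-scattering).
double attenuate(double light, const MediumSample& m, double ds) {
    double sigma_t = m.density * (m.absorption + m.scattering);
    return light * std::exp(-sigma_t * ds);
}

int main() {
    double light = 1.0;
    // March a ray through the blob in small steps, attenuating as we go.
    for (double t = -2.0; t < 2.0; t += 0.1)
        light = attenuate(light, sampleCloud({t, 0.0, 0.0}), 0.1);
    return 0;
}
```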

For solid things, the shaping is done with a special function. In some places that function's value is positive and in some negative; to the renderer, that means solid material where the value is negative and empty space where it is positive. The boundary is called an "isosurface", and the renderer finds collisions of light rays with it. When light collides with, say, a rock boundary, the renderer checks other functions for surface properties like color, then computes the interaction of light with the surface (lighting, reflection, etc.) depending on the angles of the view and the light relative to the surface. A minimal sketch of finding such a collision follows.
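This sketch uses the positive/negative convention just described: march along the ray until the shaping function changes sign, then refine by bisection. The shaping function and step sizes here are illustrative assumptions; the real renderer uses far smarter stepping.

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

// Example shaping function: a sphere of radius 1
// (negative inside = solid, positive outside = empty space).
double shape(const Vec3& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0;
}

// March from 'origin' along unit direction 'dir'; return the distance
// to the isosurface, or -1 if nothing is hit within maxDist.
double intersect(const Vec3& origin, const Vec3& dir,
                 double maxDist, double step) {
    double prev = shape(origin);
    for (double t = step; t <= maxDist; t += step) {
        double cur = shape(origin + dir * t);
        if (prev > 0.0 && cur <= 0.0) {      // sign change: crossed the boundary
            double lo = t - step, hi = t;
            for (int i = 0; i < 20; ++i) {   // bisect to refine the hit point
                double mid = 0.5 * (lo + hi);
                if (shape(origin + dir * mid) > 0.0) lo = mid; else hi = mid;
            }
            return 0.5 * (lo + hi);
        }
        prev = cur;
    }
    return -1.0;  // no collision
}

int main() {
    double t = intersect({0, 0, -3}, {0, 0, 1}, 10.0, 0.05);
    (void)t;  // t is approximately 2: the ray hits the sphere at z = -1
    return 0;
}
```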

To the renderer, blue sky, clouds, shockwaves, mist, fireballs, and the like are all just "cloud", and the various 'hard' stuff you can see in the above images is just "isosurface".

There are two types of "virtual" light in the renderer: the illumination rays, which correspond to light coming from the sun, and the camera rays, which correspond to light going into the camera's tiny 'lens'.
The light that comes from the sun first passes through the "cloud" that represents the planetary atmosphere (clear air). As light goes through the atmosphere, the shorter-wavelength blue light is scattered out of the beam more strongly than the longer-wavelength red light (that's why the sky is blue). At sunset, the light travels a large distance in the atmosphere before hitting the ground or clouds, and so the sunlight becomes noticeably yellow-red. You can see it in real-world sunsets, and in the sunset images above; the sketch below shows the effect in numbers.
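This is a hedged numeric sketch of that wavelength dependence. The overall scattering scale is an arbitrary demo value, not a constant from the renderer; only the Rayleigh-like 1/wavelength^4 shape is the point. Over a long sunset path the blue component of the direct beam is scattered away while red survives.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Approximate wavelengths of red, green, blue light in micrometres.
    const double lambda[3] = { 0.65, 0.55, 0.45 };
    const char* name = "RGB";
    // Overall scattering scale: an arbitrary value chosen for the demo.
    const double k = 0.02;

    const double paths[] = { 1.0, 10.0 };  // high sun vs. long sunset path
    for (double pathLength : paths) {
        std::printf("path %4.1f:", pathLength);
        for (int c = 0; c < 3; ++c) {
            // Rayleigh-like scattering coefficient, ~ 1/wavelength^4.
            double sigma = k / std::pow(lambda[c], 4.0);
            // Fraction of the direct beam surviving (Beer-Lambert law).
            double transmittance = std::exp(-sigma * pathLength);
            std::printf("  %c = %.2f", name[c], transmittance);
        }
        std::printf("\n");  // long path: red survives, blue is scattered away
    }
    return 0;
}
```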

As light goes deeper into the atmosphere, it encounters the whitish lower mist and clouds, then the terrain, water, etc. Part of the light is blocked by the clouds; you can see the beams of unblocked light in the above images. The curvature of the Earth is responsible for the beautiful "clouds lit from below" sunsets, and this too is correctly simulated by Volumetrics.

In nature, some small part of the light finally comes into the camera or eye, and onto the sensor, film or retina, to be recorded. This is very impractical to simulate directly in computer graphics, as only an extremely tiny portion of sunlight actually enters the eye; the rest just bounces around until it is either absorbed or reflected back into space. The light takes a lot of computing time to simulate, and any simulation of light that doesn't reach the camera would be a waste of that time.
Hence, a variant of the ray tracing method is used: the rays are cast from the eye into the scene (to simulate light that would be coming from the scene to the eye). This allows the renderer to process only the parts of the scene that are visible and to ignore the portions that have no effect on the final image, as in the sketch below. Volumetrics incorporates a great many complicated optimization algorithms and the like to minimize the amount of computation required to produce a photo-realistic image.
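A minimal sketch of that 'from the eye' tracing follows, with a trivial stand-in for the actual light simulation. The camera model and all names here are my assumptions for illustration, not the Volumetrics implementation:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Stand-in for the whole light simulation along one ray; here just a
// bluish vertical gradient so the sketch is self-contained.
Vec3 traceRay(const Vec3& /*origin*/, const Vec3& dir) {
    double t = 0.5 * (dir.y + 1.0);
    return { 1.0 - 0.5 * t, 1.0 - 0.3 * t, 1.0 };
}

// One camera ray per pixel: computation is spent only on light paths
// that actually end at the eye.
std::vector<Vec3> renderImage(int width, int height, double fovTan) {
    std::vector<Vec3> image(width * height);
    const Vec3 eye = { 0.0, 0.0, 0.0 };
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Map the pixel to a direction through the virtual 'lens'.
            double u = (2.0 * (x + 0.5) / width - 1.0) * fovTan;
            double v = (1.0 - 2.0 * (y + 0.5) / height) * fovTan;
            image[y * width + x] = traceRay(eye, normalize({ u, v, 1.0 }));
        }
    }
    return image;
}

int main() {
    std::vector<Vec3> img = renderImage(320, 240, 1.0);
    (void)img;
    return 0;
}
```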


I tried to keep this text reasonably short, so it does not get into deep or complicated details of the actual workings of Volumetrics; it is just a basic introduction for visitors who came to look at the images and are curious how they were made.

Footnotes:

1: It is also possible to construct functions in software other than MojoWorld, or to code them in a programming language like C++. The "new framework" images use cloud shaping functions coded in C++. It is also possible to use some real-world data instead of a function, for example an elevation map of the Himalayas or a satellite map of clouds [note: I didn't use any maps in the images that you can see in the gallery]. Building the scene is only a part of the job though, and not a particularly hard part; making the image from the scene is what the 'renderer' software does.

2: There are at least two levels of sarcasm in the word "simple". Actually, it's not as difficult as it sounds, really.

3: Functions give different values at different places, and can also have different values at different times, for animation. For an example of such a function, consider the temperature in the volume of your room. In winter it is low near the windows, high near the heaters and your computer, and so on. Imagine a function that takes a place as a parameter and gives the temperature at that place at a given time; a toy version is sketched below.
In physics, this kind of thing is called a "scalar field".
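A tiny hedged C++ sketch of such a scalar field; the numbers are made up purely for illustration:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Temperature as a function of place and time of day (in hours).
double roomTemperature(const Vec3& p, double hours) {
    double base  = 15.0 + 5.0 * p.x;                        // colder near the window at x = 0
    double daily = 2.0 * std::sin(hours * 3.14159 / 12.0);  // slow daily drift
    return base + daily;
}

int main() {
    double t = roomTemperature({0.5, 0.0, 0.0}, 14.0);  // mid-room, 2 pm
    (void)t;
    return 0;
}
```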


keywords: volumetric rendering, photorealistic cloud rendering, sky, photorealism, computer graphics, research


(C) 2004..2014 Dmytry Lavrov.
Want to say something or ask a question? Contact:
