Simulations on GPU. Introduction
Super-computer in your home PC
Modern video cards with thousands of shader cores deliver more than
1 TFLOPS of performance. An access to the main GPU memory takes hundreds of
cycles and can stall calculations. But if a task launches thousands of
independent similar threads at the same time, then while some of them are
waiting for memory, others can be computed. Therefore your GPU may be
100 times faster than your CPU.
WebGL and simulations on GPU
We can use "graphical" GLSL shaders to generate fractals on the GPU.
The fragment shader below calculates the color of the pixel with coordinates
vec2 gl_FragCoord (see the page source and
The Mandelbrot and Julia sets Anatomy).
WebGL then executes the fragment shader for every pixel.
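As a minimal sketch of such a fragment shader (the shader actually used on this page is in the page source; the uniform name, iteration count, and viewport constants below are illustrative assumptions):

```glsl
// Fragment shader sketch (WebGL 1 / GLSL ES 1.0): escape-time Mandelbrot.
// u_resolution and all constants are illustrative, not this page's shader.
precision highp float;
uniform vec2 u_resolution;   // canvas size in pixels (assumed uniform)
void main() {
    // map the pixel to a point c in the complex plane
    vec2 c = 3.0 * (gl_FragCoord.xy / u_resolution) - vec2(2.0, 1.5);
    vec2 z = vec2(0.0);
    float n = 0.0;
    for (int i = 0; i < 100; i++) {
        // z -> z^2 + c in complex arithmetic
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
        if (dot(z, z) > 4.0) break;
        n += 1.0;
    }
    // shade by normalized escape time
    gl_FragColor = vec4(vec3(n / 100.0), 1.0);
}
```

Drawing two triangles that cover the canvas makes WebGL run this shader once per pixel, so the whole fractal is computed in a single draw call.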
See also
GPU Gems 2, Part IV: General-Purpose Computation on GPUs: A Primer.
I'm looking for a (simple) introduction to OpenGL and simulations on GPU.
WebGL 2
WebGL 2 is based on OpenGL ES 3.0 [1,2]. Many of its features are available
in WebGL 1 as extensions. See
WebGL2 Fundamentals,
WebGL 2 Samples,
Rendering
algorithms implemented in raw WebGL 2 by Tarek Sherif
Experimental Compute shaders
Compute shaders will be more suitable for simulations [2].
E.g. one can update a whole 3D grid (texture)
in a single GL call. See
Introduction
to compute shaders in OpenGL ES SDK for Android, ARM Developer Center
WebGL 2.0
Compute shader Demos by Kentaro Kawakatsu.
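As a sketch of the idea, a compute shader (OpenGL ES 3.1 GLSL; the buffer layout, uniform, and update rule below are illustrative assumptions) can update every cell of a grid in one dispatch:

```glsl
#version 310 es
// Compute shader sketch: update a whole 2D grid stored in a buffer
// in a single dispatch. Names and the decay rule are illustrative.
layout(local_size_x = 8, local_size_y = 8) in;
layout(std430, binding = 0) buffer Grid { float u[]; };
uniform uint width;  // grid width in cells (assumed multiple of 8)
void main() {
    uvec2 id = gl_GlobalInvocationID.xy;
    uint i = id.y * width + id.x;
    // a simple decay step applied to every cell in parallel
    u[i] = 0.99 * u[i];
}
```

With the 8x8 work-group size above, one glDispatchCompute(width/8, height/8, 1) call updates the entire grid.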
[1] D. Ginsburg, B. Purnomo.
OpenGL ES 3.0 Programming Guide, Second Edition.
[2] D. Shreiner, G. Sellers, J. Kessenich, B. Licea-Kane.
OpenGL Programming Guide, Eighth Edition:
The Official Guide to Learning OpenGL, Version 4.3.
Simulations on GPU
updated 27 Jan 2019