A while back I did some work with GlideN64 to perform noise emulation entirely on the GPU.

I stopped working on it. As usual, certain groups of people said this was due to “people being mean”.

So, today, I figured I would take another look at it, see if I could improve on the last implementation, and see if the current implementation could be optimized.

The implementation this time around is rather simple:

- Use one line of GLSL for the actual noise generation
- Scale the coordinates used for the noise seeding according to N64 resolution *properly*
- Use prime numbers to better seed the PRNG.

This results in ***8*** lines of GLSL compared to ***183*** lines of C++.

The shader is as follows:

```cpp
static const char* fragment_shader_noise =
	"highp float rand(vec2 co) {\n"
	"  return fract(sin(mod(dot(co.xy, vec2(12.9898, 78.233)), 3.14) * 43758.5453));\n"
	"}\n"
	"highp float snoise()\n"
	"{\n"
	"  mediump vec2 coord = floor(gl_FragCoord.xy / uScreenScale);\n"
	"  return rand(vec2(rand(coord * 12.9898), rand(coord * 78.233)));\n"
	"}\n";
```