merged master

pull/39/head
kynd 9 years ago
commit 36f920b570

@ -0,0 +1,51 @@
# About this book
## Introduction
<canvas id="custom" class="canvas" data-fragment-url="cmyk-halftone.frag" data-textures="vangogh.jpg" width="700px" height="320px"></canvas>
The two pictures above were made in very different ways. The first was painted by Van Gogh's hand, layer over layer, and took him some time. The second was generated in seconds with the combination of four matrices of pixels: one cyan, one magenta, one yellow and one black. The key difference is that the second image was produced in a non-serial way (that is, not step by step, but with many operations happening at the same time).
This book is about the revolutionary computational technique known as fragment shaders, which is taking digitally generated imagery to the next level — think of it as the equivalent of Gutenberg's press for graphics.
![Gutenberg's press](gutenpress.jpg)
Fragment shaders give you control over the fast rendering of pixels on the screen. This is why shaders are used in all sorts of places, from video filters on cellphones to incredible 3D video games.
![Journey by That Game Company](journey.jpg)
In the following chapters you will discover how incredibly fast and powerful this technique is, and how to apply it to your professional and personal work.
## Who is this book for?
This book is written for creative coders, game developers and engineers who have a basic knowledge of linear algebra and trigonometry, and for anyone who wants to take the graphical quality of their work to an exciting new level. (If you want to learn to code, I highly recommend you start with [Processing](https://processing.org/) and come back to this book once you are comfortable with it.)
This book will teach you how to use shaders and integrate them into your projects, improving their expressiveness and graphical quality. Because GLSL (OpenGL Shading Language) shaders compile and run on a variety of platforms, you will be able to apply what you learn here to any environment that uses OpenGL, OpenGL ES or WebGL. In other words, you will be able to apply your knowledge to [Processing](https://processing.org/), [openFrameworks](http://openframeworks.cc/), [Cinder](http://libcinder.org/), [Three.js](http://threejs.org/) and iOS/Android games.
## What does this book cover?
This book focuses on GLSL pixel shaders. First we'll define what shaders are; then we'll learn how to make procedural shapes, patterns, textures and animations with them. You'll learn the foundations of shading language and apply them to useful scenarios such as image processing (image operations, matrix convolutions, blurs, color filters, lookup tables and other effects) and simulations (Conway's game of life, Gray-Scott reaction-diffusion, water ripples, watercolor effects, Voronoi cells, etc.). Toward the end of the book we'll see a set of advanced techniques based on ray marching.
**There are interactive examples for you to play with in every chapter.** When you change the code, you will see the changes immediately. Some of the concepts can be hard to grasp, and the interactive examples will be essential for learning the material. The faster you put the code into action, the easier the learning process will be.
What this book doesn't cover:
* This **is not** an openGL or webGL book. OpenGL/webGL is a much bigger subject than GLSL or fragment shaders. To learn openGL/webGL I recommend taking a look at: [OpenGL Introduction](https://open.gl/introduction), [the 8th edition of the OpenGL Programming Guide](http://www.amazon.com/OpenGL-Programming-Guide-Official-Learning/dp/0321773039/ref=sr_1_1?s=books&ie=UTF8&qid=1424007417&sr=1-1&keywords=open+gl+programming+guide) (also known as the red book) or [WebGL: Up and Running](http://www.amazon.com/WebGL-Up-Running-Tony-Parisi/dp/144932357X/ref=sr_1_4?s=books&ie=UTF8&qid=1425147254&sr=1-4&keywords=webgl)
* This **is not** a math book. Although we will cover a number of algorithms and techniques that rely on an understanding of linear algebra and trigonometry, we will not explain them in detail. For questions regarding the math, I recommend keeping one of the following books nearby: [3rd Edition of Mathematics for 3D Game Programming and computer Graphics](http://www.amazon.com/Mathematics-Programming-Computer-Graphics-Third/dp/1435458869/ref=sr_1_1?ie=UTF8&qid=1424007839&sr=8-1&keywords=mathematics+for+games) or [2nd Edition of Essential Mathematics for Games and Interactive Applications](http://www.amazon.com/Essential-Mathematics-Games-Interactive-Applications/dp/0123742978/ref=sr_1_1?ie=UTF8&qid=1424007889&sr=8-1&keywords=essentials+mathematics+for+developers).
## What do you need to get started?
Not much. If you have a modern browser that can run WebGL (like Chrome, Firefox or Safari) and an internet connection, click the "Next" button at the bottom of this page to get started.
Alternatively, based on what you have or what you need from this book, you can:
* [Make an offline copy of this book](http://thebookofshaders.com/appendix/)
* [Run the book's examples on a Raspberry Pi without a browser](http://thebookofshaders.com/appendix/)
* [Make a PDF of the book to print](http://thebookofshaders.com/appendix/)
* Use the [github repository](https://github.com/patriciogonzalezvivo/thebookofshaders) to help resolve issues and share code

@ -0,0 +1,48 @@
# Getting started
## What is a fragment shader?
In the previous chapter we described shaders as the equivalent of the Gutenberg press for graphics. Why? And more importantly: what's a shader?
![From Letter-by-Letter, Right: William Blades (1891). To Page-by-page, Left: Rolt-Wheeler (1920).](print.png)
If you have experience making drawings with computers, you know that in that process you draw a circle, then a rectangle, a line, some triangles... until you compose the image you want. That process is very similar to writing a letter or a book by hand — it is a set of instructions that do one task after another.
Shaders are also a set of instructions, but the instructions are executed all at once for every single pixel on the screen. That means the code you write has to behave differently depending on the position of the pixel on the screen. Like a type press, your program works as a *function* that receives a position and returns a color, and when it's compiled it runs extraordinarily fast.
![Chinese movable type](typepress.jpg)
## Why are shaders fast?
To answer this, let me introduce you to the wonders of **parallel processing**.
Imagine the CPU of your computer as a big industrial pipe, and every task as something that passes through it — like a factory line. Some tasks are bigger than others, which means they require more time and energy to process; we say they demand more processing power. Because of the architecture of computers, these tasks are forced to run in series, one at a time. Modern computers usually have groups of four processors that work like these pipes, completing tasks one after another to keep things running smoothly. Each pipe is also known as a **thread**.
![CPU](00.jpeg)
Video games and other graphic applications require a lot more processing power than other programs, because their graphical content involves countless pixel-by-pixel operations. Every single pixel on the screen needs to be computed, and in 3D games geometries and perspectives need to be calculated as well.
Let's go back to our metaphor of the pipes and the tasks. Each pixel on the screen represents one simple small task. Individually, each pixel task is easy for the CPU, but here is the problem: the tiny task has to be done for every single pixel on the screen! That means that on an old 800x600 screen, 480,000 pixels have to be processed per frame, which at 30 frames per second is 14,400,000 calculations per second. Yes! That's a problem big enough to overload a microprocessor. On a modern 2880x1800 retina display running at 60 frames per second, that adds up to 311,040,000 calculations per second. How do graphics engineers solve this problem?
![](03.jpeg)
This is where parallel processing becomes a good solution. Instead of having three or five big and powerful microprocessors — or "pipes" — it is smarter to have lots of tiny microprocessors running in parallel at the same time. That's what a Graphics Processor Unit (GPU) is.
![GPU](04.jpeg)
Picture those tiny microprocessors as a wall of pipes, and the data of each pixel as a ping-pong ball. 14,400,000 ping-pong balls a second can obstruct almost any pipe. But a wall of 800x600 tiny pipes receiving 30 waves of 480,000 balls a second can handle the flow smoothly. This works the same at higher resolutions — the more parallel hardware you have, the bigger the stream it can manage.
Another "superpower" of the GPU is special math functions accelerated via hardware: very complicated math operations are resolved directly by the microchips instead of by software. That means extra-fast trigonometric and matrix operations — as fast as electricity can go.
## What is GLSL?
GLSL stands for openGL Shading Language, the specific standard of shader programs you'll see in the following chapters. There are other types of shaders depending on hardware and operating systems. Here we will work with the openGL specs regulated by the [Khronos Group](https://www.khronos.org/opengl/). Understanding the history of OpenGL can be helpful for understanding most of its strange conventions, so I recommend taking a look at: [openglbook.com/chapter-0-preface-what-is-opengl.html](http://openglbook.com/chapter-0-preface-what-is-opengl.html)
## Why are shaders famously painful?
As the famous Spider-Man line goes, with great power comes great responsibility, and parallel computation follows this rule: the powerful architecture of the GPU comes with its own constraints and restrictions.
In order to run in parallel, every pipe, or thread, has to be independent from every other one. We say the threads are "blind" to what the rest of the threads are doing. This restriction implies that all data must flow in the same direction. So it's impossible to check the result of another thread, modify the input data, or pass the output of one thread into another. Allowing thread-to-thread communication would put the integrity of the data at risk.
Also, the GPU keeps the parallel micro-processors (the pipes) constantly busy; as soon as they are free, they receive new information to process. It's impossible for a thread to know what it was doing the previous moment. It could be painting a button from the operating system's UI, then rendering a portion of sky in a game, then displaying the text of an email. Each thread is not just **blind** but also **memoryless**. Besides the abstraction required to code a general function that changes the result depending on the pixel's position, the blind and memoryless constraints make shaders not very popular among new programmers.
But don't worry! In the following chapters, we will learn step by step how to go from simple to advanced shading computations. If you are reading this with a decent browser, you will enjoy playing with the interactive examples along the way. So don't hold off on the fun any longer, and press **Next >>** to jump into shader land!

@ -3,7 +3,7 @@
In the previous chapter we described shaders as the equivalent of the Gutenberg press for graphics. Why? And more importantly: what's a shader?
![From Letter-by-Letter, Right: William Blades (1891). To Page-by-page, Left: Rolt-Wheeler (1920).](print.png)
If you already have experience making drawings with computers, you know that in that process you draw a circle, then a rectangle, a line, some triangles until you compose the image you want. That process is very similar to writing a letter or a book by hand - it is a set of instructions that do one task after another.

@ -0,0 +1,53 @@
## Hello World
"Hello world!" is usually the first example when learning a new language. It's a simple one-line program that is both a warm welcome and a conveyor of the possibilities programming can bring.
In GPU-land, however, rendering a line of text is too hard a task for a first step. Instead we'll choose a bright welcoming color — let's get excited!
<div class="codeAndCanvas" data="hello_world.frag"></div>
If you are reading this book in a browser, the block of code above is interactive. You can click on it and change any part of the code you want to explore. Thanks to the GPU architecture, shaders compile and update **blazingly fast**, so your changes will appear right in front of your eyes. Try changing the values on line 6 and see what happens.
Although these simple lines of code don't look like a lot, we can infer substantial knowledge from them:
1. Shader language has a single ```main``` function that returns a color at the end. This is similar to C.
2. The final pixel color is assigned to the reserved global variable ```gl_FragColor```.
3. This C-flavored language has built-in **variables** (like ```gl_FragColor```), **functions** and **types**. In this case we've just been introduced to ```vec4```, a four-component floating point vector. Later we will see more types, like ```vec3``` (a three-component float vector) and ```vec2``` (a two-component float vector), together with the popular ```float``` (single precision floating point), ```int``` (integer) and ```bool``` (boolean).
4. If we look closely at the ```vec4``` values, we can infer that the four arguments respond to the RED, GREEN, BLUE and ALPHA channels. We can also see that these values are **normalized**, which means they go from 0.0 to 1.0. Later we will learn how normalized values make it easier to **map** values between variables.
5. Another important C feature we can see in this example is the presence of preprocessor macros. Macros are part of a precompilation step. With them it is possible to ```#define``` global variables and do some basic conditional operations (with ```#ifdef``` and ```#endif```). All macro commands begin with a hash (```#```). The precompilation happens right before compiling: it copies in all the calls to ```#define``` and checks the ```#ifdef``` (is defined) and ```#ifndef``` (is not defined) conditionals. In our "hello world!" example above, on line 2 we check whether ```GL_ES``` is defined, which is mostly the case when the code is compiled on mobile devices or in browsers.
6. ```float``` types are vital in shaders, so the level of **precision** matters. Lower precision means faster rendering, at the cost of quality. You can be picky and specify the precision of each variable that uses floating point. In the first line (```precision mediump float;```) we set all floats to medium precision, but we can also choose to set them to low (```precision lowp float;```) or high (```precision highp float;```).
7. The last, and maybe most important, detail is that the GLSL specs don't guarantee that variables will be automatically cast. What does that mean? Manufacturers have different approaches to accelerating graphics cards, but they are forced to guarantee only a minimum spec, and automatic casting is not part of it. In our "hello world!" example, ```vec4``` expects single precision floating point values, so it should be assigned ```float``` values. If you want your code to be consistent and not spend hours debugging later, get into the habit of putting a point (```.```) in your floats. Code like the following won't always work:
```glsl
void main() {
gl_FragColor = vec4(1,0,0,1); // ERROR
}
```
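With explicit float literals the same assignment compiles fine. A minimal working version of the example (the preprocessor guard mirrors the one described above):

```glsl
#ifdef GL_ES
precision mediump float;
#endif

void main() {
    // note the decimal points: vec4 expects float values
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```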
Now that we've covered the most relevant elements of our "hello world!" example, it's time to click on the code block and test everything we've learned. You will note that on error the program fails to compile, leaving nothing but a lonely white screen. There are some fun things to try, for example:
* Replace the floats with integers and find out whether your graphics card tolerates this behavior.
* Try commenting out line 6 and not assigning any pixel value to the function.
* Try writing a separate function that returns a specific color and using it inside ```main()```. As a hint, here is the code for a function that returns a red color:
```glsl
vec4 red(){
return vec4(1.0,0.0,0.0,1.0);
}
```
* There are many ways of constructing ```vec4``` types; try to discover other ways. The following is one of them:
```glsl
vec4 color = vec4(vec3(1.0,0.0,1.0),1.0);
```
Although this example doesn't look very exciting, it is the most basic one — we are changing every pixel on the canvas to one exact color. In the following chapter we'll see how to change pixel colors using two types of input: space (the pixel's position on the screen) and time (the number of seconds since the page loaded).

@ -0,0 +1,61 @@
## Uniforms
So far we have seen how the GPU manages large numbers of parallel threads, each one responsible for assigning a color to a fraction of the total image. Although each thread is blind to the others, we can send inputs from the CPU to all the threads. Because of the architecture of the graphics card, those inputs must be **uniform** across all threads, and are necessarily set as **read-only**. In other words, every thread receives the same data, and it cannot be changed.
These inputs are called ```uniform```s and come in most of the supported types: ```float```, ```vec2```, ```vec3```, ```vec4```, ```mat2```, ```mat3```, ```mat4```, ```sampler2D``` and ```samplerCube```. Uniforms are defined, with their corresponding type, at the top of the shader, right after setting the default floating point precision.
```glsl
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution; // Canvas size (width,height)
uniform vec2 u_mouse;      // Mouse position in screen pixels
uniform float u_time;      // Time in seconds since load
```
You can picture the uniforms as little bridges between the CPU and the GPU. Their names vary from implementation to implementation, but in this series of examples I always pass: ```u_time``` (the time in seconds since the shader started), ```u_resolution``` (the size of the billboard the shader is being drawn on) and ```u_mouse``` (the position of the mouse inside the billboard, in pixels). I'm following the convention of prefixing uniform names with ```u_``` to make their nature explicit, but you will find all kinds of names out there. For example, [ShaderToy.com](https://www.shadertoy.com/) uses the same uniforms with the following names:
```glsl
uniform vec3 iResolution;   // viewport resolution (in pixels)
uniform vec4 iMouse;        // mouse coords. xy: current position, zw: click position
uniform float iGlobalTime;  // shader playback time (in seconds)
```
Enough talking — let's see the uniforms in action! In the following code we use ```u_time``` together with a sine function to animate the amount of red on the billboard.
<div class="codeAndCanvas" data="time.frag"></div>
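A minimal sketch of what the shader in that example does (the actual time.frag file may differ slightly):

```glsl
#ifdef GL_ES
precision mediump float;
#endif

uniform float u_time;

void main() {
    // abs(sin()) keeps the red channel oscillating between 0.0 and 1.0
    gl_FragColor = vec4(abs(sin(u_time)), 0.0, 0.0, 1.0);
}
```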
GLSL has more surprises: the GPU has hardware-accelerated angle, trigonometric and exponential functions. Some of those functions are: [```sin()```](../glossary/?search=sin), [```cos()```](../glossary/?search=cos), [```tan()```](../glossary/?search=tan), [```asin()```](../glossary/?search=asin), [```acos()```](../glossary/?search=acos), [```atan()```](../glossary/?search=atan), [```pow()```](../glossary/?search=pow), [```exp()```](../glossary/?search=exp), [```log()```](../glossary/?search=log), [```sqrt()```](../glossary/?search=sqrt), [```abs()```](../glossary/?search=abs), [```sign()```](../glossary/?search=sign), [```floor()```](../glossary/?search=floor), [```ceil()```](../glossary/?search=ceil), [```fract()```](../glossary/?search=fract), [```mod()```](../glossary/?search=mod), [```min()```](../glossary/?search=min), [```max()```](../glossary/?search=max) and [```clamp()```](../glossary/?search=clamp).
Now it is your turn:
* Slow down the frequency until the color change becomes almost imperceptible.
* Speed it up until you see a single color without flickering.
* Play with the three channels (RGB) at different frequencies to get interesting patterns and behaviors.
## gl_FragCoord
In the same way GLSL gives us a default output, ```vec4 gl_FragColor```, it also gives us a default input, ```vec4 gl_FragCoord```, which holds the screen coordinates of the **pixel**, or **screen fragment**, that the active thread is working on. With ```gl_FragCoord``` we know where a thread is working inside the billboard. In this case we don't call it ```uniform```, because it is different from thread to thread; instead, ```gl_FragCoord``` is called a **varying**.
<div class="codeAndCanvas" data="space.frag"></div>
In the code above we **normalize** the coordinate of the fragment by dividing ```gl_FragCoord.xy``` by ```u_resolution```. By doing this, the values fall between ```0.0``` and ```1.0```, which makes it easy to map the X and Y values to the RED and GREEN channels.
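The normalization step boils down to a few lines. A minimal sketch (the actual space.frag file may differ slightly):

```glsl
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;

void main() {
    // dividing pixel coordinates by the canvas size maps st into 0.0-1.0
    vec2 st = gl_FragCoord.xy / u_resolution;
    gl_FragColor = vec4(st.x, st.y, 0.0, 1.0);
}
```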
In shader-land we don't have many resources for debugging besides assigning strong colors to variables and trying to make sense of them. You will discover that coding in GLSL is sometimes like putting a ship inside a bottle: equally hard, beautiful and gratifying.
![](08.png)
Now it's time to test our understanding of this code.
* Can you tell where the coordinate ```(0.0,0.0)``` is on the canvas?
* What about ```(1.0,0.0)```, ```(0.0,1.0)```, ```(0.5,0.5)``` and ```(1.0,1.0)```?
* Can you figure out how to use ```u_mouse```, knowing that its values are in pixels and NOT normalized? Can you use it to move colors around?
* Can you come up with interesting ways of changing the color pattern using ```u_time``` and ```u_mouse```?
After doing these exercises you might wonder where else you can put your new shader powers to use. In the following chapter we will see how to combine your shaders with three.js, Processing and openFrameworks.

@ -0,0 +1,156 @@
## Running your shader
Now you are probably excited to try shaders on the platforms you feel comfortable with. The following examples show how to set up a shader in some popular frameworks. (In this [github repository](https://github.com/patriciogonzalezvivo/thebookofshaders/tree/master/04) you will find the source code for all three frameworks in this chapter.)
**Note 1**: If you don't want to use any of these frameworks but still want to work with shaders outside a browser, you can download [glslViewer](https://github.com/patriciogonzalezvivo/glslViewer). This MacOS + Raspberry Pi application runs directly from the terminal and was made especially for the examples in this book.
**Note 2**: If you want to display shaders with WebGL and don't care about other frameworks, you can use [glslCanvas](https://github.com/patriciogonzalezvivo/glslCanvas). This web tool was originally designed for this book, but it turned out to be so useful that I often use it in other projects.
### **Three.js**
The humble and brilliant Ricardo Cabello (aka [MrDoob](https://twitter.com/mrdoob)), together with many [contributors](https://github.com/mrdoob/three.js/graphs/contributors), has built what is probably the best-known WebGL framework, [Three.js](http://threejs.org/). You will find countless examples, tutorials and books that teach you how to use this JavaScript library to make cool 3D graphics.
Below is everything you need to get started with shaders in three.js. Pay attention to the script with ```id="fragmentShader"``` — that's where you can copy the shaders you find in this book.
Here is an example of the HTML and JS required:
```html
<body>
<div id="container"></div>
<script src="js/three.min.js"></script>
<script id="vertexShader" type="x-shader/x-vertex">
void main() {
gl_Position = vec4( position, 1.0 );
}
</script>
<script id="fragmentShader" type="x-shader/x-fragment">
uniform vec2 u_resolution;
uniform float u_time;
void main() {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
gl_FragColor=vec4(st.x,st.y,0.0,1.0);
}
</script>
<script>
var container;
var camera, scene, renderer;
var uniforms;
init();
animate();
function init() {
container = document.getElementById( 'container' );
camera = new THREE.Camera();
camera.position.z = 1;
scene = new THREE.Scene();
var geometry = new THREE.PlaneBufferGeometry( 2, 2 );
uniforms = {
u_time: { type: "f", value: 1.0 },
u_resolution: { type: "v2", value: new THREE.Vector2() }
};
var material = new THREE.ShaderMaterial( {
uniforms: uniforms,
vertexShader: document.getElementById( 'vertexShader' ).textContent,
fragmentShader: document.getElementById( 'fragmentShader' ).textContent
} );
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
renderer = new THREE.WebGLRenderer();
renderer.setPixelRatio( window.devicePixelRatio );
container.appendChild( renderer.domElement );
onWindowResize();
window.addEventListener( 'resize', onWindowResize, false );
}
function onWindowResize( event ) {
renderer.setSize( window.innerWidth, window.innerHeight );
uniforms.u_resolution.value.x = renderer.domElement.width;
uniforms.u_resolution.value.y = renderer.domElement.height;
}
function animate() {
requestAnimationFrame( animate );
render();
}
function render() {
uniforms.u_time.value += 0.05;
renderer.render( scene, camera );
}
</script>
</body>
```
### **Processing**
Created in 2001 by [Ben Fry](http://benfry.com/) and [Casey Reas](http://reas.com/), [Processing](https://processing.org/) is an extraordinarily simple and powerful environment in which to take your first steps with code (at least it was for me). [Andres Colubri](https://codeanticode.wordpress.com/) has made important updates to the openGL and video support in Processing, making it easier than ever to play with GLSL shaders in this friendly environment. Processing searches for a shader named ```"shader.frag"``` in the ```data``` folder of the sketch. Be sure to copy the examples you find here into that folder and rename the file accordingly.
```processing
PShader shader;
void setup() {
size(640, 360, P2D);
noStroke();
shader = loadShader("shader.frag");
}
void draw() {
shader.set("u_resolution", float(width), float(height));
shader.set("u_mouse", float(mouseX), float(mouseY));
shader.set("u_time", millis() / 1000.0);
shader(shader);
rect(0,0,width,height);
}
```
In versions earlier than 2.1, you need to add the following line at the beginning of your shader file: ```#define PROCESSING_COLOR_SHADER```. It should then look like this:
```glsl
#ifdef GL_ES
precision mediump float;
#endif
#define PROCESSING_COLOR_SHADER
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
void main() {
vec2 st = gl_FragCoord.st/u_resolution;
gl_FragColor = vec4(st.x,st.y,0.0,1.0);
}
```
For more information about shaders in Processing check out this [tutorial](https://processing.org/tutorials/pshader/).
### **openFrameworks**
Everybody has their comfort zone, and mine is the [openFrameworks community](http://openframeworks.cc/). This C++ framework wraps OpenGL and other open source C++ libraries. In many ways it is very similar to Processing, but with the obvious complications of dealing with C++ compilers. Like Processing, openFrameworks searches for your shader files in the data folder, so don't forget to copy the ```.frag``` files you want to use there and update the name when you load them.
```cpp
// Note: in a real project, make the shader a member of ofApp and call
// shader.load() once in setup() instead of reloading it every frame.
void ofApp::draw(){
    ofShader shader;
    shader.load("","shader.frag");
    shader.begin();
    shader.setUniform1f("u_time", ofGetElapsedTimef());
    shader.setUniform2f("u_resolution", ofGetWidth(), ofGetHeight());
    ofRect(0,0,ofGetWidth(), ofGetHeight());
    shader.end();
}
```
For more information about shaders in openFrameworks check out this [excellent tutorial](http://openframeworks.cc/tutorials/graphics/shaders.html) by [Joshua Noble](http://thefactoryfactory.com/).

@ -0,0 +1,148 @@
## Colors
![Paul Klee - Color Chart (1931)](klee.jpg)
We haven't had a chance to talk much about GLSL vector types yet. Before going further, it's important to learn more about these variables, and the subject of color is a great way to do so.
If you are familiar with object-oriented programming paradigms, you've probably noticed that we have been accessing the data inside the vectors like a regular C-like ```struct```.
```glsl
vec3 red = vec3(1.0,0.0,0.0);
red.x = 1.0;
red.y = 0.0;
red.z = 0.0;
```
Defining colors with ```x```, ```y``` and ```z``` can be confusing and misleading, so there are other ways to access this same data under different names: the values of ```.x```, ```.y``` and ```.z``` can also be called ```.r```, ```.g``` and ```.b```, or ```.s```, ```.t``` and ```.p```. (```.s```, ```.t``` and ```.p``` are usually used for the spatial coordinates of a texture, which we'll see in a later chapter.) You can also access the data in a vector using the index positions ```[0]```, ```[1]``` and ```[2]```.
The following lines show all the ways to access the same data:
```glsl
vec4 vector;
vector[0] = vector.r = vector.x = vector.s;
vector[1] = vector.g = vector.y = vector.t;
vector[2] = vector.b = vector.z = vector.p;
vector[3] = vector.a = vector.w = vector.q;
```
Being able to refer to the components of a vector in different ways is just nomenclature designed to help you write clear code. This flexibility embedded in shading language is a door for you to start thinking interchangeably about color and space coordinates.
Another great feature of vector types in GLSL is that their properties can be combined in any order you want, which makes it easy to cast and mix values. This ability is called **swizzle**.
```glsl
vec3 yellow, magenta, green;
// Making Yellow
yellow.rg = vec2(1.0); // Assigning 1. to red and green channels
yellow[2] = 0.0; // Assigning 0. to blue channel
// Making Magenta
magenta = yellow.rbg; // Assign the channels with green and blue swapped
// Making Green
green.rgb = yellow.bgb; // Assign the blue channel of Yellow (0) to red and blue channels
```
#### For your toolbox
You might not be used to picking colors with numbers — it can be very counterintuitive. Lucky for you, there are plenty of smart programs that make this job easy. Find one that suits your needs and practice delivering colors in ```vec3``` or ```vec4``` format. For example, here are the templates I use in [Spectrum](http://www.eigenlogik.com/spectrum/mac):
```
vec3({{rn}},{{gn}},{{bn}})
vec4({{rn}},{{gn}},{{bn}},1.0)
```
### Mixing color
Now that you know how colors are defined, it's time to integrate this with our previous knowledge. GLSL has a very useful function, [```mix()```](../glossary/?search=mix), that lets you mix two values in percentages. Can you guess the range of the percentage? Yes, values between 0.0 and 1.0! Perfect — after all the practice you've had shaping values in that range, it's time to put it to use!
![](mix-f.jpg)
Check out line 18 in the following code: there we use the absolute value of a sine wave over time to mix ```colorA``` and ```colorB```.
<div class="codeAndCanvas" data="mix.frag"></div>
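The core of that example can be sketched as follows (the colors here are placeholders, not necessarily the exact values in mix.frag):

```glsl
#ifdef GL_ES
precision mediump float;
#endif

uniform float u_time;

vec3 colorA = vec3(0.149, 0.141, 0.912);
vec3 colorB = vec3(1.000, 0.833, 0.224);

void main() {
    float pct = abs(sin(u_time));          // oscillates between 0.0 and 1.0
    vec3 color = mix(colorA, colorB, pct); // blend the two colors by pct
    gl_FragColor = vec4(color, 1.0);
}
```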
Show off what you've just learned by:
* Making an expressive transition between colors. Think of a particular emotion. What color seems most representative of it? How does it appear? How does it fade away? Think of another emotion and its matching color, then change the beginning and ending colors in the code above accordingly. Robert Penner developed a series of popular shaping functions for computer animation known as easing functions. You can study them for research and inspiration, but the best results will come from writing your own transitions.
### Playing with gradients
The [```mix()```](../glossary/?search=mix) function has more to offer. Instead of a single ```float```, we can pass two matching variable types — in our case ```vec3```s. That gives us the ability to mix the individual color channels ```.r```, ```.g``` and ```.b``` independently.
![](mix-vec.jpg)
Try the following example. As in the previous one, we use a line to visualize the transition according to the normalized x coordinate. Right now all the channels follow the same linear transition.
Now uncomment line 25 and watch what happens. Then try uncommenting lines 26 and 27. Remember that the lines visualize the amount of ```colorA``` and ```colorB``` being mixed per channel.
<div class="codeAndCanvas" data="gradient.frag"></div>
You probably recognize the shaping functions we use on lines 25 to 27. Play with them! It's time to combine the skills from the previous chapters and explore some new gradients. Try the following challenges:
![William Turner - The Fighting Temeraire (1838)](turner.jpg)
* Compose a gradient that resembles a William Turner sunset.
* Animate a transition between a sunrise and a sunset with ```u_time```.
* Can you make a rainbow using what we've learned?
* Use the ```step()``` function to make a colorful flag.
### HSB
We can't talk about color without talking about color spaces. As you may know, there are different ways to describe and define color beyond its red, green and blue components.
[HSB](http://en.wikipedia.org/wiki/HSL_and_HSV) stands for Hue, Saturation and Brightness (or Value), and it is a more intuitive and useful way to organize colors. Take a moment to read the ```rgb2hsv()``` and ```hsv2rgb()``` functions in the following code.
By mapping the position on the x axis to the Hue and the position on the y axis to the Brightness, we obtain a nice spectrum of visible colors. This spatial distribution of color is very handy; it's more intuitive to pick a color with HSB than with RGB.
<div class="codeAndCanvas" data="hsb.frag"></div>
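Assuming a conversion function like the ```hsv2rgb()``` mentioned above, the mapping reduces to a couple of lines (a sketch, not the full example file):

```glsl
// x axis drives hue, y axis drives brightness; saturation stays at 1.0
vec2 st = gl_FragCoord.xy / u_resolution;
vec3 color = hsv2rgb(vec3(st.x, 1.0, st.y));
```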
### HSB in polar coordinates
HSB was originally designed to be represented in polar coordinates (defined by a radius and an angle) rather than in cartesian coordinates (based on x and y). To map our HSB function to polar coordinates, we need to obtain the angle and the distance from the center of the billboard to the pixel coordinate. For that we use the [```length()```](../glossary/?search=length) function and [```atan(y,x)```](../glossary/?search=atan) (the GLSL version of the commonly used ```atan2(y,x)```).
When using vector and trigonometric functions, ```vec2```, ```vec3``` and ```vec4``` are treated as vectors even when they represent colors. We will start treating colors and vectors similarly — in fact, you will come to find this conceptual flexibility quite empowering.
**Note:** Besides ```length()```, there are more geometric functions you may want to know about, such as [```distance()```](../glossary/?search=distance), [```dot()```](../glossary/?search=dot), [```cross()```](../glossary/?search=cross), [```normalize()```](../glossary/?search=normalize), [```faceforward()```](../glossary/?search=fraceforward), [```reflect()```](../glossary/?search=reflect) and [```refract()```](../glossary/?search=refract). GLSL also has vector relational functions: [```lessThan()```](../glossary/?search=lessThan), [```lessThanEqual()```](../glossary/?search=lessThanEqual), [```greaterThan()```](../glossary/?search=greaterThan), [```greaterThanEqual()```](../glossary/?search=greaterThanEqual), [```equal()```](../glossary/?search=equal) and [```notEqual()```](../glossary/?search=notEqual).
Once we obtain the angle and length, we need to "normalize" their values to the range of 0.0 to 1.0. On line 27, [```atan(y,x)```](../glossary/?search=atan) returns an angle in radians between -PI and PI (-3.14 to 3.14), so we divide it by ```TWO_PI``` (defined at the top of the code) to get values between -0.5 and 0.5, which by simple addition we shift into the desired range of 0.0 to 1.0. The radius returns a maximum of 0.5 (because we are computing the distance from the center of the viewport, whose coordinates are already normalized to 0.0–1.0), so we double it to cover the full range of 0.0 to 1.0.
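Those two mappings can be sketched in a few lines (assuming ```st``` is the normalized pixel coordinate and ```TWO_PI``` is defined as in the example):

```glsl
vec2 toCenter = vec2(0.5) - st;              // vector from the pixel to the center
float angle  = atan(toCenter.y, toCenter.x); // -PI to PI
float radius = length(toCenter) * 2.0;       // 0.0 at the center, 1.0 at the edges
float hue    = (angle / TWO_PI) + 0.5;       // remap -0.5..0.5 to 0.0..1.0
```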
As you can see, our game here is all about transforming and mapping values into the 0.0 to 1.0 range that we like.
<div class="codeAndCanvas" data="hsb-colorwheel.frag"></div>
Try the following challenging exercises:
* Modify the polar-mapped example to make a spinning color wheel, like the "busy" mouse cursor.
* Use a shaping function together with the HSB-to-RGB conversion functions to expand particular hue values and compress others.
![William Home Lizars - Red, blue and yellow spectra, with the solar spectrum (1834)](spectrums.jpg)
* If you look closely at the color wheel used on color pickers (see the image below), it uses a different spectrum, based on the RYB color space. For example, the opposite of red should be green, but in our example it is cyan. Can you find a way to fix it so it looks exactly like the image below? [Hint: this is a great opportunity to use shaping functions.]
![](colorwheel.png)
#### Notes about functions and arguments
Before jumping to the next chapter, let's stop and rewind. Go back and review the functions in the previous examples. You will notice the qualifier ```in``` before the type of the arguments. This [*qualifier*](http://www.shaderific.com/glsl-qualifiers/#inputqualifier) specifies that the variable is read-only. In future examples we will see that it is also possible to define arguments as ```out``` or ```inout```. The last one, ```inout```, is conceptually similar to passing an argument by reference, giving us the possibility of modifying the variable that is passed in.
```glsl
int newFunction(in vec4 aVec4, // read-only
out vec3 aVec3, // write-only
inout int aInt); // read-write
```
You may not believe yet that with all these elements we can draw cool stuff. In the next chapter we will learn how to combine all these tricks to create geometric forms by *blending* space. Yes... *blending* space.

@ -0,0 +1,2 @@
- [Distance Transforms of Sampled Functions](http://cs.brown.edu/~pff/papers/dt-final.pdf)

@ -0,0 +1,238 @@
## Shapes
![Alice Hubbard, Providence, United States, ca. 1892. Photo: Zindman/Freemont.](froebel.jpg)
At last! We have been building skills for this moment! You have learned most of the GLSL foundations, its types and functions, and you have practiced your shaping equations over and over. Now it is time to put it all together. You are up for this challenge! In this chapter you'll learn how to draw simple shapes in a parallel procedural way.
### Rectangle
Imagine we have grid paper like we used in math class, and the homework is to draw a square. The paper size is 10x10 and the square is supposed to be 8x8. What would you do?
![](grid_paper.jpg)
You'd paint everything except the first and last rows and the first and last columns, right?
How does this relate to shaders? Each little square of our grid paper is a thread (a pixel), and each one knows its position, like the coordinates of a chess board. In previous chapters we mapped x and y to the red and green color channels, and we learned how to use the narrow two-dimensional territory between 0.0 and 1.0. How can we use this to draw a square centered in the middle of the billboard?
Let's start by sketching pseudocode that uses ```if``` statements over the spatial field. The principle behind it is remarkably similar to how we think about the grid paper.
```glsl
if ( (X GREATER THAN 1) AND (Y GREATER THAN 1) )
paint white
else
paint black
```
Now that we have a better idea of how this will work, let's replace the ```if``` statement with [```step()```](../glossary/?search=step), and instead of a 10x10 grid let's use normalized values between 0.0 and 1.0:
```glsl
uniform vec2 u_resolution;
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
vec3 color = vec3(0.0);
// Each result will return 1.0 (white) or 0.0 (black).
float left = step(0.1,st.x); // Similar to ( X greater than 0.1 )
float bottom = step(0.1,st.y); // Similar to ( Y greater than 0.1 )
// The multiplication of left*bottom will be similar to the logical AND.
color = vec3( left * bottom );
gl_FragColor = vec4(color,1.0);
}
```
The ```step()``` function turns every pixel whose value is below 0.1 to black (```vec3(0.0)```) and the rest to white (```vec3(1.0)```). The multiplication of ```left``` and ```bottom``` works as a logical AND — both have to be 1.0 for the result to be 1.0 (white). This draws two black lines, one on the bottom and one on the left side of the canvas.
![](rect-01.jpg)
In the previous code we repeated the structure for each pixel coordinate (left and bottom). We can condense this by passing two values to ```step()``` at once instead of one, like this:
```glsl
vec2 borders = step(vec2(0.1),st);
float pct = borders.x * borders.y;
```
So far we've only drawn two borders of our rectangle (the bottom and left ones). Take a look at the following example:
<div class="codeAndCanvas" data="rect-making.frag"></div>
Uncomment lines 21-22 to see how we invert the ```st``` coordinates and repeat the same ```step()``` function. That way the ```vec2(0.0,0.0)``` will be translated to the top right corner. This is the digital equivalent of flipping the page and repeating the procedure.
![](rect-02.jpg)
Take note that on lines 18 and 22 all the sides are being multiplied together. This is equivalent to writing:
```glsl
vec2 bl = step(vec2(0.1),st); // bottom-left
vec2 tr = step(vec2(0.1),1.0-st); // top-right
color = vec3(bl.x * bl.y * tr.x * tr.y);
```
Interesting, right? This technique is all about combining the ```step()``` function, logical operations via multiplication, and flipped coordinates.
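Putting the two ```step()``` calls together, one possible reusable function could look like this (a sketch — the exercises below ask you to write your own version):

```glsl
// returns 1.0 inside a centered rectangle of the given size, 0.0 outside
float rect(vec2 st, vec2 size) {
    vec2 edge = (1.0 - size) * 0.5;   // margin left on each side
    vec2 bl = step(edge, st);         // bottom-left borders
    vec2 tr = step(edge, 1.0 - st);   // top-right borders
    return bl.x * bl.y * tr.x * tr.y; // logical AND of all four borders
}
```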
Before moving on, try the following exercises:
* Change the size and proportions of the rectangle.
* Experiment with the same code using ```smoothstep()``` instead of ```step()```. Note that by changing the values, you can go from blurred edges to elegant smooth borders.
* Do another implementation that uses ```floor()```.
* Choose the implementation you like the most and make a reusable function out of it. Make it flexible and efficient.
* Make a function that just draws the outline of a rectangle.
* How do you think you can move and place different rectangles in the same billboard? If you figure it out, show off your skills by making a composition of rectangles and colors that resembles a [Piet Mondrian](http://en.wikipedia.org/wiki/Piet_Mondrian) painting.
![Piet Mondrian - Tableau (1921)](mondrian.jpg)
### Circles
Drawing squares and rectangles on grid paper in cartesian coordinates is easy, but circles require another approach, especially since we need a "per-pixel" algorithm. One solution is to *remap* the spatial coordinates so that we can use a [```step()```](../glossary/?search=step) function to draw circles.
How? Let's go back to math class and the grid paper: we opened our compass to the radius of a circle, pressed the needle into the circle's center, and traced the circle's edge with a spin.
![](compass.jpg)
Translating this to shaders, where every square of the grid paper is a pixel thread, implies *asking* every pixel whether it is inside the area of the circle. We do this by computing the distance from the pixel to the center of the circle.
![](circle.jpg)
There are several ways to calculate that distance. The easiest one uses the [```distance()```](../glossary/?search=distance) function, which internally computes the [```length()```](../glossary/?search=length) of the difference between two points (in our case the pixel coordinate and the center of the canvas). The ```length()``` function is nothing but a shortcut of the [hypotenuse equation](http://en.wikipedia.org/wiki/Hypotenuse) that uses square root ([```sqrt()```](../glossary/?search=sqrt)) internally.
![](hypotenuse.png)
You can use [```distance()```](../glossary/?search=distance), [```length()```](../glossary/?search=length) or [```sqrt()```](../glossary/?search=sqrt) to compute the distance to the center of the billboard. The following code contains all three functions and, unsurprisingly, each one returns exactly the same result.
* Comment and uncomment lines to try the different ways to get the same result.
<div class="codeAndCanvas" data="circle-making.frag"></div>
In the previous example we mapped the distance to the center of the billboard to the brightness of the pixel: the closer a pixel is to the center, the darker it is. Note that the mapped values don't get too high, because the maximum distance from the center (```vec2(0.5, 0.5)```) barely goes over 0.5. Contemplate this map and ask yourself:
* What can you infer from it?
* How can we use this to draw a circle?
* Are there other ways to achieve this circular gradient inside the canvas?
### Distance field
We can also think of the example above as an altitude map — a contour map — where darker means higher. The gradient shows us something similar to the contour pattern of a cone seen from above. The horizontal distance to the rim of the cone is a constant, 0.5, and this distance is equal in every direction. By choosing where to slice the cone, you will get a bigger or smaller circular surface.
![](distance-field.jpg)
What we are really doing is reinterpreting shapes in terms of distance to something. This technique is known as a "distance field" and it is used in many ways, from font outlines to 3D graphics.
Try your hand at the following:
* Use [```step()```](../glossary/?search=step) to turn everything above 0.5 to white and everything below it to black (0.0).
* Invert the foreground and background colors.
* Using [```smoothstep()```](../glossary/?search=smoothstep), experiment with different values to get a circle with nice smooth borders.
* Once you are happy with an implementation, make a function of it that you can reuse in the future.
* Give the circle some festive colors! (Christmas red? Great idea.)
* Add some animation. Twinkling like a star? Or a beating heart? (You can take some inspiration from the previous chapter.)
* What about moving it? Can you move it and place several circles in the same billboard?
* What happens if you combine distance fields using different functions and operations?
```glsl
pct = distance(st,vec2(0.4)) + distance(st,vec2(0.6));
pct = distance(st,vec2(0.4)) * distance(st,vec2(0.6));
pct = min(distance(st,vec2(0.4)),distance(st,vec2(0.6)));
pct = max(distance(st,vec2(0.4)),distance(st,vec2(0.6)));
pct = pow(distance(st,vec2(0.4)),distance(st,vec2(0.6)));
```
* Make three compositions using this technique. If they are animated, even better!
#### For your toolbox
In terms of computational power, the [```sqrt()```](../glossary/?search=sqrt) function — and all the functions that depend on it — can be expensive. The [```dot()```](../glossary/?search=dot) product is a more efficient way to calculate the distance field of a circle.
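A ```dot()```-based circle function can be sketched as follows — essentially the same approach the interactive example below uses:

```glsl
// dot(d,d) is the squared distance: no sqrt() needed
float circle(vec2 st, float radius) {
    vec2 d = st - vec2(0.5);
    return 1.0 - smoothstep(radius - (radius * 0.01),
                            radius + (radius * 0.01),
                            dot(d, d) * 4.0);
}
```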
<div class="codeAndCanvas" data="circle.frag"></div>
### Useful properties of a distance field
![Zen garden](zen-garden.jpg)
Distance fields can be used to draw almost anything. Obviously, the more complex a shape is, the more complicated its equation will be. But once you have the formula for a particular shape, it becomes very easy to combine it with other shapes or apply effects to it, like smooth edges. That's why distance fields are popular in font rendering, as in [Mapbox GL Labels](https://www.mapbox.com/blog/text-signed-distance-fields/), [Matt DesLauriers](https://twitter.com/mattdesl)' [Material Design Fonts](http://mattdesl.svbtle.com/material-design-on-the-gpu) and [as described in Chapter 7 of iPhone 3D Programming, O'Reilly](http://chimera.labs.oreilly.com/books/1234000001814/ch07.html#ch07_id36000921).
Take a look at the following code:
<div class="codeAndCanvas" data="rect-df.frag"></div>
We start by moving the coordinate system to the center and remapping it to the range of -1 to 1. Also, on *line 24* we visualize the values of the distance field using a [```fract()```](../glossary/?search=fract) function, which makes it easy to see the pattern they create. The distance field pattern repeats over and over, like the rings in a Zen garden.
Now look at the distance field formula on *line 19*. There we calculate the distance to the position ```(.3,.3)```, or ```vec2(.3)```, in all four quadrants (that's what [```abs()```](../glossary/?search=abs) is doing there).
If you uncomment *line 20*, you will see that we combine the distances to these four points toward zero using [```min()```](../glossary/?search=min), which produces an interesting new pattern.
Now try uncommenting *line 21*: we do the same, but with the [```max()```](../glossary/?search=max) function. The result is a rectangle with rounded corners. Note how the rings of the distance field get smoother the further away they get from the center.
Finish by uncommenting *lines 27 to 29* one by one, thinking about the different uses of a distance field.
### Polar shapes
![Robert Mangold - Untitled (2008)](mangold.jpg)
In the chapter about color we mapped the cartesian coordinates of each pixel to polar coordinates — a *radius* and an *angle* — with the following formula:
```glsl
vec2 pos = vec2(0.5)-st;
float r = length(pos)*2.0;
float a = atan(pos.y,pos.x);
```
We used part of this formula at the beginning of the chapter to draw a circle, by computing the distance to the center with [```length()```](../glossary/?search=length). Now we can use polar coordinates to draw shapes.
The polar approach is somewhat restrictive, but very simple to use.
Below you will see the same shaping functions we used in cartesian coordinates applied in a polar-coordinate shader (between *lines 21 and 25*). Uncomment these functions one by one to see the correspondence between one coordinate system and the other.
<div class="simpleFunction" data="y = cos(x*3.);
//y = abs(cos(x*3.));
//y = abs(cos(x*2.5))*0.5+0.3;
//y = abs(cos(x*12.)*sin(x*3.))*.8+.1;
//y = smoothstep(-.5,1., cos(x*10.))*0.2+0.5;"></div>
<div class="codeAndCanvas" data="polar.frag"></div>
Try to:
* Animate these shapes.
* Combine different shaping functions to *carve* the shapes and make flowers, snowflakes and gears.
* Use the ```plot()``` function from the *Shaping Functions* chapter to draw just the contour.
### Combining powers
Now that we know how to modulate the radius of a circle according to its angle with the [```atan()```](../glossary/?search=atan) function to draw different shapes, we can combine ```atan()``` with distance fields and apply all the tricks and effects that are possible with them.
Check out the following example from [Andrew Baldwin](https://twitter.com/baldand). The trick here is constructing a distance field in polar coordinates from the number of edges of a polygon.
<div class="codeAndCanvas" data="shapes.frag"></div>
* Using this example, make a function that takes a position and the number of vertices of a desired shape, and returns a distance field value.
* Mix distance fields together using [```min()```](../glossary/?search=min) and [```max()```](../glossary/?search=max).
* Choose a geometric logo and replicate it with distance fields.
Congratulations! You have made it through the rough part! Take a break and let these concepts settle. Drawing simple shapes is easy in Processing, but not here: in shader-land shapes are drawn in a twisted way, and adapting to this new paradigm of coding can be exhausting.
Now that you know how to draw shapes, I'm sure new ideas will pop into your head. In the following chapter you will learn how to move, rotate and scale shapes. This will allow you to make compositions!

@ -10,10 +10,10 @@ uniform vec2 u_mouse;
uniform float u_time;
float circle(in vec2 _st, in float _radius){
vec2 dist = _st-vec2(0.5);
return 1.-smoothstep(_radius-(_radius*0.01),
_radius+(_radius*0.01),
dot(dist,dist)*4.0);
}
void main(){

@ -0,0 +1,66 @@
// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
#ifdef GL_ES
precision mediump float;
#endif
#define PI 3.14159265359
#define TWO_PI 6.28318530718
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
// Antialiazed Step function
// from http://webstaff.itn.liu.se/~stegu/webglshadertutorial/shadertutorial.html
float aastep(float threshold, float value) {
#ifdef GL_OES_standard_derivatives
float afwidth = 0.7 * length(vec2(dFdx(value), dFdy(value)));
return smoothstep(threshold-afwidth, threshold+afwidth, value);
#else
return step(threshold, value);
#endif
}
// get distance field of a polygon in the center
// where N is the number of sides of it
// ================================
float shapeDF (vec2 st, int N) {
st = st *2.-1.;
float a = atan(st.x,st.y)+PI;
float r = TWO_PI/float(N);
return cos(floor(.5+a/r)*r-a)*length(st);
}
// draw a polygon in the center
// where N is the number of sides of it
// ================================
float shape (vec2 st, int N, float width) {
return 1.0-aastep(width,shapeDF(st,N));
}
// draw the border of a polygon in the center
// where N is the number of sides of it
// ================================
float shapeBorder (vec2 st, int N, float size, float width) {
return shape(st,N,size)-shape(st,N,size-width);
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 color = vec3(0.0);
color.r += shapeBorder(st, 3, .1, .06);
vec2 offset = vec2(.0,-.1);
color += shapeBorder(st+offset+vec2(.045,.125), 3, .05, .03);
color += shapeBorder(st+offset+vec2(-.045,.125), 3, .05, .03);
color += shapeBorder(st+offset+vec2(0.,0.05), 3, .05, .03);
gl_FragColor = vec4(color,1.0);
}

@ -0,0 +1,39 @@
// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
#define PI 3.14159265359
#define TWO_PI 6.28318530718
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
float shape(vec2 st, float N){
st = st *2.-1.;
float a = atan(st.x,st.y)+PI;
float r = TWO_PI/floor(N);
return cos(floor(.5+a/r)*r-a)*length(st);
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 color = vec3(0.0);
float sides = u_time*.5;
float minSides = 3.;
float maxSides = 6.;
float d = mix(shape(st,minSides+mod(sides,maxSides)),
shape(st,minSides+mod(sides+1.,maxSides)),
pow(fract(sides),20.));
// Size
d = step(.4,d);
gl_FragColor = vec4(vec3(1.0-d),1.0);
}

@ -0,0 +1,105 @@
## 2D Matrices
<canvas id="custom" class="canvas" data-fragment-url="matrix.frag" width="700px" height="200px"></canvas>
### Translate
In the previous chapter we learned how to make some shapes — and the trick to moving them is to move the coordinate system itself. We can achieve that by simply adding a vector to the ```st``` variable that contains the location of every fragment. This causes the whole space coordinate system to move.
![](translate.jpg)
This is easier to show than to explain, so see for yourself:
* Uncomment line 35 of the following code to see how the space coordinate gets translated.
<div class="codeAndCanvas" data="cross-translate.frag"></div>
Now try the following exercise:
* Using ```u_time``` together with shaping functions, move the cross around in an interesting way. Find a quality of motion you are interested in and make the cross move in the same way. Recording something from the "real world" could be useful — it could be the motion of waves, a pendulum, a bouncing ball, a car accelerating, or a bicycle braking.
### Rotations
To rotate objects we also need to move the entire space system. For that we will use a [matrix](http://en.wikipedia.org/wiki/Matrix_%28mathematics%29). A matrix is an organized set of numbers defined in rows and columns. Vectors are multiplied by matrices following a precise set of rules, in order to modify the values of the vector in a particular way.
[![Wikipedia entry for Matrix (mathematics) ](matrixes.png)](https://en.wikipedia.org/wiki/Matrix)
GLSL has native support for two-, three- and four-dimensional square matrices: [```mat2```](../glossary/?search=mat2) (2x2), [```mat3```](../glossary/?search=mat3) (3x3) and [```mat4```](../glossary/?search=mat4) (4x4). GLSL also supports matrix multiplication (```*```) and a component-wise matrix function ([```matrixCompMult()```](../glossary/?search=matrixCompMult)).
Based on how matrices behave, it's possible to construct matrices that produce specific effects. For example, we can use a matrix to translate a vector:
![](3dtransmat.png)
More interestingly, we can use a matrix to rotate the coordinate system:
![](rotmat.png)
Take a look at the following code for a function that constructs a 2D rotation matrix. The function follows the [formula](http://en.wikipedia.org/wiki/Rotation_matrix) above to rotate 2D vectors around the ```vec2(0.0)``` point.
```glsl
mat2 rotate2d(float _angle){
return mat2(cos(_angle),-sin(_angle),
sin(_angle),cos(_angle));
}
```
Given the way we've been drawing our shapes, that is not exactly what we want. Our cross is drawn at the center of the canvas, corresponding to the position ```vec2(0.5)```. So before rotating the space, we need to move the shape to the center, to the ```vec2(0.0)``` coordinate, then rotate the space, and finally move it back to its original place.
![](rotate.jpg)
就像下面的代码:
<div class="codeAndCanvas" data="cross-rotate.frag"></div>
试试下面的练习:
* 取消第45行的代码看看会发生什么。
* 在37行和39行将旋转之前的平移注释掉观察结果。
* 用旋转改进在平移练习中模拟的动画。
### 缩放
我们看到了如何用矩阵平移和旋转物体。或者更准确地说,如何通过变换坐标系统来移动和旋转物体。如果你用过3D建模软件,或者 Processing 中的 pushMatrix() 和 popMatrix() 函数,你会知道矩阵也可以用来缩放物体的大小。
![](scale.png)
根据上面的公式我们知道如何构造一个2D缩放矩阵
```glsl
mat2 scale(vec2 _scale){
return mat2(_scale.x,0.0,
0.0,_scale.y);
}
```
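下面用几行 Python 验证缩放矩阵的作用,以及矩阵相乘的顺序会影响结果(说明用的草图,非书中代码):

```python
import math

# 示意:2x2 矩阵用"行的列表"表示
def mat_vec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(sx, sy):
    return [[sx, 0.0], [0.0, sy]]

def rotate(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

v = (1.0, 0.0)
# 缩放:x 方向放大 2 倍
assert mat_vec(scale(2.0, 1.0), v) == (2.0, 0.0)
# 组合顺序不同,结果也不同(矩阵乘法不满足交换律):
a = mat_vec(mat_mul(rotate(math.pi / 2), scale(2.0, 1.0)), v)  # 先缩放,再旋转
b = mat_vec(mat_mul(scale(2.0, 1.0), rotate(math.pi / 2)), v)  # 先旋转,再缩放
assert abs(a[1] - 2.0) < 1e-9 and abs(b[1] - 1.0) < 1e-9
```

先把矩阵相乘,再用结果去乘向量;交换相乘的顺序,得到的就是不同的变换。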
<div class="codeAndCanvas" data="cross-scale.frag"></div>
试试下面的练习,尝试深入理解矩阵的工作机制:
* 取消上面代码中的第42行来观察空间坐标是如何被缩放的。
* 看看注释掉37和39行变换之前和之后的缩放会发生什么。
* 试着把旋转矩阵和缩放矩阵结合起来。注意它们相乘的先后顺序会影响结果:先把矩阵相乘,再用结果去乘向量。
* 现在你知道如何画不同的图形,知道如何移动,旋转和缩放它们,是时候用这些来创作了。设计一个[fake UI or HUD (heads up display)](https://www.pinterest.com/patriciogonzv/huds/)。参考[Ndel](https://www.shadertoy.com/user/ndel)在ShaderToy上的例子。
<iframe width="800" height="450" frameborder="0" src="https://www.shadertoy.com/embed/4s2SRt?gui=true&t=10&paused=true" allowfullscreen></iframe>
### Other uses for matrices: YUV color 矩阵的其他应用YUV 颜色
[YUV](http://en.wikipedia.org/wiki/YUV) 是个用来模拟照片和视频的编码的色彩空间。这个色彩空间考虑人类的感知,减少色度的带宽。
下面的代码展现一种利用GLSL中的矩阵操作来切换颜色模式的有趣可能。
<div class="codeAndCanvas" data="yuv.frag"></div>
正如你所见,我们把颜色当作向量来处理,用矩阵去乘它们。通过这种方式,我们"移动"了这些颜色值。
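这种"把颜色当向量、用矩阵做转换"的做法可以用几行 Python 来演示(说明用的草图;这里使用的是标准的 BT.601 系数,仅作示意,具体数值请以 ```yuv.frag``` 中的矩阵为准):

```python
# 示意:用矩阵乘法在 RGB 与 YUV 之间转换(BT.601 系数,仅作说明)
RGB_TO_YUV = [[0.299,    0.587,    0.114],
              [-0.14713, -0.28886, 0.436],
              [0.615,    -0.51499, -0.10001]]
YUV_TO_RGB = [[1.0, 0.0,      1.13983],
              [1.0, -0.39465, -0.58060],
              [1.0, 2.03211,  0.0]]

def mat_vec3(m, v):
    # 3x3 矩阵乘以三维向量
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

rgb = [0.2, 0.5, 0.8]
yuv = mat_vec3(RGB_TO_YUV, rgb)
back = mat_vec3(YUV_TO_RGB, yuv)
# 往返转换应当(在浮点误差内)还原出原来的颜色
assert all(abs(a - b) < 1e-3 for a, b in zip(rgb, back))
```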
这一章我们学习了如何运用矩阵变换来平移、旋转和缩放向量。连同之前章节学到的图形,这些变换是创作的基础。在接下来的章节,我们会应用所学的知识制作漂亮的程序化纹理。你会发现这种既重复又富于变化的编程是一种令人兴奋的实践。

@ -0,0 +1,26 @@
// Author @patriciogv ( patriciogonzalezvivo.com ) - 2015
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
float box(vec2 _st, vec2 _size){
_size = vec2(0.5) - _size*0.5;
vec2 uv = smoothstep(_size,
_size+vec2(0.0001),
_st);
uv *= smoothstep(_size,
_size+vec2(0.0001),
vec2(1.0)-_st);
return uv.x*uv.y;
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
float pct = box(st, vec2(0.9,0.3)) + box(st, vec2(0.3,0.9));
gl_FragColor = vec4( vec3(1.-pct),pct );
}

@ -121,4 +121,4 @@ xをずらす必要があるかを決めるには、まず現在のスレッド
![Franz Sales Meyer - A handbook of ornament (1920)](geometricpatters.png)
この章で「アルゴリズムで絵を描く」のセクションは終わりです。続く章では、シェーダーに多少の無秩序さを持ち込んでデザインを生成する方法について学びます。

@ -0,0 +1,32 @@
// Author @patriciogv ( patriciogonzalezvivo.com ) - 2015
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
vec2 tile(vec2 _st, float _zoom){
_st *= _zoom;
return fract(_st);
}
float circle(vec2 _st, float _radius){
vec2 pos = vec2(0.5)-_st;
_radius *= 0.75;
return 1.-smoothstep(_radius-(_radius*0.05),_radius+(_radius*0.05),dot(pos,pos)*3.14);
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
float pct = 0.0;
vec2 st_i = floor(st*10.);
pct += step(0.5,abs(mod(st_i.x,2.)-mod(st_i.y+1.,2.)));
gl_FragColor = vec4(vec3(pct),1.0);
}

@ -0,0 +1,40 @@
// Author @patriciogv ( patriciogonzalezvivo.com ) - 2015
#ifdef GL_ES
precision mediump float;
#endif
uniform sampler2D u_tex0;
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
float rows = 10.0;
vec2 brickTile(vec2 _st, float _zoom){
_st *= _zoom;
if (fract(_st.y * 0.5) > 0.5){
_st.x += 0.5;
}
return fract(_st);
}
float circle(vec2 _st, float _radius){
vec2 pos = vec2(0.5)-_st;
_radius *= 0.75;
return 1.-smoothstep(_radius-(_radius*0.01),_radius+(_radius*0.01),dot(pos,pos)*3.14);
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec2 pos = st;
st = brickTile(st,50.);
float pattern = texture2D(u_tex0,pos).r;
pattern = circle(st, pattern);
gl_FragColor = vec4(1.-vec3(pattern),1.0);
}

@ -0,0 +1,33 @@
// Author @patriciogv ( patriciogonzalezvivo.com ) - 2015
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
vec2 tile(vec2 _st, float _zoom){
_st *= _zoom;
return fract(_st);
}
float circle(vec2 _st, float _radius){
vec2 pos = vec2(0.5)-_st;
_radius *= 0.75;
return 1.-smoothstep(_radius-(_radius*0.05),_radius+(_radius*0.05),dot(pos,pos)*3.14);
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
st = tile(st,5.);
float size = .45;
float pct = circle(st,size);
st = abs(st-.5);
pct += circle(st,size);
gl_FragColor = vec4(vec3(pct),1.0);
}

@ -0,0 +1,30 @@
// Author @patriciogv ( patriciogonzalezvivo.com ) - 2015
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
vec2 tile(vec2 _st, float _zoom){
_st *= _zoom;
return fract(_st);
}
float circle(vec2 _st, float _radius){
vec2 pos = vec2(0.5)-_st;
_radius *= 0.75;
return 1.-smoothstep(_radius-(_radius*0.05),_radius+(_radius*0.05),dot(pos,pos)*3.14);
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
st = tile(st,5.);
vec3 color = vec3(circle(st, 0.2));
gl_FragColor = vec4(color,1.0);
}


@ -32,13 +32,13 @@ vec2 rotateTilePattern(vec2 _st){
float index = 0.0;
index += step(1., mod(_st.x,2.0));
index += step(1., mod(_st.y,2.0))*2.0;
// |
// 0 | 1
// 2 | 3
// |
//--------------
// |
// 2 | 3
// 0 | 1
// |
// Make each cell between 0.0 - 1.0

@ -15,7 +15,7 @@ float plot(vec2 _st, float _pct){
}
float random (in float _x) {
return fract(sin(_x)*1e4);
return fract(sin(_x)*43758.5453);
}
void main() {

@ -9,9 +9,9 @@ uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
float random (vec2 st) {
return fract(sin(dot(st.xy,
vec2(12.9898,78.233)))*
43758.5453123);
}

@ -1,47 +1,26 @@
# Generative designs
# ジェネラティブデザイン
It is not a surprise that after so much repetition and order the author is forced to bring some chaos.
繰り返しと秩序を十分に堪能したので、今度は多少の混沌を持ち込んでみましょう。
## Random
## ランダム
[![Ryoji Ikeda - test pattern (2008) ](ryoji-ikeda.jpg) ](http://www.ryojiikeda.com/project/testpattern/#testpattern_live_set)
(訳注:池田亮司を「ランダム」でくくることに抵抗があるという意見を各方面からいただきましたが、翻訳なのでそのままにしておきます。)
Randomness is a maximal expression of entropy. How can we generate randomness inside the seemingly predictable and rigid code environment?
ランダムはエントロピーが最大になった状態です。一見厳格で規則正しいコードの世界で、どのようにしてランダムな要素を生成することができるのでしょうか。
Let's start by analyzing the following function:
下記の関数を検討することから始めましょう。
<div class="simpleFunction" data="y = fract(sin(x)*1.0);"></div>
Above we are extracting the fractional content of a sine wave. The [```sin()```](../glossary/?search=sin) values that fluctuate between ```-1.0``` and ```1.0``` have been chopped behind the floating point, returning all positive values between ```0.0``` and ```1.0```. We can use this effect to get some pseudo-random values by "breaking" this sine wave into smaller pieces. How? By multiplying the resultant of [```sin(x)```](../glossary/?search=sin) by larger numbers. Go ahead and click on the function above and start adding some zeros.
ここではサイン波から小数点部分を取り出しています。```-1.0``` から ```1.0``` の間を往復する [```sin()```](../glossary/?search=sin) の値から、小数点の後ろだけを切り取ると ```0.0``` から ```1.0``` の間の正の値だけが残ります。これを利用し、さらにサイン波を細かな部分に分割することで擬似的にランダムな値を得ることができます。どういうことでしょう。[```sin(x)```](../glossary/?search=sin) の結果の値に大きな数を掛けてみます。上の関数をクリックして 0 を幾つか書き加えてみましょう。
By the time you get to ```100000.0``` ( and the equation looks like this: ```y = fract(sin(x)*100000.0)``` ) you aren't able to distinguish the sine wave any more. The granularity of the fractional part has corrupted the flow of the sine wave into pseudo-random chaos.
```100000.0``` に至る頃には(式は ```y = fract(sin(x)*100000.0)``` のようになります)もうサインカーブには見えなくなっているでしょう。小数点部分のサイクルは非常に短くなり、サイン波の流れるような曲線は潰されて、ランダムに見えるカオス状態を作り出しています。
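The behavior described above is easy to check outside of GLSL. Here is a plain Python sketch (illustrative only, not from the book) of the same ```fract(sin(x)*k)``` trick:

```python
import math

def fract(x):
    # GLSL's fract(): keep only the part behind the floating point
    return x - math.floor(x)

def rand(x, scale=100000.0):
    return fract(math.sin(x) * scale)

# The output always stays in [0.0, 1.0) ...
assert all(0.0 <= rand(0.01 * i) < 1.0 for i in range(1000))
# ... and it is deterministic: the same input gives the same value
assert rand(1.0) == rand(1.0)
# With a small multiplier neighboring values still follow the sine wave;
# with a large one they jump around chaotically
assert abs(rand(0.50, 1.0) - rand(0.51, 1.0)) < 0.02
assert abs(rand(0.00) - rand(0.01)) > 0.2
```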
## Controlling chaos
## カオスを制御する
Using random can be hard; it is both too chaotic and sometimes not random enough. Take a look at the following graph. To make it, we are using a ```rand()``` function which is implemented exactly like we described above.
乱数を使いこなすのは難しいこともあります。無秩序すぎたり、十分にランダムでないこともあります。下記のグラフを見てください。このグラフは、上で述べた通りの方法で実装した ```rand()``` 関数を使って作られています。
Taking a closer look, you can see small gaps around ```-1.5707``` and ```1.5707```. I bet you now understand why - that's where the maximum and minimum of the [```sin()```](../glossary/?search=sin) wave happen.
よく見ると ```-1.5707``` と ```1.5707``` のあたりに小さな裂け目のようなものがあるのが分かるでしょう。これは [```sin()```](../glossary/?search=sin) の描く波が最大と最小になる場所です。
If you look closely at the random distribution, you will notice that there is some concentration around the middle compared to the edges.
乱数の分布に注目すると、端にくらべて中央に値が集中しているのが分かるでしょう。
@ -51,109 +30,68 @@ If look closely at the random distribution, you will note that the there is some
//y = sqrt(rand(x));
//y = pow(rand(x),5.);"></div>
A while ago [Pixelero](https://pixelero.wordpress.com) published an [interesting article about random distribution](https://pixelero.wordpress.com/2008/04/24/various-functions-and-various-distributions-with-mathrandom/). I've added some of the functions he uses in the previous graph for you to play with and see how the distribution can be changed. Uncomment the functions and see what happens.
以前に[Pixelero](https://pixelero.wordpress.com)は[ランダムな値の分布についての興味深い記事](https://pixelero.wordpress.com/2008/04/24/various-functions-and-various-distributions-with-mathrandom/)を公開しました。この記事から、上記のグラフに幾つかの関数を加えておきました。どのように値の分布が変化するか試してみてください。関数のコメントを外して何が起こるか見てみましょう。
If you read [Pixelero's article](https://pixelero.wordpress.com/2008/04/24/various-functions-and-various-distributions-with-mathrandom/), it is important to keep in mind that our ```rand()``` function is a deterministic random, also known as pseudo-random. Which means, for example, that ```rand(1.)``` is always going to return the same value. [Pixelero](https://pixelero.wordpress.com/2008/04/24/various-functions-and-various-distributions-with-mathrandom/) makes reference to the ActionScript function ```Math.random()```, which is non-deterministic; every call returns a different value.
[Pixeleroの記事](https://pixelero.wordpress.com/2008/04/24/various-functions-and-various-distributions-with-mathrandom/)を読むときには、ここで私たちが作った ```rand()``` 関数は擬似ランダムとも呼ばれる決定的な(結果の値が一意に定まる)乱数だということを頭に置いておいてください。例えば ```rand(1.)``` は常に同じ値を返します。[Pixelero](https://pixelero.wordpress.com/2008/04/24/various-functions-and-various-distributions-with-mathrandom/)が引き合いにしているのはActionScriptの ```Math.random()``` で、これは非決定的な、つまり毎回異なる値を返す関数です。
## 2D Random
## 2Dランダム
Now that we have a better understanding of randomness, it's time to apply it in two dimensions, to both the ```x``` and ```y``` axes. For that we need a way to transform a two dimensional vector into a one dimensional floating point value. There are different ways to do this, but the [```dot()```](../glossary/?search=dot) function is particularly helpful in this case. It returns a single float value between ```0.0``` and ```1.0``` depending on the alignment of two vectors.
ランダムの性質についての理解が深まったところで、次に二次元、つまり ```x``` 軸と ```y``` 軸の両方に適用してみましょう。そのためには、二次元ベクトルを一次元の浮動小数点の値に変換することが必要です。いろいろなやり方がありますが、[```dot()```](../glossary/?search=dot) 関数は特に便利です。 [```dot()```](../glossary/?search=dot) 関数は2つのベクトルの組に対して、```0.0``` から ```1.0``` の間の浮動小数点の値を返してくれます。
(訳注:値が ```0.0``` から ```1.0``` の間に収まるには2つのベクトルが正規化されている必要があります。下記のサンプルではベクトルは正規化されていないので ```dot``` の戻り値は ```1.0``` を大きく超えます。)
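The combination of [```dot()```](../glossary/?search=dot), [```sin()```](../glossary/?search=sin) and [```fract()```](../glossary/?search=fract) can also be sketched in plain Python (illustrative only; the magic numbers are the ones used in the shader below). Note that it is the final ```fract()```, not the ```dot()```, that brings the result back into the ```0.0``` to ```1.0``` range:

```python
import math

def fract(x):
    return x - math.floor(x)

def random2d(st):
    # dot(st, vec2(12.9898, 78.233)) collapses the 2D coordinate to one float
    return fract(math.sin(st[0] * 12.9898 + st[1] * 78.233) * 43758.5453)

v = random2d((0.25, 0.75))
assert 0.0 <= v < 1.0                   # fract() keeps the result in [0, 1)
assert random2d((0.25, 0.75)) == v      # deterministic: same input, same value
assert random2d((0.25, 0.75)) != random2d((0.26, 0.75))  # neighbors look unrelated
```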
<div class="codeAndCanvas" data="2d-random.frag"></div>
Take a look at lines 13 to 15 and notice how we are comparing the ```vec2 st``` with another two dimensional vector ( ```vec2(12.9898,78.233)```).
13行目から15行目を見てみましょう。```vec2 st``` ともう1つの二次元ベクトルである ( ```vec2(12.9898,78.233)```) の使い方に注目してください。
* Try changing the values on lines 14 and 15. See how the random pattern changes and think about what we can learn from this.
* 14行目と15行目の値を変えてみましょう。ランダムなパターンがどのように変化したか観察し、そこから何が学べるか考えてみましょう。
* Hook this random function to the mouse interaction (```u_mouse```) and time (```u_time```) to understand better how it works.
* このランダム関数の仕組みをより理解するために、マウスのインタラクション (```u_mouse```) と時間 (```u_time```) に反応させてみましょう。
## Using the chaos
## カオスを使いこなす
Random in two dimensions looks a lot like TV noise, right? It's a hard raw material to use to compose images. Let's learn how to make use of it.
二次元のランダムはまるでテレビのノイズのように見えますね。この未加工な素材からイメージを作り出すのは簡単なことではありません。ここでは、素材の料理方法を学んでいきましょう。
Our first step is to apply a grid to it; using the [```floor()```](../glossary/?search=floor) function we will generate an integer table of cells. Take a look at the following code, especially lines 22 and 23.
まずは最初のステップとしてグリッドを適用してみましょう。 [```floor()```](../glossary/?search=floor) 関数を使って、セルごとの整数の表を作り出します。下記のコードの、特に22行目と23行目を見てみてください。
<div class="codeAndCanvas" data="2d-random-mosaic.frag"></div>
After scaling the space by 10 (on line 21), we separate the integers of the coordinates from the fractional part. We are familiar with this last operation because we have been using it to subdivide a space into smaller cells that go from ```0.0``` to ```1.0```. By obtaining the integer of the coordinate we isolate a common value for a region of pixels, which will look like a single cell. Then we can use that common integer to obtain a random value for that area. Because our random function is deterministic, the random value returned will be constant for all the pixels in that cell.
空間座標を10倍に拡大した後に(21行目)、座標の整数部分を小数点部分から切り離します。続く23行目の処理は、前章で空間を ```0.0``` から ```1.0``` の座標値を持つ小さな部分に分けるときに使ったお馴染みの方法です。
座標から整数を取り出すことによって、それぞれのセル(描画領域を 10×10 に分割したマス目)に含まれるピクセルに共通の値を取り出します。
そしてこの整数をそのセルについてランダムな値を得るために使います。ここで使っているランダム関数は決定性のものなので、戻り値はそのセルの全てのピクセルに対し同じものになります。
Uncomment line 29 to see that we preserve the floating part of the coordinate, so we can still use that as a coordinate system to draw things inside each cell.
座標の小数点部分の値も保持されていることを、29行目のコメントを外して確認しましょう。こうして、それぞれのセルの内部でもさらに座標系を用いて描画することができます。
Combining these two values - the integer part and the fractional part of the coordinate - will allow you to mix variation and order.
これら2つの値、つまり座標の整数部分と端数部分を組み合わせることで、変化と秩序を混ぜ合わせることができます。
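The split between the integer part and the fractional part can be sketched in a few lines of plain Python (illustrative only, not from the book):

```python
import math

def fract(x):
    return x - math.floor(x)

st = (0.537, 0.212)
scaled = (st[0] * 10.0, st[1] * 10.0)                  # scale the space by 10
ipos = (math.floor(scaled[0]), math.floor(scaled[1]))  # which cell we are in
fpos = (fract(scaled[0]), fract(scaled[1]))            # local coordinate inside the cell

assert ipos == (5, 2)
# another pixel inside the same cell shares the same ipos,
# so a deterministic rand(ipos) returns one value for the whole cell
other = (0.561, 0.249)
assert (math.floor(other[0] * 10.0), math.floor(other[1] * 10.0)) == ipos
# fpos stays in [0.0, 1.0) and works as a coordinate system inside the cell
assert all(0.0 <= c < 1.0 for c in fpos)
```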
Take a look at this GLSL port of the famous ```10 PRINT CHR$(205.5+RND(1)); : GOTO 10``` maze generator.
有名な [```10 PRINT CHR$(205.5+RND(1)); : GOTO 10```](https://www.google.com/search?q=10+PRINT+CHR%24%28205.5%2BRND%281%29%29%3B+%3A+GOTO+10) 迷路ジェネレーターのGLSL版を見てみましょう。
<div class="codeAndCanvas" data="2d-random-truchet.frag"></div>
Here I'm using the random values of the cells to draw a line in one direction or the other using the ```truchetPattern()``` function from the previous chapter (lines 41 to 47).
ここでは、前章の ```truchetPattern()``` 関数と、セルのランダム関数を合わせて用い、あっちに行ったりこっちに行ったりする線を描いています(41行目から47行目)。
You can get another interesting pattern by uncommenting the block of lines between 50 to 53, or animate the pattern by uncommenting lines 35 and 36.
50行目から53行目までのコメントを外すと、もう1つの興味深いパターンを見ることができます。また、35行目と36行目のコメントを外すと、パターンに動きを与えることができます。
## Master Random
## ランダムを極める
[Ryoji Ikeda](http://www.ryojiikeda.com/), Japanese electronic composer and visual artist, has mastered the use of random; it is hard not to be touched and mesmerized by his work. His use of randomness in audio and visual mediums is forged in such a way that it is not annoying chaos but a mirror of the complexity of our technological culture.
日本の電子音楽家でありビジュアルアーティストでもある[池田亮司](http://www.ryojiikeda.com/)は、ランダムの扱い方に熟達しています。彼の作品は感動的で魅力的なものです。彼は音とビジュアルの領域でランダムな要素を、いらだたしい無秩序を生み出すのではなく、現代テクノロジー文化の複雑さを鏡映しにするかのように用います。
<iframe src="https://player.vimeo.com/video/76813693?title=0&byline=0&portrait=0" width="800" height="450" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
Take a look at [Ikeda](http://www.ryojiikeda.com/)'s work and try the following exercises:
[池田亮司](http://www.ryojiikeda.com/)の作品をよく観察しながら、下記の課題に挑戦しましょう。
* Make rows of moving cells (in opposite directions) with random values. Only display the cells with brighter values. Make the velocity of the rows fluctuate over time.
* ランダムな値を用いて(反対の方向に)動くセルの列をつくってみましょう。より明るい値のセルのみを表示させてみましょう。列の速さを時間とともに変化させてみましょう。
<a href="../edit.html#10/ikeda-00.frag"><canvas id="custom" class="canvas" data-fragment-url="ikeda-00.frag" width="520px" height="200px"></canvas></a>
* Similarly make several rows but each one with a different speed and direction. Hook the position of the mouse to the threshold of which cells to show.
* 同様に、違う方向に違うスピードで動くいくつかの列をつくってみましょう。どのセルを表示させるかの閾値として、マウスの位置情報を適用してみましょう。
<a href="../edit.html#10/ikeda-03.frag"><canvas id="custom" class="canvas" data-fragment-url="ikeda-03.frag" width="520px" height="200px"></canvas></a>
* Create other interesting effects.
* 他にも面白いエフェクトを作ってみましょう。
<a href="../edit.html#10/ikeda-04.frag"><canvas id="custom" class="canvas" data-fragment-url="ikeda-04.frag" width="520px" height="200px"></canvas></a>
Using random aesthetically can be problematic, especially if you want to make natural-looking simulations. Random is simply too chaotic and very few things look ```random()``` in real life. If you look at a rain pattern or a stock chart, which are both quite random, they are nothing like the random pattern we made at the beginning of this chapter. The reason? Well, random values have no correlation between them whatsoever, but most natural patterns have some memory of the previous state.
美しい表現のためにランダムを用いても上手くいかないことがあります。自然に見えるシミュレーションをつくりたいと思っているときは特にそうです。ランダムは単純に無秩序すぎて、現実世界には ```random()``` に見えるものはほんの少ししかありません。雨の様子や株価のチャートはどちらもとても不規則なものですが、私たちがこの章の始めに生成したランダムとは似ても似つかないものです。なぜでしょう。ランダムの値はそれぞれの値の間にまったく相関性をもっていないのに対し、多くの自然界のパターンには過去の状態の記憶が含まれているからです。
In the next chapter we will learn about noise, the smooth and *natural looking* way of creating computational chaos.
次の章ではノイズ、つまりスムーズで自然に見えるカオスを計算によって生み出す方法について学びます。

@ -9,7 +9,8 @@ uniform vec2 u_resolution;
uniform float u_time;
float random(in float x){ return fract(sin(x)*43758.5453); }
float random(in vec2 st){ return fract(sin(dot(st.xy ,vec2(12.9898,78.233))) * 43758.5453); }
// float random(in vec2 st){ return fract(sin(dot(st.xy ,vec2(12.9898,78.233))) * 43758.5453); }
float random(vec2 p) { return fract(1e4 * sin(17.0 * p.x + p.y * 0.1) * (0.1 + abs(sin(p.y * 13.0 + p.x)))); }
float bin(vec2 ipos, float n){
float remain = mod(n,33554432.);

@ -9,7 +9,8 @@ uniform vec2 u_resolution;
uniform float u_time;
float random(in float x){ return fract(sin(x)*43758.5453); }
float random(in vec2 st){ return fract(sin(dot(st.xy ,vec2(12.9898,78.233))) * 43758.5453); }
// float random(in vec2 st){ return fract(sin(dot(st.xy ,vec2(12.9898,78.233))) * 43758.5453); }
float random(vec2 p) { return fract(1e4 * sin(17.0 * p.x + p.y * 0.1) * (0.1 + abs(sin(p.y * 13.0 + p.x)))); }
float bin(vec2 ipos, float n){
float remain = mod(n,33554432.);

@ -26,10 +26,9 @@
<ul class="navigationBar" >
<li class="navigationBar" onclick="previusPage()">&lt; &lt; Previous</li>
<li class="navigationBar" onclick="homePage()"> Home </li>
<li class="navigationBar" onclick="nextPage()">Next &gt; &gt;</li>
</ul>';
include($path."/footer.php");
?>
<!-- <li class="navigationBar" onclick="nextPage()">Next &gt; &gt;</li> -->


@ -0,0 +1,156 @@
// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
float random (in float x) { return fract(sin(x)*1e4);}
float random (in vec2 st) { return fract(1e4 * sin(17.0 * st.x + st.y * 0.1) * (0.1 + abs(sin(st.y * 13.0 + st.x)))); }
float binChar (vec2 ipos, float n) {
float remain = mod(n,33554432.);
for (float i = 0.0; i < 15.0; i++) {
if ( floor(i/3.) == ipos.y && mod(i,3.) == ipos.x ) {
return step(1.0,mod(remain,2.));
}
remain = ceil(remain/2.);
}
return 0.0;
}
float char (vec2 st, float n) {
st.x = st.x*2.-0.5;
st.y = st.y*1.2-0.1;
vec2 grid = vec2(3.,5.);
vec2 ipos = floor(st*grid);
vec2 fpos = fract(st*grid);
n = floor(mod(n,10.));
float digit = 0.0;
if (n < 1. ) { digit = 31600.; }
else if (n < 2. ) { digit = 9363.0; }
else if (n < 3. ) { digit = 31184.0; }
else if (n < 4. ) { digit = 31208.0; }
else if (n < 5. ) { digit = 23525.0; }
else if (n < 6. ) { digit = 29672.0; }
else if (n < 7. ) { digit = 29680.0; }
else if (n < 8. ) { digit = 31013.0; }
else if (n < 9. ) { digit = 31728.0; }
else if (n < 10. ) { digit = 31717.0; }
float pct = binChar(ipos, digit);
vec2 borders = vec2(1.);
// borders *= step(0.01,fpos.x) * step(0.01,fpos.y); // inner
borders *= step(0.0,st)*step(0.0,1.-st); // outer
return step(.5,1.0-pct) * borders.x * borders.y;
}
float binBar (vec2 ipos, float n) {
float remain = mod(n,128.);
for(float i = 0.0; i < 8.0; i++){
if ( mod(i,10.) == ipos.x ) {
return step(1.0,mod(remain,2.));
}
remain = ceil(remain/2.);
}
return 0.0;
}
// Standard UPC-E Barcode reference from
// https://en.wikipedia.org/wiki/Universal_Product_Code
float bar (vec2 st, float n, bool L) {
vec2 grid = vec2(7.,1.);
if (L) { st = 1.0-st; }
vec2 ipos = floor(st*grid);
vec2 fpos = fract(st*grid);
n = floor(mod(n,10.));
float digit = 0.0;
if (n < 1. ) { digit = 114.; }
else if (n < 2. ) { digit = 102.0; }
else if (n < 3. ) { digit = 108.0; }
else if (n < 4. ) { digit = 66.0; }
else if (n < 5. ) { digit = 92.0; }
else if (n < 6. ) { digit = 78.0; }
else if (n < 7. ) { digit = 80.0; }
else if (n < 8. ) { digit = 68.0; }
else if (n < 9. ) { digit = 72.0; }
else if (n < 10. ) { digit = 116.0; }
float pct = binBar(ipos, digit+1.);
if (L) { pct = 1.-pct; }
return step(.5,pct);
}
float bar (vec2 st, float n) {
return bar(st,n,true);
}
float barStart (vec2 st) {
vec2 grid = vec2(7.,1.);
vec2 ipos = floor((1.0-st)*grid);
float digit = 122.0;
float pct = binBar(ipos, digit+1.);
return step(.5,1.0-pct);
}
float barEnd(vec2 st) {
vec2 grid = vec2(7.,1.);
vec2 ipos = floor((1.0-st)*grid);
float digit = 85.0;
float pct = binBar(ipos, digit+1.);
return step(.5,1.0-pct);
}
float barCode(vec2 st, float rows, float value) {
rows = ceil(rows);
vec2 ipos = floor(st*rows);
vec2 fpos = fract(st*rows);
value = value*pow(10.,ipos.x)*0.0000000001+0.1;
if (ipos.x == 0.0 ) {
return barStart(fpos);
} else if (ipos.x == rows-1.) {
return barEnd(fpos);
} else {
if (ipos.y == 0.0) {
return 1.0-char(fpos,value);
} else {
return bar(fpos,value);
}
}
}
void main(){
vec2 st = gl_FragCoord.st/u_resolution.xy;
st *= 3.;
vec2 ipos = floor(st);
vec2 fpos = fract(st);
fpos.y *= u_resolution.y/u_resolution.x;
vec3 color = vec3(0.0);
if (ipos.x == 1. && ipos.y == 1.) {
float value = 0.0;
// value = 123456789.0;
value += floor(u_time);
value = random(floor(u_time*10.))*1000000000.;
color += barCode(fpos,12.,value);
} else {
color += 1.;
}
gl_FragColor = vec4( color , 1.0);
}

@ -0,0 +1,69 @@
// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
float random (in float x) { return fract(sin(x)*1e4);}
float random (in vec2 st){ return fract(sin(dot(st.xy ,vec2(12.9898,78.233))) * 43758.5453); }
float bin(vec2 ipos, float n){
float remain = mod(n,33554432.);
for(float i = 0.0; i < 25.0; i++){
if ( floor(i/3.) == ipos.y && mod(i,3.) == ipos.x ) {
return step(1.0,mod(remain,2.));
}
remain = ceil(remain/2.);
}
return 0.0;
}
float char(vec2 st, float n){
st.x = st.x*2.-0.5;
st.y = st.y*1.2-0.1;
vec2 grid = vec2(3.,5.);
vec2 ipos = floor(st*grid);
vec2 fpos = fract(st*grid);
n = floor(mod(n,10.));
float digit = 0.0;
if (n < 1. ) { digit = 31600.; }
else if (n < 2. ) { digit = 9363.0; }
else if (n < 3. ) { digit = 31184.0; }
else if (n < 4. ) { digit = 31208.0; }
else if (n < 5. ) { digit = 23525.0; }
else if (n < 6. ) { digit = 29672.0; }
else if (n < 7. ) { digit = 29680.0; }
else if (n < 8. ) { digit = 31013.0; }
else if (n < 9. ) { digit = 31728.0; }
else if (n < 10. ) { digit = 31717.0; }
float pct = bin(ipos, digit);
vec2 borders = vec2(1.);
// borders *= step(0.01,fpos.x) * step(0.01,fpos.y); // inner
borders *= step(0.0,st)*step(0.0,1.-st); // outer
return step(.5,1.0-pct) * borders.x * borders.y;
}
void main(){
vec2 st = gl_FragCoord.st/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
float rows = 34.0;
vec2 ipos = floor(st*rows);
vec2 fpos = fract(st*rows);
ipos += vec2(0.,floor(u_time*20.*random(ipos.x+1.)));
float pct = random(ipos);
vec3 color = vec3(char(fpos,100.*pct));
color = mix(color,vec3(color.r,0.,0.),step(.99,pct));
gl_FragColor = vec4( color , 1.0);
}

@ -0,0 +1,13 @@
#!/bin/bash
FILE=$1
SEC=$2
COUNTER=0
for i in `seq -w 0.01 .031 $SEC`; do
echo $i
glslViewer img-glitch.frag --u_tex $FILE -s $i -o frame-$COUNTER.png
let COUNTER=COUNTER+1
done
convert -delay 3.5 -loop 1 frame-*.png animated.gif

@ -0,0 +1,42 @@
// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
float random (in float x) { return fract(sin(x)*1e4); }
float noise (in float x) {
float i = floor(x);
float f = fract(x);
float u = f * f * (3.0 - 2.0 * f);
return mix(random(i), random(i + 1.0), u);
}
void main() {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 color = vec3(0.0);
vec2 grid = vec2(20.0,2.0);
float t = u_time*max(grid.x,grid.y)*.5;
vec2 ipos = floor(st*grid);
vec2 fpos = fract(st*grid);
float offset = ipos.x+floor(t);
float value = pow(noise(offset*0.2),2.)+noise(offset*0.9)*.5;
if (mod(ipos.y,2.) == 0.) {
fpos.y = 1.0-fpos.y;
}
color += step(fpos.y*1.5,value)*step(.5,fpos.x);
gl_FragColor = vec4(color,1.0);
}

@ -9,13 +9,8 @@ uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
float random (in float x) {
return fract(sin(x)*1e4);
}
float random (in vec2 st) {
return fract(sin(dot(st.xy, vec2(12.9898,78.233)))* 43758.5453123);
}
float random (in float x) { return fract(sin(x)*1e4);}
float random (in vec2 st) { return fract(1e4 * sin(17.0 * st.x + st.y * 0.1) * (0.1 + abs(sin(st.y * 13.0 + st.x)))); }
float pattern(vec2 st, vec2 v, float t) {
vec2 p = floor(st+v);

@ -0,0 +1,71 @@
// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
uniform sampler2D u_tex;
uniform vec2 u_resolution;
uniform float u_time;
float random(in float x){ return fract(sin(x)*43758.5453); }
float random(in vec2 st){ return fract(sin(dot(st.xy ,vec2(12.9898,78.233))) * 43758.5453); }
float noise(in vec2 x) {
vec2 i = floor(x);
vec2 f = fract(x);
float a = random(i);
float b = random(i + vec2(1.0, 0.0));
float c = random(i + vec2(0.0, 1.0));
float d = random(i + vec2(1.0, 1.0));
vec2 u = f * f * (3.0 - 2.0 * f);
return mix(a, b, u.x) + (c - a) * u.y * (1.0 - u.x) + (d - b) * u.x * u.y;
}
float fbm( in vec2 p ){
float s = 0.0;
float m = 0.0;
float a = 0.5;
for(int i=0; i<2; i++ ){
s += a * noise(p);
m += a;
a *= 0.5;
p *= 2.0;
}
return s/m;
}
void main(){
vec2 st = gl_FragCoord.st/u_resolution.xy;
float aspect = u_resolution.x/u_resolution.y;
vec2 grain_st = st-.5;
float grain = 0.0;
grain = mix(1., 0.9, dot(grain_st,grain_st) + (fbm(gl_FragCoord.xy*0.6)*0.1) );
// Random blocks
vec2 blocks_st = floor(st*vec2(5.*random(floor(u_time*10.)),10.*(1.+random(floor(u_time*3.))) ));
float t = u_time*2.+random(blocks_st);
float time_i = floor(t);
float time_f = fract(t);
float block = step(0.9,random(blocks_st+time_i))*(1.0-time_f);
vec2 offset = vec2(block*0.01,block*0.005)+(1.0-grain)*.08;
vec4 color = vec4(1.);
color.r = texture2D(u_tex,st+offset).r;
color.g = texture2D(u_tex,st).r;
color.b = texture2D(u_tex,st-offset).r;
color.a = max(texture2D(u_tex,st+offset).a,max(texture2D(u_tex,st).a, texture2D(u_tex,st-offset).a));
if (block > .5) {
color.rgb = abs(block*grain-color.rgb);
}
color.rgb *= 0.4+sin((st.y*3.1415+u_time)*500.);
gl_FragColor = color;
}

@ -1,77 +0,0 @@
// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
#define PI 3.1415926535897932384626433832795
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
float random (in vec2 _st) {
return fract(sin(dot(_st.xy,
vec2(12.9898,78.233)))*
43758.5453123);
}
vec2 tile(vec2 _st, float _zoom){
_st *= _zoom;
return fract(_st);
}
float circle(vec2 _st, float _radius){
vec2 pos = vec2(0.5)-_st;
_radius *= 0.75;
return 1.-smoothstep(_radius-(_radius*0.01),_radius+(_radius*0.01),dot(pos,pos)*3.14);
}
float box(vec2 _st, vec2 _size){
_size = vec2(0.5)-_size*0.5;
vec2 uv = smoothstep(_size-vec2(0.0001),_size,_st);
uv *= smoothstep(_size-vec2(0.0001),_size,vec2(1.0)-_st);
return uv.x*uv.y;
}
vec3 pattern(inout vec2 st){
st *= 5.0;
st.x += u_time*0.5;
vec3 normal = vec3(0.0);
vec2 ivec = floor(st); // integer
vec2 fvec = fract(st); // fraction
vec2 pos = fvec;
float index = random(ivec);
if(index > 0.5){
normal.x = step(0.5,pos.y)*2.-1.;
normal *= (1.0-vec3(box(fvec,vec2(1.0,0.95))));
} else {
normal.y = step(0.5,pos.x)*2.-1.;
normal *= (1.0-vec3(box(fvec,vec2(0.95,1.))));
}
st = fvec;
return normal;
}
void main(){
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 normal = pattern(st);
st = tile(st,2.);
vec2 pos = st-0.5;
float a = atan(pos.y,pos.x);
normal += vec3(cos(a),sin(a),0.)*circle(st,0.4);
normal *= 1.0-circle(st,0.26);
normal.b = 1.0;
gl_FragColor = vec4(normal*0.5+0.5,1.0);
}

// Author @patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
float random(in float x){ return fract(sin(x)*43758.5453); }
float random(in vec2 st){ return fract(sin(dot(st.xy ,vec2(12.9898,78.233))) * 43758.5453); }
float noise(in vec2 x) {
vec2 i = floor(x);
vec2 f = fract(x);
float a = random(i);
float b = random(i + vec2(1.0, 0.0));
float c = random(i + vec2(0.0, 1.0));
float d = random(i + vec2(1.0, 1.0));
vec2 u = f * f * (3.0 - 2.0 * f);
return mix(a, b, u.x) + (c - a) * u.y * (1.0 - u.x) + (d - b) * u.x * u.y;
}
float fbm( in vec2 p ){
float s = 0.0;
float m = 0.0;
float a = 0.5;
for(int i=0; i<2; i++ ){
s += a * noise(p);
m += a;
a *= 0.5;
p *= 2.0;
}
return s/m;
}
float bin(vec2 ipos, float n){
float remain = mod(n,33554432.);
for(float i = 0.0; i < 25.0; i++){
if ( floor(i/3.) == ipos.y && mod(i,3.) == ipos.x ) {
return step(1.0,mod(remain,2.));
}
remain = ceil(remain/2.);
}
return 0.0;
}
float char(vec2 st, float n){
st.x = st.x*2.-0.5;
st.y = st.y*1.2-0.1;
vec2 grid = vec2(3.,5.);
vec2 ipos = floor(st*grid);
vec2 fpos = fract(st*grid);
n = floor(mod(n,10.));
float digit = 0.0;
if (n < 1. ) { digit = 31600.; }
else if (n < 2. ) { digit = 9363.0; }
else if (n < 3. ) { digit = 31184.0; }
else if (n < 4. ) { digit = 31208.0; }
else if (n < 5. ) { digit = 23525.0; }
else if (n < 6. ) { digit = 29672.0; }
else if (n < 7. ) { digit = 29680.0; }
else if (n < 8. ) { digit = 31013.0; }
else if (n < 9. ) { digit = 31728.0; }
else if (n < 10. ) { digit = 31717.0; }
float pct = bin(ipos, digit);
vec2 borders = vec2(1.);
// borders *= step(0.01,fpos.x) * step(0.01,fpos.y); // inner
borders *= step(0.0,st)*step(0.0,1.-st); // outer
return step(.5,1.0-pct) * borders.x * borders.y;
}
float grid(vec2 st, float res) {
vec2 grid = fract(st*res);
return 1.-(step(res,grid.x) * step(res,grid.y));
}
float superGrid(vec2 st) {
return 1.*grid(st,0.01) +
0.5*grid(st,0.02) +
0.6*grid(st,0.1);
}
float box(in vec2 st, in vec2 size){
size = vec2(0.5) - size*0.5;
vec2 uv = smoothstep(size,
size+vec2(0.001),
st);
uv *= smoothstep(size,
size+vec2(0.001),
vec2(1.0)-st);
return uv.x*uv.y;
}
float cross(in vec2 st, vec2 size){
return clamp(box(st, vec2(size.x*0.5,size.y*0.125)) +
box(st, vec2(size.y*0.125,size.x*0.5)),0.,1.);
}
void main(){
vec2 st = gl_FragCoord.st/u_resolution.xy;
float aspect = u_resolution.x/u_resolution.y;
vec2 grain_st = st-.5;
vec3 color = vec3(0.0);
float grain = 0.0;
grain = mix(1., 0.8, dot(grain_st,grain_st) + (fbm(gl_FragCoord.xy*0.6)*0.1) );
// Fix aspect ratio
st -= .5;
st.x *= aspect;
// Zoom
st *= 2.8;
// Random blocks
vec2 blocks_st = floor((st-.25)*6.);
float t = u_time*.3+random(blocks_st);
float time_i = floor(t);
float time_f = fract(t);
float block = step(0.9,random(blocks_st+time_i))*(1.0-time_f);
vec2 offset = vec2(block*0.02,block*0.001)+(1.0-grain)*.08;
// Grid
vec2 grid_st = st*300.;
vec3 grid_chroma = vec3(0.0);
grid_chroma.r = superGrid(grid_st+offset*100.);
grid_chroma.g = superGrid(grid_st);
grid_chroma.b = superGrid(grid_st-offset*100.);
color += vec3(0.1,0.08,0.08)*grid_chroma;
// Crosses
vec2 crosses_st = st + .5;
crosses_st *= 3.;
vec2 crosses_st_f = fract(crosses_st);
color *= 1.-cross(crosses_st_f,vec2(.2,.2));
vec3 cross_chroma = vec3(0.0);
cross_chroma.r = cross(crosses_st_f+offset,vec2(.15,.15));
cross_chroma.g = cross(crosses_st_f,vec2(.15,.15));
cross_chroma.b = cross(crosses_st_f-offset,vec2(.15,.15));
color += vec3(.7)*cross_chroma;
// Digits
vec2 digits_st = mod(st*60.,20.);
vec2 digits_st_i = floor(digits_st);
float digits_n = ceil(block*5.);
offset *= 10.;
if (block > 0.0 &&
digits_st_i.y == 1. &&
digits_st_i.x > 0. && digits_st_i.x < digits_n ) {
vec2 digits_st_f = fract(digits_st);
float pct = random(digits_st_i+floor(crosses_st)+floor(u_time*20.));
color.r += block*char(digits_st_f+offset,100.*pct);
color.g += block*char(digits_st_f,100.*pct);
color.b += block*char(digits_st_f-offset,100.*pct);
} else if ( block > 0.0 &&
digits_st_i.y == 2. &&
digits_st_i.x > 0. && digits_st_i.x < digits_n ) {
vec2 digits_st_f = fract(digits_st);
float pct = random(digits_st_i+floor(crosses_st)+floor(u_time*20.));
color.r += block*char(digits_st_f+offset,100.*pct);
color.g += block*char(digits_st_f,100.*pct);
color.b += block*char(digits_st_f-offset,100.*pct);
}
gl_FragColor = vec4( (1.0-color) * grain, 1.0);
}

## Noise
It's time for a break! We've been playing with random functions that look like TV white noise, our head is still spinning thinking about shaders, and our eyes are tired. Time to go out for a walk!
We feel the air on our skin, the sun in our face. The world is such a vivid and rich place. Colors, textures, sounds. While we walk we can't avoid noticing the surface of the roads, rocks, trees and clouds.
![](texture-00.jpg)
![](texture-01.jpg)
![](texture-05.jpg)
![](texture-06.jpg)
The unpredictability of these textures could be called "random," but they don't look like the random we were playing with before. The “real world” is such a rich and complex place! How can we approximate this variety computationally?
This was the question [Ken Perlin](https://mrl.nyu.edu/~perlin/) was trying to solve in the early 1980s when he was commissioned to generate more realistic textures for the movie "Tron." In response to that, he came up with an elegant *Oscar winning* noise algorithm. (No biggie.)
![Disney - Tron (1982)](tron.jpg)
The following is not the classic Perlin noise algorithm, but it is a good starting point to understand how to generate noise.
<div class="simpleFunction" data="
float i = floor(x); // integer
float f = fract(x); // fraction
y = rand(i);
//y = mix(rand(i), rand(i + 1.0), f);
//y = mix(rand(i), rand(i + 1.0), smoothstep(0.,1.,f));
"></div>
In these lines we are doing something similar to what we did in the previous chapter. We are subdividing a continuous floating number (```x```) into its integer (```i```) and fractional (```f```) components. We use [```floor()```](.../glossary/?search=floor) to obtain ```i``` and [```fract()```](.../glossary/?search=fract) to obtain ```f```. Then we apply ```rand()``` to the integer part of ```x```, which gives a unique random value for each integer.
After that you see two commented lines. The first one interpolates each random value linearly.
```glsl
y = mix(rand(i), rand(i + 1.0), f);
```
Go ahead and uncomment this line to see how this looks. We use the [```fract()```](.../glossary/?search=fract) value stored in `f` to [```mix()```](.../glossary/?search=mix) the two random values.
At this point in the book, we've learned that we can do better than a linear interpolation, right?
Now try uncommenting the following line, which uses a [```smoothstep()```](.../glossary/?search=smoothstep) interpolation instead of a linear one.
```glsl
y = mix(rand(i), rand(i + 1.0), smoothstep(0.,1.,f));
```
After uncommenting it, notice how the transition between the peaks gets smooth. In some noise implementations you will find that programmers prefer to code their own cubic curves (like the following formula) instead of using the [```smoothstep()```](.../glossary/?search=smoothstep).
```glsl
float u = f * f * (3.0 - 2.0 * f ); // custom cubic curve
y = mix(rand(i), rand(i + 1.0), u); // using it in the interpolation
```
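Putting these pieces together, a complete 1D value noise function can be sketched as follows (a minimal sketch, assuming the one-dimensional ```rand()``` described in the previous chapter):

```glsl
// A sketch of 1D value noise, assembled from the pieces above
float rand (float x) { return fract(sin(x)*43758.5453); }

float noise (float x) {
    float i = floor(x);                 // integer part: which cell we are in
    float f = fract(x);                 // fractional part: position inside the cell
    float u = f * f * (3.0 - 2.0 * f);  // custom cubic curve
    return mix(rand(i), rand(i + 1.0), u); // blend the two cell corners
}
```

Called with a continuous input like ```noise(st.x * 10.0)```, this returns smoothly varying values between 0.0 and 1.0.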
This *smooth randomness* is a game changer for graphical engineers or artists - it provides the ability to generate images and geometries with an organic feeling. Perlin's Noise Algorithm has been implemented over and over in different languages and dimensions to make mesmerizing pieces for all sorts of creative uses.
![Robert Hodgin - Written Images (2010)](robert_hodgin.jpg)
Now it's your turn:
* Make your own ```float noise(float x)``` function.
* Use your noise function to animate a shape by moving it, rotating it or scaling it.
* Make an animated composition of several shapes 'dancing' together using noise.
* Construct "organic-looking" shapes using the noise function.
* Once you have your "creature," try to develop it further into a character by assigning it a particular movement.
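As a starting point for these exercises, here is one hedged sketch (the constants and speeds are arbitrary choices) of driving a shape's position with 1D noise over time:

```glsl
// Hypothetical sketch: a circle drifting organically, driven by 1D noise
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform float u_time;

float rand (float x) { return fract(sin(x)*43758.5453); }
float noise (float x) {
    float i = floor(x);
    float f = fract(x);
    return mix(rand(i), rand(i + 1.0), smoothstep(0., 1., f));
}

void main () {
    vec2 st = gl_FragCoord.xy/u_resolution.xy;
    vec2 center = vec2(0.5);
    center.x += (noise(u_time*0.7) - 0.5)*0.6;         // wander in x
    center.y += (noise(u_time*0.5 + 100.0) - 0.5)*0.6; // wander in y, offset seed
    float pct = 1.0 - smoothstep(0.10, 0.11, distance(st, center));
    gl_FragColor = vec4(vec3(pct), 1.0);
}
```

Sampling the same noise at two offset positions keeps the x and y motion uncorrelated while both stay smooth.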
## 2D Noise
![](02.png)
Now that we know how to do noise in 1D, it's time to move on to 2D. In 2D, instead of interpolating between two points of a line (```fract(x)``` and ```fract(x)+1.0```), we are going to interpolate between the four corners of the square area of a plane (```fract(st)```, ```fract(st)+vec2(1.,0.)```, ```fract(st)+vec2(0.,1.)``` and ```fract(st)+vec2(1.,1.)```).
![](01.png)
Similarly, if we want to obtain 3D noise we need to interpolate between the eight corners of a cube. This technique is all about interpolating random values, which is why it's called **value noise**.
![](04.jpg)
Like the 1D example, this interpolation is not linear but cubic, which smoothly interpolates any points inside our square grid.
![](05.jpg)
Take a look at the following noise function.
<div class="codeAndCanvas" data="2d-noise.frag"></div>
We start by scaling the space by 5 (line 45) in order to see the interpolation between the squares of the grid. Then inside the noise function we subdivide the space into cells. We store the integer position of the cell along with the fractional positions inside the cell. We use the integer position to calculate the four corners' coordinates and obtain a random value for each one (lines 23-26). Finally, in line 35 we interpolate between the 4 random values of the corners using the fractional positions we stored before.
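As a side note, the interpolation formula in line 35 is just a bilinear blend written in expanded form. It can equivalently be sketched with two nested ```mix()``` calls, which some people find easier to read (assuming the same 2D ```random()``` as above):

```glsl
// The same 2D value noise, with the expanded formula
// mix(a, b, u.x) + (c - a)*u.y*(1.0 - u.x) + (d - b)*u.x*u.y
// rewritten as two nested mix() calls — the results are identical
float noise2 (in vec2 st) {
    vec2 i = floor(st);
    vec2 f = fract(st);
    float a = random(i);                   // bottom-left corner
    float b = random(i + vec2(1.0, 0.0));  // bottom-right corner
    float c = random(i + vec2(0.0, 1.0));  // top-left corner
    float d = random(i + vec2(1.0, 1.0));  // top-right corner
    vec2 u = f * f * (3.0 - 2.0 * f);      // cubic curve on each axis
    return mix( mix(a, b, u.x),            // blend along x on the bottom edge
                mix(c, d, u.x),            // blend along x on the top edge
                u.y );                     // then blend the two results along y
}
```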
Now it's your turn. Try the following exercises:
* Change the multiplier of line 45. Try to animate it.
* At what level of zoom does the noise start looking like random again?
* At what zoom level is the noise imperceptible?
* Try to hook up this noise function to the mouse coordinates.
* What if we treat the gradient of the noise as a distance field? Make something interesting with it.
* Now that you've achieved some control over order and chaos, it's time to use that knowledge. Make a composition of rectangles, colors and noise that resembles some of the complexity of a [Mark Rothko](http://en.wikipedia.org/wiki/Mark_Rothko) painting.
![Mark Rothko - Three (1950)](rothko.jpg)
## Using Noise in Generative Designs
Noise algorithms were originally designed to give a natural *je ne sais quoi* to digital textures. The 1D and 2D implementations we've seen so far were interpolations between random *values*, which is why they're called **Value Noise**, but there are more ways to obtain noise...
[ ![Inigo Quilez - Value Noise](value-noise.png) ](../edit.html#11/2d-vnoise.frag)
As you discovered in the previous exercises, value noise tends to look "blocky." To diminish this blocky effect, in 1985 [Ken Perlin](https://mrl.nyu.edu/~perlin/) developed another implementation of the algorithm called **Gradient Noise**. Ken figured out how to interpolate random *gradients* instead of values. These gradients were the result of a 2D random function that returns directions (represented by a ```vec2```) instead of single values (```float```). Click on the following image to see the code and how it works.
[ ![Inigo Quilez - Gradient Noise](gradient-noise.png) ](../edit.html#11/2d-gnoise.frag)
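A compact sketch of 2D gradient noise — close in spirit to the example linked above, though details vary between implementations — replaces the random values at the corners with random directions, and interpolates the dot products between each gradient and the vector from that corner:

```glsl
// A sketch of 2D gradient noise. random2() returns a pseudo-random
// direction (a vec2 in the range -1.0 to 1.0) instead of a single value.
vec2 random2 (in vec2 st) {
    st = vec2( dot(st, vec2(127.1, 311.7)),
               dot(st, vec2(269.5, 183.3)) );
    return -1.0 + 2.0 * fract(sin(st) * 43758.5453123);
}

float gnoise (in vec2 st) {
    vec2 i = floor(st);
    vec2 f = fract(st);
    vec2 u = f * f * (3.0 - 2.0 * f); // cubic curve
    // Interpolate the dot products between each corner's gradient
    // and the vector from that corner to the point being evaluated
    return mix( mix( dot(random2(i),                 f),
                     dot(random2(i + vec2(1., 0.)),  f - vec2(1., 0.)), u.x),
                mix( dot(random2(i + vec2(0., 1.)),  f - vec2(0., 1.)),
                     dot(random2(i + vec2(1., 1.)),  f - vec2(1., 1.)), u.x), u.y);
}
```

Unlike value noise, this returns values roughly in the -1.0 to 1.0 range, so it is often remapped with ```gnoise(st) * 0.5 + 0.5``` before being used as a color.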
Take a minute to look at these two examples by [Inigo Quilez](http://www.iquilezles.org/) and pay attention to the differences between [value noise](https://www.shadertoy.com/view/lsf3WH) and [gradient noise](https://www.shadertoy.com/view/XdXGW8).
Like a painter who understands how the pigments of their paints work, the more we know about noise implementations the better we will be able to use them. For example, if we use a two dimensional noise implementation to rotate the space where straight lines are rendered, we can produce the following swirly effect that looks like wood. Again you can click on the image to see what the code looks like.
[ ![Wood texture](wood.png) ](../edit.html#11/wood.frag)
```glsl
pos = rotate2d( noise(pos) ) * pos; // rotate the space
pattern = lines(pos,.5); // draw lines
```
Another way to get interesting patterns from noise is to treat it like a distance field and apply some of the tricks described in the [Shapes chapter](../07/).
[ ![Splatter texture](splatter.png) ](../edit.html#11/splatter.frag)
```glsl
color += smoothstep(.15,.2,noise(st*10.)); // Black splatter
color -= smoothstep(.35,.4,noise(st*10.)); // Holes on splatter
```
A third way of using the noise function is to modulate a shape. This also requires some of the techniques we learned in the [chapter about shapes](../07/).
<a href="../edit.html#11/circleWave-noise.frag"><canvas id="custom" class="canvas" data-fragment-url="circleWave-noise.frag" width="300px" height="300"></canvas></a>
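One way to sketch this idea — wobbling the radius of a circle with noise sampled along its angle — might look like the following (a minimal sketch, reusing the 1D ```noise()``` built earlier in this chapter):

```glsl
// Hypothetical sketch: a circular shape whose edge is modulated by 1D noise
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform float u_time;

float rand (float x) { return fract(sin(x)*43758.5453); }
float noise (float x) {
    float i = floor(x);
    float f = fract(x);
    return mix(rand(i), rand(i + 1.0), smoothstep(0., 1., f));
}

void main () {
    vec2 st = gl_FragCoord.xy/u_resolution.xy;
    vec2 pos = vec2(0.5) - st;                    // center the coordinates
    float r = length(pos)*2.0;                    // distance from the center
    float a = atan(pos.y, pos.x);                 // angle around the center
    float edge = 0.8 + 0.2*noise(a*3.0 + u_time); // radius wobbled by noise
    float pct = 1.0 - smoothstep(edge, edge + 0.02, r);
    gl_FragColor = vec4(vec3(pct), 1.0);
}
```

Note that a plain 1D noise leaves a visible seam where the angle wraps around; hiding it is a nice extra exercise.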
For you to practice:
* What other generative pattern can you make? What about granite? marble? magma? water? Find three pictures of textures you are interested in and implement them algorithmically using noise.
* Use noise to modulate a shape.
* What about using noise for motion? Go back to the [Matrix chapter](../08/). Use the translation example that moves the "+" around, and apply some *random* and *noise* movements to it.
* Make a generative Jackson Pollock.
![Jackson Pollock - Number 14 gray (1948)](pollock.jpg)
## Simplex Noise
For Ken Perlin the success of his algorithm wasn't enough. He thought it could perform better. At Siggraph 2001 he presented the "simplex noise" in which he achieved the following improvements over the previous algorithm:
* An algorithm with lower computational complexity and fewer multiplications.
* A noise that scales to higher dimensions with less computational cost.
* A noise without directional artifacts.
* A noise with well-defined and continuous gradients that can be computed quite cheaply.
* An algorithm that is easy to implement in hardware.
I know what you are thinking... "Who is this man?" Yes, his work is fantastic! But seriously, how did he improve the algorithm? Well, we saw how for two dimensions he was interpolating 4 points (corners of a square); so we can correctly guess that for [three (see an implementation here)](../edit.html#11/3d-noise.frag) and four dimensions we need to interpolate 8 and 16 points. Right? In other words for N dimensions you need to smoothly interpolate 2 to the N points (2^N). But Ken smartly noticed that although the obvious choice for a space-filling shape is a square, the simplest shape in 2D is the equilateral triangle. So he started by replacing the squared grid (which we just learned how to use) with a simplex grid of equilateral triangles.
![](simplex-grid-00.png)
The simplex shape for N dimensions is a shape with N + 1 corners. In other words one fewer corner to compute in 2D, 4 fewer corners in 3D and 11 fewer corners in 4D! That's a huge improvement!
In two dimensions the interpolation happens similarly to regular noise, by interpolating the values of the corners of a section. But in this case, by using a simplex grid, we only need to interpolate the sum of 3 corners.
![](simplex-grid-01.png)
How is the simplex grid made? In another brilliant and elegant move, the simplex grid can be obtained by subdividing the cells of a regular 4 cornered grid into two isosceles triangles and then skewing it until each triangle is equilateral.
![](simplex-grid-02.png)
Then, as [Stefan Gustavson describes in this paper](http://staffwww.itn.liu.se/~stegu/simplexnoise/simplexnoise.pdf): _"...by looking at the integer parts of the transformed coordinates (x,y) for the point we want to evaluate, we can quickly determine which cell of two simplices that contains the point. By also comparing the magnitudes of x and y, we can determine whether the point is in the upper or the lower simplex, and traverse the correct three corner points."_
In the following code you can uncomment line 44 to see how the grid is skewed, and then uncomment line 47 to see how a simplex grid can be constructed. Note how on line 22 we are subdividing the skewed square into two equilateral triangles just by detecting if ```x > y``` ("lower" triangle) or ```y > x``` ("upper" triangle).
<div class="codeAndCanvas" data="simplex-grid.frag"></div>
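The core of that construction — skew the space, then split each skewed cell into two triangles along its diagonal — can be sketched as follows (the helper names here are hypothetical, and the exact skew constants vary between implementations):

```glsl
// Hypothetical sketch of the simplex grid construction
vec2 skew (vec2 st) {
    vec2 r = vec2(0.0);
    r.x = 1.1547 * st.x;       // 2.0/sqrt(3.0): stretch along x...
    r.y = st.y + 0.5 * r.x;    // ...and shear y until each triangle is equilateral
    return r;
}

float whichTriangle (vec2 st) {
    vec2 f = fract(skew(st));
    return step(f.y, f.x);     // 1.0 = "lower" triangle (x > y), 0.0 = "upper"
}
```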
Another improvement introduced by Perlin with **Simplex Noise** is the replacement of the Cubic Hermite Curve ( _f(x) = 3x^2-2x^3_ , which is identical to the [```smoothstep()```](.../glossary/?search=smoothstep) function) with a Quintic Hermite Curve ( _f(x) = 6x^5-15x^4+10x^3_ ). This makes both ends of the curve more "flat" so each border gracefully stitches with the next one. In other words you get a more continuous transition between the cells. You can see this by uncommenting the second formula in the following graph example (or see the [two equations side by side here](https://www.desmos.com/calculator/2xvlk5xp8b)).
<div class="simpleFunction" data="
// Cubic Hermite Curve. Same as SmoothStep()
y = x*x*(3.0-2.0*x);
//y = x*x*x*(x*(x*6.-15.)+10.);
"></div>
Note how the ends of the curve change. You can read more about this in [Ken's own words](http://mrl.nyu.edu/~perlin/paper445.pdf).
All these improvements result in an algorithmic masterpiece known as **Simplex Noise**. The following is a GLSL implementation of this algorithm made by Ian McEwan (and presented in [this paper](http://webstaff.itn.liu.se/~stegu/jgt2012/article.pdf)) which is overcomplicated for educational purposes, but you will be happy to click on it and see that it is less cryptic than you might expect.
[ ![Ian McEwan of Ashima Arts - Simplex Noise](simplex-noise.png) ](../edit.html#11/2d-snoise-clear.frag)
Well enought technicalities, is time for you to use this resource in your own expressive way:
Well... enough technicalities, it's time for you to use this resource in your own expressive way:
* Contemplate how each noise implementation looks. Imagine them as a raw material. Like a marble rock for a sculptor. What you can say about about the "feeling" that each one have? Squinch your eyes to trigger your imagination, like when you want to find shapes on a cloud, What do you see? what reminds you off? How do you imagine each noise implementation could be model into? Following your guts try to make it happen on code.
* Contemplate how each noise implementation looks. Imagine them as a raw material, like a marble rock for a sculptor. What can you say about about the "feeling" that each one has? Squinch your eyes to trigger your imagination, like when you want to find shapes in a cloud. What do you see? What are you reminded of? What do you imagine each noise implementation could be made into? Following your guts and try to make it happen in code.
* Make a shader that project the ilusion of flow. Like a lava lamp, ink drops, watter, etc.
* Make a shader that projects the illusion of flow. Like a lava lamp, ink drops, water, etc.
<a href="../edit.html#11/lava-lamp.frag"><canvas id="custom" class="canvas" data-fragment-url="lava-lamp.frag" width="520px" height="200px"></canvas></a>
* Use Signed Noise to add some texture to a work you already made.
* Use Simplex Noise to add some texture to a work you've already made.
<a href="../edit.html#11/iching-03.frag"><canvas id="custom" class="canvas" data-fragment-url="iching-03.frag" width="520px" height="520px"></canvas></a>
In this chapter we have introduce some control over the chaos. Is not an easy job! Becoming a noise-bender-master takes time and efford.
In this chapter we have introduced some control over the chaos. It was not an easy job! Becoming a noise-bender-master takes time and effort.
On the following chapters we will see some "well-know" techniques to perfect your skills and get more out of your noise to design quality generative content with shaders. Until then enjoy some time outside contemplating nature and their intricate patterns. Your hability to observe need's equal (or probably more) dedication than your making skills. Go outside and enjoy the rest of day!
In the following chapters we will see some well known techniques to perfect your skills and get more out of your noise to design quality generative content with shaders. Until then enjoy some time outside contemplating nature and its intricate patterns. Your ability to observe needs equal (or probably more) dedication than your making skills. Go outside and enjoy the rest of the day!
<p style="text-align:center; font-style: italic;">"Talk to the tree, make friends with it." Bob Ross
</p>
</p>

@ -3,9 +3,9 @@
## Noise
It's time for a break! We've been playing with random functions that look like TV white noise, our head is still spinning thinking about shaders, and our eyes are tired. Time to go out for a walk!
We feel the air on our skin, the sun in our face. The world is such a vivid and rich place. Colors, textures, sounds. While we walk we can't avoid noticing the surface of the roads, rocks, trees and clouds.
![](texture-00.jpg)
![](texture-01.jpg)
![](texture-05.jpg)
![](texture-06.jpg)
The unpredictability of these textures could be called "random," but they don't look like the random we were playing with before. The “real world” is such a rich and complex place! How can we approximate this variety computationally?
This was the question [Ken Perlin](https://mrl.nyu.edu/~perlin/) was trying to solve in the early 1980s when he was commissioned to generate more realistic textures for the movie "Tron." In response to that, he came up with an elegant *Oscar winning* noise algorithm. (No biggie.)
![Disney - Tron (1982)](tron.jpg)
The following is not the classic Perlin noise algorithm, but it is a good starting point to understand how to generate noise.
<div class="simpleFunction" data="
float i = floor(x); // integer
float f = fract(x); // fraction
y = rand(i); //rand() is described in the previous chapter
//y = mix(rand(i), rand(i + 1.0), f);
//y = mix(rand(i), rand(i + 1.0), smoothstep(0.,1.,f));
"></div>
In these lines we are doing something similar to what we did in the previous chapter. We are subdividing a continuous floating number (```x```) into its integer (```i```) and fractional (```f```) components. We use [```floor()```](.../glossary/?search=floor) to obtain ```i``` and [```fract()```](.../glossary/?search=fract) to obtain ```f```. Then we apply ```rand()``` to the integer part of ```x```, which gives a unique random value for each integer.
After that you see two commented lines. The first one interpolates each random value linearly.
```glsl
y = mix(rand(i), rand(i + 1.0), f);
```
Go ahead and uncomment this line to see how this looks. We use the [```fract()```](.../glossary/?search=fract) value stored in `f` to [```mix()```](.../glossary/?search=mix) the two random values.
At this point in the book, we've learned that we can do better than a linear interpolation, right?
Now try uncommenting the following line, which uses a [```smoothstep()```](.../glossary/?search=smoothstep) interpolation instead of a linear one.
```glsl
y = mix(rand(i), rand(i + 1.0), smoothstep(0.,1.,f));
```
After uncommenting it, notice how the transition between the peaks gets smooth. In some noise implementations you will find that programmers prefer to code their own cubic curves (like the following formula) instead of using the [```smoothstep()```](.../glossary/?search=smoothstep).
```glsl
float u = f * f * (3.0 - 2.0 * f ); // custom cubic curve
y = mix(rand(i), rand(i + 1.0), u); // using it in the interpolation
```
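If it helps to see the whole 1D pipeline in one place, here is a CPU-side sketch of it in Python. It is only an illustration, not shader code: `rand()` mirrors the `fract(sin(x)*43758.5453123)` one-liner from the previous chapter, and the helper names are made up for this sketch.

```python
import math

def rand(x):
    # deterministic pseudo-random value in [0, 1),
    # analogous to fract(sin(x) * 43758.5453123) in GLSL
    return math.sin(x) * 43758.5453123 % 1.0

def mix(a, b, t):
    # linear interpolation, same as GLSL mix()
    return a * (1.0 - t) + b * t

def smoothstep(e0, e1, x):
    # cubic Hermite fade, same as GLSL smoothstep()
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def noise(x):
    i = math.floor(x)   # integer part: which cell we are in
    f = x - i           # fractional part: position inside the cell
    # smoothly interpolate between the two neighboring random values
    return mix(rand(i), rand(i + 1.0), smoothstep(0.0, 1.0, f))
```

Sampling `noise()` at an integer position returns exactly the underlying random value for that cell; in between, the cubic fade blends the two neighbors without seams.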
This *smooth randomness* is a game changer for graphical engineers or artists - it provides the ability to generate images and geometries with an organic feeling. Perlin's Noise Algorithm has been implemented over and over in different languages and dimensions to make mesmerizing pieces for all sorts of creative uses.
![Robert Hodgin - Written Images (2010)](robert_hodgin.jpg)
Now it's your turn:
* Make your own ```float noise(float x)``` function.
* Use your noise function to animate a shape by moving it, rotating it or scaling it.
* Make an animated composition of several shapes 'dancing' together using noise.
* Construct "organic-looking" shapes using the noise function.
* Once you have your "creature," try to develop it further into a character by assigning it a particular movement.
## 2D Noise
![](02.png)
Now that we know how to do noise in 1D, it's time to move on to 2D. In 2D, instead of interpolating between two points of a line (```fract(x)``` and ```fract(x)+1.0```), we are going to interpolate between the four corners of the square area of a plane (```fract(st)```, ```fract(st)+vec2(1.,0.)```, ```fract(st)+vec2(0.,1.)``` and ```fract(st)+vec2(1.,1.)```).
![](01.png)
Similarly, if we want to obtain 3D noise we need to interpolate between the eight corners of a cube. This technique is all about interpolating random values, which is why it's called **value noise**.
![](04.jpg)
Like the 1D example, this interpolation is not linear but cubic, which smoothly interpolates any points inside our square grid.
![](05.jpg)
Take a look at the following noise function.
<div class="codeAndCanvas" data="2d-noise.frag"></div>
We start by scaling the space by 5 (line 45) in order to see the interpolation between the squares of the grid. Then inside the noise function we subdivide the space into cells. We store the integer position of the cell along with the fractional positions inside the cell. We use the integer position to calculate the four corners' coordinates and obtain a random value for each one (lines 23-26). Finally, in line 35 we interpolate between the 4 random values of the corners using the fractional positions we stored before.
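The same four-corner interpolation can be sketched outside the shader. This is a hypothetical Python version for illustration; it uses the classic `dot(st, vec2(12.9898, 78.233))` style hash as a stand-in for the shader's `random()`, and the function names are invented for the sketch.

```python
import math

def rand2(x, y):
    # 2D -> 1D hash, analogous to
    # fract(sin(dot(st, vec2(12.9898, 78.233))) * 43758.5453123)
    return math.sin(x * 12.9898 + y * 78.233) * 43758.5453123 % 1.0

def mix(a, b, t):
    return a * (1.0 - t) + b * t

def smooth(t):
    # cubic Hermite fade, same curve as smoothstep(0., 1., t)
    return t * t * (3.0 - 2.0 * t)

def noise2(x, y):
    ix, iy = math.floor(x), math.floor(y)   # cell coordinates
    fx, fy = x - ix, y - iy                 # position inside the cell
    # one random value per corner of the cell
    a = rand2(ix,       iy)
    b = rand2(ix + 1.0, iy)
    c = rand2(ix,       iy + 1.0)
    d = rand2(ix + 1.0, iy + 1.0)
    ux, uy = smooth(fx), smooth(fy)
    # smoothed bilinear interpolation of the four corners
    return mix(mix(a, b, ux), mix(c, d, ux), uy)
```

At the corners the function returns the corner's own random value, and the smoothed weights keep the result continuous across cell borders.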
Now it's your turn. Try the following exercises:
* Change the multiplier of line 45. Try to animate it.
* At what level of zoom does the noise start looking like random again?
* At what zoom level is the noise imperceptible?
* Try to hook up this noise function to the mouse coordinates.
* What if we treat the gradient of the noise as a distance field? Make something interesting with it.
* Now that you've achieved some control over order and chaos, it's time to use that knowledge. Make a composition of rectangles, colors and noise that resembles some of the complexity of a [Mark Rothko](http://en.wikipedia.org/wiki/Mark_Rothko) painting.
![Mark Rothko - Three (1950)](rothko.jpg)
## Using Noise in Generative Designs
Noise algorithms were originally designed to give a natural *je ne sais quoi* to digital textures. The 1D and 2D implementations we've seen so far were interpolations between random *values*, which is why they're called **Value Noise**, but there are more ways to obtain noise...
[ ![Inigo Quilez - Value Noise](value-noise.png) ](../edit.html#11/2d-vnoise.frag)
As you discovered in the previous exercises, value noise tends to look "blocky." To diminish this blocky effect, in 1985 [Ken Perlin](https://mrl.nyu.edu/~perlin/) developed another implementation of the algorithm called **Gradient Noise**. Ken figured out how to interpolate random *gradients* instead of values. These gradients were the result of a 2D random function that returns directions (represented by a ```vec2```) instead of single values (```float```). Click on the following image to see the code and how it works.
[ ![Inigo Quilez - Gradient Noise](gradient-noise.png) ](../edit.html#11/2d-gnoise.frag)
Take a minute to look at these two examples by [Inigo Quilez](http://www.iquilezles.org/) and pay attention to the differences between [value noise](https://www.shadertoy.com/view/lsf3WH) and [gradient noise](https://www.shadertoy.com/view/XdXGW8).
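To make the difference concrete, here is a minimal 1D sketch of the gradient approach in Python. Perlin's 2D version interpolates dot products of random `vec2` gradients; this 1D analogue uses random slopes, and the helper names are illustrative only, not part of any real noise library.

```python
import math

def rand(x):
    # deterministic pseudo-random value in [0, 1)
    return math.sin(x) * 43758.5453123 % 1.0

def grad(i):
    # a random slope in [-1, 1) at each integer lattice point
    return rand(i) * 2.0 - 1.0

def quintic(t):
    # Perlin's quintic fade: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

def gradient_noise(x):
    i = math.floor(x)
    f = x - i
    # each lattice point contributes a line through zero with its own slope
    g0 = grad(i) * f               # contribution of the left corner
    g1 = grad(i + 1.0) * (f - 1.0) # contribution of the right corner
    u = quintic(f)
    return g0 * (1.0 - u) + g1 * u
```

Notice the signature difference from value noise: gradient noise crosses zero at every lattice point, so the result undulates around zero instead of jumping between blocky random levels.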
Like a painter who understands how the pigments of their paints work, the more we know about noise implementations the better we will be able to use them. For example, if we use a two dimensional noise implementation to rotate the space where straight lines are rendered, we can produce the following swirly effect that looks like wood. Again you can click on the image to see what the code looks like.
[ ![Wood texture](wood.png) ](../edit.html#11/wood.frag)
```glsl
pattern = lines(pos,.5); // draw lines
```
Another way to get interesting patterns from noise is to treat it like a distance field and apply some of the tricks described in the [Shapes chapter](../07/).
[ ![Splatter texture](splatter.png) ](../edit.html#11/splatter.frag)
```glsl
color -= smoothstep(.35,.4,noise(st*10.)); // Holes on splatter
```
A third way of using the noise function is to modulate a shape. This also requires some of the techniques we learned in the [chapter about shapes](../07/).
<a href="../edit.html#11/circleWave-noise.frag"><canvas id="custom" class="canvas" data-fragment-url="circleWave-noise.frag" width="300px" height="300"></canvas></a>
For you to practice:
* What other generative pattern can you make? What about granite? marble? magma? water? Find three pictures of textures you are interested in and implement them algorithmically using noise.
* Use noise to modulate a shape.
* What about using noise for motion? Go back to the [Matrix chapter](../08/). Use the translation example that moves the "+" around, and apply some *random* and *noise* movements to it.
* Make a generative Jackson Pollock.
![Jackson Pollock - Number 14 gray (1948)](pollock.jpg)
## Simplex Noise
For Ken Perlin the success of his algorithm wasn't enough. He thought it could perform better. At Siggraph 2001 he presented the "simplex noise" in which he achieved the following improvements over the previous algorithm:
* An algorithm with lower computational complexity and fewer multiplications.
* A noise that scales to higher dimensions with less computational cost.
* A noise without directional artifacts.
* A noise with well-defined and continuous gradients that can be computed quite cheaply.
* An algorithm that is easy to implement in hardware.
I know what you are thinking... "Who is this man?" Yes, his work is fantastic! But seriously, how did he improve the algorithm? Well, we saw how for two dimensions he was interpolating 4 points (corners of a square); so we can correctly guess that for [three (see an implementation here)](../edit.html#11/3d-noise.frag) and four dimensions we need to interpolate 8 and 16 points. Right? In other words for N dimensions you need to smoothly interpolate 2 to the N points (2^N). But Ken smartly noticed that although the obvious choice for a space-filling shape is a square, the simplest shape in 2D is the equilateral triangle. So he started by replacing the squared grid (which we just learned how to use) with a simplex grid of equilateral triangles.
![](simplex-grid-00.png)
The simplex shape for N dimensions is a shape with N + 1 corners. In other words one fewer corner to compute in 2D, 4 fewer corners in 3D and 11 fewer corners in 4D! That's a huge improvement!
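The corner arithmetic is easy to verify with a quick sketch:

```python
def value_noise_corners(n):
    # an axis-aligned cell in N dimensions has 2^N corners to blend
    return 2 ** n

def simplex_corners(n):
    # the simplest shape in N dimensions has only N + 1 corners
    return n + 1

# corners saved per cell in 2D, 3D and 4D
savings = {n: value_noise_corners(n) - simplex_corners(n) for n in (2, 3, 4)}
```

The gap grows exponentially with the dimension, which is exactly why simplex noise scales so much better than value noise.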
In two dimensions the interpolation happens similarly to regular noise, by interpolating the values of the corners of a section. But in this case, by using a simplex grid, we only need to interpolate the sum of 3 corners.
![](simplex-grid-01.png)
How is the simplex grid made? In another brilliant and elegant move, the simplex grid can be obtained by subdividing the cells of a regular 4 cornered grid into two isosceles triangles and then skewing it until each triangle is equilateral.
![](simplex-grid-02.png)
Then, as [Stefan Gustavson describes in this paper](http://staffwww.itn.liu.se/~stegu/simplexnoise/simplexnoise.pdf): _"...by looking at the integer parts of the transformed coordinates (x,y) for the point we want to evaluate, we can quickly determine which cell of two simplices that contains the point. By also comparing the magnitudes of x and y, we can determine whether the point is in the upper or the lower simplex, and traverse the correct three corner points."_
In the following code you can uncomment line 44 to see how the grid is skewed, and then uncomment line 47 to see how a simplex grid can be constructed. Note how on line 22 we are subdividing the skewed square into two equilateral triangles just by detecting if ```x > y``` ("lower" triangle) or ```y > x``` ("upper" triangle).
<div class="codeAndCanvas" data="simplex-grid.frag"></div>
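The two steps of the shader above (skewing the grid, then picking a triangle by comparing the fractional coordinates) can be sketched in Python like this, using the same constants as the shader. The function names are invented for the sketch.

```python
def skew(x, y):
    # same transform as the shader's skew(): 1.1547 is roughly 2/sqrt(3)
    rx = 1.1547 * x
    ry = y + 0.5 * rx
    return rx, ry

def simplex_cell(x, y):
    # fract() of the skewed coordinates locates us inside one cell;
    # comparing them tells us which of the cell's two triangles we are in
    rx, ry = skew(x, y)
    px, py = rx % 1.0, ry % 1.0
    return "lower" if px > py else "upper"
```

A point well to the right of the diagonal of its skewed cell lands in the "lower" triangle, and one above it lands in the "upper" triangle, matching the `x > y` test on line 22 of the shader.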
Another improvement introduced by Perlin with **Simplex Noise** is the replacement of the Cubic Hermite Curve ( _f(x) = 3x^2-2x^3_ , which is identical to the [```smoothstep()```](.../glossary/?search=smoothstep) function) with a Quintic Hermite Curve ( _f(x) = 6x^5-15x^4+10x^3_ ). This makes both ends of the curve more "flat," so each border gracefully stitches with the next one. In other words you get a more continuous transition between the cells. You can see this by uncommenting the second formula in the following graph example (or see the [two equations side by side here](https://www.desmos.com/calculator/2xvlk5xp8b)).
<div class="simpleFunction" data="
// Cubic Hermite Curve. Same as SmoothStep()
y = x*x*(3.0-2.0*x);
//y = x*x*x*(x*(x*6.-15.)+10.);
"></div>
Note how the ends of the curve change. You can read more about this in [Ken's own words](http://mrl.nyu.edu/~perlin/paper445.pdf).
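You can check the "flatter ends" claim numerically: both fades match at the endpoints and have zero first derivative there, but only the quintic also has a zero *second* derivative at the ends, which is what removes the visible creases at cell borders. A small Python check:

```python
def cubic(t):
    # 3t^2 - 2t^3, the smoothstep() curve
    return t * t * (3.0 - 2.0 * t)

def quintic(t):
    # 6t^5 - 15t^4 + 10t^3, Perlin's improved fade
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

def second_derivative(f, t, h=1e-4):
    # central finite difference approximation of f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)
```

Analytically, cubic''(t) = 6 - 12t, which is 6 at t = 0 and -6 at t = 1, while quintic''(t) = 120t^3 - 180t^2 + 60t vanishes at both ends.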
All these improvements result in an algorithmic masterpiece known as **Simplex Noise**. The following is a GLSL implementation of this algorithm made by Ian McEwan (and presented in [this paper](http://webstaff.itn.liu.se/~stegu/jgt2012/article.pdf)) which is overcomplicated for educational purposes, but you will be happy to click on it and see that it is less cryptic than you might expect.
[ ![Ian McEwan of Ashima Arts - Simplex Noise](simplex-noise.png) ](../edit.html#11/2d-snoise-clear.frag)
Well... enough technicalities, it's time for you to use this resource in your own expressive way:
* Contemplate how each noise implementation looks. Imagine them as a raw material, like a marble rock for a sculptor. What can you say about the "feeling" that each one has? Squinch your eyes to trigger your imagination, like when you want to find shapes in a cloud. What do you see? What are you reminded of? What do you imagine each noise implementation could be made into? Follow your gut and try to make it happen in code.
* Make a shader that projects the illusion of flow. Like a lava lamp, ink drops, water, etc.
<a href="../edit.html#11/lava-lamp.frag"><canvas id="custom" class="canvas" data-fragment-url="lava-lamp.frag" width="520px" height="200px"></canvas></a>
* Use Simplex Noise to add some texture to a work you've already made.
<a href="../edit.html#11/iching-03.frag"><canvas id="custom" class="canvas" data-fragment-url="iching-03.frag" width="520px" height="520px"></canvas></a>
In this chapter we have introduced some control over the chaos. It was not an easy job! Becoming a noise-bender-master takes time and effort.
In the following chapters we will see some well known techniques to perfect your skills and get more out of your noise to design quality generative content with shaders. Until then enjoy some time outside contemplating nature and its intricate patterns. Your ability to observe needs equal (or probably more) dedication than your making skills. Go outside and enjoy the rest of the day!
<p style="text-align:center; font-style: italic;">"Talk to the tree, make friends with it." Bob Ross
</p>


@ -0,0 +1,13 @@
#!/bin/bash
# Render a shader to an animated GIF:
#   $0 <shader.frag> <seconds>
FILE=$1
SEC=$2
COUNTER=0
# Step through time, saving one PNG frame per step with glslViewer
for i in `seq -w 0.01 .031 $SEC`; do
echo $i
glslViewer $FILE -s $i -o frame-$COUNTER.png
let COUNTER=COUNTER+1
done
# Assemble the frames into a GIF with ImageMagick
convert -delay 3.5 -loop 1 frame-*.png animated.gif

@ -0,0 +1,64 @@
// Author @patriciogv - 2015 - patriciogonzalezvivo.com
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
vec2 skew (vec2 st) {
vec2 r = vec2(0.0);
r.x = 1.1547*st.x;
r.y = st.y+0.5*r.x;
return r;
}
vec3 simplexGrid (vec2 st) {
vec3 xyz = vec3(0.0);
vec2 p = fract(skew(st));
if (p.x > p.y) {
xyz.xy = 1.0-vec2(p.x,p.y-p.x);
xyz.z = p.y;
} else {
xyz.zx = 1.-vec2(p.x-p.y,p.y);
xyz.y = p.x;
}
return fract(xyz);
}
// Antialiased step function
// from http://webstaff.itn.liu.se/~stegu/webglshadertutorial/shadertutorial.html
float aastep(float threshold, float value) {
#ifdef GL_OES_standard_derivatives
float afwidth = 0.7 * length(vec2(dFdx(value), dFdy(value)));
return smoothstep(threshold-afwidth, threshold+afwidth, value);
#else
return step(threshold, value);
#endif
}
void main() {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 color = vec3(0.0);
float t = u_time*2.;
// Scale the space to see the grid
st *= 1.733;
st *= 2.;
vec3 S = simplexGrid(st*3.);
S.z += (S.x*S.y)*(0.5);
// color = S;
color += step(.5,abs(sin(.5-S.b*3.1415*5.-t)));
gl_FragColor = vec4(color,1.0);
}

@ -0,0 +1,87 @@
// Author @patriciogv - 2015 - patriciogonzalezvivo.com
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
vec2 skew (vec2 st) {
vec2 r = vec2(0.0);
r.x = 1.1547*st.x;
r.y = st.y+0.5*r.x;
return r;
}
vec3 simplexGrid (vec2 st) {
vec3 xyz = vec3(0.0);
vec2 p = fract(skew(st));
if (p.x > p.y) {
xyz.xy = 1.0-vec2(p.x,p.y-p.x);
xyz.z = p.y;
} else {
xyz.yz = 1.0-vec2(p.x-p.y,p.y);
xyz.x = p.x;
// xyz.zx = 1.-vec2(p.x-p.y,p.y);
// xyz.y = p.x;
}
return fract(xyz);
}
// Antialiased step function
// from http://webstaff.itn.liu.se/~stegu/webglshadertutorial/shadertutorial.html
float aastep(float threshold, float value) {
#ifdef GL_OES_standard_derivatives
float afwidth = 0.7 * length(vec2(dFdx(value), dFdy(value)));
return smoothstep(threshold-afwidth, threshold+afwidth, value);
#else
return step(threshold, value);
#endif
}
vec2 aastep(float threshold, vec2 value) {
return vec2(aastep(threshold, value.x),
aastep(threshold, value.y));
}
vec3 aastep(float threshold, vec3 value) {
return vec3(aastep(threshold, value.x),
aastep(threshold, value.y),
aastep(threshold, value.z));
}
void main() {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 color = vec3(0.0);
// Scale the space to see the grid
float t = u_time*.5;
float pct = smoothstep(.1,.9, abs(sin(length(st-.5)*3.14-t)) );
color = vec3(pct);
st *= 1.733;
st *= 2.;
vec3 S = simplexGrid(st*3.);
color = S;
color = aastep(pct-.01,1.-S);
color = 1.-vec3(color.r + color.g + color.b);
gl_FragColor = vec4(color,1.0);
}

@ -0,0 +1,190 @@
// Author @patriciogv - 2015 - patriciogonzalezvivo.com
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives: enable
#endif
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
vec2 skew (vec2 st) {
vec2 r = vec2(0.0);
r.x = 1.1547*st.x;
r.y = st.y+0.5*r.x;
return r;
}
vec3 simplexGrid (vec2 st) {
vec3 xyz = vec3(0.0);
vec2 p = fract(skew(st));
if (p.x > p.y) {
xyz.xy = 1.0-vec2(p.x,p.y-p.x);
xyz.z = p.y;
} else {
xyz.yz = 1.0-vec2(p.x-p.y,p.y);
xyz.x = p.x;
// xyz.zx = 1.-vec2(p.x-p.y,p.y);
// xyz.y = p.x;
}
return fract(xyz);
}
// Antialiased step function
// from http://webstaff.itn.liu.se/~stegu/webglshadertutorial/shadertutorial.html
float aastep(float threshold, float value) {
#ifdef GL_OES_standard_derivatives
float afwidth = 0.7 * length(vec2(dFdx(value), dFdy(value)));
return smoothstep(threshold-afwidth, threshold+afwidth, value);
#else
return step(threshold, value);
#endif
}
vec2 aastep(float threshold, vec2 value) {
return vec2(aastep(threshold, value.x),
aastep(threshold, value.y));
}
vec3 aastep(float threshold, vec3 value) {
return vec3(aastep(threshold, value.x),
aastep(threshold, value.y),
aastep(threshold, value.z));
}
float isoGrid(vec2 st, float pct) {
vec3 S = simplexGrid(st);
S = aastep(pct-.01,1.-S);
return S.r + S.g + S.b;
}
vec2 sphereCoords(vec2 _st, float _scale){
float maxFactor = sin(1.570796327);
vec2 uv = vec2(0.0);
vec2 xy = 2.0 * _st.xy - 1.0;
float d = length(xy);
if (d < (2.0-maxFactor)){
d = length(xy * maxFactor);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.1415926535 * _scale;
float phi = atan(xy.y, xy.x);
uv.x = r * cos(phi) + 0.5;
uv.y = r * sin(phi) + 0.5;
} else {
uv = _st.xy;
}
return uv;
}
//
// Description : Array and textureless GLSL 2D/3D/4D simplex
// noise functions.
// Author : Ian McEwan, Ashima Arts.
// Maintainer : ijm
// Lastmod : 20110822 (ijm)
// License : Copyright (C) 2011 Ashima Arts. All rights reserved.
// Distributed under the MIT License. See LICENSE file.
// https://github.com/ashima/webgl-noise
//
vec3 mod289(vec3 x) { return x - floor(x * (1.0 / 289.0)) * 289.0; }
vec4 mod289(vec4 x) { return x - floor(x * (1.0 / 289.0)) * 289.0; }
vec4 permute(vec4 x) { return mod289(((x*34.0)+1.0)*x); }
vec4 taylorInvSqrt(vec4 r) { return 1.79284291400159 - 0.85373472095314 * r; }
float snoise(vec3 v) {
const vec2 C = vec2(1.0/6.0, 1.0/3.0) ;
const vec4 D = vec4(0.0, 0.5, 1.0, 2.0);
// First corner
vec3 i = floor(v + dot(v, C.yyy) );
vec3 x0 = v - i + dot(i, C.xxx) ;
// Other corners
vec3 g = step(x0.yzx, x0.xyz);
vec3 l = 1.0 - g;
vec3 i1 = min( g.xyz, l.zxy );
vec3 i2 = max( g.xyz, l.zxy );
vec3 x1 = x0 - i1 + C.xxx;
vec3 x2 = x0 - i2 + C.yyy; // 2.0*C.x = 1/3 = C.y
vec3 x3 = x0 - D.yyy; // -1.0+3.0*C.x = -0.5 = -D.y
// Permutations
i = mod289(i);
vec4 p = permute( permute( permute(
i.z + vec4(0.0, i1.z, i2.z, 1.0 ))
+ i.y + vec4(0.0, i1.y, i2.y, 1.0 ))
+ i.x + vec4(0.0, i1.x, i2.x, 1.0 ));
// Gradients: 7x7 points over a square, mapped onto an octahedron.
// The ring size 17*17 = 289 is close to a multiple of 49 (49*6 = 294)
float n_ = 0.142857142857; // 1.0/7.0
vec3 ns = n_ * D.wyz - D.xzx;
vec4 j = p - 49.0 * floor(p * ns.z * ns.z); // mod(p,7*7)
vec4 x_ = floor(j * ns.z);
vec4 y_ = floor(j - 7.0 * x_ ); // mod(j,N)
vec4 x = x_ *ns.x + ns.yyyy;
vec4 y = y_ *ns.x + ns.yyyy;
vec4 h = 1.0 - abs(x) - abs(y);
vec4 b0 = vec4( x.xy, y.xy );
vec4 b1 = vec4( x.zw, y.zw );
vec4 s0 = floor(b0)*2.0 + 1.0;
vec4 s1 = floor(b1)*2.0 + 1.0;
vec4 sh = -step(h, vec4(0.0));
vec4 a0 = b0.xzyw + s0.xzyw*sh.xxyy ;
vec4 a1 = b1.xzyw + s1.xzyw*sh.zzww ;
vec3 p0 = vec3(a0.xy,h.x);
vec3 p1 = vec3(a0.zw,h.y);
vec3 p2 = vec3(a1.xy,h.z);
vec3 p3 = vec3(a1.zw,h.w);
//Normalise gradients
vec4 norm = taylorInvSqrt(vec4(dot(p0,p0), dot(p1,p1), dot(p2, p2), dot(p3,p3)));
p0 *= norm.x;
p1 *= norm.y;
p2 *= norm.z;
p3 *= norm.w;
// Mix final noise value
vec4 m = max(0.6 - vec4(dot(x0,x0), dot(x1,x1), dot(x2,x2), dot(x3,x3)), 0.0);
m = m * m;
return 42.0 * dot( m*m, vec4(dot(p0,x0), dot(p1,x1),
dot(p2,x2), dot(p3,x3) ) );
}
void main() {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 color = vec3(0.0);
// Blend black the edge of the sphere
float radius = 1.0-length( vec2(0.5)-st )*2.0;
// Scale the space to see the grid
st = sphereCoords(st, 1.0);
float t = u_time*.5;
float pct = clamp((snoise(vec3(st*2.,t))*.5+.5)*.8 + abs(sin(dot(st-.5,st-.5)*3.14+t))*.5,0.,1.);
// color = vec3(pct);
st *= 1.733*20.;
color = vec3(1.-isoGrid(st,.1+pct*.9));
color *= step(0.001,radius);
gl_FragColor = vec4(color,1.0);
}

@ -0,0 +1,158 @@
// Author @patriciogv - 2015 - patriciogonzalezvivo.com
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives: enable
#endif
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
vec2 skew (vec2 st) {
vec2 r = vec2(0.0);
r.x = 1.1547*st.x;
r.y = st.y+0.5*r.x;
return r;
}
vec3 simplexGrid (vec2 st) {
vec3 xyz = vec3(0.0);
vec2 p = fract(skew(st));
if (p.x > p.y) {
xyz.xy = 1.0-vec2(p.x,p.y-p.x);
xyz.z = p.y;
} else {
xyz.yz = 1.0-vec2(p.x-p.y,p.y);
xyz.x = p.x;
// xyz.zx = 1.-vec2(p.x-p.y,p.y);
// xyz.y = p.x;
}
return fract(xyz);
}
// Antialiased step function
// from http://webstaff.itn.liu.se/~stegu/webglshadertutorial/shadertutorial.html
float aastep(float threshold, float value) {
#ifdef GL_OES_standard_derivatives
float afwidth = 0.7 * length(vec2(dFdx(value), dFdy(value)));
return smoothstep(threshold-afwidth, threshold+afwidth, value);
#else
return step(threshold, value);
#endif
}
vec2 aastep(float threshold, vec2 value) {
return vec2(aastep(threshold, value.x),
aastep(threshold, value.y));
}
vec3 aastep(float threshold, vec3 value) {
return vec3(aastep(threshold, value.x),
aastep(threshold, value.y),
aastep(threshold, value.z));
}
//
// Description : Array and textureless GLSL 2D/3D/4D simplex
// noise functions.
// Author : Ian McEwan, Ashima Arts.
// Maintainer : ijm
// Lastmod : 20110822 (ijm)
// License : Copyright (C) 2011 Ashima Arts. All rights reserved.
// Distributed under the MIT License. See LICENSE file.
// https://github.com/ashima/webgl-noise
//
vec3 mod289(vec3 x) { return x - floor(x * (1.0 / 289.0)) * 289.0; }
vec4 mod289(vec4 x) { return x - floor(x * (1.0 / 289.0)) * 289.0; }
vec4 permute(vec4 x) { return mod289(((x*34.0)+1.0)*x); }
vec4 taylorInvSqrt(vec4 r) { return 1.79284291400159 - 0.85373472095314 * r; }
float snoise(vec3 v) {
const vec2 C = vec2(1.0/6.0, 1.0/3.0) ;
const vec4 D = vec4(0.0, 0.5, 1.0, 2.0);
// First corner
vec3 i = floor(v + dot(v, C.yyy) );
vec3 x0 = v - i + dot(i, C.xxx) ;
// Other corners
vec3 g = step(x0.yzx, x0.xyz);
vec3 l = 1.0 - g;
vec3 i1 = min( g.xyz, l.zxy );
vec3 i2 = max( g.xyz, l.zxy );
vec3 x1 = x0 - i1 + C.xxx;
vec3 x2 = x0 - i2 + C.yyy; // 2.0*C.x = 1/3 = C.y
vec3 x3 = x0 - D.yyy; // -1.0+3.0*C.x = -0.5 = -D.y
// Permutations
i = mod289(i);
vec4 p = permute( permute( permute(
i.z + vec4(0.0, i1.z, i2.z, 1.0 ))
+ i.y + vec4(0.0, i1.y, i2.y, 1.0 ))
+ i.x + vec4(0.0, i1.x, i2.x, 1.0 ));
// Gradients: 7x7 points over a square, mapped onto an octahedron.
// The ring size 17*17 = 289 is close to a multiple of 49 (49*6 = 294)
float n_ = 0.142857142857; // 1.0/7.0
vec3 ns = n_ * D.wyz - D.xzx;
vec4 j = p - 49.0 * floor(p * ns.z * ns.z); // mod(p,7*7)
vec4 x_ = floor(j * ns.z);
vec4 y_ = floor(j - 7.0 * x_ ); // mod(j,N)
vec4 x = x_ *ns.x + ns.yyyy;
vec4 y = y_ *ns.x + ns.yyyy;
vec4 h = 1.0 - abs(x) - abs(y);
vec4 b0 = vec4( x.xy, y.xy );
vec4 b1 = vec4( x.zw, y.zw );
vec4 s0 = floor(b0)*2.0 + 1.0;
vec4 s1 = floor(b1)*2.0 + 1.0;
vec4 sh = -step(h, vec4(0.0));
vec4 a0 = b0.xzyw + s0.xzyw*sh.xxyy ;
vec4 a1 = b1.xzyw + s1.xzyw*sh.zzww ;
vec3 p0 = vec3(a0.xy,h.x);
vec3 p1 = vec3(a0.zw,h.y);
vec3 p2 = vec3(a1.xy,h.z);
vec3 p3 = vec3(a1.zw,h.w);
//Normalise gradients
vec4 norm = taylorInvSqrt(vec4(dot(p0,p0), dot(p1,p1), dot(p2, p2), dot(p3,p3)));
p0 *= norm.x;
p1 *= norm.y;
p2 *= norm.z;
p3 *= norm.w;
// Mix final noise value
vec4 m = max(0.6 - vec4(dot(x0,x0), dot(x1,x1), dot(x2,x2), dot(x3,x3)), 0.0);
m = m * m;
return 42.0 * dot( m*m, vec4(dot(p0,x0), dot(p1,x1),
dot(p2,x2), dot(p3,x3) ) );
}
void main() {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
st.x *= u_resolution.x/u_resolution.y;
vec3 color = vec3(0.0);
float t = u_time*.5;
float pct = clamp((snoise(vec3(st*2.,t))*.5+.5)*.2 + abs(sin(dot(st-.5,st-.5)*3.14-t))*.5,0.,1.);
st *= 1.733*10.+5.*clamp(abs(sin(dot(st-.5,st-.5)*3.14+t*0.5))*.5,0.2,1.);
vec3 S = simplexGrid(st);
S = aastep(abs(sin(pct*3.1415+u_time)),S);
pct = S.r * S.g * S.b;
color = vec3(pct);
gl_FragColor = vec4(color,1.0);
}

@ -0,0 +1,10 @@
http://github.prideout.net/coordinate-fields/
https://briansharpe.wordpress.com/2011/12/01/optimized-artifact-free-gpu-cellular-noise/
http://www.rhythmiccanvas.com/research/papers/worley.pdf
http://webstaff.itn.liu.se/~stegu/GLSL-cellular/GLSL-cellular-notes.pdf
http://www.iquilezles.org/www/articles/voronoise/voronoise.htm
http://www.iquilezles.org/www/articles/smoothvoronoi/smoothvoronoi.htm
http://www.iquilezles.org/www/articles/voronoilines/voronoilines.htm

@ -1,6 +1,7 @@
<?php
$path = "..";
$subtitle = ": More noise";
$README = "README";
$language = "";


@ -4,25 +4,15 @@
precision mediump float;
#endif
uniform sampler2D u_tex0;
uniform vec2 u_resolution;
uniform float u_time;
// Cellular noise ("Worley noise") in 2D in GLSL.
// Copyright (c) Stefan Gustavson 2011-04-19. All rights reserved.
// This code is released under the conditions of the MIT license.
// See LICENSE file for details.
// Permutation polynomial: (34x^2 + x) mod 289
vec4 permute(vec4 x) {
return mod((34.0 * x + 1.0) * x, 289.0);
}
// Cellular noise, returning F1 and F2 in a vec2.
// Speeded up by using 2x2 search window instead of 3x3,
// at the expense of some strong pattern artifacts.
// F2 is often wrong and has sharp discontinuities.
// If you need a smooth F2, use the slower 3x3 version.
// F1 is sometimes wrong, too, but OK for most purposes.
vec2 cellular2x2(vec2 P) {
#define K 0.142857142857 // 1/7
#define K2 0.0714285714285 // K/2
@ -58,10 +48,10 @@ vec2 cellular2x2(vec2 P) {
void main(void) {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
vec2 F = cellular2x2(st*50.);
vec2 F = cellular2x2(st*100.);
float pct = st.x;
float pct = texture2D(u_tex0,st).r;
pct = step(1.-pct,F.x);
float n = step(pct,F.x*1.3);
gl_FragColor = vec4(n, n, n, 1.0);
gl_FragColor = vec4(vec3(pct), 1.0);
}

@ -0,0 +1,2 @@
http://heman.readthedocs.org/en/latest/generate.html#archipelagos

@ -1,8 +1,67 @@
## Fractal Brownian Motion
http://www.iquilezles.org/www/articles/warp/warp.htm
http://www.iquilezles.org/www/articles/morenoise/morenoise.htm
Noise is one of those subjects that you can dig into and always find exciting new formulas. In fact, noise tends to mean different things to different people: musicians will think of audio noise, communication engineers of interference, and astrophysicists of the cosmic microwave background. Noise can be interpreted as an audio signal, and noise, just like sound, can be constructed by manipulating the amplitude and frequency of the waves that compose it.
```glsl
y = amplitude * sin( x * frequency );
```
An interesting property of waves in general is that they can be added up. The following graph shows what happens if you add sine waves of different frequencies and amplitudes.
<div class="simpleFunction" data="
float t = 0.01*(-u_time*130.0);
y += sin(x*2.1 + t)*4.5;
y += sin(x*1.72 + t*1.121)*4.0;
y += sin(x*2.221 + t*0.437)*5.0;
y += sin(x*3.1122+ t*4.269)*2.5;
y *= 0.1;
"></div>
Think of it as the surface of the ocean: a massive amount of water propagating waves across its surface, waves of different heights (amplitudes) and rhythms (frequencies) bouncing off and interfering with each other.
Musicians learned long ago that there are sounds that play well with each other. Those sounds, carried by waves of air, vibrate in such a particular way that the resulting sound seems boosted and enhanced. They are called [harmonics](http://en.wikipedia.org/wiki/Harmonic).
Back to code: we can add harmonics together and see what the result looks like. Try the following code on the previous graph.
```glsl
y = 0.;
for( int i = 1; i < 6; ++i) {
y += sin(PI*x*float(i))/float(i);
}
y *= 0.6;
```
As you can see in the code above, the frequency grows on every iteration while the amplitude decreases. By increasing the number of iterations (changing the loop bound to 10, 20 or 50) the wave breaks into smaller fractions, with more detail and sharper fluctuations.
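A variant worth trying on the same graph is a true octave series, where each step doubles the frequency and halves the amplitude. This is only a sketch under the same conventions as the graphs above (`x`, `y` and a `PI` constant are assumed to be defined by the graph widget):

```glsl
y = 0.;
float freq = 1.0;   // starting frequency
float amp = 1.0;    // starting amplitude
for (int i = 0; i < 5; ++i) {
    y += amp * sin(PI * x * freq);
    freq *= 2.0;    // each octave doubles the frequency
    amp *= 0.5;     // ...and halves the amplitude
}
y *= 0.6;
```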
## Fractal Brownian Motion
So, we tried adding different waves together and the result was chaotic; we added harmonic waves and the result was a consistent fractal pattern. We can use the best of both worlds and add up harmonic *noise* waves to enrich a noise pattern.
By adding octaves of noise with increasing frequencies and decreasing amplitudes we obtain a greater level of detail, or granularity. This technique is called fractal Brownian motion, and it usually consists of a fractal sum of noise functions.
Take a look at the following example and progressively change the for loop to do 2, 3, 4, 5, 6, 7 and 8 iterations. See what happens.
<div class="simpleFunction" data="
float a = 0.5;
for( int i = 0; i < 1; ++i) {
y += a * noise(x);
x = x * 2.0;
a *= 0.5;
}"></div>
If we apply this one-dimensional example to a two-dimensional space, it will look like the following example:
<div class="codeAndCanvas" data="2d-fbm.frag"></div>
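The loop at the heart of a 2D version can be sketched roughly as follows; this is a minimal sketch assuming a `noise(vec2)` function such as the one built in the previous chapter is already defined:

```glsl
float fbm (vec2 st) {
    float value = 0.0;
    float amplitude = 0.5;
    for (int i = 0; i < 6; i++) {   // 6 octaves
        value += amplitude * noise(st);
        st *= 2.0;                  // double the frequency
        amplitude *= 0.5;           // halve the amplitude
    }
    return value;
}
```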
## Using Fractal Brownian Motion
In this [article](http://www.iquilezles.org/www/articles/warp/warp.htm) Iñigo Quilez describes an interesting use of fractal Brownian motion: constructing patterns by adding successive results of fractal Brownian motions.
Take a look at the code and see how it looks:
<div class="codeAndCanvas" data="clouds.frag"></div>
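The core of the warping technique can be sketched as feeding fbm with its own output, so the noise distorts its own domain. A rough sketch, assuming an `fbm(vec2)` function is already defined (the offset constants are arbitrary):

```glsl
float warpedPattern (vec2 st) {
    // First pass: displace the domain with fbm
    vec2 q = vec2( fbm(st),
                   fbm(st + vec2(5.2, 1.3)) );
    // Second pass: displace again using the displaced coordinates
    vec2 r = vec2( fbm(st + 4.0*q + vec2(1.7, 9.2)),
                   fbm(st + 4.0*q + vec2(8.3, 2.8)) );
    // Final pattern
    return fbm(st + 4.0*r);
}
```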
https://briansharpe.wordpress.com/2011/12/01/optimized-artifact-free-gpu-cellular-noise/
http://www.rhythmiccanvas.com/research/papers/worley.pdf
http://webstaff.itn.liu.se/~stegu/GLSL-cellular/GLSL-cellular-notes.pdf
http://www.iquilezles.org/www/articles/voronoise/voronoise.htm
http://www.iquilezles.org/www/articles/smoothvoronoi/smoothvoronoi.htm
http://www.iquilezles.org/www/articles/voronoilines/voronoilines.htm

@ -1,7 +1,7 @@
<?php
$path = "..";
$subtitle = ": More noise";
$subtitle = ": Fractal Brownian Motion";
$README = "README";
$language = "";


@ -1,67 +1,27 @@
## Fractal Brownian Motion
## Fractals
http://www.iquilezles.org/www/articles/warp/warp.htm
http://www.iquilezles.org/www/articles/morenoise/morenoise.htm
https://www.shadertoy.com/view/lsX3W4
Noise is one of those subjects that you can dig into and always find exciting new formulas. In fact, noise tends to mean different things to different people: musicians will think of audio noise, communication engineers of interference, and astrophysicists of the cosmic microwave background. Noise can be interpreted as an audio signal, and noise, just like sound, can be constructed by manipulating the amplitude and frequency of the waves that compose it.
https://www.shadertoy.com/view/Mss3Wf
```glsl
y = amplitude * sin( x * frequency );
```
https://www.shadertoy.com/view/4df3Rn
An interesting property of waves in general is that they can be added up. The following graph shows what happens if you add sine waves of different frequencies and amplitudes.
https://www.shadertoy.com/view/Mss3R8
<div class="simpleFunction" data="
float t = 0.01*(-u_time*130.0);
y += sin(x*2.1 + t)*4.5;
y += sin(x*1.72 + t*1.121)*4.0;
y += sin(x*2.221 + t*0.437)*5.0;
y += sin(x*3.1122+ t*4.269)*2.5;
y *= 0.1;
"></div>
https://www.shadertoy.com/view/4dfGRn
Think of it as the surface of the ocean: a massive amount of water propagating waves across its surface, waves of different heights (amplitudes) and rhythms (frequencies) bouncing off and interfering with each other.
https://www.shadertoy.com/view/lss3zs
Musicians learned long ago that there are sounds that play well with each other. Those sounds, carried by waves of air, vibrate in such a particular way that the resulting sound seems boosted and enhanced. They are called [harmonics](http://en.wikipedia.org/wiki/Harmonic).
https://www.shadertoy.com/view/4dXGDX
Back to code: we can add harmonics together and see what the result looks like. Try the following code on the previous graph.
https://www.shadertoy.com/view/XsXGz2
```glsl
y = 0.;
for( int i = 1; i < 6; ++i) {
y += sin(PI*x*float(i))/float(i);
}
y *= 0.6;
```
https://www.shadertoy.com/view/lls3D7
As you can see in the code above, the frequency grows on every iteration while the amplitude decreases. By increasing the number of iterations (changing the loop bound to 10, 20 or 50) the wave breaks into smaller fractions, with more detail and sharper fluctuations.
https://www.shadertoy.com/view/XdB3DD
## Fractal Brownian Motion
So, we tried adding different waves together and the result was chaotic; we added harmonic waves and the result was a consistent fractal pattern. We can use the best of both worlds and add up harmonic *noise* waves to enrich a noise pattern.
By adding octaves of noise with increasing frequencies and decreasing amplitudes we obtain a greater level of detail, or granularity. This technique is called fractal Brownian motion, and it usually consists of a fractal sum of noise functions.
Take a look at the following example and progressively change the for loop to do 2, 3, 4, 5, 6, 7 and 8 iterations. See what happens.
<div class="simpleFunction" data="
float a = 0.5;
for( int i = 0; i < 1; ++i) {
y += a * noise(x);
x = x * 2.0;
a *= 0.5;
}"></div>
If we apply this one-dimensional example to a two-dimensional space, it will look like the following example:
<div class="codeAndCanvas" data="2d-fbm.frag"></div>
## Using Fractal Brownian Motion
In this [article](http://www.iquilezles.org/www/articles/warp/warp.htm) Iñigo Quilez describes an interesting use of fractal Brownian motion: constructing patterns by adding successive results of fractal Brownian motions.
Take a look at the code and see how it looks:
<div class="codeAndCanvas" data="clouds.frag"></div>
https://www.shadertoy.com/view/XdBSWw
https://www.shadertoy.com/view/llfGD2
https://www.shadertoy.com/view/Mlf3RX

@ -1,7 +1,7 @@
<?php
$path = "..";
$subtitle = ": Fractal Brownian Motion";
$subtitle = ": Fractals";
$README = "README";
$language = "";


@ -1,27 +1,72 @@
## Fractals
# Image processing
https://www.shadertoy.com/view/lsX3W4
## Textures
https://www.shadertoy.com/view/Mss3Wf
![](01.jpg)
https://www.shadertoy.com/view/4df3Rn
Graphics cards (GPUs) have special memory types for images. On CPUs images are usually stored as arrays of bytes, but GPUs store images as ```sampler2D```, which is more like a table (or matrix) of floating point vectors. More interestingly, the values of this *table* of vectors are continuous: values between pixels are interpolated at a low level.
https://www.shadertoy.com/view/Mss3R8
In order to use this feature we first need to *upload* the image from the CPU to the GPU, and then pass the ```id``` of the texture to the right [```uniform```](../05). All of that happens outside the shader.
https://www.shadertoy.com/view/4dfGRn
Once the texture is loaded and linked to a valid ```uniform sampler2D``` you can ask for the color value at specific coordinates (formatted as a [```vec2```](index.html#vec2.md) variable) using the [```texture2D()```](index.html#texture2D.md) function, which will return a color formatted as a [```vec4```](index.html#vec4.md) variable.
https://www.shadertoy.com/view/lss3zs
```glsl
vec4 texture2D(sampler2D texture, vec2 coordinates)
```
https://www.shadertoy.com/view/4dXGDX
Check the following code, where we load Hokusai's Wave (1830) as ```uniform sampler2D u_tex0``` and sample it at every pixel of the billboard:
https://www.shadertoy.com/view/XsXGz2
<div class="codeAndCanvas" data="texture.frag" data-imgs="hokusai.jpg"></div>
https://www.shadertoy.com/view/lls3D7
If you pay attention you will note that the coordinates for the texture are normalized! What a surprise, right? Texture coordinates are consistent with the rest of the things we have seen: they go from 0.0 to 1.0, which matches perfectly with the normalized space coordinates we have been using.
https://www.shadertoy.com/view/XdB3DD
Now that you have seen how to load a texture correctly, it is time to experiment and discover what we can do with it. Try:
https://www.shadertoy.com/view/XdBSWw
* Scaling the previous texture by half.
* Rotating the previous texture 90 degrees.
* Hooking the mouse position to the coordinates to move it.
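These exercises can be sketched as small changes to the `st` coordinates before sampling. A hedged sketch, assuming `u_tex0`, `u_mouse` and `u_resolution` are bound as in the example:

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;

// Scale by half: sampling a smaller coordinate range magnifies the image
// st *= 0.5;

// Rotate 90 degrees by swapping and flipping the axes
// st = vec2(st.y, 1.0 - st.x);

// Hook the mouse position to offset the coordinates
// st += u_mouse/u_resolution;

gl_FragColor = texture2D(u_tex0, st);
```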
https://www.shadertoy.com/view/llfGD2
Why should you be excited about textures? First of all, forget about the sad 255 values per channel; once your image is transformed into a ```uniform sampler2D``` you have all the values between 0.0 and 1.0 (depending on what you set the ```precision``` to). That's why shaders can make really beautiful post-processing effects.
https://www.shadertoy.com/view/Mlf3RX
Second, the [```vec2()```](index.html#vec2.md) means you can get values even between pixels. As we said before, textures are a continuum. This means that if you set up your texture correctly you can ask for values all around the surface of your image, and the values will vary smoothly from pixel to pixel with no jumps!
Finally, you can set up your image to repeat at the edges, so if you pass values above or below the normalized 0.0 to 1.0 range, the values will wrap around and start over.
All these features make your image more like an infinite spandex fabric. You can stretch and shrink your texture without noticing the grid of bytes it was originally composed of, or where it ends. To experience this, take a look at the following code, where we distort a texture using [the noise function we already made](../11/).
<div class="codeAndCanvas" data="texture-noise.frag" data-imgs="hokusai.jpg"></div>
## Texture resolution
The examples above play well with square images, where both sides are equal and match our square billboard. But for non-square images things can be a little trickier, and unfortunately centuries of pictorial art and photography have found non-square proportions more pleasant to the eye.
![Joseph Nicéphore Niépce (1826)](nicephore.jpg)
How can we solve this problem? We need to know the original proportions of the image in order to stretch the texture correctly and preserve the original [*aspect ratio*](http://en.wikipedia.org/wiki/Aspect_ratio). For that, the texture width and height are passed to the shader as a ```uniform```. In our example framework they arrive as a ```uniform vec2``` with the same name as the texture followed by the suffix ```Resolution```. Once the shader has this information, it can get the aspect ratio by dividing the ```width``` by the ```height``` of the texture resolution. Finally, by multiplying the ```y``` coordinate by this ratio we shrink that axis to match the original proportions.
Uncomment line 21 of the following code to see this in action.
<div class="codeAndCanvas" data="texture-resolution.frag" data-imgs="nicephore.jpg"></div>
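The correction described above boils down to a couple of lines; this sketch assumes the texture resolution arrives as `u_tex0Resolution`, following the naming convention mentioned earlier:

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;
float aspect = u_tex0Resolution.x/u_tex0Resolution.y; // width divided by height
st.y *= aspect;   // shrink the y axis to restore the original proportions
gl_FragColor = texture2D(u_tex0, st);
```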
* What do we need to do to center this image?
## Digital upholstery
![](03.jpg)
You may be thinking that this is unnecessarily complicated... and you are probably right. But this way of working with images leaves enough room for different hacks and creative tricks. Try to imagine that you are an upholsterer: by stretching and folding a fabric over a structure you can create new and better patterns and techniques.
![Eadweard's Muybridge study of motion](muybridge.jpg)
This level of craftsmanship links back to some of the first optical experiments ever made. For example, *sprite animations* are very common in games, and it is inevitable to see in them reminiscences of the phenakistoscope, zoetrope and praxinoscope.
This could seem simple, but the possibilities of modifying texture coordinates are enormous. For example:
<div class="codeAndCanvas" data="texture-sprite.frag" data-imgs="muybridge.jpg"></div>
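Playing back a sprite sheet like Muybridge's comes down to picking one cell of the grid per time step. A hedged sketch (the grid dimensions and frame rate here are assumptions, not the values used by `texture-sprite.frag`):

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;
const float COLS = 5.0;  // assumed columns in the sprite sheet
const float ROWS = 3.0;  // assumed rows
float frame = floor(mod(u_time*8.0, COLS*ROWS));        // ~8 frames per second
vec2 cell = vec2(mod(frame, COLS), floor(frame/COLS));  // which cell to show
gl_FragColor = texture2D(u_tex0, (st + cell)/vec2(COLS, ROWS));
```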
Now it is your turn:
* Can you make a kaleidoscope using what we have learned?
* What other optical toys can you re-create using textures?
In the next chapters we will learn how to do some image processing using shaders. You will note that the complexity of shaders finally makes sense, because they were in a big sense designed to do this type of processing. We will start with some image operations!


@ -1,7 +1,6 @@
<?php
$path = "..";
$subtitle = ": Fractals";
$README = "README";
$language = "";


@ -1,72 +1,18 @@
# Image processing
## Image operations
## Textures
![](01.jpg)
### Invert
Graphics cards (GPUs) have special memory types for images. On CPUs images are usually stored as arrays of bytes, but GPUs store images as ```sampler2D```, which is more like a table (or matrix) of floating point vectors. More interestingly, the values of this *table* of vectors are continuous: values between pixels are interpolated at a low level.
<div class="codeAndCanvas" data="inv.frag" data-imgs="00.jpg,01.jpg"></div>
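Inverting is the simplest of these operations: subtract each channel from 1.0. A minimal sketch, assuming `u_tex0` and `u_resolution` are bound as in the previous chapter:

```glsl
vec2 st = gl_FragCoord.xy/u_resolution.xy;
vec3 color = texture2D(u_tex0, st).rgb;
gl_FragColor = vec4(1.0 - color, 1.0);   // invert each channel
```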
In order to use this feature we first need to *upload* the image from the CPU to the GPU, and then pass the ```id``` of the texture to the right [```uniform```](../05). All of that happens outside the shader.
### Add, Subtract, Multiply and others
Once the texture is loaded and linked to a valid ```uniform sampler2D``` you can ask for the color value at specific coordinates (formatted as a [```vec2```](index.html#vec2.md) variable) using the [```texture2D()```](index.html#texture2D.md) function, which will return a color formatted as a [```vec4```](index.html#vec4.md) variable.
![](02.jpg)
```glsl
vec4 texture2D(sampler2D texture, vec2 coordinates)
```
<div class="codeAndCanvas" data="operations.frag" data-imgs="00.jpg,01.jpg"></div>
Check the following code, where we load Hokusai's Wave (1830) as ```uniform sampler2D u_tex0``` and sample it at every pixel of the billboard:
<div class="codeAndCanvas" data="texture.frag" data-imgs="hokusai.jpg"></div>
If you pay attention you will note that the coordinates for the texture are normalized! What a surprise, right? Texture coordinates are consistent with the rest of the things we have seen: they go from 0.0 to 1.0, which matches perfectly with the normalized space coordinates we have been using.
Now that you have seen how to load a texture correctly, it is time to experiment and discover what we can do with it. Try:
* Scaling the previous texture by half.
* Rotating the previous texture 90 degrees.
* Hooking the mouse position to the coordinates to move it.
Why should you be excited about textures? First of all, forget about the sad 255 values per channel; once your image is transformed into a ```uniform sampler2D``` you have all the values between 0.0 and 1.0 (depending on what you set the ```precision``` to). That's why shaders can make really beautiful post-processing effects.
Second, the [```vec2()```](index.html#vec2.md) means you can get values even between pixels. As we said before, textures are a continuum. This means that if you set up your texture correctly you can ask for values all around the surface of your image, and the values will vary smoothly from pixel to pixel with no jumps!
Finally, you can set up your image to repeat at the edges, so if you pass values above or below the normalized 0.0 to 1.0 range, the values will wrap around and start over.
All these features make your image more like an infinite spandex fabric. You can stretch and shrink your texture without noticing the grid of bytes it was originally composed of, or where it ends. To experience this, take a look at the following code, where we distort a texture using [the noise function we already made](../11/).
<div class="codeAndCanvas" data="texture-noise.frag" data-imgs="hokusai.jpg"></div>
## Texture resolution
The examples above play well with square images, where both sides are equal and match our square billboard. But for non-square images things can be a little trickier, and unfortunately centuries of pictorial art and photography have found non-square proportions more pleasant to the eye.
![Joseph Nicéphore Niépce (1826)](nicephore.jpg)
How can we solve this problem? We need to know the original proportions of the image in order to stretch the texture correctly and preserve the original [*aspect ratio*](http://en.wikipedia.org/wiki/Aspect_ratio). For that, the texture width and height are passed to the shader as a ```uniform```. In our example framework they arrive as a ```uniform vec2``` with the same name as the texture followed by the suffix ```Resolution```. Once the shader has this information, it can get the aspect ratio by dividing the ```width``` by the ```height``` of the texture resolution. Finally, by multiplying the ```y``` coordinate by this ratio we shrink that axis to match the original proportions.
Uncomment line 21 of the following code to see this in action.
<div class="codeAndCanvas" data="texture-resolution.frag" data-imgs="nicephore.jpg"></div>
* What do we need to do to center this image?
## Digital upholstery
### PS Blending modes
![](03.jpg)
You may be thinking that this is unnecessarily complicated... and you are probably right. But this way of working with images leaves enough room for different hacks and creative tricks. Try to imagine that you are an upholsterer: by stretching and folding a fabric over a structure you can create new and better patterns and techniques.
![Eadweard's Muybridge study of motion](muybridge.jpg)
This level of craftsmanship links back to some of the first optical experiments ever made. For example, *sprite animations* are very common in games, and it is inevitable to see in them reminiscences of the phenakistoscope, zoetrope and praxinoscope.
This could seem simple, but the possibilities of modifying texture coordinates are enormous. For example:
<div class="codeAndCanvas" data="texture-sprite.frag" data-imgs="muybridge.jpg"></div>
Now it is your turn:
* Can you make a kaleidoscope using what we have learned?
* What other optical toys can you re-create using textures?
In the next chapters we will learn how to do some image processing using shaders. You will note that the complexity of shaders finally makes sense, because they were in a big sense designed to do this type of processing. We will start with some image operations!
<div class="codeAndCanvas" data="blend.frag" data-imgs="04.jpg,05.jpg"></div>
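Photoshop-style blending modes are just per-channel formulas applied between the two sampled colors. Rough sketches follow (exact formulas vary between implementations; `a` and `b` stand for the two colors already sampled from the textures):

```glsl
vec3 multiplyBlend (vec3 a, vec3 b) { return a * b; }
vec3 screenBlend (vec3 a, vec3 b) { return 1.0 - (1.0 - a)*(1.0 - b); }
vec3 overlayBlend (vec3 a, vec3 b) {
    // multiply the darks, screen the lights, switching per channel at 0.5
    return mix(2.0*a*b, 1.0 - 2.0*(1.0-a)*(1.0-b), step(0.5, a));
}
```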

Some files were not shown because too many files have changed in this diff