
Basics of Stage3D part IV: working with shaders [expert]

Nowadays shaders have become an inherent part of 3D graphics; it is basically impossible to display anything without writing at least one shading program. That is why this time around we will take a quick look at how to work with shaders by adding simple lighting to our virtual world. In Flash, shaders are written as a series of instructions called AGAL, a language very different from ActionScript that really deserves a complete walkthrough of its own - those are easy to find on the Internet, so for now we will only focus on the bare minimum needed to get the lighting working.
Returning to the code from the previous article, the first thing we should do is add a new fragment shader:
private const FRAGMENT_SHADER_LIGHT:String =
	"dp3 ft1, fc2, v1 \n" +  // dot product of the light direction (fc2) and the interpolated normal (v1)
	"neg ft1, ft1 \n" +      // flip the sign of the result
	"max ft1, ft1, fc0 \n" + // clamp negative values to zero (fc0 holds [0,0,0,0])
	"mul ft2, fc4, ft1 \n" + // multiply the light colour (fc4) by the computed light amount
	"mul ft2, ft2, fc3 \n" + // multiply by the material colour (fc3)
	"add oc, ft2, fc1";      // add the ambient term (fc1) and output the final pixel colour
and Vertex Shader:
private const VERTEX_SHADER_LIGHT:String =
	"mov vt0, va0 \n" +         // copy the vertex position into a temporary register
	"m44 op, vt0, vc0 \n" +     // transform it by the matrix in vc0 and output the position
	"nrm vt1.xyz, va0.xyz \n" + // normalise the position (the cube is built around 0,0,0, so it doubles as the normal)
	"mov vt1.w, va0.w \n" +     // keep the w component
	"mov v1, vt1 \n" +          // pass the normal to the fragment shader through v1
	"mov v2, va1";              // pass the vertex colour (va1) through v2
(The code has been borrowed from http://blog.norbz.net/2012/04/stage3d-agal-from-scratch-part-vii-let-there-be-light)
The shaders above will render the triangles by making each pixel lighter or darker depending on its orientation relative to the camera (which here acts as the light source). Before we do anything with that code we still need to compile it into a special "program" using the AGALMiniAssembler:
vertexAssembly.assemble(Context3DProgramType.VERTEX, VERTEX_SHADER_LIGHT, false);
fragmentAssembly.assemble(Context3DProgramType.FRAGMENT, FRAGMENT_SHADER_LIGHT, false);

programPairLight = renderContext.createProgram();
programPairLight.upload(vertexAssembly.agalcode, fragmentAssembly.agalcode);
(Don't forget to declare the programPairLight variable first.)
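In case they are not already in place, the assembler objects and the new program variable could be declared like this (a minimal sketch; vertexAssembly and fragmentAssembly are already part of the code carried over from the previous parts):
private var vertexAssembly:AGALMiniAssembler = new AGALMiniAssembler();
private var fragmentAssembly:AGALMiniAssembler = new AGALMiniAssembler();
private var programPairLight:Program3D;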
So, right now we have two shaders: the standard programPair and the light-generating programPairLight. The only thing left to do is to decide which one we will be using by calling the setProgram function on Context3D. To better showcase the difference, let's create another Box3D (let's call it "box3DLight") and then add the following code before present(), but after the "box3D" rendering:
boxMat = box3DLight.getMatrix3D();
boxMat.identity();
boxMat.append(rotMat);
boxMat.appendTranslation(posVec.x + 1, posVec.y + 1, posVec.z+1);

finalTransform.identity();
finalTransform.append(boxMat);
finalTransform.append(world);
finalTransform.append(camera);
finalTransform.append(projection);
	
renderContext.setProgram(programPairLight);
renderContext.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, finalTransform, true);

// Use the camera position as the light source: bring it into the object's
// space with the inverted transform and negate it, turning it into a
// direction pointing from the light towards the surface.
var p:Vector3D = camera.position.clone();
p.normalize();
var m3d:Matrix3D = finalTransform.clone();
m3d.invert();
p = m3d.transformVector(p);
p.normalize();
p.negate();

renderContext.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0, Vector.<Number>([0, 0, 0, 0]));       // fc0: zero vector used by the max instruction
renderContext.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 1, Vector.<Number>([0, 0, 0, 0]));       // fc1: ambient term (black here)
renderContext.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 2, Vector.<Number>([p.x, p.y, p.z, 1])); // fc2: light direction computed above
renderContext.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 3, Vector.<Number>([1, 1, 1, 1]));       // fc3: material colour (white)
renderContext.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 4, Vector.<Number>([0.6, 0.6, 0.6, 1])); // fc4: light colour

box3DLight.render(renderContext);
That is a lot to take in, but what really matters to us happens after setProgram (the "boxMat" and "finalTransform" operations are the previously discussed transformations from 3D to 2D). The term "program" is not used here without a reason: just like any other program on our computer, a shader performs mathematical operations on variables and constants, only in a very low-level language that is also quite limited - it only has a handful of temporary registers, and those live solely in the program's own memory. Constants are different: although their number is limited too, they can be supplied from the outside using either the setProgramConstantsFromVector or setProgramConstantsFromMatrix function - which is precisely what the code above does: it supplies the light direction (the "p" variable, derived from the camera position) and the light's colour.
When done, we should see something like this:
This application only uses two shaders, but there is nothing stopping us from creating many more and then switching between them with the setProgram function, just as we switch line styles with lineStyle when working with Graphics.
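For example, a render pass could pick a program per object along these lines (a sketch based on the snippets above; the constant set-up for each object is the same as shown earlier):
renderContext.setProgram(programPair);       // flat-colour shader
renderContext.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, finalTransform, true);
box3D.render(renderContext);

renderContext.setProgram(programPairLight);  // lighting shader
renderContext.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, finalTransform, true);
// ...plus the fragment constants listed above...
box3DLight.render(renderContext);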
The whole code is available here: Tutorial3D_light.zip.

Basics of Stage3D part III: working with rotation [expert]

Continuing from where we left off, this time around I want to write a bit about rotations. Anyone who starts working with Flash's Stage3D without previous experience in 3D will sooner or later run into unexpected results with the appendRotation() function. To make things even less inviting for newcomers, searching for help on this topic on the Internet mostly leads to one answer: "you will need complex numbers for that (quaternions, to be exact)". Hmm? Okay, quaternions might be really useful, but if you don't need 3D calculations to solve 2D rotations, it would be logical to assume that you also don't need 4D numbers to solve 3D problems, especially when you are new to the topic.
Well, it doesn't matter right now, so let's just move along, starting with the code from the previous post: Tutorial3D-2.zip. Let's remove all those cubes and leave only one box, as that is all we need. By the way, you could also create two additional variables: one Vector3D and one Matrix3D - they will come in handy.
Generally, rotation can be applied in two ways: locally and globally, or in other words, either taking the existing transformation into account or not. Whichever way we choose, it is worth noting that the existing transformation also includes any translation that might already be applied to the matrix, which is why it is a good idea to make our lives easier and keep the rotation in a separate Matrix3D object, appending it to the existing one only when needed. So let's start with the global transformation and create that separate matrix (if you haven't done so already):
private var rotMat:Matrix3D = new Matrix3D();
Next, we will modify the handling of the keyboard arrow keys to apply the rotation to rotMat:
switch(key&0x0000ff) {
	case 0x00000f: // forward
		rotMat.appendRotation(SPEED, Vector3D.X_AXIS);
		break;
	case 0x0000f0: // backward
		rotMat.appendRotation(-SPEED, Vector3D.X_AXIS);
		break;
}
switch(key&0x00ff00) {
	case 0x000f00: // left
		rotMat.appendRotation(SPEED, Vector3D.Y_AXIS);
		break;
	case 0x00f000: // right
		rotMat.appendRotation(-SPEED, Vector3D.Y_AXIS);
		break;
}
Now the only thing left to do is to apply that rotation to the box3D object, before its final transformation is assembled in finalTransform:
var boxMat:Matrix3D = box3D.getMatrix3D();
boxMat.identity();
boxMat.append(rotMat);
Which should result in this:
To make this example a bit more interesting and also check whether the rotation is applied correctly, that is, without being affected by the translation, let's use the Vector3D I mentioned earlier. Like this:
private var posVec:Vector3D = new Vector3D();
Working with vectors should not cause any trouble at this stage, so I will only mention that we have to append it in the proper place - right before boxMat.append(rotMat); insert this line:
boxMat.appendTranslation(posVec.x, posVec.y, posVec.z);
In turn getting this:
So the global transformation isn't really that complicated, but there is still the matter of local rotation. Fortunately it is quite trivial and comes down to only finding the right axis:
switch(key&0x0000ff) {
	case 0x00000f: // forward
		rotMat.appendRotation(SPEED, rotMat.transformVector(Vector3D.X_AXIS));
		posVec.y += SPEED/50;
		break;
	case 0x0000f0: // backward
		rotMat.appendRotation(-SPEED, rotMat.transformVector(Vector3D.X_AXIS));
		posVec.y -= SPEED/50;
		break;
}
switch(key&0x00ff00) {
	case 0x000f00: // left
		rotMat.appendRotation(SPEED, rotMat.transformVector(Vector3D.Y_AXIS));
		posVec.x -= SPEED/50;
		break;
	case 0x00f000: // right
		rotMat.appendRotation(-SPEED, rotMat.transformVector(Vector3D.Y_AXIS));
		posVec.x += SPEED/50;
		break;
}
The X_AXIS vector is transformed into the object's local coordinate system, which gives us the exact axis to rotate around. If done right, at least one edge (depending on the direction of the rotation) will stay in the same place (minus the translation):
Source code: Tutorial3D_rot.zip

Basics of Stage3D part II: camera movement and pointAt function [expert]

In the previous part of this article I wrote about extending the ActionScript 3 documentation example of 3D graphics so that it works with multiple objects in such a way that every object can still be transformed separately. The example also has a bit about the matrix used for camera movement ("view"), which, truth be told, did not work as it should because it was missing some basic set-up. And this is where it gets unpleasant for people who are new to 3D graphics, as the information available on the Internet about that set-up is rather mixed and mostly confusing, especially since proper camera movement will actually depend on the 3D environment settings. That is why this part of the Stage3D basics article is dedicated to the virtual camera: its construction, positioning and the usage of the pointAt function.

Starting where we left off two weeks ago (source code from the guide's previous part: Tutorial3D.zip) is a good idea, since we already have some basic camera code laid down. Although creating a separate class for the camera would probably be a good idea, the already existing matrix by itself is sufficient for now. In any case, the matrix is ready, however the results of its transformations are somewhat… unpredictable, or at least not what we would expect (i.e. x += -5 will actually result in movement to the right instead of the left). This is related to a small omission on Adobe's part in the rendering function, one that can cause some sleepless nights. What exactly am I talking about? Let's have a look at how the finalTransform matrix is created in the render function:

finalTransform.identity();   
finalTransform.append(box.getMatrix3D());   
finalTransform.append(world);   
finalTransform.append(view);   
finalTransform.append(projection);
It doesn't seem like there is anything out of the ordinary here, but let's stop for a moment and take a closer look - say we want to move the whole world to the left and the camera to the right, and for that we set world to x = -10 and view to x = 5. In a perfect world there would now be 15 units of space between the world and the camera, but of course that won't be the case in our application, all because of how finalTransform is built. Let's take another look; world is moved by -10 and view by +5:
finalTransform.append(world); // translation of -10 on the x axis
finalTransform.append(view);  // translation of +5 on the x axis
And what will the result be? Every object will be moved to the left (-10), then to the right (+5), ending up at x = -5. True, it's very basic stuff, but it is very important to understand that it is not the camera that we are moving, but the whole world around it. And this is the little detail we were missing: we aren't moving the camera to the left, we are moving the whole world to the left, so on screen everything looks as if the camera went right! Of course this could be worked around by saying "simply move the camera in the opposite direction", but that is a bad idea - rotation and positioning could be handled that way, but the pointAt function would be pretty much useless. Fortunately the solution is very simple, we just have to invert the view matrix:
var camera:Matrix3D = view.clone();  
camera.invert();
Yup, that is it. The whole render function will now look like this:
private function render(event:Event):void {
	renderContext.clear(.3, .3, .3);

	var camera:Matrix3D = view.clone();
	camera.invert();

	for each(var box:Box3D in vBox) {
		finalTransform.identity();
		finalTransform.append(box.getMatrix3D());
		finalTransform.append(world);
		finalTransform.append(camera);
		finalTransform.append(projection);
		renderContext.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, finalTransform, true);
		box.render(renderContext);
	}

	renderContext.present();
}
Those two lines of code are enough to make our camera work as intended: a view translation to the left will in fact move the camera to the left and, most importantly, pointAt will actually work, which I will explain shortly. Right now it is best to test out the camera movement and hopefully get something like this:
(Movement on the arrow keys)
Just in case, you can always use the code available at the very end of this post.
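The movement itself is just a translation of the view matrix; a minimal sketch of an arrow-key handler could look like this (the keyCode handling, the step size and the listener registration are my own assumptions - the full version is in the attached source):
private function onKeyDown(event:KeyboardEvent):void {
	switch(event.keyCode) {
		case Keyboard.LEFT:  view.appendTranslation(-0.1, 0, 0); break;
		case Keyboard.RIGHT: view.appendTranslation(0.1, 0, 0);  break;
		case Keyboard.UP:    view.appendTranslation(0, 0, -0.1); break; // forward
		case Keyboard.DOWN:  view.appendTranslation(0, 0, 0.1);  break; // backward
	}
}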
Well, now that everything is properly set up, it is time to use the pointAt function. And to think that without knowing about the view matrix inversion prior to the finalTransform construction, pointAt would never work as intended! This however doesn't solve all of the problems; there is still the question of the parameters used by pointAt, because obviously the default ones won't work. Those parameters are:
1. Target's position - matrix will be turned to face that point.
2. Facing direction - in other words "front" direction without any transformations; for camera it always will be Vector3D(0, 0, -1).
3. Up direction - which is the "rising" direction; by default Vector3D(0, -1, 0).
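Put together, pointing the view matrix at the scene origin could look like this (a minimal sketch; the target point is an arbitrary assumption):
var target:Vector3D = new Vector3D(0, 0, 0); // the point the camera should face
var at:Vector3D = new Vector3D(0, 0, -1);    // the camera's untransformed "front" direction
var up:Vector3D = new Vector3D(0, -1, 0);    // the camera's "up" direction
view.pointAt(target, at, up);                // the view matrix still gets inverted in render()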
That should be enough. Properly working pointAt should look like this:
Unfortunately, translation won't be affected by the pointAt transformation - let's leave that for another day.
Source code: Tutorial3D-2.zip

Basics of Stage3D part I: displaying multiple objects [advanced]

Two weeks ago I wrote about Flash Professional's support for Flash Player 11 and mentioned the 3D rendering example from the documentation (http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display3D/Context3D.html#includeExamplesSummary). However, using a pre-made example is one thing and understanding it is a whole different matter - that is why this time around I will write about Stage3D (as Adobe likes to call the 3D acceleration in Flash).
Just in case the whole code from that example (along with additional libraries needed to run it) is available here: Context3DExample.zip

In the first part of this article I will write a bit about creating and displaying multiple objects while still allowing each of them to be transformed separately. To make this simpler I suggest making a class for 3D cubes, reusing the already existing code (from the example in the documentation) responsible for creating the geometry: the IndexBuffer and the VertexBuffer. Such a class might look like this:

private var vbBuffer:VertexBuffer3D;
private var ibIndex:IndexBuffer3D;
private var m3D:Matrix3D;
private var nCount:int;

public function Box3D(context:Context3D) {
	// Index buffer: which vertices make up each triangle.
	var triangles:Vector.<uint> = Vector.<uint>([...]);
	ibIndex = context.createIndexBuffer(triangles.length);
	ibIndex.uploadFromVector(triangles, 0, triangles.length);

	// Vertex buffer: 6 numbers per vertex (x, y, z position and r, g, b colour).
	const dataPerVertex:int = 6;
	var vertexData:Vector.<Number> = Vector.<Number>([...]);
	vbBuffer = context.createVertexBuffer(vertexData.length / dataPerVertex, dataPerVertex);
	vbBuffer.uploadFromVector(vertexData, 0, vertexData.length / dataPerVertex);

	m3D = new Matrix3D();
	nCount = 12; // a cube consists of 12 triangles
}

public function render(context:Context3D):void {
	context.setVertexBufferAt(0, vbBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // va0: position
	context.setVertexBufferAt(1, vbBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // va1: colour
	context.drawTriangles(ibIndex, 0, nCount);
}

public function getMatrix3D():Matrix3D {return m3D;}
Nothing especially complicated here - just remember that the m3D matrix will hold the transformation (position, scale), so it is best to build the cube around the 0,0,0 coordinates and later move it with the appendTranslation function. One more thing worth pointing out: setVertexBufferAt sets which VertexBuffer will be used to draw the triangles (the drawTriangles function), and it is called twice, because the first call points at the vertices and the second at the colours.
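A minimal usage sketch (the coordinates are arbitrary; vBox is the vector discussed below):
var box:Box3D = new Box3D(renderContext);
box.getMatrix3D().appendTranslation(2, 0, 0); // build around 0,0,0, then move the cube into place
vBox.push(box);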
So, creating a class for cubes is pretty simple; the problems start when we decide to display them. Assuming all those boxes are stored in the vBox vector, the render function used in Context3DExample needs to be changed to this:
private function render(event:Event):void {
	renderContext.clear(.3, .3, .3);
			
	for each(var box:Box3D in vBox) {
		finalTransform.identity();
		finalTransform.append(box.getMatrix3D());
		finalTransform.append(world);
		finalTransform.append(view);
		finalTransform.append(projection);
		renderContext.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, finalTransform, true);
				
		box.render(renderContext);
	}
			
	renderContext.present();
}
Here it gets a bit more interesting thanks to the two most important elements: the setProgramConstantsFromMatrix function and the construction of the finalTransform matrix. With setProgramConstantsFromMatrix we set the matrix (the transformations) for the rendered object, and it cannot be skipped (or at least it makes no sense to skip it, since you won't be able to see anything on the screen), while finalTransform is used to create that matrix.
This will probably be pretty confusing to people not familiar with 3D graphics, but the basic idea to remember is that every object (cubes in our case) is never actually modified at runtime. In other words, we create our cube, define its triangles and set some colours, but by itself the cube does not have a specific position in the scene - we use matrices for that. The thing is, matrices also don't do anything special by themselves, until we call the setProgramConstantsFromMatrix function to combine them. Note however that this works somewhat like a filter - the "program" (the external code used in the rendering process) takes the geometry and passes it through the matrix, displaying the result on the screen while not modifying any variables.
Now as for the finalTransform - we start each animation frame by resetting this matrix (the identity function) to make sure it contains absolutely no transformations. This is very important for the append operations, because existing transformations affect the result: for example, a translation by 10 units to the left followed by a 90 degree turn will place an object in a different spot than a 90 degree turn followed by a translation of 10 units to the left. Just as in the real world, the direction we take on the command "move to the left" depends on which way we are already facing. In any case, it is best to experiment with this, while remembering that the order in which matrices are appended is extremely important, and that is why finalTransform is created by appending the other matrices in this particular order: world, view (camera) and projection (which defines how we see the world).
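A quick way to see the effect of the append order for yourself (a small sketch, the values are arbitrary):
var a:Matrix3D = new Matrix3D();
a.appendTranslation(-10, 0, 0);        // move left first...
a.appendRotation(90, Vector3D.Y_AXIS); // ...then turn: the turn also rotates the translation

var b:Matrix3D = new Matrix3D();
b.appendRotation(90, Vector3D.Y_AXIS); // turn first...
b.appendTranslation(-10, 0, 0);        // ...then move left

trace(a.position, b.position); // the two matrices place an object in different spots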

As usual the whole code is packed and ready to download from here: Tutorial3D.zip