Fuck This Jam post-mortem: Strategy.min

This week (May 31st – June 6th) saw the Fuck This Jam online game jam, hosted and created by the awesome dudes over at Vlambeer (Super Crate Box, Luftrauser, Ridiculous Fishing, Nuclear Throne, the awesomeness just goes on…). The point was to create a game in a genre you hate or don’t know much about and put it on itch.io. Since I already had 2 or 3 projects running at work, I decided now was the right moment to get back into gamedev. Fuck logic.

So yeah, I made an “RTS” game. I don’t particularly hate RTS games, but it just so happens that I’ve never developed one before, mostly because my mentors were like “this is so out of fashion, Facebook city builders are the shite”. It had occurred to me to actually do a Clash of Clans clone or some stuff like that, since I hate that kind of game. I ended up doing things a whole lot differently: a sort of defend-your-base-against-a-brainless-AI RTS with a fixed camera, minimal graphics and no resource collection. Thus was born Strategy.min. But as a matter of fact, things went pretty wrong this week, because it seems I cannot lead 20 lives at the same time.


So, what went right:

  • I actually released, which is a thing
  • I had rather solid game design, but I’ll talk more about that later
  • Architecture was really fine; I pushed Unity and Mono’s compiler to their limits
  • I only ever used one outside asset: Share Tech Mono from Google Fonts
  • I suck at graphic design, and I ain’t even mad about my ugly units and buildings
  • What I could do with the time I had was rather neat
  • Unity export (y)

What went wrong:

  • Even though I had solid GD, the current released version has almost 3/4 of it cut out because I didn’t have the time to do it properly. In the intended GD, I had bases, player constructions, research/upgrades and even a basic AI that was totally doable. Problem is, I really didn’t get the time for that (see “what went horribly wrong”)
  • Mouse management was actually a pain in the ass because my system is not flexible enough
  • Architecture was fine until I realized I didn’t have the time to do everything. Then things turned ugly. Working for a company specialized in prototypes surely helps.
  • Flight patterns for planes are rather weird and collision detection is absent
  • Didn’t get time to do sfx and music :’(
  • YOLO: didn’t use any version control on this project

What went horribly wrong:

TIME MANAGEMENT. God I suck at this. For starters, I lost the whole weekend figuring out what I wanted to do, and I ended up doing game design at 1am on Monday morning. Since I have a job during the daytime, I had mostly reserved my evenings for code and beer. Things went right on the first night, but later on there were spontaneous “heroic fantasy TV series” or “wassup dude, wanna hang out” evenings. The point is, when I could have had a shitton of time to dev, I actually only spent about 8 to 10 hours on the game.

Notes to future self:

  • You suck at time management
  • stop wasting time on pointless details when you’re already short on it
  • pizza, you’re eating too much of it

Well, things never go according to plan anyways. I’ll take the time to finish it, I really enjoyed working on it. Thanks to Vlambeer for putting on that event, it feels great to do games again. Download the game for free on itch.io:

Have fun, code safe,

Tuxic

Cheetah3D to Babylon.JS exporter

Lately, I’ve been working a lot on Mac OS X on 3D stuff, and I got the chance to take a look at how Cheetah3D works. For those who don’t know about it, it’s a Mac-only 3D package that’s inexpensive and simple enough for devs like me to handle, yet powerful enough to be used as a quick prototyping tool and a complex scene renderer. I’ll leave the technical details to 3D artists, but the point is I had to use it for realtime stuff.

This realtime stuff included native OpenGL on iOS and, more recently, WebGL in the browser. Not that I’m biased or anything, but I recommended Babylon.JS for the work, and we went with it. That’s all I can say regarding my job, so let’s go straight to the point: I made a simple export script for Cheetah3D that transforms your scene into a babylon file you can load in your browser. You can download and fork it on my github. Install is simple enough: just drop the js file into /Users/YOUR_USER/Library/Application Support/Cheetah3D/Scripts/Macro/. This is a macro script for those who know a bit about Cheetah3D scripting, meaning it covers a lot of things: reading the scene, parsing, creating and saving a file. If you want to export your scene, just go to Tools/Scripts/Macro/Babylon file export in Cheetah3D.

So, what can you do right now? It’s far from covering the full babylon file specification, but here’s what you get:

  • mesh export with functional transform, naming and parenting
  • multicamera export (only perspective, no support for orthographic right now)
  • light export with all babylon types managed:
    • Cheetah3D spot light is a babylon spot (duh)
    • Cheetah3D distant light is a babylon directional light
    • Cheetah3D ambient light is a babylon hemispheric light
    • every other type is a babylon point light
    • supports diffuse and specular color
    • rotations must be on the -Y axis
  • materials export:
    • supports diffuse, emissive and specular color (plus specular power as “shininess”)
    • supports only diffuse textures, Cheetah3D’s API is really sparse on that

That’s it for the first version, more features will come over time. Don’t hesitate to review my code, as I really suck at JavaScript. That said, Cheetah3D’s API has got to be the worst thing in the world, including documentation that’s at least 2 years old (the Material class method rib() doesn’t exist and is still documented), properties that don’t even exist, and the most random descriptions I’ve had the displeasure to see in documentation. If you ever do an export script for Cheetah3D: the uvCoord() method on the PolyCore class returns a 4D vector combining the first UV set (x,y) with the second (z,w). That took me forever to find out even though it seems simple.

Regarding the licence, this script is under the WTFPL:

DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. You just DO WHAT THE FUCK YOU WANT TO.

To sum up:

Anyways, enjoy, feedback pls. Protip: don’t go loading your 60-million-polygon models into Babylon.JS; chances are your browser will create a singularity.

Have fun, code safe.

Tuxic

Unity2D entities in a 3D world, why you shouldn’t do this

Sometimes I have really stupid ideas and a bit of time to waste. The base architecture for my st’hack challenge is an example of that, and of how far I can push a bad idea.

As you probably know (I like this formula, I use it a lot), a few months ago Unity released its native 2D layer, with tons of new components and a fully integrated workflow with the existing 3D tools. That actually caught my interest: can you do a 3D game with the new 2D entities? The st’hack project was the perfect sandbox for this test, as I was into random dungeon generation. So, with my tileset and some Sprite game objects, could I generate a full 3D dungeon? The point is that the Sprite workflow is really simple: it’s basically a drag and drop to create a 3D-transformable entity. If you’ve checked out my executable, you’ll realize that there are no height levels, the whole dungeon is flat. It is still rendered in 3D with Sprite game objects, and as you might have seen, it’s awfully buggy.

Let’s break down the problems:

  • sometimes, the dungeon won’t generate. This is actually fairly easy to understand and is the base limit of my idea: Unity2D entities, even though they are 3D objects, limit themselves to 2 axes, X and Y. What happens, at random, is that at object generation something resets the Z axis, which I use for depth placement. This means that every gameobject gets positioned at 0 on the Z axis, and since the player starts around [-16,0,-16], well… you can’t see any dungeon (duh)
  • the Sprite class draws in call order, and since the Z axis is not taken into account, there is no notion of depth testing. Meaning that if a Sprite gameobject behind another is drawn after the latter, it gets drawn in front of it. Overlapping and shit, and this call order is pretty random within the same draw layer. You can in fact organize them with the draw layer feature, but this is totally wrong when it comes to 3D projection, as depth ordering needs to be totally dynamic when the view changes.
  • shaders. I could write a lot on this, but it actually is a noticeable problem with the base 2D workflow in Unity. There is no notion of dynamic lighting for Sprite objects by default. No matter what type of light or render type you choose, it won’t do anything on a Sprite object. This can of course be bypassed by changing the draw shader to a manually created diffuse sprite shader (create a new material, select this shader, and assign it as the material on the Sprite Renderer; see the sketch after this list). This should be enough if you stay in a 2D projection, but when it comes to 3D, you get the previous problems plus a shitload of math what-the-fucks from the Sprite class.
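As a side note, here’s a minimal sketch of the lighting workaround from the last bullet. It assumes Unity’s built-in “Sprites/Diffuse” shader (shipped with the 2D tools in 4.3); LitSprite is just a hypothetical component name, and in practice you’d probably create the material once in the editor rather than at runtime.

using UnityEngine;

// Hypothetical helper: swaps the SpriteRenderer's material for one based on the
// built-in "Sprites/Diffuse" shader so scene lights actually affect the sprite.
public class LitSprite : MonoBehaviour
{
    void Start()
    {
        var spriteRenderer = GetComponent<SpriteRenderer>();
        // New material using the diffuse sprite shader; the default sprite
        // shader ignores lighting entirely.
        spriteRenderer.material = new Material(Shader.Find("Sprites/Diffuse"));
    }
}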

So yeah, it’s pretty impractical. And definitely a bad idea. Don’t do this at home, kids.

So, what’s next? I plan on redoing this demo properly, aka the hard way: create true 3D objects with textures instead of sprites, reuse my algorithm, and release it open source. And everyone will be happy, and the world will keep turning.

Have fun, code safe.

Tuxic

#sthack: Unity2D, dllimport and Monty Python

On the 14th of March 2014, the 4th edition of St’hack took place: a computer security event in Bordeaux consisting of a whole day of security conferences followed by a Capture The Flag contest, organized by our very own security guru @agixid. For the 4th year I was part of the staff, taking care of CTF support at first, then UI design last year. This year I had the opportunity to create a challenge (and hand out pizza and Red Bull).

Those who know me also know that I really suck at computer security. Idiots would be like “that’s normal, you use MS tech”, but I would be like “I fell more into video games than security these past years”. Yeah, I use Windows as my main OS, deal with it. So I thought about a game concept the powners would have to reverse to find the solution. Here’s what I came up with: a basic hack’n’slash in which the hero has to move from chamber to chamber in the correct direction (much like some Zelda I can’t remember) within a limited number of room changes. If the hero changes too many rooms without finding the correct path, he loses. He can also get killed by cheaty ogres, and the level generation is rather buggy (I’ll talk about that in another blog post). There is also no information whatsoever; you are on your own.

DungeonOfLolwat

So, how do you win? I actually made the game with Unity and used a feature only available in the Pro version (I think): dllimport. The feature itself is something C# has had since the dawn of ages, but Unity has decided you have to pay to use it in your games. That apart, it requires you to understand a bit about how .Net (and Mono + Unity by extension) compiles its dynamic libraries: it’s managed code, executed on a virtual machine and totally controlled by it. It means that when you compile your C#, VB.Net or managed C++ in Visual Studio, you create an intermediate language (IL) version of your code that the VM compiles on the run so that the CPU understands it. This is called Just In Time (JIT) compilation and is a rather complex and nasty business. Check out this Telerik blog post if you’d like to know more. As a side note, .Net is not the only tech using JIT compilation; it’s rather common these days, so haters go home (unless you just hate JIT, which I can understand).

Back to business: IL code means easy decompiling. And when I say easy, I mean that you can get like 99% of your base .Net code (excluding naming if it is obfuscated, of course) in a single click in an app like ILSpy. It’s the same with Mono and Unity, at least on Windows (I haven’t checked on other platforms, but it shouldn’t be that different, I suppose). You get a bunch of dlls when you build your game, in particular Assembly-CSharp.dll in your Data/Managed folder. This includes all your non-editor/plugin custom classes, which means “hide yo code, hide yo data, there’s a library intruder”. If you open this dll with ILSpy, you’ll probably find all your code barely touched by Unity (they optimize stuff like loops at build time). From this point, if you want to hide something, there are two solutions:

  • hide your data inside gameobject metadata: this gets compiled into the .assets files at build time and seems like a freaking mess to open. If your data is critical, this is probably where you want to put it.
  • hide your methods inside an unmanaged library and call them via dllimport: this gives your code another layer of protection, even though it’s not bulletproof. We’ll use this (see the sketch after this list).
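For reference, here’s roughly what such a dllimport call looks like from a Unity script. This is a hedged sketch: “GetFlag” is a made-up export name for illustration, and the real export in the challenge also expects a passkey (see below), so treat this purely as an illustration of the mechanism.

using System.Runtime.InteropServices;
using UnityEngine;

public class FlagReader : MonoBehaviour
{
    // Import a function exported by the unmanaged dll sitting in Data/Plugins/.
    // "GetFlag" is a hypothetical name; the actual challenge uses something else.
    [DllImport("YeOldeContent")]
    private static extern System.IntPtr GetFlag();

    void Start()
    {
        // Marshal the returned C string back into managed land.
        string flag = Marshal.PtrToStringAnsi(GetFlag());
        Debug.Log(flag);
    }
}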

So the goal is: create an unmanaged dll that stores the flag the powners have to find, call it from our managed code and generate the in-game path from that data. Want to check out the challenge? Download it here (Windows only, sorry)! Let’s do a step by step on how to solve it:

  • Find out this is a Unity game: I’m handing out the solution here, but during the CTF it was not specified. There were a few hints though:
    • the exe icon is the default unity icon
    • data/Managed/ has UnityEngine.dll
    • data/Resources/ files have unity in their name
  • Open Data/Managed/Assembly-CSharp.dll with ILSpy.
  • Find the class code for HeroMove, decompile it and get the lines of the dllimport.
  • Find YeOldeContent.dll in data/Plugins/. You can try to open it in ILSpy, but you’re going to be insulted.
  • Two options here:
    • the dev way: develop a .Net app that dllimports the unmanaged library and gets the flag
    • the hacker way: extract the static strings directly from the unmanaged library

Now there’s a hiccup: the flag is incomplete; you have to interpret it in-game to get the complete flag (cf. directions). There’s another hiccup if you use the dev way: I’ve put extra security on the dll function, which requires a passkey to return the flag directly. This passkey is inside the metadata, so if you want to go hardcore, try to open the .assets files and get that passkey. Otherwise, you’ll have to replay a famous scene from Monty Python’s Holy Grail.

That’s it, you can witness my noobitude in computer security. Anyways, it was fun to create and I’m glad that some powners actually were able to break it! Regarding the game part, I’m going to have to redo it, because it’s really fucking buggy. I tried to transpose Unity2D components into a full 3D world, and it didn’t go well. I’ll write an article about that later, but it was a pretty stupid idea. On a side note, I used the Dungeon tileset over at OpenGameArt. Really awesome art!

That being said, have fun, code safe.

Tuxic

A quick look at GLKit

Those who know or follow me should know by now that I focus a lot on Microsoft tech. Not that I only love their tech and tell everyone else to fuck themselves, but I’ve had the opportunity to go into the details professionally, and not so much on the competition’s stuff. I also specialize in 3D, which is why you can read some stuff about that on my blog, and I’ve done a bit of MS 3D tech: XNA, DirectX/SharpDX, HLSL, Babylon.js (it’s made by my MS mentors <3), etc…

Anyways, once in a while I get to do other stuff, like Polycode, or even Löve2D when I feel I need a laugh. More recently I’ve been hired on a mission involving iOS and a whole bunch of 3D stuff. I can’t really describe it, but I get to do realtime OpenGL on an iPad on a daily basis. That said, you might like to know I’m a total ass concerning OpenGL and iOS in general. I’ve got a 1st-gen iPad that’s so slow we mostly use it for Spotify OH SORRY, YOU NEED iOS6 FOR THAT (ever since my Nexus 7 died for no reason, and said app shows no interest whatsoever in Windows RT (YEAH I’VE GOT A WRT TABLET, DEAL WITH IT)), and a 2009 MacBook Pro that’s got a broken screen, no battery, and no charger. Soooo for the iOS part, I’m totally lacking the experience. Regarding OpenGL, I can’t say I’ve done much. Back in the day (that would be around 2006-2008) I used to fiddle with C templates in Code::Blocks without having any idea of what I was actually doing. Yet I felt like I could overcome this: I’ve seen the fires of DirectX and returned to say “lolnope”. So yeah, I took the mission.

Anyways, a week or so later, I can say I’m pretty surprised: GLKit is really good! Basically, GLKit is a set of helpers for OpenGL ES 2 on iOS. This includes blazing fast vector and matrix math, full integration with UIKit (the iOS GUI framework), predefined constants for operations like vertex layout, and base shaders that work like a charm! Of course, you still have low-level GL stuff to handle, like buffer uploads and manual VAO management (even though the concept of VAOs has got to be the best thing in the world), but you can set up a simple OpenGL program in very few lines of code. So yeah, GLKit gets my thumbs up. The only sad thing is that it’s exclusive to iOS; I’d be doing a lot more OpenGL if you could use it on other platforms. I suppose there are equivalents out there, but as always, finding time for something new is kind of a luxury nowadays.

Anyways, to show you what I mean by simplicity, let’s code a fairly simple program that outputs a quad with a uniform color. If you’re an OpenGL/iOS expert, please insult me on Twitter if I’m doing something wrong. First things first: everything included with Apple’s GLKit is prefixed with GLK, and everything that belongs to OpenGL directly is prefixed with gl.

We’ll start by making a new Xcode Cocoa Touch project with a single-view template, because the point is to show you how few lines are enough to get you doing OpenGL. Since I’m working exclusively for iPad, my project is not universal, so that’s up to you. Once we’ve created the project, we’ll go into the storyboard(s) and change the base view class to GLKView. This ensures GLKit encapsulates the view to make it available as a canvas to draw on. Next, we’ll go into ViewController.h to import the GLKit headers and define our controller as a GLKViewController. Your ViewController should look like this:

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

@interface ViewController : GLKViewController
@end

That’s all we’ll do in the header file, so let’s head over to ViewController.m! We’ll add the most important piece: the context, of type EAGLContext. This is the object that will bind OpenGL to our view and tell OpenGL to draw on it. So in your viewDidLoad, you initialize this context and attach it to the view:

@implementation ViewController
{
    EAGLContext* context;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; // context creation
    // View setup
    GLKView* view = (GLKView*)self.view;
    view.context = context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    [EAGLContext setCurrentContext:context];
}

You’ll notice the drawableDepthFormat in these lines. To keep it simple, this is the depth format OpenGL will use for the render target, aka our view. If that’s not clear, Apple has a lot more to say about it. Now, let’s draw something! We’ll start by clearing the screen with a horrible magenta color to ruin your eyes. As a reminder, the point of clearing is to make sure you render your calls onto a clean frame. If you don’t do it, you’ll be drawing over the previous frame. And unless this is what you are looking for, for some unknown reason, you want to clear your frame before drawing anything. It’s like an automatic background color. The color can be anything in the RGB + alpha domain (limited by float precision in general), so you can get creative. So, we’ll start by overriding a method GLKViewController implements: -(void)glkView:(GLKView *)view drawInRect:(CGRect)rect. This convenient method is automatically called at regular intervals to draw a new frame. It’s 30 frames per second by default, and you can set it higher or lower with self.preferredFramesPerSecond in your controller. Aim for 60 FPS, your customers’ eyes will thank you. In this method, we’ll set the clear color and clear our main buffers: color and depth.

-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect{
    glClearColor(1.0f, 0.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}

The glClear method takes flags with specific masks to tell the GPU to clear specific buffers. Fortunately, OpenGL comes with a whole set of defines so that you don’t have to remember these masks. You can launch your app, and you should by now be screaming “DEAR LORD MY EYES”. Don’t thank me, it’s normal.

Okay, so that was fun, but what more can we do? Let’s draw something! A quad! With indices (elements in OpenGL, don’t ask) to make it funnier! So like I said, GLKit provides a ton of math helpers, and we’re going to take advantage of 3D vectors to define our vertices in space. We’ll also define a bunch of indices to order our draw calls, and send all of this to our GPU. Let’s add all of this inside the viewDidLoad method, right after setting our context. Here’s the code, explained through the comments and below (scroll for more):

    // Note: declare these three at instance level (next to context) rather than
    // locally, so the viewDidUnload cleanup further down can still reach them.
    GLuint vertexArray; // our VAO
    GLuint vertexBuffer; // our VBO for vertices
    GLuint indexBuffer; // our VBO for indices

    GLKVector3 vertices[4];
    vertices[0] = GLKVector3Make(-0.5f, -0.5f, 0); // create a new GLKVector3 structure
    vertices[1] = GLKVector3Make(0.5f, -0.5f, 0);
    vertices[2] = GLKVector3Make(0.5f, 0.5f, 0);
    vertices[3] = GLKVector3Make(-0.5f, 0.5f, 0);
   
    GLuint indices[6];
    indices[0] = 0;
    indices[1] = 1;
    indices[2] = 2;
    indices[3] = 0;
    indices[4] = 2;
    indices[5] = 3;
   
    glGenVertexArraysOES(1, &vertexArray); // create the VAO
    glBindVertexArrayOES(vertexArray); // tell opengl we're using it
   
    // create the vertices VBO
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); // tell opengl we're using it
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); // upload the data to the GPU
   
    // same story for the indices VBO
    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
   
    // tell opengl to activate position on shaders...
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    // and use this part of our vertex data as input
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLKVector3), 0);
    // close our VAO because we're done here
    glBindVertexArrayOES(0);

That’s a lot of code, but it’s pretty straightforward. The first 3 variables are identifiers for the GPU buffers and the VAO, don’t worry about them for now. First we create 2 arrays to store our vertex and index data. For vertices, we’re using GLKit’s implementation of a 3D vector, GLKVector3. It’s a C struct, so initialization is old-school style with C functions. Indices use the base type GLuint, which is an alias for a platform unsigned integer. It’s easier to read and indicates that this part of your code is OpenGL stuff. Every 3D lib does it this way anyways, so get used to these aliases.

The fun part begins just after that: we create a VAO and use it. VAOs are kind of hard to explain, but basically they’re memory addresses that act like containers for a set of buffers. The point is, you create and bind a VAO first to set its data, and at render time you bind it again to automatically get back the buffers you defined in it. And this, kids, is why it’s awesome. This is pure OpenGL by the way; it’s present in ES 2 and in recent desktop OpenGL implementations, so enjoy. You might notice a difference on other platforms: here we call glGenVertexArraysOES, which is specific to ES 2. Just remove the OES suffix and it’ll work just fine on your computer. The point of our 3 variables at the start is that we store the identifiers for reuse (and disposal). Next, we create our index and vertex buffers GPU side and upload our data. The steps are always the same: create the buffer, tell OpenGL to use it as a specific buffer type (using the defined masks), then upload our data. The upload method takes a few parameters: the mask for which buffer we’re targeting, the size in bytes of the data we’re sending, the actual data, and its usage. Usage specifies how the GPU should handle the data with regard to the CPU: if you plan to update your vertices often, you might want it to be dynamic. That’s not the case here, so we set it as static. More info in the official docs for glBufferData. The operation is exactly the same for indices and vertices, apart from their specific masks.

The next part is where I usually consider black magic to happen with DirectX: the layout. The point of the layout is to tell your GPU how to interpret the vertex data you send. In DirectX this gets me going all “wat”, as it is totally counterintuitive. Raw OpenGL is a bit better, and GLKit makes it pretty simple. So first of all, we tell OpenGL that we want position activated for our shaders with glEnableVertexAttribArray. We use a GLKit define to simplify this, so it’s clearly too easy. The tricky part is defining which part of your data is actually the position. As you should have noticed, we uploaded GLKVector3 values to our GPU. It’s a structure made of 3 floats, we didn’t normalize it, and the whole vector defines the position. So when we describe our data structure with glVertexAttribPointer, it’s pretty simple: first, what we are defining (position), then the number of values defining our position, the type of those values (using a gl define, not an alias type), a boolean telling whether the data is normalized, the size in bytes of 1 vertex (aka the stride, since we are in a flat array GPU side), and the offset to the start of our data in the structure we sent to the GPU. That’s it (yeah, that’s already a lot of stuff). We close the VAO so that OpenGL knows we’re not writing into it anymore.

So, how do we draw? We need a shader, or else nothing will ever show on our screen. But right now I don’t feel like writing GLSL, setting up a shader program structure, compiling stuff and uploading it to the GPU, etc… Fortunately, GLKit provides a helper shader that’ll do most of what you’re looking to do with a shader: GLKBaseEffect. Right after our previous code, let’s define one (make the variable instance-level, we’re going to need it later):

    GLKBaseEffect* effect;
    effect = [[GLKBaseEffect alloc] init]; // init our effect
    // set up a constant color to draw with
    effect.useConstantColor = GL_TRUE;
    effect.constantColor = GLKVector4Make(0, 1, 0, 1);
    // set up "camera"
    effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(60), fabsf(view.bounds.size.height / view.bounds.size.width), 0.1f, 100.f);
    effect.transform.modelviewMatrix = GLKMatrix4MakeTranslation(0, 0, -5);

So, initialization is easy: create an instance of the class and have fun with the properties. There are a ton of them, including 3 lights, texture management, diffuse, emissive and specular color, etc… We’ll just use a constant green color for the sake of simplicity. We’ll also set up our “camera” so that we can view our render from a certain distance. We make use of another GLKit structure: GLKMatrix4. There are many helper functions to create the projection and view matrices (google them if you don’t know what they mean), so basically we set our projection to a 60° FOV, with the aspect ratio of our screen, and a near/far of [0.1f, 100.f[, which is totally arbitrary. A small fact that can save you five minutes of rage: during viewDidLoad, the view’s width and height don’t take orientation into account. So if you’re in landscape like I am, invert width and height in the aspect ratio math. This fixes itself at runtime, so it’s just for the sake of having a perfect square on screen for this example. So now, to draw our stuff, we’ll go back to our draw method and, right after the clear call, add this:

    glBindVertexArrayOES(vertexArray); // use our VAO
    [effect prepareToDraw]; // do secret mysterious apple stuff
    glDrawElements(GL_TRIANGLES,  6, GL_UNSIGNED_INT, 0); // draw
    glBindVertexArrayOES(0); // don't use our VAO

As I said previously, binding our VAO tells the GPU we want to use all the associated data (index and vertex buffers). We spool the effect to prepare all the shadery stuff in the back (legend has it the shader is actually generated at runtime depending on the properties used) and draw our indexed elements: first we specify what we are drawing (triangles at most, no quads in ES 2, sorry), how many indices we’re reading, the data type of the indices, and the offset, useless thanks to the VAO. We then unbind the VAO to prevent memory leaks GPU side (it’s a best practice, it doesn’t necessarily happen). Run it, and you should see an ugly green square on a magenta background burning your eyes. Success! One last thing though: unsetting all the data. Add this viewDidUnload override:

-(void)viewDidUnload
{
    [super viewDidUnload];
   
    // delete our buffers GPU side
    glDeleteBuffers(1, &vertexBuffer);
    glDeleteBuffers(1, &indexBuffer);
    glDeleteVertexArraysOES(1, &vertexArray);
    // delete our context
    [EAGLContext setCurrentContext:context];
    if ([EAGLContext currentContext] == context)
        [EAGLContext setCurrentContext:nil];
    context = nil;
}

This is important because you need to remember OpenGL is C and not Objective-C. Meaning that resources don’t get released automatically, and the GPU side can keep the garbage around for a long time. So save your iDevice: release your OpenGL stuff.

Well, that’s it. It might seem like a lot of code, but if I showed you the same thing with DirectX, you’d most probably be screaming. Anyways, this is just an introduction, and it covers approximately nothing of the benefits GLKit brings. It’s really a great piece of code for fast app development, and the great part is that you don’t have to use it if you don’t want to: it’s a pure helper over OpenGL ES 2, and sweetens things without creating a dependency! I’d recommend having a look at Jeff LaMarche’s code to get started, and also the book “Learning OpenGL ES for iOS”, which is an excellent way to go much deeper into this tech.

You can find the full code for ViewController.m on this Gist!

Have fun, code safe.

Tuxic

Unity: metronome like a pro

I’ll be honest: after yesterday’s rant, I really wondered whether I should sell you this code through the Asset Store or not. But hey, I’m not such a bitch; the trick’s pretty simple.

So here’s a nice way to get an acceptably precise metronome, with custom BPM and signature. The idea is to create a MonoBehaviour that you can stick on an entity to count your beats. Let’s start by creating a new C# script (JavaScript should be straightforward, but I don’t use it, sorry). I named it Metronome, because that’s what it is. We’ll add a few fields that will make sense soon enough:

public int Base;
public int Step;
public float BPM;
public int CurrentStep = 1;
public int CurrentMeasure;

private float interval;
private float nextTime;

The first 3 fields should be straightforward if you know a little bit of music theory: the signature, represented by its base and step count, and the Beats Per Minute. CurrentStep and CurrentMeasure just let us keep track of what step/measure we’re on. Now this is where the trick to being as precise as possible starts: interval is the absolute number of seconds there should be between 2 beats. nextTime is the relative moment when the next beat will actually occur.

I can see the guys in the back of the room going all “WTF man”, but it’ll make sense in a second: we are going to use Unity coroutines. Unity uses Mono, which supports a great load of C# threading features, including Tasks. The problem is, Unity is not thread safe, and tends to go all “NOPENOPENOPE” when you use threads with Unity objects. This is where coroutines come in: they are a sort of bastard “multitasking” technique that consists in spreading heavy operations across frames. That’s the important word: frame. Basically, a coroutine is a yield return method that spreads its execution on a frame-to-frame basis. In its simplest form, it works much like a simple component update, as launched coroutines are each called once per frame by default until they are over. The interesting part is the kind of value you can send to your yield return.

UnityGems did a great article on coroutines, and this is what you can send as a return value (shameless copy from their article):

  • null – the coroutine executes the next time that it is eligible
  • WaitForEndOfFrame – the coroutine executes on the frame, after all of the rendering and GUI is complete
  • WaitForFixedUpdate – causes this coroutine to execute at the next physics step, after all physics is calculated
  • WaitForSeconds – causes the coroutine not to execute for a given game time period
  • WWW – waits for a web request to complete (resumes as if WaitForSeconds or null)
  • Another coroutine – in which case the new coroutine will run to completion before the yielder is resumed

The one we are going to enjoy very much here is WaitForSeconds. It’s straightforward: give it an amount of time, and the coroutine will resume once that time has elapsed. Let’s write a first coroutine with that!

    IEnumerator DoTick() // yield methods return IEnumerator
    {
        for (; ; )
        {
            Debug.Log("bop");
            // do something with this beat
            yield return new WaitForSeconds(interval); // wait interval seconds before next beat
            CurrentStep++;
            if (CurrentStep > Step)
            {
                CurrentStep = 1;
                CurrentMeasure++;
            }
        }
    }

A simple coroutine: an infinite loop that increments CurrentStep and CurrentMeasure. It works pretty well, but the more discerning readers will have noticed that we never set interval. I’m going to add a simple public method for that, so I can reset and change my coroutine:

    public void StartMetronome()
    {
        StopCoroutine("DoTick"); // stop any existing coroutine of the metronome
        CurrentStep = 1; // start at first step of new measure
        var multiplier = Base / 4f; // base time division in music is the quarter note, which is signature base 4, so we get a multiplier based on that
        var tmpInterval = 60f / BPM; // this is a basic inverse proportion operation where 60BPM at signature base 4 is 1 second/beat so x BPM is ((60 * 1 ) / x) seconds/beat
        interval = tmpInterval / multiplier; // final interval is modified by multiplier
        StartCoroutine("DoTick"); // start the fun
    }

This goes a bit more into music theory, but I suppose you can deal with that if you’ve read this far. So we get the absolute interval between each beat and store it in our field. You’ll notice I use StartCoroutine and StopCoroutine with the coroutine method name as a string. This overload is more expensive but allows us to stop the coroutine at will, which is appreciable. You can call StartMetronome() in your Start(), create an entity and attach the script as a component, set for example Base and Step to 4 and BPM to 120, and launch. In your debug log, you’ll have a nice timed “bop” appearing in an endless loop. Mission accomplished.
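If you’d rather drive it from another component than from the metronome’s own Start(), here’s a minimal usage sketch. It assumes the script class ended up being called Metronome (that’s just my naming); the fields and StartMetronome() are the ones defined above.

using UnityEngine;

// Hypothetical driver component: sits on the same GameObject as the Metronome
// and kicks it off with a 4/4 signature at 120 BPM.
public class MetronomeDriver : MonoBehaviour
{
    void Start()
    {
        var metronome = GetComponent<Metronome>();
        metronome.Base = 4;
        metronome.Step = 4;
        metronome.BPM = 120f;
        metronome.StartMetronome();
    }
}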

Wait, we still have something to fix: in use, you’ll realize this is precise, but not precise enough. It tends to desynchronize pretty fast (at 120 BPM, it’ll be off track in less than 4 measures), and that’s bad if, for instance, you’re making a musical game. The reason is simple: coroutines are paced by frames, and frames have a delta you don’t really control. The problem is that your interval is fixed, but WaitForSeconds might just decide it’s too late to execute on this frame, so let’s wait another one or two. Hence the wibbly wobbly bullshit the metronome outputs. This is where nextTime comes in. The purpose is to resync the metronome with the effective timeline, so the wait interval will never be constant. Let’s modify our methods:

    public void StartMetronome()
    {
        StopCoroutine("DoTick");
        CurrentStep = 1;
        var multiplier = Base / 4f;
        var tmpInterval = 60f / BPM;
        interval = tmpInterval / multiplier;
        nextTime = Time.time; // set the relative time to now
        StartCoroutine("DoTick");
    }

    IEnumerator DoTick() // yield methods return IEnumerator
    {
        for (; ; )
        {
            Debug.Log("bop");
            // do something with this beat
            nextTime += interval; // add interval to our relative time
            yield return new WaitForSeconds(nextTime - Time.time); // wait for the difference delta between now and expected next time of hit
            CurrentStep++;
            if (CurrentStep > Step)
            {
                CurrentStep = 1;
                CurrentMeasure++;
            }
        }
    }

This very simple trick fixes the sync problem, as the wait delta is adjusted based on actual time versus expected time. Of course, this is far from perfect, but it does the trick: beats are really precise, even after 20 minutes of play. The rest is up to you. I decided to implement events on tick and on new measure, and you can find my code sample on this gist. Output is as follows, with a visual thingy:

Anyways, have fun, code safe.

Tuxic

Unity2D: hail to the king, baby!

So yeah, like I previously stated on the blog, I’ve been testing Unity2D a lot these past days (and a bunch of other stuff, as always).

Overall impression is “GUDDAMITFINALY!1!!!11!”. And Unity delivers. Now, in the past I wasn’t the biggest Unity3D supporter, mainly because 3D is not a subject to be treated lightly, and a hell of a lot of noobs and idiots would start making stuff with the free version. Eventually I got to work a lot with it this year and changed my mind concerning its raw power. I mean, just look at the time I lost on Kerbal Space Program, and this marvel of procrastination is being made with Unity. And the workflow is really great, their Entity Component system is awesome. I’ve always had problems dealing with MonoDevelop, but as an MS tech dude, I redirected every single line of code to VS. And I frankly never touched a line of JavaScript. I still have to make some progress on my issues with this language, and Unity’s implementation is just… weird.

One of the things I’ve always regretted with Unity was the absence of real 2D support. Until now it was a 3D tool, and you had to hack it to get things to show up in 2D. And the fucking Asset Store took advantage of that: no sir, I’m not paying 60 bucks to show sprites in a free game-making tool. So being limited to 3D is kind of drastic. Like I said, 3D is hard, people expect so much from it, you can’t just go around making 3D stuff without previous experience. Stuff like shaders, LOD and animations just give the creeps to any dev in the know, and despite Unity3D being a very well-conceived tool, you won’t escape these matters. And god I hate the Asset Store. This thing is evil. This thing should be held responsible for all the crap shitty mobile devs create. I’m exaggerating of course, I just have a problem with these guys saying “oh, Unity doesn’t do that? Let’s go waste $150 on a plugin/asset/code library!” when it’s something as simple as post-process shaders or block level design.

Alright, let’s stop ranting for a second.

To sum it up, Unity2D is Unity3D with one dimension less. This means you’ve got SpriteRenderer instead of MeshRenderer, and you’ve got 2D prefabs, 2D physics, 2D post-processing, 2D goodness! It’s freaking awesome, seriously. I’ve never tried other 2D alternatives like GameMaker, but I suppose they offer the same type of pleasure Unity2D sends your way. Drag & drop your sprites, add 2 or 3 entities, and you’ve got yourself a working prototype. The fabulous idea to natively support spritesheets and atlas textures is just enormous, and the animation workflow is fan-tas-tic. Going back to physics, I was really impressed they were able to keep almost all of their 3D conventions, with the ease and simplicity that brings. Of course, basic primitive shapes like rectangles and ellipses are available, but the sprite-based collider is art. Creating custom colliders has never been this simple! The greatest thing of all remains the 3D/2D mixing: a 2D scene is just an orthographic camera with a bunch of 2D components, but still in a 3D scene. The point is, you can easily merge 3D elements into a 2D scene, and the other way around too, because nothing stops you from using a SpriteRenderer in a 3D scene.

So to try it out, I recreated the core gameplay of my SpaceShooterz game, and it took me… at most something like half an hour: background parallax, player spritesheet, enemy + basic move routine, shooting and killing. 30 freaking minutes, and not even trying hard.

So this is great news for indie devs. We get an awesome tool for free and the raw power to create very complex 2D gameplay in no time. Since Unity now offers free exports to iOS and Android, including the long-awaited Windows Phone and Windows 8, you devs have no excuse! Go make some games! This comes at a cost of course: Unity is heavy. In the last year alone, the package went from 400 megs to a whole freaking gig. The interface is also a marvelous work of the kraken, but you can get over that. I would recommend a computer with good specs and a big screen though; your eyes will thank you.

Anyways, have fun!

Tuxic

Unity 2D in progress

Been testing out Unity3D’s new 2D features in the 4.3 update, and I’ve got to say I’m quite impressed by the simplicity of the overall thing. Blogging about it really soon!

In the meantime, have a dogemon

Dogemon

Polycode Visual Studio setup

Setting up a Polycode project in Visual Studio is a royal pain in the ass. The fact that you have to use a Win32 project is the main reason. We can sum this up in two dreaded words: HUNGARIAN NOTATION. God I hate this.

Of course, this is just for the entry point and fortunately, once this is done, you won’t have to face those horrible names in your code (unless you like them, which should be considered scary). Anyways, here’s a step by step on how to set up a basic Visual Studio project and get you running on truly interesting code in no time:

  1. First of all, we’re going to need a VC++ Win32 project. We’re going to make it empty (screw you, precompiled headers) so that we don’t get a load of files that we won’t need later
  2. We’ve got our neat empty project, and I’m going to assume that you’ve got a release build of Polycode. The path for this example will be “C:\Polycode\Framework”. We’re going to go into our project’s properties and set the configuration to “All Configurations”
  3. Go to C/C++ > General and Edit on “Additional Include Directories”. Add the following folders
    • C:\Polycode\Framework\Core\include
    • C:\Polycode\Framework\Core\Dependencies\include
    • C:\Polycode\Framework\Core\Dependencies\include\AL
  4. Go to Linker > General and Edit on “Additional Library Directories”. Add the following folders
    • C:\Polycode\Framework\Core\lib
    • C:\Polycode\Framework\Core\Dependencies\lib
  5. Go to Linker > Input, Edit on “Additional Dependencies” and add the following after setting the Configuration to the correct value
    • For Debug Configuration :
      Polycore_d.lib
      zlibd.lib
      freetype_d.lib
      liboggd.lib
      libvorbisd.lib
      libvorbisfiled.lib
      OpenAL32d.lib
      physfsd.lib
      libpng15_staticd.lib
      opengl32.lib
      glu32.lib
      winmm.lib
      ws2_32.lib
    • For Release Configuration :
      Polycore.lib
      zlib.lib
      freetype.lib
      libogg.lib
      libvorbis.lib
      libvorbisfile.lib
      OpenAL32.lib
      physfs.lib
      libpng15_static.lib
      opengl32.lib
      glu32.lib
      winmm.lib
      ws2_32.lib
  6. Back in “All Configurations”, go to Build Events > Post-Build Event and Edit “Command Line” to add the following command
    if not exist "$(ProjectDir)default.pak" copy "C:\Polycode\Framework\Core\Assets\default.pak" "$(ProjectDir)"

    if "$(ConfigurationName)" == "Debug" (
      if not exist "$(TargetDir)OpenAL32d.dll" copy "C:\Polycode\Framework\Core\Dependencies\bin\OpenAL32d.dll" "$(TargetDir)"
    ) else (
        if not exist "$(TargetDir)OpenAL32.dll" copy "C:\Polycode\Framework\Core\Dependencies\bin\OpenAL32.dll" "$(TargetDir)"
    )

    What this does is copy the OpenAL dlls to the build directory so that the sound module runs correctly. Polycode’s author, Ivan Safrin, recently spoke about removing it in favor of a lower-level sound API, so keep that in mind for the future. This also copies the default.pak file next to the project, which is not mandatory, but it contains starter resources to get a Polycode app up and running directly: fonts, default shaders, textures, stuff like that.

  7. We’re normally ready to code! Let’s add a file. I’ll name it main.cpp, because I like simple stuff, and if we can keep this file simple, it will be great. Anyways, let’s fill it with our entry point, that is, ugly Win32 code:
    #include <PolycodeView.h>
    #include "windows.h"
    #include "PolyApp.h"

    int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
    {
        PolycodeView *view = new PolycodeView(hInstance, nCmdShow, L"My polycode app");
        PolyApp *app = new PolyApp(view);

        MSG Msg;
        do {
            if (PeekMessage(&Msg, NULL, 0, 0, PM_REMOVE)) {
                TranslateMessage(&Msg);
                DispatchMessage(&Msg);
            }
        } while (app->Update());

        delete app;
        delete view;

        return Msg.wParam;
    }

    So this is where your creativity comes in. Following the main wiki, you’re going to create an entry class (here it’s called PolyApp) that is polled by the Win32 API for events (the do/while part) until you decide to shut it down. Protip: if you plan on doing something cross-platform, use the POLYCODE_CORE define, which is set at build time depending on the OS. You’re better off that way.

  8. Have fun with Polycode

So that’s it, don’t forget to change your Polycode build path, and you should have a project running. Hit me up on Twitter if something’s wrong!

Tuxic

Polycode: create a code-generated mesh

I’ve been working a lot with Polycode during the last few days, just to satisfy my curiosity regarding this easy-to-use C++/Lua framework. Despite a few problems with newer Visual Studio editions, I got it running pretty easily and got to do a lot of fun stuff pretty fast.

Let’s be clear, Polycode is still young and is clearly oriented towards fast prototyping and intuitive realtime output, so I might just be pushing it in the opposite direction from what it’s meant for. I also decided to disregard the Lua part, because I needed to get back into C++, and I like doing stuff the hard way. So that leaves me without any player, helper, or custom compiler of any sort. Despite that, Polycode is a real pleasure to use.

One thing I like to try on 3D frameworks is code-generated meshes: define your vertices in code and create objects on the fly. What’s the point? Well, for instance, take things like quadtrees that tend to evolve a lot at runtime: you can’t just load a model and expect it to change by itself. Polycode doesn’t really seem ready for this kind of stuff, as a lot of operations are done CPU side and vertex creation is freaking slow. Anyways, to show the technique I used, we’ll create a simple face with a texture. It might not be the best way to do this, but since documentation is pretty sparse (not to say totally absent) on the matter, I interpreted the generated doc as best I could.

I’m going to assume you already know how to set up a Polycode project with C++. There are 3 important objects we’ll be using to create our entity:

Polycode::Polygon *p = NULL; // the polygon that will hold our vertices
Polycode::Mesh *baseMesh = NULL; // the mesh that will hold the polygon
Polycode::SceneMesh *sceneMesh = NULL; // the object that will render the mesh to the scene

Polycode::Scene* scene = new Scene(); // the scene, duh

You’ll notice I spelled out the namespace. This is because it conflicts with another Polygon class that Visual Studio forces on C++ projects. Now, an important thing: when declaring a Polycode::Mesh instance, you get to choose the mesh type, and everything depends on that. For the most part, the type defines how the vertex list is interpreted and how it is drawn. There’s an awesome type called Polycode::Mesh::QUAD_MESH that takes vertices in groups of 4 and creates a neat quad. But in reality its wireframe is pretty horrible, and I wouldn’t recommend it for deployment. I tend to use the classic Polycode::Mesh::TRI_MESH, which takes vertices in groups of 3 to create triangles. It requires more vertex definitions, but it’s more natural for the GPU, because remember kids, your GPU draws triangles in 3D, and nothing else. There are many more types, including dot and line renders, triangle strips and so on, so don’t hesitate to check them out depending on your scenario.

So, back to our vertices, because we still don’t know how to create them. It’s pretty straightforward, the Polygon class is neat:

p = new Polycode::Polygon(); // instantiate our uber polygon, no parameters
// add our vertices (x,y,z,u,v)
p->addVertex(-1, 0, -1, 0, 1); // there's also a Polycode::Vertex type, and this method returns a Vertex*
p->addVertex(-1, 0, 1, 0, 0);
p->addVertex(1, 0, -1, 1, 1);
p->addVertex(-1, 0, 1, 0, 0);
p->addVertex(1, 0, 1, 1, 0);
p->addVertex(1, 0, -1, 1, 1);

I used a counter-clockwise vertex declaration, as this is the default winding in most 3D development situations. There is a backfaceCulling property on Polycode entities if you wish to cull the back side, but by default it is set to false. So these 6 vertices define a quad object that I can assign to my baseMesh and render via my sceneMesh. Easy, right?

baseMesh = new Polycode::Mesh(Polycode::Mesh::TRI_MESH); // instantiate our mesh with our triangle type
baseMesh->addPolygon(p); // append the polygon to its structure
sceneMesh = new Polycode::SceneMesh(baseMesh); // instantiate our scene mesh with our mesh
sceneMesh->loadTexture("doge.png"); // we'll just load a texture to see if it's working, no shaders
scene->addChild(sceneMesh); // now the scene will render the scene mesh

That’s about it: if you’ve done everything right (that is, also set up the camera at a position looking at (0,0,0)), you should see a magnificent quad on your screen.
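For completeness, here’s roughly what that camera setup can look like, continuing the snippet above. This is a hedged sketch assuming Polycode’s Scene::getDefaultCamera() and the usual setPosition()/lookAt() entity methods; double-check against your Polycode version.

// Pull the scene's default camera back and aim it at the origin so the
// generated quad (which lies in the XZ plane) lands in view.
scene->getDefaultCamera()->setPosition(0, 3, 3);
scene->getDefaultCamera()->lookAt(Vector3(0, 0, 0));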

Doge
Doge Wireframe

If you got this far, I guess there’s not much else to say. No, seriously: for instance, I haven’t seen any index buffer, which is critical in this case since vertex creation is way too slow. I suspect Polycode::Mesh::QUAD_MESH makes use of indices, but nothing is explicit, so I cannot say for sure. The fact remains that this is all too slow and a bit cranky workflow-wise. So this is basically experimentation and definitely not ready for production of any sort. Stay safe, don’t do this at home.

Tuxic