
I’m sorry but I really need to rant about this.  The Enterprise Library is the most overengineered piece of garbage I have ever seen.

It’s nice when it works – which it rarely does, unless you do nothing but follow their examples line for line and use only the blocks they provide – but the minute you start trying to elaborate on anything they’ve done, you start seeing exceptions thrown in weird places.  Even when you have the source code, which is an inevitability if you want to have a prayer of working with this crap, it is so goddamned convoluted that you spend more of your time pressing F12 and hoping to God it leads you somewhere useful than you do actually getting any work done.

I cannot believe what a headache even something simple has been.  All I wanted to do was hook into their configuration utility so that my customers could use it to configure both our application logging, which we do with the logging block, and the other details of our application in a single tool.

Jumping. Jesus. Christ.

3 days later I’ve got the configuration sort of working.  What an odyssey that has been.  But guess what?  When I try to use the same code that the samples use to actually instantiate objects you’ve configured in your app.config, it throws bullshit exceptions at me that are indecipherable.  Basically, even when I follow their tutorial line by line and change little more than names, it doesn’t work.  And because the product is so arcane and tangled, it’s nearly impossible to figure out what’s actually happening.  How hard is it to write some code that reads a block from an app.config which specifies its concrete type and instantiates it?  Apparently that requires several thousand lines of code and at least 40 different classes.  You can’t just instantiate the object.  No, you need to create a “build key” and create a “build strategy” and use a “type injector” and blah blah blah.

Seriously, did Microsoft hire some out of work postdoc CS Ph.D’s to build this thing?  I swear these people don’t actually write enterprise applications for a living.  If they did, they too would choke on this nonsense.

I am sorely regretting going deeper with EntLib.  If I didn’t know that it would be significantly more work to write my own configuration tool and write my own logging system I would ditch this crap in a second.  But I’ve had a very frustrating week.

Stay away from the EntLib.  It will ruin your life.

In this series I am going to share with you some programming gems – brief, novel techniques that I’ve picked up over the years that I believe every programmer should know.

Today’s gem: How to Round Up a Float

This isn’t a particularly groundbreaking method – most veteran programmers probably know it – but for those of you who don’t, here’s the solution (at least, in most languages):

Add 0.5f to the value and cast it to an integer.  And optionally cast it back to a float, if you need a float.

Here’s the C# extension method that I include in my library project:

        public static float RoundUp(this float f)
        {
            var v = f + 0.5f;
            if (v > int.MaxValue || v < int.MinValue)
                throw new ArgumentException("This value is too large or small to round with this method.  Use decimal formatting instead.", "f");
            return (float)((int)(v));
        }

This works because if the original value’s decimal portion is less than 0.5f – the usual rounding threshold – then adding 0.5f to its value will not cause its singles digit to increase.  However, if the decimal portion is 0.5f or greater, the singles digit will increase.  Casting a float to an integer always truncates the decimal portion of the float regardless of its value, and by casting back to a float, you are given a decimal portion of (mostly) zeroes.  I say mostly because as we all know, floating point math is terrible.
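
For example, using the extension method above:

        var a = 2.3f.RoundUp();  // 2f – the decimal portion is below 0.5f, so the singles digit is unchanged
        var b = 2.7f.RoundUp();  // 3f – adding 0.5f pushes the value past 3 before the cast truncates it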

The problem with the conversion method is that obviously the range of float is significantly larger than integer, so it doesn’t work for every situation.  Here are the versions I use for other primitives:

        public static decimal RoundUp(this decimal d)
        {
            var dp = d + 0.5m;
            return Math.Truncate(dp);
        }
        public static double RoundUp(this double d)
        {
            var dp = d + 0.5d;
            return Math.Truncate(dp);
        }

This would be the ideal method for floating point numbers as well, but there is no Math.Truncate method overload that takes a float.  I haven’t written one myself simply because I have never found a need to round a float that was larger than 2bn or less than negative 2bn.  If I had a number that large, I would probably just use a double and be done with it.
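
If you ever did need one, a minimal sketch might simply promote to double and reuse Math.Truncate (named differently here so it doesn’t collide with the int-cast version above):

        // Hypothetical variant that avoids the int range limit by truncating in double precision.
        public static float RoundUpUnbounded(this float f)
        {
            return (float)Math.Truncate(f + 0.5d);
        }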

The reason that I keep the first (flawed) method around is because it’s incredibly easy to code, conserves memory by staying in the 4 byte world, and handles 99.9% of coding situations.  Seriously, how often do you have a float – a float mind you, not a decimal – for a number larger than 2 billion?

Use it, love it.

Speaking of floating point math being terrible, here’s another little gem that I always have in my toolbox:

How to evade the floating point precision dilemma in 1 easy method:

        public static bool Equals(this float f, float otherValue, float margin)
        {
            return Math.Abs(f - otherValue) < margin;
        }
// use it like:
var f = 1.0f;
var g = 1.01f;
var result = f.Equals(g, 0.1f); // value is true

There is probably a stock .NET method for doing this, but I don’t know what it is.  The .NET library is so large that even after working with it for as long as I have, I still find new classes and methods that I never knew about. 

Use this method in place of the == or object.Equals() methods when comparing floating point values.  If you’re going for true equality, set an acceptably low margin value, like 0.00001f.

Enjoy!

C++ has been on its way out in professional software development for the last decade with the exception of its obvious applications like embedded devices – and even that niche I believe will be overtaken by managed languages within the next decade.  The rate at which C++ is being phased out of the industry has begun to increase exponentially in the last 5 years because at this stage, virtually no universities actually teach students C++ anymore, and for good reason.  Computer Science is hard enough without leaking pointers.  I guess I’m showing my age a bit when I admit that I did learn C++ in college, but I’ve done virtually none of it in the industry.  It’s getting to the point where I can barely read it anymore.

The watershed moment for my company, which consists largely of old school C++/MFC/COM developers, came right around the time that 64 bit processors shipped.

As if it weren’t bad enough that an assembly compiled for a Windows platform didn’t execute on Linux, now an assembly compiled for a 32 bit Windows platform didn’t execute in a 64 bit process space.  Really?!

Of course, this event has happened before, many times.  First 8 bit processors gave way to 16 bit processors and 16 bit processors gave way to 32 bit processors and every time, programmers had to go through the same painful transitions.

The difference is that this time, in 2002ish when the 32-to-64 bit transition began, we had a choice.  We could either go through this nonsense again and accept the fact that we’d have to ship 2 versions of our assemblies, or we could see the future and start developing managed code instead that is immune to this problem.

And we weren’t the only ones.  Managed code, and specifically the infrastructure and ecosystem that exist around it, is superior for other reasons not necessarily related to the functioning of the software – for example, the fact that you can actually find people who know how to write code in the language you’re using, something that is getting harder and harder to do for C++.  The number of bugs – catastrophic bugs, mind you – that are generated by a mediocre C++ developer is exponentially higher than the number of bugs generated by a mediocre .NET developer.  For that reason alone, managed code is worth the investment.

Native code haunts me to this day.  I am currently involved in a project in which we need to isolate native code into a COM+ server application so we aren’t forced to run our .NET processes in a constrained 32 bit process just because we use native interop that is 32-bit only.  These days, 4gb of RAM isn’t enough.  If a customer can’t shove our managed server application onto huge hardware with huge memory pools because native code keeps us from scaling past 32 bits, they don’t buy our software.  And that’s obviously bad for us.

Fortunately, 64 bits are big enough that it’s unlikely we’ll see 128 bit computers any time soon except in specialty scenarios (e.g. GPUs), so it would be fair to say that biting the bullet and going with 64 bit native with the assumption that 32 bit hardware would be dead long before 128 bit hardware arrived would have been a valid choice, except for all the reasons above.  C++ is a giant pain in the ass and honestly after working with C#’s feature set, it would be like going from a Ferrari to a Vespa.  I would sooner switch jobs than work in C++.  I have to enjoy what I do, and deleting pointers is not something that I enjoy doing.

I hate Experts Exchange.  I freaking hate it.

When I’m googling solutions to problems the last thing I want to do is accidentally click on a pay site.  Whenever I see that obnoxious banner at the top of the page as it (very slowly) loads, I can’t hit the back button fast enough.  And I mutter a curse about how these bozos just wasted my time.

Seriously, does anyone actually pay for a subscription to Experts Exchange when there’s StackOverflow and the MSDN forums?  Obviously there must be people out there who do, but why?  For the love of God, why?!

I’m an outspoken critic of open source software, mostly because I believe that the vast, vast majority of open source software is produced by a bunch of amateurs who have no business making software to begin with – I’m sorry, even if you are a professional software developer by day, you simply cannot produce the same quality as you could if someone were paying you for your labors.  When you’re writing code out of the goodness of your heart it’s a lot easier to not care when there’s a critical bug or let sloppy changesets get into your codebase because you, the OSS project manager, know damn well that you’re not this guy’s meal ticket and if you call his code sloppy he’ll just stop contributing.

However I’m an equally outspoken critic of closed source knowledge.  Monetizing professional goodwill and the labors of people who aren’t you in answering technical questions strikes me the same way it would if a restaurant tried to charge me for a glass of water.  I don’t care how much EE pays its contributors – unless it’s 100%, it’s not enough.

EE is like a for-pay porn site.  Do those even still exist?  Does anyone actually still pay for porn when it’s so ubiquitously free virtually everywhere on the internet?  I imagine clicking on a promising link and seeing EE’s crappy page must be as annoying to me as clicking on a promising link and seeing a “members only” porn site is to porn consumers.

If EE wants to do its thing, fine – but do us a favor and keep it out of the Google indexes.  I will never, ever buy a subscription to that site, and I will discourage everybody I know from buying one.  It’s lame.

/rant

Today in keeping with my series Game Programming is Hard, I’m going to share with you something different – the process itself.  Also in keeping with the series – a major theme of which is that even platforms specifically designed to help more people create games require you to solve even basic problems that you shouldn’t need to solve – today I’m going to show you what happens when you try to do something simple, namely, texture map a sphere.

(I will talk about how to generate vertices for a sphere in a future post, by the way).

Texture mapping is a pretty simple concept.  You have a 2D texture of arbitrary size and you want to put it on a 3D surface.  By now you should already appreciate that even very curvy surfaces in 3D graphics are in fact comprised of nothing but flat surfaces, so it shouldn’t necessarily be rocket science to paste a flat texture on a flat surface.  Also, I hope you realize by now that all surfaces are actually just triangles.

If everything in life were easy, we’d have such a thing as a triangular texture and each triangle in your 3D world would have a 2D triangular texture which is exactly the right size and it would be as simple as slapping them on piece by piece.  Of course, that’s impossible for a variety of reasons.  First, image files are rectangular, so automatically, there’s gonna be some kind of math involved to extract the triangle from the rectangle.  Second, could you imagine the painstaking process that would be involved in creating 1 perfect texture for each triangle in your scene?  Yeah, exactly – it would be an impossible undertaking.  Not to mention the fact that your triangles are going to move and scale.

In the real world, texture files are rectangles, and when you define the vertices which, in threes, define triangles in your 3D world, you associate each vertex with what’s called a texture coordinate.  That’s a location in the texture file.  When you have 3 texture coordinates, one for each vertex, you’ve defined a triangle inside your texture.  Imagine taking a pair of scissors and cutting that triangle out of your texture.  Then you scale that triangle to fit the actual size of the 3D triangle, and then you plaster it on.  Pretty simple, right?

In order for texture coordinates to work for varying size textures (e.g., 512×512, 2048×1024, etc), texture coordinates don’t specify pixels, they specify a scale value from 0 to 1.  This value is multiplied by the width or height of the actual texture in use to get the actual pixel in the texture to use.
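
In code, resolving a texture coordinate to an actual pixel is little more than a multiply.  A rough sketch (real samplers also handle filtering and wrap modes):

        // u and v are normalized texture coordinates in [0, 1].
        public static Point ToPixel(float u, float v, int textureWidth, int textureHeight)
        {
            return new Point((int)(u * (textureWidth - 1)), (int)(v * (textureHeight - 1)));
        }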

It all sounds pretty simple, right?

Sure!  So, let’s say you have a sphere with somewhere around 3500 triangles.  That would be a geodesic dome of frequency 3, which is also known as a subdivided icosahedron of frequency 3, meaning each surface of the icosahedron is recursively subdivided from 1 triangle into 4 other triangles (like a Triforce).  This resolution is enough to make a pretty smooth looking sphere.

How do you wrap a square or rectangular texture on a sphere?  How do you take the 11000 or so vertices and generate texture coordinates for each vertex such that the result is that the rectangular texture is drawn correctly on the sphere?

Well as it turns out we already do the reverse whenever we make a map.  Think about how we define locations on planet earth.  We use latitude and longitude right?  That’s a 2-dimensional description of what is in reality a 3D point.  The reason we don’t include the 3rd dimension is because it’s fixed – that dimension is, give or take sea-level, the radius of the earth.

A little bit of googling will give you the formulas you need to convert a 3D point that exists on the surface of a sphere into what amounts to latitude and longitude coordinates (often called polar coordinates because they are actually calculated based on angles).  In fact, angles are what you’re going to get.  You’ll get the angle in the horizontal (XZ) plane and the angle in the vertical (XY) plane, and those combined with a fixed radius define a sphere.  It involves some arctangents and then a bit of scaling to convert radians into texture space (0 to 1, remember?).
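
Here’s roughly what that conversion looks like in C# for a unit sphere centered at the origin – a sketch based on the standard formulas, using XNA’s vector types (your axis conventions may differ):

        // Maps a point on a unit sphere (centered at the origin) to a texture coordinate.
        public static Vector2 ToTextureCoordinate(Vector3 p)
        {
            // Angle around the vertical axis (longitude), measured in the XZ plane: -pi..pi.
            var longitude = Math.Atan2(p.Z, p.X);

            // Angle above or below the equator (latitude): -pi/2..pi/2.
            var latitude = Math.Asin(p.Y);

            // Scale both angles into texture space (0..1).
            var u = (float)(longitude / (2.0 * Math.PI) + 0.5);
            var v = (float)(0.5 - latitude / Math.PI);

            return new Vector2(u, v);
        }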

But that’s not the problem you’re going to face.  It’s pretty easy to compute the texture coordinates.  What isn’t easy is fixing the result, which contains an ugly beast known colloquially as the “seam problem”:

 What the hell is that, and why is it there?

That is the seam of the texture.  But why does it exist?

That’s a good question.  Before I started googling furiously, I thought about some facts about the seam.  The fact that I used the earth as a texture helped me here because I can recognize the approximate longitude and latitude of any given point on the sphere.  This seam – the only seam – is appearing in the middle of the Pacific Ocean.  What latitudinal or longitudinal feature exists there?  The international date line, out at longitude 180.

Ah.  A clue.  The seam also coincidentally occurs where I would expect the texture to wrap around itself – where its horizontal edges would meet.  In texture coordinate space, this means where 0 and 1f meet.

At this point I have a pretty good idea that this is a boundary case.  But what could go wrong?

The distorted texture along the seam indicates to me that something is going wrong with the texture coordinates being picked by the triangles occupying the seam.  Since this is a boundary case scenario, the most obvious answer is that the triangles along the seam do not fit cleanly inside the texture.  What if two of the vertices lie close to the edge of the texture and the other goes off of it?  Since texture coordinates are constrained to be between 0 and 1, this would result in wrapping, and the texture mapper probably does not like it when vertices supplied in clockwise order do not represent a clockwise texture map result – which would be the case if a value that should be greater than 1.0 is wrapped around to be less than the other vertices’ values.

To test this hypothesis, I set out to identify the triangles where this might be the case.

As with all geometry I create in XNA, I create a basic primitive class that supplies the Vector3s which identify the vertices.  Then, for specific drawable components (like a textured sphere), I build an array of IVertexType based on those vectors.  In this case, I have a method which takes an array of Vector3 representing the triangle list of a frequency-3 geodesic sphere and creates an instance of VertexPositionTexture for each Vector3.  I calculate the texture coordinates using the standard polar method.  I process my vectors in groups of 3 so I can think of myself as working with a triangle rather than an individual isolated point, because what I’m really looking for are triangles that cross the seam.

There wasn’t any guarantee starting off that any of my geodesic sphere’s many triangles would have vertices that correspond to an exact 1.0 texture coordinate, but I thought that would be a good place to start.  I started by looking at the first vertex in each triangle and checking its horizontal texture coordinate to see if it was 1.0.  This is actually sufficient because of another fact of XNA – namely, that polygons, at least when they are in lists rather than strips, must always be specified in clockwise order.  What that means is that if the first vertex has a texture coordinate of 1.0, then the next one must have a higher texture coordinate.  In reality, it will be lower because of wrapping – and that’s exactly the problem.  So, I identified those triangles and simply chose not to draw them by eliminating them from the vertex buffer – by continuing in my loop before returning the vectors that comprise that triangle.  (FYI, I am using the yield return mechanic in C#.  I’m a huge fan.)
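
Here’s the general shape of that filtering loop (a simplified sketch – CreateTextureCoordinate stands in for the polar math described above):

        // Walks the triangle list three vertices at a time and skips any triangle
        // whose first vertex sits on the U = 1.0 edge of the texture.
        public IEnumerable<VertexPositionTexture> BuildVertices(Vector3[] triangleList)
        {
            for (int i = 0; i < triangleList.Length; i += 3)
            {
                var a = new VertexPositionTexture(triangleList[i], CreateTextureCoordinate(triangleList[i]));
                var b = new VertexPositionTexture(triangleList[i + 1], CreateTextureCoordinate(triangleList[i + 1]));
                var c = new VertexPositionTexture(triangleList[i + 2], CreateTextureCoordinate(triangleList[i + 2]));

                if (a.TextureCoordinate.X == 1.0f)
                    continue; // cull this triangle: none of its vertices get yielded

                yield return a;
                yield return b;
                yield return c;
            }
        }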

Here’s the way the seam looks when I eliminate those triangles:

Ah.  Good, I am eliminating triangles where this is the case.

The middle looks like a zipper, where alternating triangles are still showing.  What must I know about these triangles?

First, I had to figure out which one of the vertices in the triangle was actually first.  It was obviously one of the two vertices on the left (remember, my coordinate system is moving left to right here, also known as west to east).  It could be the bottom vertex – moving clockwise we’d come to the top vertex and that has the same value – 1.0f.   So I put a break point there and compared the Y values of the two points in question.  I’m calling my triangle’s vertices A,B,C (clockwise in that order).  Texture coordinates are usually called (U,V) to distinguish them from world coordinates (which are called X,Y,Z).  So whichever of my two triangle verts – A or B – had a higher V value was the top vertex.

Except I was initially wrong here, and my debugging proved it.  If A were the top vertex then moving clockwise we would expect B to have a different U value, but it didn’t.  Why not?  A has a higher Y value, but not a higher V value because texture coordinates start in the upper left corner of a texture, so while U increases left to right as we expect for normal cartesian coordinates, V decreases from bottom to top, starting at 1.0 and ending at 0.0.  So, A is not the top vertex of each triangle but rather the bottom one after all.  This is useful to know.

As I thought about it more I realized that knowing the vertex order of the triangles I was already culling from the seam wasn’t going to help me because I didn’t know what the order of the remaining triangles was.  I just needed to do a little trial and error.  So, I started checking vertex B’s texture coordinates for a value of 1.0f.  This is the result of also culling these triangles:

More of the seam was now gone, but not all of it.  So, I added checks for vertex C as well.

As it turned out, I had to do every combination of checks to get the seam entirely culled.  The check is simple: I just look for a texture coordinate at 1f and then look for another texture coordinate in the triangle that is close to 0.  (Depending on how finely subdivided your sphere is, the threshold for “close to 0” will differ; in my case, anything under 0.5f works to cull the triangles.)

The result is this:

At this point I have successfully removed the distorted seam from my sphere; unfortunately, this now looks even worse than the distorted seam.  It reminds me vaguely of the scene from The Langoliers when they start eating giant black holes through the terrain.

The reason that I removed these triangles wasn’t because I wanted to remove them, but rather that I needed to identify the seam triangles that weren’t being rendered correctly, and the easiest way to do that in graphics programming is to cause them to change their appearance somehow on the screen so they stand out.  Rendering them as black holes (by not rendering them at all, as a matter of fact) is an easy way to do that.

Now that we’ve made some headway, how do I solve the problem?

Well, there are a few ways to do it.  My initial thought was, okay, make sure that no triangles cross this boundary, but the problem with that is I have vertices that map, in UV coordinates, to exactly 1.0f U.  This is in fact a good thing, because the only way to ensure that no triangles cross this boundary would be to have vertices land exactly on the border.  The triangles on the left of the border will work nicely, but that same vertex is repeated in the triangles on the right of the border that connect to it, which is not a problem per se, except that you need to ensure that the vertex with a U value of 1.0f is the last vertex in order on those triangles.

Ultimately it seemed like it was way too much trouble, so I thought about alternatives.  One of those is recognizing the fact that if the texture wants to wrap, why not just let it?  If I have a texture coordinate of 1.0f U, why not just set it to 0.0f U for the triangles where this is a problem?  I will be off by at most 1 texture pixel (called a “texel” for those in the loop).  I can live with that.  So I tried it.  I started with the first set of triangles I culled, where A and B both have a U coordinate of 1.0f and C has a coordinate very close to zero.  I set A and B’s U to 0.0 and gave it a shot:

That looks pretty good, right?  No weird seam – at least on the triangles that I’ve drawn so far.

Unfortunately, as I tried to continue with this strategy I was unable to eliminate all of the weirdness.  I decided to switch to a different texture – a simple one with one half red, one half blue, so the seam would be more obvious.  The Pacific Ocean leaves something to be desired.

Much to my horror, this is what I saw:

The fact that the edges of my missing triangles are not lining up with the color boundary is a problem.

I realized doing simple value checks wasn’t going to cut it.  What I really needed to figure out was whether the triangle’s U values were wrapping or not.  So I got out a scratch pad – actually, the back of one of the sheets from my “Stupidest things ever said” desk calendar – and started drawing some points and edges.

I realized that what I was really looking for were texture-mapped triangles that were not in clockwise order.  This explains the seam perfectly.  Not only is it guaranteed to happen at the seam, but it is very likely that the GPU balks at a bad texture triangle (i.e., not clockwise) in much the same way it deals with a bad position triangle that isn’t clockwise – it doesn’t so much choke as cull it.

I had to do a little bit of Google sleuthing and write a function that returns something called the “signed area of a simple polygon.”  Triangles are the simplest possible polygons.  I implemented a signed area function, which is basically the sum of 2D cross products, and then ran it against my texture triangles.  If a texture triangle’s signed area came up negative, it means it’s counterclockwise, which means it crosses the texture boundary.  When I cull those triangles, this is what I see:
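
The signed area function itself is tiny (a sketch – which sign means “clockwise” depends on whether your V axis points up or down, so verify the convention against triangles you know are good):

        // Signed area of a 2D triangle via the cross product of its edge vectors.
        public static float SignedArea(Vector2 a, Vector2 b, Vector2 c)
        {
            return 0.5f * ((b.X - a.X) * (c.Y - a.Y) - (c.X - a.X) * (b.Y - a.Y));
        }

        // usage: flag any texture triangle whose winding comes out wrong
        // bool crossesSeam = SignedArea(a.TextureCoordinate, b.TextureCoordinate, c.TextureCoordinate) < 0f;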

At this point, it’s important to realize why this isn’t working like it’s supposed to.  Obviously textures can wrap in 3D rendering systems; it happens all the time.  Why aren’t these textures wrapping properly?!

The reason is that if you want texture wrapping to work, you still need to specify clockwise texture triangles.  Why aren’t these triangles clockwise?  It’s not the “texture” boundary it’s crossing.  Remember that we are calculating each vertex as a polar coordinate, and the polar coordinates are not wrapping, and they shouldn’t.

So the situation comes down to this: when a texture triangle ends up counterclockwise, we have to add 1.0f to the offending vertex’s U coordinate and enable wrapping.  That should do it.

There’s only one catch.  Texture wrapping doesn’t work in DX10 and therefore it doesn’t work in XNA.  Specifically, specifying a value of more than 1.0 for a texture coordinate will not wrap as you expect, and it isn’t as simple as just doing the wrapping yourself because you end up in the original situation – a counterclockwise texture triangle.  The recommended solution to this problem for DX10 is to create a vertex where the texture coordinate would be 1.0 and create multiple faces so that you can texture them properly without wrapping.

The only problem with that is a) it’s hard and b) the result is no longer a regular geodesic dome – the vertices are not uniform across the face because you have to subdivide any triangle that crosses the seam in a bizarre way.  Imagine a triangle being bisected so that the result is a triangle – the tip – and a trapezoid – the base.  And then the trapezoid needs to be turned into triangles itself, because a trapezoid is not a triangle.

At this point, I did what you sometimes have to do in this business: I gave up. 

My objective was to add a textured sphere primitive into my XNA toolbox and since I had already created a geodesic dome I figured I’d just texture map it and be done.

There’s a significantly easier way to create a texture mapped sphere, and that is to start with the texture coordinates themselves.  The process goes like this:

  • Decide how many meridians you want (call it the longitude step) and how many tropics you want (call it the latitude step).
  • Iterate latitude from 0 to 1.0 inclusive, increasing by the latitude step.  For each latitude, iterate longitude from 0 to 1.0 inclusive, increasing by the longitude step.
  • Each combination of latitude/longitude is a texture coordinate – your u value is your longitude and your v value is your latitude.
  • To turn that into a 3D point, you just need some simple trigonometry: scale your longitude from 0–1 to 0–2pi radians, scale your latitude from 0–1 to 0–pi radians, take some sines and cosines, and multiply those by the radius of your sphere (which should be 1.0, because all primitives should be unit primitives centered around 0,0,0).
  • After that you just need to do a little creative looping to create triangle strips (or lists, which are far, far easier) out of these points, and you’re done.
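
Here’s a condensed sketch of the vertex-generation half of that process (triangle-list assembly from the resulting grid is left out, and stacks/slices are whatever resolution you want):

        // Generates the vertex grid for a unit sphere centered at the origin.
        // stacks = number of latitude divisions, slices = number of longitude divisions.
        public static IEnumerable<VertexPositionTexture> CreateSphereVertices(int stacks, int slices)
        {
            for (int i = 0; i <= stacks; i++)
            {
                float v = i / (float)stacks;               // texture V, 0..1
                double latitude = v * Math.PI;             // 0..pi, north pole to south pole

                for (int j = 0; j <= slices; j++)
                {
                    float u = j / (float)slices;           // texture U, 0..1
                    double longitude = u * 2.0 * Math.PI;  // 0..2pi

                    var position = new Vector3(
                        (float)(Math.Sin(latitude) * Math.Cos(longitude)),
                        (float)Math.Cos(latitude),
                        (float)(Math.Sin(latitude) * Math.Sin(longitude)));

                    yield return new VertexPositionTexture(position, new Vector2(u, v));
                }
            }
        }

Because the u = 0 and u = 1 columns are separate vertices that happen to share a position, no triangle’s texture coordinates ever wrap, and the seam problem simply never comes up.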

I was familiar with this technique from previous 3D graphics work so it was just a matter of implementing it.  It took me about 3 hours under heavy distraction from the wife, the baby, and the TV to get right.

The lesson today is that sometimes you have to go back to the drawing board, and that’s okay.  But it illustrates one of the reasons why game programming is hard.  I just assumed that texture coordinates wrapped the way they had in previous DX editions, so I spent a long time going down a rabbit hole that a more experienced game developer has already gone down and knows not to go down again.  If you’re starting to play with 3D graphics, XNA, DX, OpenGL, whatever – expect to waste a lot – and I mean a lot – of time learning things the hard way.  It’s going to happen.

Just remember, there’s always a way out of the rabbit hole, and often times it’s back the way you came.

The first challenge that you will need to overcome when you begin programming a game is as the title suggests: how to think about the software you’re writing.

If you’ve done any kind of programming before it was probably some kind of LOB application, like a website or some winforms or maybe some scripts.  That kind of programming follows a specific kind of model that unfortunately is only partially applicable to game programming.  First, let’s talk about what you know.

In your typical web or winforms application, your software starts by running some kind of initialization code.  It sets up the UI, loads configuration, puts some things into memory.  If it’s a web application, your UI is scripted with HTML and rendered by the browser software.  If it’s a winforms application, your UI is still “scripted” with controls you’ve set up at design time and your operating system (or the runtime, or a combination thereof) renders them.  From that point forward, your application will generally only do things in response to interactions initiated by the user.  Your application responds to events.  You, the smart application developer, take good care not to stall the UI with CPU-intensive things in the main UI thread – when you need to harness the CPU, you fire off another thread in the background.

The point is that application performance is practically an afterthought in most modern programming because frankly most tasks that a user wants software to automate just aren’t that hard to do for modern hardware.  As a result, most programmers – including me, I admit – have gotten lazy about it.  Thanks to my formal CS education I am aware of concepts like algorithmic complexity and I try to use linear (or logarithmic) algorithms over polynomial ones, but honestly, if I can write an n-squared algorithm in 10 minutes versus a linear one in an hour, I’ll use the n-squared one a lot of the time.

When you are writing a game, performance is the first consideration you need to make, all the time.

This is a big paradigm shift for most programmers because they’re used to getting so much for free – namely, the actual application display.  Web developers are by far the laziest in this respect because they don’t even need to worry about things like out-of-thread processing on event handlers because all of their processing is happening on the server  and is totally divorced from client display (scripts don’t count).

In the bad old days of C++ Windows applications, we programmers needed to write something called “the main loop,” sometimes called “the message pump.”  The message pump was basically an infinite loop that continually probed the operating system for kernel-level events related to the window created by the application – mostly things like “the user clicked the X widget”.  The details aren’t important, but the fact that your program at its core was an infinite loop is an important one, because nothing’s changed.  WinForms applications work this way too; .NET just hides it from you and does all the message pump handling automatically at the framework level.  And it’s really efficient.  The CPU overhead for all of the stuff .NET does for you, like actually drawing your controls on the screen every time the screen refreshes, is tiny.

When you are doing game programming, you don’t get this for free.  Here’s what XNA gives you:

  • An Update method that executes essentially as fast as possible
  • A Draw method that executes as fast as your monitor’s vertical sync rate refreshes the screen
  • A handful of initialization methods that are executed at startup
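
Concretely, a freshly templated project boils down to something like this (a trimmed-down sketch of the Game subclass the template generates):

        public class MyGame : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;

            public MyGame()
            {
                graphics = new GraphicsDeviceManager(this);
            }

            protected override void Update(GameTime gameTime)
            {
                // All game logic -- input, physics, AI -- runs here, every single tick.
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                // Nothing persists between frames: clear the screen, then redraw everything.
                GraphicsDevice.Clear(Color.CornflowerBlue);
                base.Draw(gameTime);
            }
        }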

I was a little bit surprised when I created a project from the XNA template and got a new subclass of Game and saw these methods – surprised because while they give the most possible freedom, they are also a bit overwhelming.  Without proper care, Update and Draw give you enough rope to hang yourself.

Unlike typical app development where your application only really needs to do things in response to what the user tells you to do, when you’re programming a game, you have to be aware that your game is doing something all the time.  XNA is constantly calling these methods for you in the game’s main loop.

That’s important.  Web and forms UI developers don’t think about their presentation in terms of frames.  When you want to show something on the screen you create a dynamic control and add it to your layout; when you want to hide something, you remove it from your layout; your framework (browser/forms) takes care of it from there.

Game programmers have to remember that nothing appears on the screen unless you draw it every single frame.  This means that your Draw() method assumes the screen is empty because it is – you will be clearing the screen at the start of every frame.  This also means that every line of code that you write in Draw(), or in a method that is called in Draw(), will be executed (optimally) 60 or more times per second.  That’s a lot of times per second.  If your drawing code cannot finish in 1/60th of a second, your framerate starts to drop.

When you’re just starting out, your framerate, even on bad hardware, should never drop below your vsync rate, which is probably going to be 60 – so 60fps.  As you begin working with XNA, the first thing you should do is add a framerate counter to your application.  Use this one:

http://blogs.msdn.com/b/shawnhar/archive/2007/06/08/displaying-the-framerate.aspx

The reason that I recommend this is because the only way you’re going to wind up with something playable in the end is if you remain vigilant on drawing performance.  Any time you add anything to your game, whether it’s a new drawing technique or a new game logic function, if doing so causes your framerate to drop below 60, you’re doing it wrong.  I say this because it’s 2010.  You’re not doing real-time ray tracing.  You’re not doing Doom 3 ultra quality.  You’re not a Final Fantasy intro video.  You’re doing a hobby game for fun.  If you can’t render it in 60 frames per second, it means you’re making a systemic error in something you’re doing in code.  It doesn’t mean that the GPU can’t handle it.  (Unless, of course, you choose to load your screen with thousands and thousands of polygons.  If you want to tank hardware, you can tank hardware.)

Next time: XNA rookie mistakes

I’ve been working with XNA for a couple of weeks in a hobbyist capacity as a break in the monotony of business programming.  I like talking and writing about programming so I thought I would start a series detailing the challenges that I’ve faced (and hopefully how I overcame them).

First, before we begin, I want to state that I am a professional software engineer with about 10 years of software development experience behind me.  When I got my degree in CS, they were still teaching students C++.  For the last 4 years I’ve done almost nothing but C# .NET development.  I work for an ISV doing product development.  I also took a lot of math in college (linear algebra, a course called “math for computer graphics”, and the CS department’s CG course).  I’ve played with Direct3D and OpenGL before, but not to any great extent.

I feel that’s important to mention because as I stated in my discussion of the obstacles the amateur game programmer faces, game programming doesn’t necessarily lend itself to amateur programmers because even professionals like me find it challenging.  That isn’t to say “if I can’t do it, nobody can!”  More like, don’t be discouraged.  If you find this hard, well, you’re not alone.

So, let’s begin!

First off  – why XNA?

I got interested again in finally churning out an amateur video game when XNA 4.0 came out, mostly because the idea of a (mostly) singular code base that could produce a game playable on a PC or on console hardware was highly intriguing.  Managed Direct3D has been around for a while, so graphics programming in .NET is not a revolutionary step – but an API designed for .NET from the ground up is.  Managed D3D struck me as lipstick on a pig.

I work on a product that got its start in the mid-90’s as a C++ core with an MFC UI.  Around 2000, we launched an ASP.NET front-end.  To do it, we had to write a lot of .NET 1.1 code.  To expedite the process we did a lot of copy and paste and a lot of syntax find-and-replace.  Because C# is a successor to C++ we got away with it, and to be fair, .NET 1.1 was barely better than C++.  It gave us garbage collection and a non-generic STL – remember, generics were introduced to .NET in C# 2.0 – so a syntax port was about as good as we could do.

These days, there’s a stark difference between C++ and C#.  I’ve gotten so used to the features of modern C# – specifically LINQ – that writing C++ is like building a house without power tools.  Whereas with .NET 1.1 code you could pretty much cross-port C++ into C# and C# back into C++ with barely more than syntax touchups, modern .NET code is a different beast.

The reason that I share this campfire story is because that is essentially what managed DirectX/3D feels like.  I’m not going to talk about what it actually is or isn’t because I barely went further than the guided tour before I got tired of writing C++ code in C#.

It seems like Microsoft agreed with me because XNA is what managed DX should have been – a modern .NET API for graphics programming that exploits (some of) .NET’s platform evolution over the C++ dinosaur.

Finally, 4.0 grabs me where 3.0 didn’t, mostly because after my experience with .NET 1.1 vs. .NET 2.0, and my own experience with developing APIs (which I’ve been doing for the last 2 years since I write mostly platform code), I figured that Microsoft’s first stab at XNA would need work.  As an aside, one of the challenges of product development – probably the biggest challenge – is getting customers to bite on version 1.0.  There are always brave souls and sysadmins who will install betas and initial releases, but the guys with the money usually wait for a second release so the vendor has time to fix all the bugs first.  Microsoft may be one of the world’s biggest software vendors but their code is still written by people and nobody on the planet is immune to the 1.0 rule.

So, the journey begins…

Next: First steps

Every single person who has ever played a video game has also come up with a great idea for their own game. Usually it’s a clone of something else, because let’s face it, by now virtually every major “style” of game that will probably be created has been created (puzzle, RTS, FPS, adventure, MMO, open-world, sidescroller, RPG, etc..). But still, style is only the first decision in a long list of decisions that create a kick-ass game.

Anyone with programming experience (and more often those who don’t) has probably, at one time or another, dabbled with the idea of sitting down and churning out a kick ass game in their spare time.

Here are the reasons that it actually happens only about one time in a million.

1.  Game programming is hard.

There’s no other way to say it.  Game programming is some of the most challenging programming there is, and it’s not for the faint of heart.  The problem is that nobody realizes this until they start.  If I had a nickel for every time somebody thought to themselves, “I want to make a video game, so I guess I will have to code.  Let me learn a programming language so I can do it!” and then quit 2 weeks in, I’d be a millionaire.

It isn’t even remotely enough to “learn a language” to get involved in any kind of game programming.  I am a senior-level professional software developer with a B.S. in Computer Science and several enterprise platform applications on the market.  My code has generated literally millions of dollars in revenue for my company over the last few years.  If I think game programming is hard, what should you think, having never written a line of anything in your life?  You’re going to do a couple of Hello World tutorials in C++ and ship a game, huh?!  Yeah right.

I know that going in, of course, but most people who decide to start creating their game don’t.

2.  Game programming is hard.  No, really, it is.

I really can’t stress this enough.  Let me explain why it’s hard.

First, you need to master graphics programming – and these days, the vast majority of people with video game ideas imagine their games in 3D.  2D graphics programming is not easy.  3D graphics programming is extremely intimidating especially for people who don’t understand programming in the first place.  The typical aspirant will read the warnings – and everyone who writes on this topic gives them, including me, because I’m about to do it – that you need to understand vector and matrix math to have a prayer of doing any kind of 3D programming.  The typical aspirant immediately brushes this off thinking, “how hard can it possibly be?” and then downloads a couple of tutorials online that breeze over the material.  They compile and run and think, “gee, this is easy!”  Then they try to do something as basic as move the camera in the plane orthogonal to the LookAt vector and are stumped, particularly because the typical aspirant doesn’t quite understand the concept of a plane in 3D space let alone what “orthogonal” means.
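
For the record, that particular camera move is only a couple of lines once you know the vector math – which is exactly the point.  A sketch using XNA’s Vector3 (cameraPosition, lookAt, and strafeSpeed are hypothetical variables your camera code would already be tracking):

        // Slide the camera sideways, staying in the plane orthogonal to the look direction.
        Vector3 forward = Vector3.Normalize(lookAt - cameraPosition);
        Vector3 right = Vector3.Normalize(Vector3.Cross(forward, Vector3.Up));
        cameraPosition += right * strafeSpeed;
        lookAt += right * strafeSpeed;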

I took senior level courses in college including such titles as MATH240: Linear Algebra, MATH434: Math for Computer Graphics, and CS427: Computer Graphics.  Even after all of that, I still find it hard.

But that’s just graphics, and when most people think about graphics, they think about world rendering.  They entirely forget that they’ll have to do all of the UI work as well, and UIs are a pain in the ass to create even when you’re using well-established APIs like WinForms or WPF that have already implemented all of the drawing, all of the event handling, and provide you with convenient classes like “button” and do things for you like “scrollbars.”  Guess what?  Unless you buy a UI framework for DirectX or OpenGL or XNA or whatever technology you want to use – and the typical hobbyist is not going to be able to afford the several hundred dollars these typically go for on a project they may or may not continue – you’re going to have to write all of that crap by hand.  That means drawing it, that means trapping mouse events, that means creating an object model to work with behind the scenes so you can respond to UI commands, etc.  Just think for a minute how you’d create, for example, a modal dialog in a DirectX game, or create a menu window that you can scroll.  Now make sure it scales correctly with different viewport sizes, resolutions, full screen… it’s mind boggling.

Once you’ve gotten all of that work done, you can actually start focusing on the game logic.  AI is not trivial.  Pathing is not trivial.  State management is not simple.  Designing a data structure for a game save is not trivial.  Combining all of that with your graphics layer so you can actually display “things” that are happening in your game program space is not trivial.  Even APIs that are designed to help you “make games,” like XNA, offer virtually nothing in this area.  The XNA “game” gives you a couple of things for free, namely a class with some virtual methods like “update” and “draw,” but beyond that it offers very little guidance.

Even if the enthusiast overcomes all of the mighty challenges faced in rendering things that look OK, if he doesn’t have a solid background in good app design he’s going to come up with a monolithic piece of crap with 6,000 line methods and spaghetti code everywhere.  That’s no good.

It’s virtually guaranteed that you are going to need to implement at least one of the classic algorithms of computer science like a shortest path, or a minimum spanning tree, because graph programming is going to come up in your game – there’s almost no way around it.  I’ve had to crack open my CLRS tome on multiple occasions.  If you don’t know what CLRS is and don’t have a copy, I hope you’re good at googling.  You’ve got a lot of learning to do!

Oh, you want your game to be multiplayer?  Xbox Live, you say?  So now you’re going to become an expert in network programming too.  Have fun learning threading and understanding semaphores.  I’ve been doing it for 10 years and I still have a hard time understanding it because the human brain is not designed to think in parallel the same way a computer does.  Critical sections and thread-safe objects are tricky stuff for the veteran programmer let alone the beginner.

3.  Game programming is really hard.

Because of all of the above, most would-be game programmers give up once they get in water slightly deeper than what is covered by the free online code samples that guys who write tutorials put online.  It’s difficult enough writing a LOB ASP.Net app that maybe has a database and a checkout system.  That’s easy by comparison to even minor aspects of game programming.  Coming up with the vertices of a geodesic dome challenged me mentally at least as much as anything I did for the enterprise software that I shipped last year which has now sold over 1,000,000 seats.

The bottom line is that even putting aside programming experience, most people are not smart enough to program games.  It’s a shame that so many kids these days were brought up with the notion that they can do anything if only they try their best.  I hate to break it to these kids, but in reality there are quite a few things that you can’t do, and if you’ve ever tried programming a 3D video game, you might have already discovered that indisputable fact.

4. Most people aim way, way too high.

One of the biggest problems with video game development is that people imagine that it’s a lot like playing games – i.e., fun.  The truth is that video game development is really fun, but only if you enjoy the act of building a video game for its own sake.  I write video game code in my spare time not because I want to make a video game per se, but rather because I enjoy programming.  I’m one of those fortunate guys who happens to also get paid to write code, but the truth is that the code I write at my day job is boring.  That brings me to an aside –

The video game industry sucks.  If you think that you like to program but you don’t want to write boring business apps and you’d rather program “what you love”, you should realize that you are not the only one who thinks programming video games is more fun than programming online store fronts for department stores.  In fact, since the comorbidity between video game enthusiasts and nerds who like to program is virtually 100%, you can rest assured that every single programmer has fantasized about writing games at one time or another.  The end result of this is that the supply of people who want to write “fun” code is about 10 times larger than the demand.  If you are smart enough to work in this field you should realize that working in a field where you are a dime a dozen is going to mean that you get treated like crap and turned over a lot.  When you add in the fact that 10% of games make 90% of the money, if you didn’t happen to land the World of Warcraft team or write the code for the Halo UI you’re pretty likely to get fired when that game (your “passion”) that you put 80 hours a week in on for 2 years ships and doesn’t sell.  Stay away.

But, back to the main point, you as an individual, or you as a small team, are not going to compete with the AAA titles who spend literally $50m on their titles and have an army of programmers, QA, artists, sound engineers, composers, and voice actors.  If you try to make a Halo clone or the next Modern Warfare or the next (LOL) World of Warcraft you’re going to fail miserably.  When you look at the “Indie” developers who have any success, their games are simple.  Take Braid or Super Meat Boy for example.  Both of those games are, at their heart, simple platformers whose graphics consist of some 2d sprites, but in each case, they came up with a new gimmick that makes it interesting – in the former, the concept of real-time rewinding (and integrating that into moving forward – really cool idea, actually), whereas in the latter, it’s just really super hard but they get you back into the game so fast you barely notice you died, so you can try that jump a hundred times in quick succession.  But what really makes that game is the fact that it records all your attempts and then replays them all at the same time so you get a screen filled with exploding meat.  Who ever would have thought those games would make money?  Which leads me to…

5. “If it’s not as good as an AAA title from EA, no one will play it, so why bother?”

One of the most discouraging facts faced by the hobbyist game developer is the knowledge that you will never be able to produce something as good as a major studio in any reasonable timeframe – it would take the average person 10 years of serious work to do everything that a team of 20-50 people can do in the typical game dev cycle.  By the time you’re done, you’re using a platform that has been retired for 5 years, because the tech moves so fast.

Faced with this, most people just give up, which is a shame.  Most of my favorite games were put out by tiny teams using relatively bad graphics – in some cases the games don’t even have graphics.  It’s true that the standards for video games these days are high, but it’s also true that oftentimes a unique idea, or even an elaboration on an existing idea, is more powerful than brute force, which is what a lot of publishers, particularly EA, rely on.

Any time I think this thought, I remind myself:

  • That I’m doing it because I like building it, not because I care how many people will fall in love with it and give me tons of money.  Hey, it would be nice, but that’s not the goal.
  • I am not an AAA title developer with a staff of 50.  Why on earth should I expect voice acting?  Isn’t it admirable enough that one guy is making the game?
  • Some of my favorite games were made by as few as 3 guys.
  • EA publishes Harry Potter: The Movie: The Video Game.

The last one is really the one that motivates me.  Yeah, sure, major game studios make a lot of awesome stuff, but they also churn out tons and tons of crappy titles every year.  Even without a high poly model of Emma Watson done by a professional 3D artist, I can make a better game than Harry Potter.  No matter how bad Harry Potter is, at least one person is going to buy it, and since I don’t have a staff or any expenses because I’m doing it for fun, if I had made Harry Potter I’d probably make enough money to put my kids through college.  If someone will buy Harry Potter the video game, someone is going to buy what I make.

6. You have to put in hundreds or thousands of hours of work before you see results.

It takes so much infrastructure code before you get anything even remotely “playable.”  Game development, like any development, lends itself to iterative development, but the first construction iteration – where you build your “engine”, and believe me, even if you use an open source graphics engine, you still need your own “engine” because you need to write code to put things on the screen – that takes a long time.  You will need to solve a lot of basic problems before you can get going on an actual game.

Most people don’t have the patience to go through the early construction phases.  Most people who fantasize about making video games imagine that all they really need to do is the actual game work.  In the industry, the game work is called “design” and is done by those guys who were hired by the company and immediately issued a giant silver spoon to take rectally.  These are the guys who usually don’t program or create art (although to be fair, most of them used to do one of those jobs before they won the game industry lottery and became designers).  They get to sit around all day and work on things like game “balance” and “lore” and “world” and all of those other fun things that most people imagine game development to be.  Don’t get me wrong – I guarantee Ghostcrawler over at Blizzard works overtime trying to balance World of Warcraft and that game owes a big part of its success to that effort and he deserves every penny and every stock option he gets, but my point is only that game “design” is the job everyone wants, both inside the industry and in the world of the hobbyist.  Very rarely does the amateur game developer stay awake at night fantasizing about all the neat code he’s going to get to write or all of the neat 3d models he’s going to have to steal from 3d art sites because he’s not a 3d artist and 3d art probably takes even more time than programming does.  No, he stays awake at night designing his game.

So, when he’s faced with the idea that he’s going to have to put in many, many, many hours of mentally exhausting grueling work before he can actually put to life any of the ideas that inspired him to start working on a game in the first place, 99 times out of 100, he gives up before he starts.

Like I said, if you don’t enjoy the actual building phases – namely doing the programming and the content creation for its own sake – you will never get beyond this hurdle.

7.  Most people have either programming skills or art skills – rarely both, often neither.

This is another daunting problem.  Most would-be game developers believe that anyone can learn how to program and that’s where they start, imagining that it’s easier to learn than 3d modeling.  Plus, programming tools are free and 3DS Max costs hundreds or thousands of dollars or whatever it is these days.  Even if the would-be developer learns how to write some C#/++ and makes a game engine, he’s probably using the graphics that came bundled with the tutorials he downloaded and there’s that nagging question that bites his ankles every time he writes a line of code:

Where the heck am I going to get art for this game (without stealing it from a copyrighted work?)

If you’re flying solo and have no art skills, this question is really tough to avoid.  It’s another big reason that people give up on their games.  If they can’t make the game look good, they don’t want to do it.  This is a particularly frustrating aspect since everybody can imagine what they want their game to look like, but only artists have the ability to translate their imagination into usable pixels.

The best advice I can give on this particular problem is this: start by stealing it.  If you can’t find royalty-free models, steal commercial models if you can and use them as placeholders.  If you are one of those people who has to be impressed by the way your work-in-progress looks in order to stay motivated, then make yourself impressed.  If and when that day comes where you are ready to publish your game Indie-style (phone, XBox Live, etc) and try to make some money, and you’re still using commercial models, I guarantee that you’ll be able to find a person or person(s) with 3D art skills who would love to help you put what really amounts to the finishing touches on the game in exchange for a cut of the proceeds, because it’s a lot harder to create a video game when 3D modeling is the only skill you have, because somebody has to code the damn thing.

When it comes down to it, the only time art really matters is when you’ve gotten far enough to where you’re seriously considering a release.  And hey, if you’re at the point where your hobby game is going to be released, you’ve already succeeded.

8.  Daydreaming is way more fun than doing.

Ultimately, it usually comes down to the fact that most aspiring game developers enjoy fantasizing about their ideal game but don’t actually enjoy the process of making one, because, as anyone who works in the game industry will tell you, building a game is a lot different from playing one – or imagining one.  Games don’t get made because making games isn’t fun for most people.

This, my friends, is what it takes to make a game.  Very few people have what it takes, because very few people look like this IRL:

… and that’s what it takes.

My sister gave me one of those peel-off desktop calendars for Christmas, the theme of which was “Stupidest Things Ever Said” or something like that.

February 8’s title was “On Microsoft Programming, Typically Top-Notch”.  The quote was:

Vista Error 10107: A system call that should never fail has failed.

Hardy-har-har.  As someone who writes a lot of error messages, I don’t find that message to be stupid.  I’ve written dozens of messages just like that, typically in exception scenarios that can only occur when something goes catastrophically wrong, for example, the computer returns 5 when evaluating 2+2.  Which do you think is a better message:

Your system is catastrophically broken.  Go buy a new one immediately!

Or what about:

2+2 does not equal 5.  Please call technical support.

Personally, I prefer the former.  Look, if it’s never going to happen anyway, why not have some fun with it?

I think it’s a testament to programmers to begin with that we even bother writing exception messages in API level code – like, for example, the Windows Vista kernel.  An API level exception message should never be seen by a user, because an API level exception should be properly caught and handled by the application developer.  Of course they usually aren’t, because the typical application developer doesn’t bother reading the IntelliSense provided by the XML commentary that the good API developer took the time to write about the exceptions that should be handled, and the typical application developer certainly doesn’t bother try/catching those exceptions.  Even the exceptional application developer who is careful about handling API exceptions is going to miss something somewhere – particularly an exception that the API developer may not have even documented because it can only happen if there’s a serious hardware level malfunction, as with a kernel level method, for example.

I spend most of my time writing APIs, since most of the development I do is system and platform programming – you know, the stuff that has to be as close to foolproof as possible so that the idiot application developers have only themselves to blame.  I am very fastidious about exceptions.  When I write unit tests for my code, I include tests to ensure the right exceptions are being thrown where I expect them.  I wrote this handy method that I use in my testing for this purpose:

         
        // Asserts that the given action throws TException (or a subclass of it).
        // Uses the MSTest assertion types (Microsoft.VisualStudio.TestTools.UnitTesting)
        // plus System.Globalization for CultureInfo.
        public static void ExpectException<TException>(Action action, string reason = "")
            where TException : Exception
        {
            try
            {
                action();

                // The action returned normally, so the expected exception never surfaced.
                var message = "Expecting exception " + typeof(TException).Name;
                if (string.IsNullOrWhiteSpace(reason) == false)
                {
                    message += ", reason: " + reason;
                }
                Assert.Fail(message);
            }
            catch (AssertFailedException)
            {
                // Let MSTest's own failure/inconclusive signals bubble up untouched.
                throw;
            }
            catch (AssertInconclusiveException)
            {
                throw;
            }
            catch (Exception ex)
            {
                // Accept the expected type or anything derived from it.
                if (typeof(TException).IsAssignableFrom(ex.GetType()) == false)
                {
                    var message = string.Format(CultureInfo.InvariantCulture,
                        "Wrong exception.  Expected {0}, got {1}: {2}",
                        typeof(TException).Name,
                        ex.GetType().Name,
                        ex.GetFullExceptionMessage()); // my own extension method, defined elsewhere
                    Assert.Fail(message);
                }
            }
        }
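
For what it’s worth, a typical call looks something like this – Frobnicate and MyApi are made-up names, used purely for illustration:

// Hypothetical usage in an MSTest test.  Frobnicate(string) is a made-up
// API method that is supposed to reject null input.
[TestMethod]
public void Frobnicate_NullInput_Throws()
{
    ExpectException<ArgumentNullException>(
        () => MyApi.Frobnicate(null),
        "null input must be rejected");
}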

I don’t write user-friendly exception messages.  I write programmer-friendly exception messages.  If I am raising an exception, it’s because the programmer calling my code did something wrong.  99% of the exceptions I throw are one of the following:

  • ArgumentNullException
  • ArgumentException
  • InvalidOperationException

The first two are obvious, and the last one is thrown when an argument isn’t bad but the state of the object prevents the method from executing correctly, for example, calling the server delete method on an object that hasn’t actually been saved to the server yet.

While the argument exceptions could theoretically occur if the app dev is lazy about input checking and just throws whatever the user types straight at an API method, InvalidOperationExceptions are typically only thrown when the programmer made a stupid mistake, and my messages generally reflect that, e.g.:

 You cannot call Delete() before this item has actually been saved.

If a user saw this message because the exception went unhandled, they’d be confused and probably curse at the developer for being a dumbass.  You know, like all programmers do when they see specific, common, obvious exception messages in modal dialogs from poorly written applications, such as the famous:

Object reference not set to an instance of an object.

That’s the mother of dumb errors.  If an app dev – or god forbid, a systems dev – lets one of those through, that app dev needs a crash course in unit testing.
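
To put that in concrete terms, the guard clauses at the top of one of my API methods look roughly like this – a sketch only, with made-up names (Item, Id, IsSaved, Delete):

// A rough sketch of the guard-clause style described above.  Item, Id,
// IsSaved and Delete are hypothetical names, used only for illustration.
public void Delete(Item item)
{
    if (item == null)
        throw new ArgumentNullException("item");

    if (item.Id <= 0)
        throw new ArgumentException("The item has an invalid id.", "item");

    if (item.IsSaved == false)
        throw new InvalidOperationException(
            "You cannot call Delete() before this item has actually been saved.");

    // ... the actual delete logic goes here ...
}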

I use Code Analysis on all of my code – again, because I write platform code, it has to be extremely tight.  And Code Analysis is very picky about some things, for example, using return values.  A lot of times the return value is useless and you don’t care – take ICollection<T>.Remove, for example.  If I know definitively that the object is in the collection, why do I need to check the return value of that method?  Furthermore, what on earth am I supposed to do with that value?  This?

var result = myCollection.Remove(myValue);
if (result == false)
    throw new InvalidOperationException("Hell hath frozen over.");

This is the kind of error message that would make it onto the “Stupidest Things Ever Said” calendar, but it isn’t stupid.  It’s there to satisfy Code Analysis.

This is also an interesting example of another problem: code coverage.  I use code coverage to make sure that every line of code is touched by a unit test.  While this does not ensure correctness since code coverage does not guarantee good state coverage, it’s a big start.  I like to see 100% come back from block analysis, but this particular block poses a serious problem, because this situation is so rare that I can’t even fake it.  Suppose in the above example that the code immediately preceding that call to remove was a call to Contains() to ensure the existence of that object.  The only way that Remove would then return false is if a context switch occurred and some other thread came in and removed the object from the collection.  But don’t be silly; I use critical sections.  So that can’t happen.  It’s literally impossible to write a unit test to cover that block unless you cheat.

One way of cheating – the way I usually do it – is by adding extra code that is only activated by compiler switches.  I usually use #if DEBUG.  I define an internal field, usually a bool, that is only included in debug builds.  I then add another block inside the method that I’m testing that checks the value of that field.  If it’s been set (by the unit test code), then I do something that forces my error condition.  In this case, it might look like this:

// Class-level field; it only exists in debug builds:
#if DEBUG
internal bool HellHathFrozen = false;
#endif

// Inside the method under test: if the unit test has flipped the switch,
// pull the value out early so the real Remove call below returns false.
#if DEBUG
if (HellHathFrozen)
    myCollection.Remove(myValue);
#endif
var result = myCollection.Remove(myValue);
if (result == false)
    throw new InvalidOperationException("Hell hath frozen over.");
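
The matching unit test then just flips that switch before calling the method.  Something like this – MyCollectionOwner, Add and RemoveValue are made-up names for the sake of the example, and it assumes the test assembly can see the internal field (InternalsVisibleTo) and is running against a debug build:

// Sketch only.  MyCollectionOwner/Add/RemoveValue are hypothetical; the
// ExpectException helper is the one shown earlier in this post.
[TestMethod]
public void RemoveValue_WhenHellFreezesOver_ThrowsInvalidOperationException()
{
#if DEBUG
    var target = new MyCollectionOwner();
    target.Add(42);                  // the value really is in the collection
    target.HellHathFrozen = true;    // force the real Remove call to return false

    ExpectException<InvalidOperationException>(() => target.RemoveValue(42));
#endif
}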

But you see, it is possible for that error message to be exposed to users if, for example, you accidentally shipped a debug build of your assembly.  By the way, that’s the reason why you should never, ever, ever use Debug.Assert in your code.  Debug assemblies are bigger and a little slower; they should not significantly change the behavior of the application if they accidentally ship, and Debug.Assert will throw up giant ugly error messages to users if it’s used incorrectly.  In reality there’s no good reason to ever use it, so don’t.

Since the typical user doesn’t understand any of this and doesn’t have the faintest idea how software is built, it’s easy for them to laugh at error messages and think programmers must be stupid.  But if you’re a good programmer, you will raise exceptions even when the impossible has occurred, and when the impossible has occurred, what are you supposed to say?  What would be a good error message in that case?  Even if you could describe what impossible situation has occurred, what possible good would it do, since there’s nothing the user or the app dev could have done to avoid it?

/rant.  Have a nice day.

Maybe it’s because it’s Sunday afternoon and after coding all week at my day job my brain is pretty much spent by the time I get to my hobby coding.  Maybe it was that omelette from IHOP this morning.  I don’t know what it was, but for the life of me, this took me at least half an hour to figure out.

I’m a bright guy.  I took a number of senior level math courses in college.  I got an 800 on the GRE math section.  Yet this simple problem stumped me.

I’m not even going to go into details; I’m just going to give you the solution.  The only reason I’m posting this is because I couldn’t find the answer on Google, and when there’s something basic like this that I think would be important for the public to know and it isn’t the first hit on Google, I post it.

Here it is:

The red lines indicate how the hexagon is split into triangles.

For the unit hexagon (edge length 1) centered around (0,0), the vertex coordinates are:

0: (-0.5, -y)

1: (-1,0)

2: (-0.5, y)

3: (0.5, y)

4: (1,0)

5: (0.5, -y)

where y = 0.5 * sqrt(3) ≈ 0.866 (i.e. sin 60°)

To turn this into a triangle strip, assuming CCW culling (in other words, verts are specified in clockwise order), the verts are:

{ 1, 2, 0, 3, 5, 4 }
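
If you’d rather build the verts in code than type them in, here’s a quick C# sketch – it assumes an XNA/MonoGame-style Vector2 and 16-bit indices, so adjust the types for whatever framework you’re actually using:

// Sketch of the unit hexagon above.  Vertex numbering matches the list in
// this post; Vector2 here is the XNA/MonoGame type (any 2-float struct works).
float y = 0.5f * (float)Math.Sqrt(3.0);

Vector2[] verts =
{
    new Vector2(-0.5f, -y),   // 0
    new Vector2(-1.0f,  0f),  // 1
    new Vector2(-0.5f,  y),   // 2
    new Vector2( 0.5f,  y),   // 3
    new Vector2( 1.0f,  0f),  // 4
    new Vector2( 0.5f, -y),   // 5
};

// Triangle strip index order for clockwise-specified verts (CCW culling).
short[] stripIndices = { 1, 2, 0, 3, 5, 4 };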

Note that if you are trying to draw a hexagonal grid then you don’t want to do it this way.

If you’re doing a grid, then the most efficient way is going to be to draw rows of strips, where each row is only a portion of each hexagon.  You’re on your own for that.  There’s a lot more on Google about that problem than this one.

Enjoy your f’ing hexagon.