
In this series I am going to share with you some programming gems – brief, novel techniques that I’ve picked up over the years that I believe every programmer should know.

Today’s gem: How to Round Up a Float

This isn’t a particularly groundbreaking method – most veteran programmers probably know it – but for those of you who don’t, here’s the solution (at least, in most languages):

Add 0.5f to the value and cast it to an integer, then optionally cast it back to a float if you need a float.

Here’s the C# extension method that I include in my library project:

        // Round by adding 0.5f and truncating via a cast to int.
        public static float RoundUp(this float f)
        {
            var v = f + 0.5f;
            if (v > int.MaxValue || v < int.MinValue)
                throw new ArgumentException("This value is too large or small to round with this method.  Use decimal formatting instead.", "f");
            return (float)((int)(v));
        }

This works because if the original value’s fractional portion is less than 0.5f, the usual midpoint for rounding, then adding 0.5f will not push the ones digit up; if the fractional portion is 0.5f or greater, it will.  Casting a float to an integer always truncates the fractional portion regardless of its value, and casting back to a float leaves you with a whole number, mostly.  I say mostly because a float can’t represent every large integer exactly, since as we all know, floating point math is terrible.  One caveat: the cast truncates toward zero, so this reasoning really only holds for positive values; -2.7f, for example, comes back as -2 rather than -3.
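
A couple of quick examples, with the results I’d expect noted in the comments:

var a = 2.3f.RoundUp(); // 2
var b = 2.5f.RoundUp(); // 3
var c = 2.7f.RoundUp(); // 3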

The problem with the conversion method is that the range of a float is obviously much larger than that of an int, so it doesn’t work in every situation.  Here are the versions I use for other primitives:

        public static decimal RoundUp(this decimal d)
        {
            var dp = d + 0.5m;
            return Math.Truncate(dp);
        }
        public static double RoundUp(this double d)
        {
            var dp = d + 0.5d;
            return Math.Truncate(dp);
        }
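
Since these stay in their own types the whole way through, values well past the int range round without complaint.  A quick sanity check, with the results I’d expect in the comments:

var bigD = 5000000000.6d.RoundUp(); // 5000000001 (comfortably past int.MaxValue)
var bigM = 5000000000.2m.RoundUp(); // 5000000000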

This would be the ideal method for floating point numbers as well, but there is no Math.Truncate method overload that takes a float.  I haven’t written one myself simply because I have never found a need to round a float that was larger than 2bn or less than negative 2bn.  If I had a number that large, I would probably just use a double and be done with it.
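
If you ever did need one, it would only be a one-liner that leans on the double overload; something like this sketch (untested, since as I said, I’ve never needed it – and I’m calling it RoundUpViaDouble here just to keep it separate from the cast version above):

        public static float RoundUpViaDouble(this float f)
        {
            // Widen to double, truncate there, then narrow back down to float.
            return (float)Math.Truncate(f + 0.5d);
        }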

The reason I keep the first (flawed) method around is that it’s incredibly easy to code, conserves memory by staying in the 4 byte world, and handles 99.9% of coding situations.  Seriously, how often do you have a float – a float mind you, not a decimal – for a number larger than 2 billion?

Use it, love it.

Speaking of floating point math being terrible, here’s another little gem that I always have in my toolbox:

How to evade the floating point precision dilemma in 1 easy method:

        // Treats the two values as equal when they differ by less than the given margin.
        public static bool Equals(this float f, float otherValue, float margin)
        {
            return Math.Abs(f - otherValue) < margin;
        }
// use it like:
var f = 1.0f;
var g = 1.01f;
var result = f.Equals(g, 0.1f); // value is true

There is probably a stock .NET method for doing this, but I don’t know what it is.  The .NET library is so large that even after working with it for as long as I have, I still find new classes and methods that I never knew about. 

Use this method in place of the == or object.Equals() methods when comparing floating point values.  If you’re going for true equality, set an acceptably low margin value, like 0.00001f.
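
To see why this matters, here’s the classic accumulation example (the exact digits depend on single-precision rounding, but the == comparison coming back false is the point):

var sum = 0f;
for (var i = 0; i < 10; i++)
    sum += 0.1f;

var exact = sum == 1.0f;                // false (sum ends up around 1.0000001f)
var close = sum.Equals(1.0f, 0.00001f); // true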

Enjoy!

C++ has been on its way out in professional software development for the last decade, with the exception of its obvious applications like embedded devices – and even that niche, I believe, will be overtaken by managed languages within the next decade.  The rate at which C++ is being phased out of the industry has increased exponentially in the last 5 years because at this stage, virtually no universities actually teach students C++ anymore, and for good reason.  Computer Science is hard enough without leaking pointers.  I guess I’m showing my age a bit when I admit that I did learn C++ in college, but I’ve done virtually none of it in the industry.  It’s getting to the point where I can barely read it anymore.

The watershed moment for my company, which consists largely of old school C++/MFC/COM developers, came right around the time that 64 bit processors shipped.

It wasn’t bad enough that an assembly compiled for a Windows platform didn’t execute on Linux.  Now an assembly compiled for a 32 bit Windows platform wouldn’t even execute in a 64 bit process space.  Really?!

Of course, this has happened before, many times.  First 8 bit processors gave way to 16 bit processors, then 16 bit gave way to 32 bit, and every time programmers had to go through the same painful transition.

The difference is that this time, in 2002ish when the 32-to-64 bit transition began, we had a choice.  We could either go through this nonsense again and accept the fact that we’d have to ship 2 versions of our assemblies, or we could see the future and start developing managed code instead, which is immune to this problem.

And we weren’t the only ones.  Managed code, and specifically the infrastructure and ecosystem that exist around it, is superior for other reasons not necessarily related to the functioning of the software – for example, the fact that you can actually find people who know how to write code in the language you’re using, something that is getting harder and harder to do for C++.  The number of bugs – catastrophic bugs, mind you – generated by a mediocre C++ developer is exponentially higher than the number generated by a mediocre .NET developer.  For that reason alone, managed code is worth the investment.

Native code haunts me to this day.  I am currently involved in a project in which we need to isolate native code into a COM+ server application so that our .NET processes aren’t forced to run in a constrained 32 bit address space just because we use native interop that is 32-bit only.  These days, 4gb of RAM isn’t enough.  If customers can’t put our managed server application on huge hardware with huge memory pools because native code keeps us from scaling past 32 bits, they don’t buy our software.  And that’s obviously bad for us.

Fortunately, 64 bits is big enough that it’s unlikely we’ll see 128 bit computers any time soon outside of specialty scenarios (e.g. GPUs), so it would be fair to say that biting the bullet and going 64 bit native, on the assumption that 32 bit hardware would be dead long before 128 bit hardware arrived, would have been a valid choice – except for all the reasons above.  C++ is a giant pain in the ass, and honestly, after working with C#’s feature set, going back would be like trading a Ferrari for a Vespa.  I would sooner switch jobs than work in C++.  I have to enjoy what I do, and deleting pointers is not something I enjoy doing.