Category Archives: Software development

The most important point of paying someone to develop for you

From time to time I amuse myself with other people's failures. I know I'm not alone in this ignoble endeavour. There are plenty of sites pretty much dedicated to poking fun at projects that failed and code that is all kinds of shit. I can understand why you end up writing shitty code when you're learning, and I can understand how projects blow their deadlines. What I have little understanding of is how on earth you can manage to run a software project completely into the ground when you're paying professional people to program it for you.

How on earth can you hire someone to develop for you without checking their work?

I grew up with an image of hired developers as the übermensch of software development. The hired guns of the developer world, the special forces to be sent in where brave men have failed. As I came to understand, this is not the general case. Especially when it comes to outsourcing.

Again, I'm drawn to the analogy of building houses. You recognize that you cannot build the house yourself, so you hire other people to do it for you. This is all fine, and it's a good sign that you're not taking on something you know you don't truly understand (though it makes for pretty good TV shows). And then, when the house is built, you move in without inspecting it and complain when the roof leaks and the walls are full of mold.

And it's not very hard to check the quality of software. Sure, understanding a piece of code thoroughly is hard, but getting a general sense of quality takes all of a few hours for an experienced developer. Even if you paid a consultant triple the normal hourly fee to go through someone else's work and critique it, it would be the best money you spent on the project. And, yes, it needs to be someone other than the people who built the system.

At times, you don’t even need to do the actual review. If the ones developing the system fear the reviewer, then that is probably good enough evidence that the system will suck to high heavens come the review date. Someone who writes competent software would relish the opportunity to learn from the reviewer.

I know I would. Do take my word for it and check my stuff.

On big balls of mud

There's an old article on the web about big balls of mud, which it defines as:

“A big ball of mud is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.”

It's almost as if the big ball of mud is genetically predisposed to exist. It is probably the most common phenotype of software out there. The article also includes a pretty thought-provoking analysis of why they appear. Now, the article is quite old, and the problem it describes is even older, which raises the question of why these mud balls are still so prominent in spite of decades of well-meaning development methodologies.

It seems that none of these processes have worked, because people still produce mud balls all the time.

  • Software engineering was supposed to turn us programmers into engineers. At least that was the original purpose of the term, though it has since evolved into an umbrella term encompassing all the other methodologies. The gist of the idea was to eliminate the rock'n'roll hacker mentality and get all developers onto the idea that software should be engineered the way you would engineer a bridge: with requirements and analysis and calculations. It didn't work, because programming isn't engineering.
  • The waterfall model, nowadays more ridiculed than followed, tries to solve the mud problem by eliminating change once development has started. This is a great idea, except for that nasty thing called reality, where requirements change all the time. So it didn't work either.

Code craftsmanship is perhaps the latest and most prominent way of addressing mud balls, except that it really doesn't address them. It only says that you'd like to be a craftsman, that you want to create quality stuff, and that we should celebrate programmers as craftsmen. To be fair, it doesn't even call itself a methodology; it's more of a movement.

But agile will solve it right? Agile solves everything!

The interesting thing is that the article was written before Agile became the latest and greatest thing out there. So, have things turned out for the better?

Of course they haven’t! The mud balls are as ubiquitous as ever.

Agile embraces change and encourages fast iterations and a delivery-focused attitude. So there will be constant changes and patches to whatever you do, and you have quick iterations with fast releases – which is exactly what produces balls of mud: changing things around quickly without thinking long term. It won't work. It might even make things worse.

The only way it can possibly solve the mud problem is if it makes developers so happy that they won't create bad code any more. Possibly by shielding the team so thoroughly behind a scrum master that they can take the time to make the architectural changes needed to keep an application clean.

I'm not saying that agile is a bad thing, but it feels like acceptance. Are we, with our latest ideology, accepting that the development world is a muddy one and building our ideological base around handling that as best we can?

Because scrum handles balls of mud fairly well, but it doesn't fix them or even prevent them from appearing. Perhaps it polishes them a bit.

Are we resigning to our glorious muddy overlords?


Foreign language code

If your native language isn't English you'll sooner or later run into the question of what language to write your code in. Until the last system I worked on, I had never seen a system that wasn't written in English – but that system was to a large extent written in Swedish. This felt wrong to me on many levels, but it got me thinking. Is the choice always clear cut?

While this article is going to be geared towards Swedish, I assume many of the same concepts apply to other languages. Either way, given the current audience of this blog, you’re probably Swedish too.

The case for English

The language itself

It's going to be in English whether you want it or not. The only way around it is to use a language with a preprocessor and #define all the keywords into your favourite language, which must be universally evil. So you are going to end up with a pidgin programming language with the terms mixed up, and it will not read naturally. Especially if your programming language of choice really tries to be readable English and your native tongue has a different word order than English.

The characters

I know that most modern languages support Unicode, so you could actually name your stuff in Chinese, but I have never seen this in the wild. Even the humble Swedish characters Å, Ä and Ö are notably absent even where they are supported. So they get replaced with their non-embellished cousins, which in Swedish are very separate letters indeed. Here's a particularly evil real-world example:

int Las();

I wonder what this function does? Could it be Läs, the Swedish word for Read? Might it be Lås, Swedish for Lock? Perhaps it refers to the Swedish employment protection act, commonly abbreviated LAS? Who knows…

You can get around this by substituting ä with ae, å with aa and ö with oe. Those are the historical predecessors of our diacritical letters, but no-one uses those combinations any more, and the result is even less readable.

Translation issues

Another issue is that some things are not easily translatable, since they are in use with a third-party library. Say your library uses a Table object and you want to write a function that performs a check on the table. What are you going to call it? KontrolleraTable()? Or are you going to translate the word Table too?

What the hell is a Mutex called in Swedish?

The case for the moon-language of your choice

With all this in mind, why does foreign language code still pop up from time to time? What is the irresistible pull?

Translation issues, again

Just like Mutex is pretty hard, if not impossible, to translate into Swedish, it's equally hard to translate Swedish business terms into English. If you don't believe me, take a look at the field of accounting – something I've been involved in pretty recently.

The Swedish word Periodiseringsfond has no real equivalent in English. It's a genuine term, but it's a legal one specific to Swedish accounting law. Even the first part of the word, Periodisering, is problematic. It actually translates to Accrual – but that term is completely unknown to most Swedes, even though their knowledge of English is generally excellent.

Suppose you need to create a class for a Periodiseringsfond. Do you invent a word for it like AccrualFund – even though such a thing doesn't exist outside of your program – just to keep to the rule of everything in English? This was actually the case for one of the larger systems I've worked with. That system had a pretty significant invoicing portion and it was all in English. But some of that English was definitely a case of bogus translation directly from Swedish.

I’d love to see where the line is drawn – I’m sure that sooner or later every domain driven design made for a foreign audience will run into this problem.

Also, when you are dealing with your customers or users, you probably want your developers actually talking to them. Are you unintentionally making an already difficult conversation between a tech-oriented developer and a real-world user even harder by not even having them speak the same language? A big part of domain-driven and behaviour-driven design is to make this connection easier – are we missing the point simply because it feels wrong to code partly in a foreign language?

Because it does, it feels so damn wrong.


Drawing graphs with graphviz

If you've read any of my posts on regular expressions you've seen these little graphs. I just wanted to tell you how they were made. They might not be very pretty, but they're easy to produce, especially programmatically from source code. In fact, it's so easy that I've included graph output as a standard debugging feature in my parser generator. And they can get quite pretty indeed, if you know what you're doing.

The graphs are made with a tool set called Graphviz. Unfortunately the Graphviz tool set is about as user friendly as a bicycle without a saddle. Well, they do have a GvEdit application that you can type things into and get a graph out of. I suggest using this online tool and just right-clicking to save your graphs. That's what I do anyway.

I find it remarkable what you can achieve with this tool, and I use it more and more since I loathe to draw and layout things by hand. A basic directed graph looks like this:

digraph g {
  a -> b
  b -> c
  c -> a
}

If you want more nodes, just add them and draw arrows. Easy. The engine will lay this out for you and present you with a pretty picture. I'm by no means an expert, but I have gathered a few tricks (these go inside the curly braces unless stated otherwise), with a combined example after the list.

  • Graphviz likes to draw with arrows pointing downwards. Use graph [rankdir="LR"] to make it tend to draw left to right instead
  • Change the default node shape by adding node [shape="whatever shape you want"]. There are a few to choose from; in my graphs I used circle and doublecircle
  • Add labels to your little arrows by adding [label="xx"] after the end node
  • Whichever node you define first will tend to be drawn to the upper left
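
Putting those tricks together, here's a minimal sketch of how they combine (the node names and edge labels are placeholders of my own choosing):

digraph g {
  graph [rankdir="LR"]        // draw left to right instead of top to bottom
  node [shape="circle"]       // default shape for every node
  a -> b [label="x"]          // a labelled arrow
  b -> c [label="y"]
  c [shape="doublecircle"]    // override the shape for a single node
  c -> a
}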

This is the tip of the iceberg really; graphviz.org has lots more information. As long as you don't try to position the nodes manually you're pretty much good to go. It really is a tool for getting graphs without all the cumbersome work of moving stuff around to make it pretty. If you need fine-grained, per-pixel control, this isn't for you. If you need to get a graph up quickly and modify it easily, this tool should be right up your alley.


Tales of extreme debugging

Sometimes debugging, at least through the rose-tinted goggles of hindsight, can be quite amusing. Today I'm going to share two of the better stories I've experienced myself to brighten up your day. I don't really know if there's any wisdom to be drawn from them, but they're true and fun.

It’s too cold outside to debug

Back in late -04, early -05, I was working for Gizmondo programming games. This wasn't exactly by choice; our studio was bought by them, so we just sort of ended up doing games for this unit that was supposed to be the next big thing. It turned out to become a big thing for all the wrong reasons. So, I was one of the precious few people who actually got to develop on this machine, and while it might have been a miserable failure on some points it was groundbreaking on others. One thing was that this little device actually included a GPS chip. Now kids, this was way before the first iPhone, so it was quite a novelty. I had a stint with some talented guys making the client side of the multiplayer game Colors, which was to be the first GPS multiplayer game in the world. The gist of it was that you could control neighborhoods by actually going there and challenging the current ruler. So, being the client-side guy, I set out to implement this.

If there is one thing you should know about GPS, it is that it is an extremely sensitive technology. And I don't mean sensitive in the sense that it measures accurately. No, what I'm talking about is that the satellite signals are very weak and it doesn't take much to make the GPS very confused. So, when I started up the device and went to its dashboard it took an awfully long time to detect the signals, but eventually it did – provided it was reasonably close to the window. However, when I tried to do this while rendering anything at all on the screen, the most I got out of it was two satellites and a position about halfway to Copenhagen. It turns out that no-one had considered the possibility of the graphics chip inducing interference in the GPS chip, causing it to go haywire. There was no time to rectify this in the current hardware, so I had to make do with what I had. Which meant finding out at exactly what threshold it went bananas.

The only way to do this was to go outside. And this is Sweden in February. Each iteration of this debugging session entailed walking to the city park in Helsingborg, sitting down on a park bench staring at the device, waiting about 20 minutes to see if I got enough satellites, and walking back again, usually disappointed. To compound this, I didn't have a laptop, so there was no way to actually step through the code. And if I did printf-style debugging to write to the screen, it might disturb the GPS. Since then, I've always been grateful that no matter what problem I'm having, at least I'm warm.

Mysterious pauses and flying cars

Game development leads to visually funnier bugs than other branches of development. Back then it was also more prone to heisenbugs – bugs that defy detection and only appear if a debugger isn't attached. This is a tale of a bug that fits both criteria. During the beta testing period of Richard Burns Rally, we had a bug report filed that sometimes, after running races against the ghost car, the game would freeze up on the PS2. The ghost car was a fully platform-generic component of the game, so the report itself puzzled us some. We found out that it was indeed true: there was a one-to-two-second pause when the ghost car finished the race, but only for prerecorded ghost cars. And only for prerecorded ghost cars driven by one of our speed-demon artists who consistently set the record for the stages.

So, the question came up: what the hell is the ghost car doing when it crosses the finish line? And how can we find out? There was only one answer within reach, and that was to drive as fast as or faster than the ghost car and look at it when it was supposed to stop after the finish line. This took a few men and many tries, but eventually we managed to get to the finish line first and observe what happened. Once the race finished, the ghost car teleported two feet up, turned slightly to the left and up towards the sky, then continued to drive upwards at an amazing speed before stopping. We just looked at this display in amazement.

So, it turns out this was an error in the prerecorded ghost car files. More specifically, each was missing four bytes at the end. Those four bytes terminated the sequence of prerecorded moves and were supposed to be 0x00000000. Since we never wrote those terminators, the ghost car happily continued to drive its way through random memory, or through fragments of ghost cars past, since we continually re-used the buffer for ghost cars. On the Xbox and PC it just so happened that the buffer was zeroed out properly by memory management. On the PS2, since we had our own memory management routines, it never was. And because of those memory management routines it was hard to get an illegal memory access violation. So it just happily looped through quite a lot of the memory until it randomly stopped.

We never re-recorded those ghost cars. We modified the file loading to append four more bytes, sweeping the problem under the rug, and released without either flying cars or mysterious stops.
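
A rough sketch of that kind of workaround, in C# for brevity (the real loader was the game's own code, and the names here are made up):

// Hypothetical sketch: pad a loaded ghost-car recording with a four-byte
// zero terminator, since the original files never wrote one.
static byte[] PadWithTerminator(byte[] recording)
{
    var padded = new byte[recording.Length + 4];
    System.Array.Copy(recording, padded, recording.Length);
    // The last four bytes default to 0x00000000, which playback reads as "end of moves".
    return padded;
}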


Choose your own technology adventure

I'm sure you remember those books from when you were a kid: the choose-your-own-adventure books. You started reading, came to a decision point where the book gave you a goto statement to another page, from which the story continued. Often to horrific deaths, as I recall. Sometimes I get the feeling that I'm on such a page: choose whether you're going through the cellar or sneaking across the garden to rescue the princess. The decision you make is going to affect you for a long time, potentially irreversibly.

Oh yeah, and by the way, when I was a kid I immediately recognized it as a rudimentary programming book. And I converted it into BASIC on my Amiga so I could run the book as a computer game. I was that kid.

In a development context, these decision points often come when you decide on technology. It becomes especially interesting when the choice used to be a no-brainer but new contenders have recently appeared. I'm going to tell a tale about such a choice. Unfortunately, or fortunately as we shall see, the choice wasn't really made by me – I just had to deal with the fallout.

Once upon a time, years ago, it was decided that a new subsystem was going to be built in the system I would later work on. The subsystem in question was a hierarchical inventory of network equipment, basically sorting everything into a huge tree. When embarking on this quest, the brave adventurers of old had to choose a strategy for data storage. Remember kids, this is back in -05 or -06. Being adventurers, the old proven broadsword of SQL seemed dull and boring, and someone had heard about db4o. Without really consulting anyone – this team had all the self-governance in the world – they decided that this looked exciting, and off they went!

The system got built and everything looked in order for nearly two years. People came and went, and the actual reasons why db4o was chosen were lost. Also lost was the knowledge of how you were really supposed to use db4o. Then it started to break. None of the original programmers were there any longer, and at the time googling the problem turned up little help. We had ended up with a technology island that had become unmaintainable.

You see, when these sorts of databases break, they do so in weird ways. I'm sure db4o is a good and stable product these days; it might even have been so back then – the cause of the problems might have been us abusing it. But we got data loss. And not the normal kind where things simply go away. We had cases where, if you followed a connection one way, you'd find something, but if you went the other way you'd come up empty handed. Also, it might work if you tried the exact same call a few more times. In the end we wrote an interesting piece of software to convert the data from db4o into a normal MS-SQL database. It contained such features as "try to get the data five times before giving up" and "wait a little before trying again to let things cool down". We ran it in the middle of the night, since the product was up and running and some of the data in the database was high traffic in both reads and writes. It took hours to run. In the end, we rescued almost every piece of data.
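
Just to illustrate the kind of band-aid that tool was built from, here's a minimal sketch of retry-with-a-pause logic; the names and the five-attempt limit are only illustrative:

using System;
using System.Threading;

static class RescueTool
{
    // Try a read up to five times, pausing between attempts to let things cool down.
    public static T ReadWithRetries<T>(Func<T> read, int attempts = 5)
    {
        for (int i = 0; ; ++i)
        {
            try
            {
                return read();
            }
            catch (Exception) when (i < attempts - 1)
            {
                Thread.Sleep(TimeSpan.FromSeconds(2)); // wait a little before trying again
            }
        }
    }
}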

What I'm getting at here is the lack of analysis when it came to the technology choice. In my mind, it really boils down to this: do you want to be cool and use the latest tech at high risk, or use something proven, boring and low risk?

It's a tough choice on many levels. If you go with the latest stuff, you'll potentially attract great talent because they want to use cool tech, you might get better performance, and you might get enthusiastic support from the open source community if you're lucky. Or you might end up running arcane rescue tools in the middle of the night.

The well-trodden path is full of Google search results. It's full of help on Stack Overflow. It's also pretty dull, and I understand the frustration when you're still writing the same stored procedures you did five years ago. No-one likes stored procedures.

In my mind the choice isn't really about technology. It's about people. And it's about sharing code ownership. If you introduce a strange new technology you have to make sure of two things: everyone can understand it, and the code and maintenance procedures are completely shared. If you have those two things, you can pretty much choose whatever weird shit you want; a team of superstars that shares the code can handle it. But if this is one guy's pet project, you're pretty screwed. He'll be happy at first, and you'll let him run "his" part of the project. No-one else will care about it. When he quits it'll fall into disrepair and you'll write blog posts about it years down the line.


Why you should use prefix increments instead of postfix

This is a pet peeve of mine, so bear with me. I really don't like this code (curly-brace language of your choice, but let's say it's C#):

for (int i = 0; i < 10; i++)
{
   // Do stuff
}

I specifically do not like i++. If you don’t have a really good reason you should be writing ++i in almost every case. Even when it doesn’t matter, like in the example above, simply because it’s good style to do so.

The result in the loop above is exactly the same if it’s written like this:

for (int i = 0; i < 10; ++i)
{
   // Do stuff
}
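
In a bare loop like this the two forms are interchangeable; the difference only shows up when the value of the expression is actually used. A quick illustration with plain ints:

int i = 5;
int a = i++;   // a == 5, i == 6 – the old value is handed back, so a temporary copy is needed
int j = 5;
int b = ++j;   // b == 6, j == 6 – no temporary, the incremented value is used directly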

So why does it matter? Because sooner or later it will bite you in the ass. You see, i++ means that the value of the expression is the value before the increment. Which means that a copy must be made. Sure, the compiler probably optimizes this away, but what if this were an operator-overloaded object?

Take a look at this, admittedly stupid but possible, object:

public class Overloaded
{
    private ExpensiveObjectToManipulate junk;

    // C# has a single ++ overload; the compiler uses it for both prefix and postfix.
    public static Overloaded operator ++(Overloaded o)
    {
        o.junk.Increase();
        return o;
    }
}

You see, C# only provides one overload method for ++. In glorious C++ you get to overload it twice, in which case you also get the great opportunity to mess up the semantics of the two forms. Or change the return type, or other generally ill-advised things C++ allows. But I digress.

So, from that single overload C# gives you both prefix and postfix increment. Prefix does what you think it does. Postfix does not. Postfix will make a copy of the object, hand that copy to whatever is the target of the expression, and then perform the operation on the ExpensiveObjectToManipulate. Now think about using the Overloaded object in a loop, where ExpensiveObjectToManipulate is also expensive as hell to copy. It might even be copied by value.

Do you trust your compiler to optimize it away?

Do you trust your vendor of stupid overloaded objects to not have different side effect from the copying?

This might not be a big problem in C#, where this sort of operator overloading is thankfully rare, but if you ever get to the wonderful world of C++ you're going to see this. There is no reason on this earth to use the postfix operator, with its inherent complications, in so many of these cases.

var i = new Overloaded();  // Using fun object for making a point.
foreach (var x in collectionOfX)
{
   // Do stuff with x
   x.DoStuff();
   // increase the counter
   i++; // Why, oh gods, why?!?
} 

There is no reason to use the postfix here! Yet I'm willing to bet that many of you are doing it. The same goes for every for loop. So let's do the safe and optimized thing and use our friend the prefix operator every time you don't mean to copy things.


Exception handling done the right way

I love exceptions, but I get a distinct feeling many people don't, judging by some horrific examples of their use. Especially from people who get forced into using them by Java's wonderful checked exceptions without truly understanding the concept. Or from people so scared of them that they hardly dare catch them, like way too many of the C++ crowd. In these enlightened days, exceptions don't cause the problems they used to, unless you are working with old C++ code which doesn't use auto_ptr. If you are, abandon all hope and refactor hard.

I'm going to assume you're writing C# here, but everything I say goes for Java as well. It's even more relevant there, since Java has checked exceptions. But without further ado, here's my handy-dandy guide to doing it right!

Why bother

Because it's a graceful way of handling exceptional circumstances. Anything that might happen that is not considered a normal result of your method is an exceptional circumstance. Return values are not for exceptional circumstances. I assume you like the .NET framework, or Java's similar environment. They're nice because of exceptions! To see the alternative, take a look at straight-up COM in C++. It's clunky as hell. Without exceptions, the only way to check for errors is to check return values. Whenever you do something you'll invariably get the real return value through a pointer-to-a-pointer parameter, while the method itself returns an HRESULT which you need to check. But of course you won't, since the calls almost never fail and you will forget. And you can't read the code the way it's supposed to be read, since the return values are in the parameter list instead of in the actual return.

To compound the problem further, C# has out and ref parameter modifiers, which are almost always used for this sort of thing, and almost always totally evil in my book for the reasons stated above. You should never ever do this sort of stuff:

string DoStuff(out BusinessObject bo) {
    // Doing stuff
    // ...
    // Business plan bad, return error
    bo = null; // out parameters must be assigned, even on the error path
    return ".COM boom failed";
}

The only way to find out what went wrong is to do string comparisons on the return value. Evil. Return value checking should be a thing of the past for non-exceptional circumstances.
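
For contrast, here's a hedged sketch of the same method written the way this article advocates: the real result comes out of the return value and the failure becomes an exception. BusinessPlanException, BuildBusinessObject and businessPlanIsBad are all made-up names for illustration.

BusinessObject DoStuff()
{
    // Doing stuff
    var bo = BuildBusinessObject();

    if (businessPlanIsBad)
    {
        // Exceptional circumstance: the caller catches a specific exception
        // instead of string-comparing a return value
        throw new BusinessPlanException(".COM boom failed");
    }

    // The real result comes out of the return, where it belongs
    return bo;
}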

How do I do it then?

Let's start with how not to do it. Here's a typical example of doing it wrong:

void DoStuff() 
{
	// stuff goes wrong
	throw new Exception("Shit is hitting fans");
}

NO! You don't throw the Exception class itself, never ever. I wish they had made it abstract. This circumvents the intended use of the catch statement, makes it impossible to tell what's going wrong, and makes babies cry. If you catch the Exception type you will catch every exception, and whatever it was, you'll have to figure it out yourself. It might have been shit hitting fans, but it might as well have been a null reference. Maybe it was a division by zero. Who knows?

If shit is hitting fans, you throw a very specific exception that preferably relates to that case and no other.

public class ShitHitFanException : Exception
{
    public ShitHitFanException(string message) : base(message) { }
}

void DoStuff() 
{
	// Danger Will Robinson!
	throw new ShitHitFanException("Scotty can’t change laws of physics");
}

And you catch it by catching the exception you need to handle, letting the stuff you're incapable of handling bubble up to the surface. Maybe even to a crash, if that's your cup of tea.

try
{ 
    DoStuff();
}
catch (ShitHitFanException e) 
{
    // Alert Will Robinson of danger.
}

This leads to my next point. You need to use base classes for your exceptions if they have any sort of common ground. Routinely declaring every exception as a direct subclass of Exception, like one big happy community of equals, is evil. You need to categorize your exceptions into types. For instance, if you have a method which handles files, you might be throwing FileNotFoundException, DirectoryAccessDeniedException and DiskFullException. These have common ground and should be labelled as such by inheritance: they need an IOException base class. This lets your catching code intelligently decide whether it wants to handle exceptions specifically or more sweepingly, without resorting to acts of evil like catching Exception.
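
As a sketch of what such a hierarchy might look like when you roll your own (the names are made up for illustration; the built-in System.IO types follow the same pattern):

public class DocumentStoreException : Exception
{
    public DocumentStoreException(string message) : base(message) { }
    public DocumentStoreException(string message, Exception inner) : base(message, inner) { }
}

// Specific failures inherit from the category, so callers can catch at either level.
public class DocumentMissingException : DocumentStoreException
{
    public DocumentMissingException(string message) : base(message) { }
}

public class DocumentStoreFullException : DocumentStoreException
{
    public DocumentStoreFullException(string message) : base(message) { }
}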

More importantly, it also enables you to do cool stuff like this:

void ShuffleImportantDocumentsFromTopDrawer(Document[] topDrawer) 
{
    try
    {
        foreach (var document in topDrawer) 
        {
            try
            {
                document.shuffle();
            }
            catch (FileNotFoundException e) 
            {
                // Dog probably ate it. Log a warning and continue
            }
        }
    }
    catch (IOException e) 
    {
        // We end up here for BOTH DiskFullException and 
        // DirectoryAccessDeniedException
        CallTheItGuy();
    }
}

This is impossible to do gracefully if you have screwed up your inheritance chain or are catching Exception itself. If the exceptions have no relation to one another, you can't catch two similar exceptions and handle them the same way without resorting to copy-and-paste or putting the handling in a separate method, which is weird and ugly. This is oftentimes a problem with Java file IO. It throws exceptions like there's no tomorrow and you're required to catch them all. You probably want to handle some of them the same way. Catch the base class and you're set.

But, you say, I don't want to handle it. I want someone else to do it for me. Right off the bat, let's never ever do this again:

try 
{
    FaultyMethod();
}
catch (FaultyException e)
{
   // Whoops. I can't handle this stuff. Do some clean-up and signal the error again
   throw new OtherException("There was a problem with faulty method, big surprise");
}

NO. There's one huge, glaring problem here: you are throwing away the cause of the exception! The hard and fast rule is: if you are throwing something from inside a catch, you need to supply the new exception with its underlying cause. Here is the corrected code:

   throw new OtherException("There was a problem with faulty method, big surprise", e);

See how we added the e to the constructor? This makes a world of difference! When you print a stack trace, you'll be pointed to the line that actually failed, instead of to the catch block, which says absolutely squat about the real nature of the problem.
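
For this to work, your exception type needs the usual (message, inner exception) constructor that forwards to the base class. A minimal sketch, using the placeholder OtherException from the example above:

public class OtherException : Exception
{
    public OtherException(string message) : base(message) { }

    // The inner exception is what preserves the original cause and its stack trace.
    public OtherException(string message, Exception inner) : base(message, inner) { }
}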

Another point I'd like to make is that you shouldn't throw a different exception unless you have changed the meaning. But you can still catch stuff and log it. As an example, say you have a SOAP service. Since you are diligently applying the principles in this article, you're using exceptions to signal SOAP errors (they serialize fine, don't worry about it). But you also want a log on the server side of any exceptions sent to the client. Enter the second, underused, form of the throw keyword: the parameterless throw.

string MySoapyMethod(string param)
{
    try
    {
        // Do cool stuff that fails... (assume BrokenCode returns the string we want)
        return BrokenCode();
    }
    catch (BrokenException e)
    {
        // Log this stuff so that we know about it
        Log.Error("Broken stuff in Soapy method", e);
        throw;
    }
}

This will rethrow the exception without altering it in any way. This is the other legal way of handling stuff you don't want to or can't handle. A final point: when you log using your logging framework, such as log4net, you typically get an overload that takes an exception. Use it. When you log an exception, never log just Log.Error(e.Message). Always log Log.Error("A real description of what went wrong", e). That way you preserve the precious stack trace.
