A Google Maps result in Bing?!!

Well, I know that Bing is trying to improve, but simply serving up content from Google is a strange way to “compete”…

[Screenshot of the Bing result]

Robert


Switching to Mercurial

So I’m trying out Mercurial.

The reason is that it is now integrated into FogBugz through Kiln, a new Software-as-a-Service offering from Fog Creek Software, which seems pretty amazing.

The first Mercurial resource I found particularly helpful was Martin Geisler’s article here:  http://mercurial.selenic.com/quickstart/.  This is a very short article that outlines the basics in “6 steps”.

The other resource that was really helpful was Joel Spolsky’s excellent Mercurial tutorial here:  http://hginit.com/.  As usual, Joel’s writing style is funny and informative.  His “Subversion Re-education” is particularly enlightening, and I completely agree with his statement that:

…here’s how Subversion works:  When you check new code in, everybody else gets it.

[So] Subversion team members often go days or weeks without checking anything in. In Subversion teams, newbies are terrified of checking any code in, for fear of breaking the build…

I have two machines, and I program on both simultaneously.  (It takes about 3 minutes to build the application and run all the unit tests, so I will switch over to the other computer and program on it while I’m waiting for the unit test results.)

Even with only two computers and one programmer, this “feature” of Subversion has really been frustrating me lately.  I want a source code repository where I can check in temporary code to “snapshot” it, even if it is not completely ready for the main “trunk”.  Subversion doesn’t have that without using branches (which are scary in Subversion), but Mercurial (and any DVCS) does provide this wonderful feature, and it’s not scary at all.
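For example, here is roughly what that snapshot-style workflow looks like in Mercurial (the commit messages are just placeholders):

# make as many local snapshot commits as I like - nobody else sees them yet
hg commit -m "WIP: half-finished refactoring"
hg commit -m "WIP: unit tests green again"

# only when I'm happy does anything leave my machine
hg push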

Both of these articles have helped change the way I think about version control, or at least helped to start changing my thinking.

The good folks at Fog Creek have also been kind enough to answer a few questions, and the http://Kiln.StackExchange.com website is full of Mercurial questions and answers as well.  My first question was about how to handle large projects and shared utility libraries:
http://kiln.stackexchange.com/questions/808/using-projects-and-repositories

Tools

TortoiseHg now works wonderfully on Vista x64 – it didn’t when I first looked at Mercurial.  It’s not quite as “pretty” as TortoiseSVN visually, but it gets the job done and seems more functional.

I have found two plug-ins for Visual Studio 2008 and 2010.

I installed the HgSccPackage, because StackExchange had good things to say about it, and it seems to do the trick nicely.  When I add a new class in VS2010 it automatically adds it to the Mercurial repository, just like you’d expect. So far, so good!

Robert

Code contracts – suppressing warnings

The concept of design by contract (DbC) is a great one, and I think it should be obvious to most developers that adding pre- and post-conditions to methods can help ensure (even without unit tests) that a method is working well.

For my own projects I have always created a Check class that had a whole variety of methods for checking various conditions.  This has proved to be very useful, and although I don’t really love it when a pre-condition fails and crashes my program, it certainly makes it very easy to fix the problem right where it occurred rather than trying to hunt down the issue and figure out what unit test scenario I forgot (and therefore what new unit test needs to be written).
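In case a picture helps, a Check class along those lines is only a few lines of C#; this is just a minimal sketch of the idea, not my actual class:

using System;

/// <summary>Simple pre-condition checks that fail fast, right where the problem occurred.</summary>
public static class Check {
    /// <summary>Throws if the given argument is null.</summary>
    public static void NotNull(object value, string argumentName) {
        if (value == null) throw new ArgumentNullException(argumentName);
    }

    /// <summary>Throws if the given condition is false.</summary>
    public static void Require(bool condition, string message) {
        if (!condition) throw new InvalidOperationException(message);
    }
}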

The new Microsoft Code Contracts, included with .NET 4 and VS2010, is really amazing, because it offers much more powerful post-condition checks and object invariants.  Without some very cool AOP tricks (and those only become possible with the next release of PostSharp), building an elegant object invariant method was very hard.
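For anyone who hasn’t seen one, an object-invariant method with Code Contracts looks roughly like this (I’m borrowing the document-manager members from the Close example further down, so treat the exact conditions as illustrative):

[ContractInvariantMethod]
private void ObjectInvariant() {
    // Checked after every public method of the class (needs: using System.Diagnostics.Contracts;)
    Contract.Invariant(this.Documents != null);
    Contract.Invariant(this.ActiveDocument == null || this.Documents.Contains(this.ActiveDocument));
}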

Additionally, the idea of having a static contract verifier is absolutely amazing.  At compile time the verifier is able to tell me if I’m going to have problems at run-time, which means I don’t actually have to spend time running the unit tests or performing manual testing on my code.  That is, I know if I’ve got dirty code earlier in the development process than ever before.  Amazing.

Even more amazing is that the static checker sometimes makes suggestions as to what pre-conditions I should include!  So the static checker is really helping to make my code more robust.

The reason I’m mentioning all of this is because I’m currently writing a small demonstration application that I would like to be very complete.  Of course, I’m adding code contracts as a simple way of ensuring correctness (along with unit tests).  Unfortunately, the static verifier is giving me a whole bunch of warnings and a final message of:

CodeContracts: Checked 39 assertions: 29 correct 10 unknown

But the problem is that the “10 unknown” are reasonably complex and the static checker can’t figure them out, and each produces two (very detailed, which is great) warnings in the Visual Studio Error List.  For example, consider the following method on my document manager:

/// <summary>
/// Closes the given document and removes it from the document manager.
/// </summary>
/// <param name="document">The document to close</param>
public void Close(Document document) {
    Contract.Requires(document!=null, "The document to close must not be null");
    Contract.Ensures(this.ActiveDocument!=document, "After closing a document it cannot be the active document");
    Contract.Ensures(this.Documents.Contains(document)==false, "After closing a document it should not be part of the open documents collection");

    // Remove the document from the collection of open documents (which will raise the collection changed event)
    // If the document wasn't open (documents.Remove returns false) then there's nothing else to do here
    bool result = documents.Remove(document);
    if( result==false ) return;

    // If the document that was just closed was the active document then a new document must be made active
    if( document==this.ActiveDocument ) {
        // Determine which document will be made active now that we're closing the currently active document
        // This logic is simplistic but easy
        var newlyActiveDocument = this.Documents.Count==0 ? null : this.Documents[0];
        this.ActiveDocument = newlyActiveDocument;
    }//if - was active

    // Tell the document that it was closed
    document.DocumentClosed();
}

Any developer reading through the code would probably agree that a) the post-conditions are appropriate, and b) with the current implementation the post-conditions will always be met.

Unfortunately, both of the post-conditions are very hard to check statically, and the static checker issues “unknown” warnings for both of them.  I definitely do not want a whole lot of warnings when building my application, especially warnings for un-provable post-conditions.  All these extra warnings can make it much harder to find the useful warnings that I actually want to fix.  So, how to eliminate the unnecessary “can’t check this” warnings?

Fortunately, the excellent Code Contracts manual (installed with the Code Contracts VS plug-in) has a section on this: 6.5.4 Dealing with Warnings.  It has some very good suggestions and you should definitely read that section, but unfortunately none of them help here.

The next section (6.5.5) discusses how to build a “base line” file, which is basically an automatically created collection of all the current warnings.  This collection of warnings will be ignored in future compiles, allowing you to focus on new warnings.  However, merging new warnings into this file must still be done by hand (although you could just delete the file entirely and have the static checker re-create it with all the warnings).  I find this approach too coarse-grained to be effective, and it doesn’t work in a team setting where other developers might not have the base line file.  One solution would be to store the actual base line file in your SCM system, but that seems wrong.

Instead, section 6.5.6 has a very elegant and wonderfully fine-grained approach: attribute the method with instructions telling the static checker not to emit this warning.  The special SuppressMessage attribute can also be applied to types, which is okay, and to assemblies.  Applying the attribute to assemblies, however, is no longer fine-grained, and I don’t recommend it because it will hide any new warnings that you might want to know about (and might want to fix).

I really like this approach when applied to individual methods, because it means that a developer has specifically thought about this particular warning on this particular method and decided that it is okay to ignore it.  Unlike the broad strokes of the base-line file, this fine-grained approach is clearly communicated to other developers and elegantly stored with the code in the SCM.

So our attributed method above now looks like this:

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Contracts", "Ensures")]
public void Close(Document document) {
    Contract.Requires(document!=null, "The document to close must not be null");
    Contract.Ensures(this.ActiveDocument!=document, "After closing a document it cannot be the active document");
    Contract.Ensures(this.Documents.Contains(document)==false, "After closing a document it should not be part of the open documents collection");
...

And no more warnings.

David Allen has a blog almost completely devoted to code contracts (and he’s the one that pointed me to this part of the manual!), so if you’re interested in reading more about this amazing technology you should definitely check out his blog: http://codecontracts.info/?blogsub=confirming#subscribe-blog

Robert

PS.  It’s now several hours later, and the static checker just caught my first bug.  It warned me that a variable might be null and I hadn’t checked it.  It was correct, and I needed to modify my code to check whether that variable was null and do something slightly different if it was.  Nice!  One more bug I don’t have to hunt down, thanks to static contract validation.  🙂
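The fix was along these lines (the names here are made up, just to show the shape of it):

// Before: blindly assumed the variable could never be null
// selectedDocument.DocumentClosed();

// After: the static checker pointed out the possible null, so that case is handled explicitly
if (selectedDocument != null) {
    selectedDocument.DocumentClosed();
} else {
    ShowEmptyWorkspace();   // hypothetical fallback for the no-document case
}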

Microsoft’s creative destruction

Dick Brass has published a very interesting article about Microsoft:

http://www.nytimes.com/2010/02/04/opinion/04brass.html

I’ve read and heard that things like this go on over at Microsoft, but this is far more concrete and disappointing.

Of late I’ve been wondering how Microsoft could have such absolutely terrible marketing.  I’ve been amazed that their mobile story has been so bad and the Windows Mobile OS release just keeps getting pushed back.

Their development tools are absolutely amazing, and continue to get better with each release – perhaps the developer group is more insulated from the politics?

I hope they can turn this around – developing for Windows with Visual Studio is a beautiful experience, and being able to easily integrate with Office is fantastic.

Robert

Mindscape’s LightSpeed

John-Daniel over at Mindscape e-mailed me yesterday at 2:50am (fortunately, he lives in New Zealand…) and suggested I write about the release of Mindscape LightSpeed 3.0.  He offered me a free LightSpeed 3.0 license to give away in any way I want, which I thought was pretty cool for reasons I’ll get to in a moment.  He also wanted to entice me to write such a post by offering me a free license for any of their tools, which unfortunately is not as cool, for other reasons I’ll also get to in a moment.

But first, some background…

The background

For the software that I’m developing for my own business I had an absolute nightmare with persistence.  My software is a thick client application that will run on people’s desktops or laptops.  In an internet age this may be considered a little bit shocking; several years ago I worked for a company that had a decree (delivered from on high) that all future applications would be written for the web.  With Google building everything on the web, Microsoft moving Office to the web, and all the talk about cloud computing, the web seems to be the place to be.  However, my clients often use their laptops in places without wireless connectivity, the software does a lot of computationally intensive work (so hosting it in the cloud could get pricey quickly), and finally the software deals with very personal data, and my clients are very leery of not having that data secured locally.

Additionally, after interviewing lots of clients, I came to the inescapable conclusion that my clients are not computer experts, nor do they want to be.  I know this because they told me so in exactly those words.  🙂  So making them install (or even attempting to automatically install) some form of traditional client-server database was out of the question.

So from the early days when I was designing/architecting my application I knew it needed to use an embedded database.

Forget puritan dogmatism; for me it’s about productivity and getting useful things done, so I definitely did not want to hand-code SQL for all 500 database entities in my application.  Starting a green-field application, I also wanted to focus on the domain model and do domain-driven development.  The database then becomes a mechanism for persisting data, not the end-all-be-all of the application.  So any ORM tools that required building the database first and then auto-generating some pseudo-domain-layer were not for me.  I spent about a week full-time just finding ORM tools, downloading them and trying them out.  Some hadn’t been touched in 5+ years and many were simply gone entirely (which made evaluating them very easy 🙂).

So I finally started developing with SQLite, which is a great little free database.  Many of the ORMs available do not work with SQLite, which ruled them out immediately.  (I did spend a week trying to get Firebird to work as an embedded database, but no such luck, and MySQL is VERY expensive for an open-source product if you need to use it as an embedded database in a product that you’ll be selling!)

I decided to use the Microsoft Entity Framework to communicate with the database; it had just been released about two weeks before I started development and was being pushed hard as the greatest thing since sliced bread.  Unfortunately, I quickly realized that the Entity Framework was “twisted and evil”: I found the three-layer abstraction great in theory and really lousy in practice, the VS2008 designer was incredibly lacking and buggy, and the error messages were so complex that not even Google knew anything about them.  I tried using Microsoft SQL Server Compact Edition, and with the exact same database model I just got completely different complex error messages.  So I threw out all that work (it wasn’t much) and started again with good ol’ reliable NHibernate.

Only, I immediately had headaches with that too; for all of the Hibernate crowd’s ranting and complaining about the Entity Framework, I found NHibernate to be really just more of the same.  Their puritan dogma of “transparent persistence” is great in theory, but oh by the way there’s no GUI for .NET folks so just hand-code the XML mappings (?!), oh, and you need to make all your properties and methods virtual, so your classes can’t be sealed, and you really need to expose collections only through interfaces (so no someList.AsReadOnly()), and you can’t just have one class hold a reference to another because that’s really not good database design, etc.  It certainly didn’t feel terribly “transparent” and it definitely wasn’t productive.  So in the interest of protecting my domain layer I switched to using Castle Active Record.  Putting a few attributes on my classes and properties is MUCH better than having to completely change the way I design classes.  I still had to jump through hoops when dealing with collections, but everything was much easier and faster.
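To make that concrete, here is roughly the shape NHibernate pushes an entity class into (a sketch with made-up names, not my real domain model): nothing sealed, everything virtual, and collections exposed only as interfaces.

using System.Collections.Generic;

// Not sealed, because NHibernate's lazy-loading proxies need to subclass it
public class Order {
    // Every mapped property has to be virtual so the runtime proxy can override it
    public virtual int Id { get; set; }
    public virtual string CustomerName { get; set; }

    // Collections must be exposed as interfaces (IList/ISet), not List<T> or ReadOnlyCollection<T>
    public virtual IList<OrderLine> Lines { get; set; }
}

public class OrderLine {
    public virtual int Id { get; set; }
    public virtual decimal Amount { get; set; }
}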

The Castle Active Record approach worked for several months, until the puritan dogma of NHibernate broke through the Active Record abstraction and bit me very hard; so I went back to the drawing board.

I decided to use an object database:  DB4O.   Development has been incredibly fast ever since; everything is so darn easy, I haven’t thought about persistence in months – everything just works.

Ludicrous speed!

When a client came to me recently and wanted to build a rich internet application I quickly realized DB4O was the wrong solution.  For one thing, it’s single threaded, so it definitely doesn’t fit the internet world.  I read about a few people that tried it anyway with poor results.  I really wanted to use Eloquera, but they don’t have LINQ support yet (and LINQ is just so darn productive that not using LINQ is stealing from the client).  So the hunt was on for a good ORM solution that was easy to use and supported LINQ.  Additionally, object databases are (sadly) not really mainstream, so it just wasn’t a good fit.

Fortunately, I still had the evaluations I had done the previous year of every ORM tool out there I could find, so I just researched the top few tools and examined the latest additions.

Mindscape’s LightSpeed had been a top contender before, and it came out on top this time around.  Everybody that’s used it seems to love it, and having now used it myself, it really is a great tool with great support.  The visual model designer is integrated directly into Visual Studio and supports both forward (to the database) and reverse (from the database) generation.  You don’t have to use the code-generation feature at all (although I do), and what is really great is that it automatically generates data transfer objects (DTOs).  These DTOs are all partial classes with many partial methods, so it’s easy to extend them and hook in custom code.  It’s very well thought out.  They have also responded to every forum post and e-mail very quickly.
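As a rough illustration of why the partial classes and partial methods matter (the type and member names below are invented for the example, not LightSpeed’s actual generated code), extending a generated DTO looks something like this:

// What a generated file might contain (simplified, hypothetical)
public partial class CustomerDto {
    public int Id { get; set; }
    public string Name { get; set; }

    // Partial-method hook that custom code can choose to implement;
    // a generated constructor would typically call it.
    partial void OnCreated();
}

// My own file - never touched by the code generator, so it survives regeneration
public partial class CustomerDto {
    partial void OnCreated() {
        Name = string.Empty;   // custom initialization lives here
    }
}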

My client is happy because I am very productive (and he pays by the hour): I can easily design new entities, and use the automatically created DTOs to send the entities to the Silverlight client via WCF RIA Services.  (This only required a small change to their code generation templates.)  I spend most of my time focused on the actual Silverlight application and very little time worrying about database persistence and wire transmission.  Development the way it should be.

I should mention that I still designed the database-access layer so that it completely encapsulates the use of LightSpeed (although it does expose the automatically generated DTOs).  I believe this is just good layered design, although it would in theory also allow us to switch out LightSpeed for a different ORM (what?).
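Concretely, the encapsulation is nothing fancy: an interface that the rest of the application codes against, with all the LightSpeed-specific code hidden inside the one implementing class (again a sketch with invented names, reusing the hypothetical CustomerDto from above):

using System.Collections.Generic;

// The rest of the application only ever sees this interface (and the generated DTOs)
public interface ICustomerStore {
    CustomerDto GetById(int id);
    IList<CustomerDto> GetAll();
    void Save(CustomerDto customer);
}

// The single class that references the LightSpeed assemblies; in theory it could be
// swapped out without anything above the data-access layer changing.
internal class LightSpeedCustomerStore : ICustomerStore {
    public CustomerDto GetById(int id) { /* LightSpeed query code goes here */ throw new System.NotImplementedException(); }
    public IList<CustomerDto> GetAll() { /* LightSpeed query code goes here */ throw new System.NotImplementedException(); }
    public void Save(CustomerDto customer) { /* LightSpeed persistence code goes here */ throw new System.NotImplementedException(); }
}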

Mindscape’s offer

While LightSpeed is Mindscape’s flagship product, their other products don’t really interest me that much.  I’ve already got the Actipro WPF property grid (and I don’t need it for my software); their WPF Flow and Star Diagrams look interesting, but I don’t need either.  Their “Essential WPF Control set” is underwhelming, especially when compared against Actipro’s WPFStudio, which is great and also comes with themes.  So while they have offered me a free license to any of their other tools for blogging about them, I just don’t need any of them.  I wanted to blog about them anyway, because LightSpeed is cool.  🙂

Your free Mindscape license

If you’d like the free license that Mindscape has offered, drop a comment below; I’d love to hear what you’re currently using for persistence (if anything) and why you want to try something different.

Object databases

Speed: For object-database sceptics: in my tests, the object database Eloquera was just as fast as the LightSpeed-over-SQL-Server combination, and LightSpeed is very fast.  Micro-benchmarks never tell you about real life, but my little test was multi-threaded, used multiple instance types, and inserted and updated hundreds of thousands of entities.  The guys over at Eloquera are also doing a great job.

Licensing: With Eloquera’s new licensing model I’ll probably be switching my own project over to Eloquera when they get LINQ support, because DB4O’s draconian royalty-based licensing is cruel and unusual punishment for a small business struggling to get started.  (And sadly the .NET version of NeoDatis has completely stalled, although if I start making cash I’m tempted to donate some to the NeoDatis guys to see if we can’t get it going again.)

PlasticSCM Review

Well, I’m finally giving up.  I wanted it to work.  It looks great, and the advertised features are very impressive.  I have now spent about 3 to 4 days trying to get PlasticSCM up and running, and my work days are usually 9am-5pm and 8pm-11pm, so that’s a lot of time.

Unfortunately, I just can’t get it working. I also have some serious concerns:

  • There is one support guy in the USA.  He’s fantastic, and knows his stuff inside and out.  But there’s only one of him.
  • Lots of little bugs and usability issues.
    • I have a screenshot of PlasticSCM showing me that my revisions are all happening tomorrow, while the Vista clock is clearly showing today’s date (and the server and the client are both on my desktop where I took the screen shot).
    • If you want to pull changes from another machine (my laptop in this case) you do a “pop”, because that apparently is the opposite of pushing changes to another server.  If the support guy hadn’t told me this I never would have thought to apply a “stack-data-structure” analogy to grabbing changes from another machine.
    • Right-clicking on the root of the repository and saying “Check-in (recursive)” doesn’t actually check in all the changes.  I discovered that changes to files I had not explicitly checked out were completely (and silently) ignored.  However, you can right-click on those files individually and say “Check-in” and it will do so.  So the recursive check-in command doesn’t check in all the files you’ve changed, yet the check-in command on an explicitly selected changed file will.  Software should be polite, and it should at the very least be consistent; even better, it should ask whether I want to include files I didn’t check out and offer to show me those files.  The software should adapt to my needs, not vice-versa.
      (BTW, according to the support guy the solution is to right-click on the root, say “show changes”, select them all, and then check in; that’s fine, but I really dislike the changing semantics of “check-in” and the silent failure to check in files that were modified but not explicitly checked out (something many other revision control systems handle).)
  • I never did get the Subversion import working.  They offered to do it for me if I sent them my subversion repository, but I just can’t bring myself to send all of my code to a 3rd party company without lots of legal paperwork in place.
  • Their website is under heavy construction, and apparently things like the 1-800 number got forgotten.  There should be strict processes for reviewing changes before pushing them out to the production website.  The number is still not there, and that is three days after I mentioned it to them.
  • The documentation is dramatically wrong (see the ignore.conf post below)

I know software has bugs – ALL software has bugs.  My software has bugs.  If the PlasticSCM date/time bug had been the only issue I’d encountered, I’d have no problem paying for and using their product.  But it wasn’t; it was all of the above and more.  In my previous job as a top-level systems architect and lead developer with a very large financial services company, I learned that it is really important not to mess up the basics, and that with the right processes in place it’s not too hard either.

There are a few things I think they could easily do better:

  • They need more than 1 person on support
  • Make sure the 1-800 number reaches a real person.  For my business I use RingCentral, so if somebody phones the 1-800 number it rings both my land-line and my cell phone at the same time.
  • To be honest, I don’t really care if there is bad grammar in the documentation; I understand we live in a global world – it’s not terribly impressive, but I understand.  However, the documentation should not be flat-out wrong.  Have the developer who wrote the feature either write the documentation for it (if their English is good enough) or review the documentation after the technical writer has written it.
  • Implement strict processes and controls for the website – it’s the first place a customer will come – spelling mistakes, broken forms and missing support phone numbers should never happen.  There are lots of services that check a website is up and running, including by submitting forms; on the back-end, any request e-mail containing, say, “FROM TEST SERVER” and a really long silly (constant) number could be automatically deleted.  Additionally, before pushing out a new version of the website, have somebody run through a checklist of features to test, including submitting the forms.
  • Do usability testing, and do it often.  I do this as often as possible, and especially after implementing a new feature.  My experience is that people are always very happy to help out and give you their advice!  I’ll post more on usability testing soon, but it’s very easy to do and personally I find it very fun and rewarding.

I think that Codice Software has a potentially great product, and I’ll definitely check out PlasticSCM again in a few more versions.  I think the easy and very visual branching is fantastic, and I love the distributed features (although sadly I didn’t get to try them out).  I wish them all the best.

Robert

PS.  The version I was trying to use is 2.8.163, and today’s date is 2009-12-08.

The PlasticSCM Evaluation: Not good so far, problems with ignore.conf

So I’ve downloaded PlasticSCM and I’m giving it a whirl, and sadly, so far I’m not impressed and I don’t even have my files in it yet!

After dealing with Subversion bugs and corruption the other day, I thought things would be smooth sailing.  Unfortunately, the PlasticSCM Subversion importer did not correctly import everything even after all that work.  So instead, I just did a direct export of just the current HEAD revision from Subversion into the Plastic workspace, and then did an “Add (recursive)” to add everything.

Of course, it added all my bin, obj, publish, etc. folders too, which is no good, so I went looking in the documentation for how to ignore files and easily found the information.  I followed the documentation, and it didn’t work.

I posted on their forum, I Googled, I phoned their support #, I e-mailed their support.  Nothing.

So I started to play with it, and I eventually got it working.

Apparently the exclusion patterns are not compared to the file name or the folder name as described in the documentation; instead the exclusions are applied to the entire path!  That’s why I couldn’t just specify “desktop.ini”: the path doesn’t match it!  Instead, *desktop.ini works, because that will ignore any desktop.ini file on any path.

Powerful yes, documented no; this is very different from the documentation and the release notes!
If anybody out there cares, here’s my now working ignore.conf file:

# RAM: The PlasticSCM ignore file, defining the paths to exclude
*\bin*
*\obj*
*\tmp*
*\publish*
*.pdb*
*\Performance Snapshots*
*\_ReSharper*
*\TestResults*
*.user
*desktop.ini
*~*.docx

Here is what really worries me

  1. The documentation is dramatically wrong.
  2. Only one person replied to the forum thread
  3. That person was wrong
  4. When I called their support phone number I got an answering machine
  5. When I e-mailed their support I got absolutely no response (not even an acknowledgement that they received my question)
    (I have still not received any response, more than 24 hours later)
  6. Google only had links to the incorrect documentation and release notes

Maybe I will not buy this after all despite the cool videos…

So far this entire process has wasted about 5 hours of my time, all in an effort to save the 30 minutes every week that I lose to Subversion problems.  This is not good…

Switching to PlasticSCM & Dealing with Subversion corruption

So I’m switching over to PlasticSCM, and I’m trying to import my existing Subversion repository.

The PlasticSCM importer was easy enough to configure, and at first I pointed it at the VisualSVN Server bin directory.  That didn’t work very well, and I immediately got a null-pointer exception.

So I downloaded the latest Windows binaries from http://subversion.tigris.org/getting.html#binary-packages

That worked better, but after a little while the import said that there was an error.  At first I thought it was a problem with PlasticSCM, but it wasn’t.  The PlasticSCM importer reported that there was a problem running the command:

svn log -v https://myserver/myproject

So I ran this command from the command line and eventually got this error back from Subversion:

Could not read chunk Size: Secure connection truncated

After some Googling showed that other people had this problem and nobody had a solution, I configured VisualSVN Server to use a non-secure connection, which then led to this error:

Could not read chunk size: connection was closed by server

Okay…  So then I got smart and tried something slightly clever – I bypassed the server entirely:

svn log -v file:///d:/dev/Subversion/myproject

Now I finally got the REAL error message:

svn: Malformed file

So then I tried this:

svnadmin verify d:\dev\Subversion\myproject

But that reported that every revision verified just fine!?  Yet when I ran “svn log -v” again, revision 147 still bombed, so it looks like “svnadmin verify” is not really doing a very good job.  The “svnadmin recover” command does nothing, completing almost immediately and saying that everything is fine.  (At this point I am really not happy with Subversion – how many other internal corruptions are there?!)  I tried doing a dump of the repository:

svnadmin dump d:\dev\Subversion\myproject > d:\tmp\SubversionRepository.dump

but that also failed on revision 147.

Back to Google!  Apparently people are just hoping that the revision is early in the product development life cycle and then dropping all the revisions before that:  http://svn.haxx.se/users/archive-2006-10/0215.shtml.  Sigh.

So then I ran:

svnadmin dump --revision 148:HEAD d:\dev\Subversion\myproject > d:\tmp\SubversionRepository.dump

and got some error messages saying that things were referencing revision 145, which is earlier than the start of my dump range, so loading the dump into an empty repository would fail.  What’s really odd is that it didn’t stop there, it just kept going, so if I hadn’t been watching I might not have noticed that the dump was bad.  So I had to run it again, skipping even more revisions and starting at 153 – 8 revisions AFTER the corruption.

What good is a revision control system if the revision history silently becomes corrupted and the only solution is to drop all the revisions before the corruption!?!?  Sadly, I’m not alone.

So with my partial dump completed I did a load into a new repository:

svnadmin create d:\dev\Subversion\myproject_fixed
svnadmin load d:\dev\Subversion\myproject_fixed < d:\tmp\SubversionRepository.dump

This command took a while to run, but it seemed to be successful.  I can now do a svn log -v against the repaired repository.

And the PlasticSCM importer is now working; Goodbye Subversion!

Subversion says: I’ve just sucked 2.5 hours of your life away.
[Robert cries and moans in pain]

Switching to NLog

My application has about 165,000 lines of code, and I have been using log4net for all my logging needs.  However, for a client application that I’m working on I wondered if Log4Net was the best choice.  After doing some research and experimentation I decided it wasn’t.  Instead, NLog wins the new title, and it only took 1 hour to switch my entire 165,000 lines of code over to NLog.

Benefits of NLog over Log4Net:

  • Signs of life!  Log4Net doesn’t seem to be active any more – certainly there hasn’t been a release in ages.
  • Logging methods that accept format strings and parameters.
    This latter feature is why switching to NLog took as long as it did: I was able to go through my code (using VS2008 and searching across the entire solution) and remove all the irritating string concatenation, replacing it with format strings and parameters.  This means that if a logging level is turned off, the string formatting never happens!

So instead of:

log.Debug("Some message " + pieceOfData + " and " + someOtherData);

which ALWAYS results in 3 string concatenations even if debug logging is disabled, I can now do:

log.Debug("Some message {0} and {1}", pieceOfData, someOtherData);

which ONLY does the string formatting if debug logging is enabled.

  • Automatic configuration – just start up your application and go
  • More powerful configuration!
    Their configuration has LOTS of very cool macros (which they call Layout Renderers).  This allowed me to eliminate a huge pile of messy C# log4net configuration code (complete with nasty casting to internal classes – which apparently is the “documented” approach) just to configure the output file to go into the user’s local application data folder.  Eliminating C# code in favor of a single easy-to-understand line of configuration is good!

So instead of all of this:

/// <summary>
/// Setup Log4Net and create a file appender so that it logs to a writable location!
/// (even in Vista where the "Program files" folder is read-only.
/// </summary>
private static void SetupLog4Net( string dataPath ) {
	// Configure Log4Net with the properties from the app.config file
	log4net.Config.XmlConfigurator.Configure();

	// Create the file appender and setup basic options
	var fileAppender = new log4net.Appender.RollingFileAppender();
	fileAppender.AppendToFile = false;
	fileAppender.RollingStyle = log4net.Appender.RollingFileAppender.RollingMode.Once;
	fileAppender.MaxSizeRollBackups = 4;	// The number of log files to keep around
	fileAppender.MaximumFileSize = "10MB";

	// Use an almost standard layout pattern
	const string layout = "%date  %-5level  Thread:[%-23thread]  %logger - %message%newline";
	fileAppender.Layout = new log4net.Layout.PatternLayout(layout);

	// Log data to the local application data directory because it's pretty
	// much guaranteed we can write there.
	string logFilePath = CurrentLogFilePath(dataPath);
	fileAppender.File = logFilePath;

	//Notify the appender on the configuration changes
	fileAppender.ActivateOptions();

	// Get the root logger (bit of a hack, but seems "standard") and add the file appender
	var repository =  LogManager.GetRepository() as log4net.Repository.Hierarchy.Hierarchy;
	if( repository==null ) throw new Exception("Failed to get the log4net hierarchy repository.");
	log4net.Repository.Hierarchy.Logger root = repository.Root;
	root.AddAppender(fileAppender);

	// Very important: Mark the repository as configured and notify it that is has changed
	repository.Configured = true;
	repository.RaiseConfigurationChanged(EventArgs.Empty);
}

I now have this one line of configuration code in the XML configuration file:

fileName="${specialfolder:folder=LocalApplicationData}\MyApp\MyApp.${shortdate}.log"

That is FANTASTIC!
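For context, that fileName attribute sits on a file target in the NLog.config file; a minimal configuration along these lines (the target name and layout are simply my own choices) looks roughly like:

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Write to the user's local application data folder, with one file per day -->
    <target name="file" xsi:type="File"
            fileName="${specialfolder:folder=LocalApplicationData}\MyApp\MyApp.${shortdate}.log"
            layout="${longdate}  ${level}  ${logger} - ${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="file" />
  </rules>
</nlog>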

  • NLog is also supported by Gibraltar!  (Which I WISH I owned…)

The only downside is that padding doesn’t always work, so laying out the log file for easier reading isn’t quite there yet (from the NLog blog):

…the use of the “padding” attribute on layout renderers. If you used it in v1 you may have found that it did not work for all renderers (depending on their implementation)

Apparently this is fixed in version 2, but I don’t know when that will be available.

Somebody has also already written an article about NLog and PostSharp which is cool because it means “free” logging.

I also can’t get it to do rolling log files every time the application starts up.  I’ve posted a question about it on the NLog forum.

Robert

Things I have found to be true #1

A life lesson that I learned the hard way is this:

If I’m right about something, it should be easy to convince somebody else of my point of view.

Now, “easy” might mean doing some research and writing up a small Word document if it’s a professional disagreement at work.  It almost certainly means understanding the other person’s context: their point of view, their understanding of the way things work and ought to work, and the constraints they have to work within and why those constraints are there (and whether they can possibly be removed, often with creative thinking or by bringing somebody else into the discussion and repeating this process).

For example, I once had a disagreement with my boss about something.  Before I tried to convince him I was right, I first asked lots of questions to understand why he thought he was right.  I didn’t know the constraints he was working under, but I knew that his boss and his boss’s boss were often dogmatic about technology, and I respected my boss as a very intelligent person who always tried to make the best decisions (yes, it was great working with this gentleman!).  So I worked hard to understand his constraints.

Once I understood his context, I realized two things: 1. his understanding of the technologies in question was slightly incorrect (he is, after all, the boss, and somewhat removed from the technology after 20 years in management), and 2. he was rightly concerned about alignment with the strategic technology direction, a few other projects that were doing something similar on a larger scale, security initiatives, and how my suggested technology would work within those constraints.  With that deeper understanding of the problem and the solution criteria, I did some research and wrote up a small Word document that outlined at a high level how the two technologies worked and the pros and cons of both approaches, and presented a summary evaluation of both technologies against a more complete set of criteria than I had originally been aware of.  In the end we almost always chose the right technology.

This actually happened about 20 times in the 4 years that I had the pleasure of working with this gentleman, and sometimes the technology I had suggested was the better choice, and sometimes once I understood everything better my original thinking was wrong.  Thus, this life lesson has an inescapable and important corollary:

Even after correctly understanding the complete problem, all the solution criteria and any other constraints, my original point of view may still turn out to be wrong.  However, that’s okay, because we can now choose the right solution together.

And a few times, my original thinking was right, but artificial, capricious and immutable constraints defined by the boss’s boss’s boss (etc) meant that we couldn’t do the right thing, so we chose the best thing we could under the circumstances.

PS.  The dogma clause

Obviously, sometimes the constraints the other person is working within are their own personal dogma, which absolutely precludes any ability to change their mind regardless of the amount of evidence.  Hopefully (and usually?) this is not the case in a professional setting like work.  I therefore cast out such situations as obviously not applicable to a professional discussion like the one this post is pretending to be.  I also make it a rule to try my best to avoid issues with people whose personal dogma on the issue overrides evidence.  However, it does happen even at work (as is the case with the boss’s boss’s boss etc. above).  I have found the best solution in such situations is to politely say that you’ll have to agree to disagree, then do what they are asking while looking for another job…