Archive for December, 2009

Mindscape’s LightSpeed

John-Daniel over at Mindscape e-mailed me yesterday at 2:50am (fortunately, he lives in New Zealand…) and suggested I write about the release of Mindscape LightSpeed 3.0. He offered me a free LightSpeed 3.0 license to give away any way I want, which I thought was pretty cool for reasons I’ll get to in a moment. He also offered to entice me into writing such a post with a free license for any of their tools, which unfortunately is not as cool, for other reasons I’ll also get to in a moment.

But first, some background…

The background

For the software that I’m developing for my own business, I had an absolute nightmare with persistence. My software is a thick client application that will run on people’s desktops or laptops. In an internet age this may be considered a little bit shocking; several years ago I worked for a company with a decree (delivered from on high) that all future applications would be written for the web. With Google building everything on the web, Microsoft moving Office to the web, and all the talk about cloud computing, the web seems to be the place to be. However, my clients often use their laptops in places without wireless connectivity, the software does a lot of computationally intensive work (so hosting it in the cloud could get pricey quickly), and finally the software deals with very personal data, which my clients are very leery of having stored anywhere but locally.

Additionally, after interviewing lots of clients, I came to the inescapable conclusion that my clients are not computer experts, nor do they want to be. I know this because they told me so, in exactly those words. 🙂 So making them install (or even attempting to automatically install) some form of traditional client-server database was out of the question.

So from the early days when I was designing/architecting my application I knew it needed to use an embedded database.

Forget puritan dogmatism; for me it’s about productivity and getting useful things done, so I definitely did not want to hand-code SQL for all 500 database entities in my application. Starting a green-field application, I also wanted to focus on the domain model and do domain-driven development. The database then becomes a mechanism for persisting data, not the be-all and end-all of the application. So any ORM tools that required building the database first and then auto-generating some pseudo-domain-layer were not for me. I spent about a week full-time just finding ORM tools, downloading them, and trying them out. Some hadn’t been touched in 5+ years and many were simply gone entirely (which made evaluating them very easy 🙂)

So I finally started developing with SQLite, which is a great little free database.  Many of the ORMs available do not work with SQLite, which ruled them out immediately.  (I did spend a week trying to get Firebird to work as an embedded database, but no such luck, and MySQL is VERY expensive for an open-source product if you need to use it as an embedded database in a product that you’ll be selling!)

I decided to use the Microsoft Entity Framework to communicate with the database; it had been released about two weeks before I started development and was being pushed hard as the greatest thing since sliced bread. Unfortunately, I quickly realized that the Entity Framework was “twisted and evil”: I found the three-layer abstraction great in theory and really lousy in practice, the VS2008 designer was incredibly lacking and buggy, and the error messages were so complex that not even Google knew anything about them. I tried Microsoft SQL Server Compact Edition, and with the exact same database model I just got completely different complex error messages. So I threw out all that work (it wasn’t much) and started again with good ol’ reliable NHibernate.

Only, I immediately had headaches with that too; for all of the Hibernate crowd’s ranting and complaining about the Entity Framework, I found NHibernate to be really just more of the same. Their puritan dogma of “transparent persistence” is great in theory, but oh, by the way, there’s no GUI for .NET folks so just hand-code the XML mappings (?!); oh, and you need to make all your properties and methods virtual, so your classes can’t be sealed; and you really need to ensure that you only use interfaces for collections (so no someList.AsReadOnly()); and you can’t just have one class hold a reference to another, because that’s really not good database design; etc. It certainly didn’t feel terribly “transparent” and it definitely wasn’t productive. So in the interest of protecting my domain layer I switched to using Castle ActiveRecord. Putting a few attributes on my classes and properties is MUCH better than having to completely change the way I design classes. I still had to jump through hoops when dealing with collections, but everything was much easier and faster.
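
For anyone who hasn’t fought this fight, here’s a minimal sketch of the shape NHibernate pushes your classes into. The class and property names are just illustrative, but the constraints are real: because NHibernate’s lazy loading works by generating proxy subclasses at runtime, members have to be virtual, classes can’t be sealed, and collections have to be exposed as interfaces.

using System.Collections.Generic;

public class Order { /* details omitted */ }

public class Customer
{
    // NHibernate wants a no-args constructor (it can be protected)
    protected Customer() { }

    public Customer(string name)
    {
        Name = name;
        Orders = new List<Order>();
    }

    // Everything virtual so the runtime proxy can override it,
    // which also means the class cannot be sealed
    public virtual int Id { get; protected set; }
    public virtual string Name { get; protected set; }

    // Declared as IList<Order>, not List<Order> or a read-only wrapper,
    // so NHibernate can substitute its own persistent collection type
    public virtual IList<Order> Orders { get; protected set; }
}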

This worked for several months, before the puritan dogma of NHibernate broke through the Castle ActiveRecord abstraction and bit me very hard; so I went back to the drawing board.

I decided to use an object database: DB4O. Development has been incredibly fast ever since; everything is so darn easy that I haven’t thought about persistence in months. Everything just works.
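
To show what I mean by “everything just works”, here’s a minimal db4o sketch. The Customer class and file name are mine, and the API shown is db4o’s classic factory style, which may differ slightly between versions:

using System;
using System.Collections.Generic;
using Db4objects.Db4o;

class Customer
{
    public string Name;
    public Customer(string name) { Name = name; }
}

class Db4oSketch
{
    static void Main()
    {
        // Open (or create) the single-file embedded database
        using (IObjectContainer db = Db4oFactory.OpenFile("app.db4o"))
        {
            // No schema, no mapping files: just store the object
            db.Store(new Customer("Ada"));

            // Native query: a plain predicate over your own class
            IList<Customer> hits = db.Query<Customer>(cust => cust.Name == "Ada");
            foreach (Customer cust in hits)
            {
                Console.WriteLine(cust.Name);
            }
        }
    }
}

No attributes, no XML, no virtual-everything; that’s the whole appeal.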

Ludicrous speed!

When a client came to me recently wanting to build a rich internet application, I quickly realized DB4O was the wrong solution. For one thing, it’s single-threaded, so it definitely doesn’t fit the internet world; I read about a few people who tried it anyway, with poor results. Additionally, object databases are (sadly) not really mainstream, so it just wasn’t a good fit. I really wanted to use Eloquera, but they don’t have LINQ support yet (and LINQ is just so darn productive that not using it is stealing from the client). So the hunt was on for a good ORM that was easy to use and supported LINQ.

Fortunately, I still had the evaluations I had done the previous year of every ORM tool I could find, so I just re-researched the top few tools and looked at the newest arrivals.

Mindscape’s LightSpeed had been a top contender before, and it came out on top this time around. Everybody who’s used it seems to love it, and having now used it myself, it really is a great tool with great support. The visual model designer is integrated directly into Visual Studio and supports both forward (to the database) and reverse (from the database) generation. You don’t have to use the code-generation feature at all (although I do), and what is really great is that it automatically generates data transfer objects (DTOs). These DTOs are all partial classes with many partial methods, so it’s easy to extend them and hook in custom code. It’s very well thought out. They have also responded to every forum post and e-mail very quickly.
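
To illustrate why the partial-class approach works so well: the generator owns one half of the class, and your customizations live in a separate file that regeneration never touches. (The class and hook names below are invented for the example; LightSpeed’s actual generated code looks different.)

// What a generated DTO half might look like (sketch only):
public partial class CustomerDto
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; OnNameChanged(); }
    }

    // Partial method: compiles away to nothing unless implemented
    partial void OnNameChanged();
}

// The hand-written half lives in its own file, safe from regeneration:
public partial class CustomerDto
{
    partial void OnNameChanged()
    {
        // hook in custom validation, logging, change tracking, etc.
    }
}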

My client is happy because I am very productive (and he pays by the hour): I can easily design new entities and use the automatically created DTOs to send them to the Silverlight client via WCF RIA Services (this only required a small change to their code-generation templates). I spend most of my time focused on the actual Silverlight application and very little time worrying about database persistence and wire transmission. Development the way it should be.

I should mention that I still designed the database-access layer so that it completely encapsulates the use of LightSpeed (although it does expose the automatically generated DTOs). I believe this is just good layered design, although it would in theory also allow us to switch out LightSpeed and use a different ORM (what?)
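
Concretely, that encapsulation is just a thin repository-style interface along these lines (names hypothetical, reusing the CustomerDto idea from the sketch above); only the implementing class references any LightSpeed types:

using System.Collections.Generic;

// The rest of the application codes against this interface;
// the LightSpeed-backed implementation lives behind it.
public interface ICustomerRepository
{
    CustomerDto GetById(long id);
    IList<CustomerDto> FindByName(string name);
    void Save(CustomerDto customer);
}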

Mindscape’s offer

While LightSpeed is Mindscape’s flagship product, their other products don’t really interest me that much. I’ve already got the Actipro WPF property grid (and I don’t even need it for my software); their WPF Flow and Star Diagrams look interesting, but I don’t need either. Their “Essential WPF Control set” is underwhelming, especially when compared against Actipro’s WPFStudio, which is great and also comes with themes. So while they have offered me a free license to any of their other tools for blogging about them, I just don’t need any of them. I wanted to blog about them anyway, because LightSpeed is cool. 🙂

Your free Mindscape license

If you’d like the free license that Mindscape has offered, drop a comment below; I’d love to hear what you’re currently using for persistence (if anything) and why you want to try something different.

Object databases

Speed: For object-database sceptics: in my tests, the object database Eloquera was just as fast as the LightSpeed-over-SQL-Server combination, and LightSpeed is very fast. Microbenchmarks never tell you about real life, but my little test was multi-threaded, used multiple entity types, and inserted and updated hundreds of thousands of entities. The guys over at Eloquera are also doing a great job.
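
For the curious, the general shape of that kind of test is sketched below. This is not my actual benchmark code; saveEntities stands in for whichever persistence API is being measured:

using System;
using System.Diagnostics;
using System.Threading;

class InsertBenchmark
{
    static void Run(Action<int> saveEntities, int threads, int perThread)
    {
        Stopwatch sw = Stopwatch.StartNew();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++)
        {
            // Each thread hammers the database independently
            workers[i] = new Thread(() => saveEntities(perThread));
            workers[i].Start();
        }
        foreach (Thread t in workers) t.Join();
        sw.Stop();
        Console.WriteLine("{0} inserts in {1} ms",
                          threads * perThread, sw.ElapsedMilliseconds);
    }
}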

Licensing: With Eloquera’s new licensing model I’ll probably be switching my own project over to Eloquera when they get LINQ support, because DB4O’s draconian royalty-based licensing is cruel and unusual punishment for a small business struggling to get started. (And sadly the .NET version of NeoDatis has completely stalled, although if I start making cash I’m tempted to donate some to the NeoDatis guys to see if we can’t get it going again.)


PlasticSCM Review

Well, I’m finally giving up. I wanted it to work. It looks great, and the advertised features are very impressive. I have now spent about 3 to 4 days trying to get PlasticSCM up and running, and my work days usually run 9am-5pm and 8pm-11pm, so that’s a lot of time.

Unfortunately, I just can’t get it working. I also have some serious concerns:

  • There is one support guy in the USA.  He’s fantastic, and knows his stuff inside and out.  But there’s only one of him.
  • Lots of little bugs and usability issues.
    • I have a screenshot of PlasticSCM showing my revisions all happening tomorrow, while the Vista clock clearly shows today’s date (and the server and the client are both on my desktop, where I took the screenshot).
    • If you want to pull changes from another machine (my laptop in this case) you do a “pop”, because that apparently is the opposite of pushing changes to another server. If the support guy hadn’t told me this, I never would have thought to apply a stack-data-structure analogy to grabbing changes from another machine.
    • Right-clicking on the root of the repository and saying “Check-in (recursive)” doesn’t actually check in all the changes. I discovered that changes to files that I had not explicitly checked out were completely (and silently) ignored. However, you can right-click on those same files individually, say “Check-in”, and it will check them in. So the recursive check-in command doesn’t check in all the files you’ve changed, yet the very same command on a single changed file will. Software should be polite, it should at the very least be consistent, and better still it should ask whether I want to include files I didn’t check out and offer to show them to me. The software should adapt to my needs, not vice versa.
      (BTW, according to the support guy the solution is to click on the file root, say “show changes”, select them all, and then say check-in. That’s fine, but I really dislike the changing semantics of “check-in” and the silent failure to check in modified but not explicitly checked-out files, which many other revision control systems handle.)
  • I never did get the Subversion import working. They offered to do it for me if I sent them my Subversion repository, but I just can’t bring myself to send all of my code to a third-party company without lots of legal paperwork in place.
  • Their website is under heavy construction, and apparently things like the 1-800 number got forgotten; it’s still not there, three days after I mentioned it to them. There should be strict processes for reviewing changes before pushing them out to the production website.
  • The documentation is dramatically wrong (see the ignore.conf post below).

I know software has bugs: ALL software has bugs. My software has bugs. If the PlasticSCM date/time bug had been the only issue I’d encountered, I’d have no problem paying for and using their product. But it wasn’t; it was all of the above and more. As I learned in my previous job as a top-level systems architect and lead developer at a very large financial services company, it is really important not to mess up the basics, and with the right processes in place it’s not too hard either.

There are a few things I think they could easily do better:

  • They need more than one person on support.
  • Make the support phone line reach a person: for my own business I use RingCentral, so if somebody phones the 1-800 number it rings both my land line and my cell phone at the same time.
  • To be honest, I don’t really care if there is bad grammar in the documentation; I understand we live in a global world. It’s not terribly impressive, but I understand. However, the documentation should not be flat-out wrong. Have the developer who wrote the feature either write the documentation for it (if their English is good enough) or check the documentation after the technical writer has written it.
  • Implement strict processes and controls for the website; it’s the first place a customer will come, and spelling mistakes, broken forms, and missing support phone numbers should never happen. There are lots of services to check that a website is up and running, including ones that submit forms; on the back end, any request e-mail containing, say, “FROM TEST SERVER” plus a really long, silly (constant) number could be automatically deleted. Additionally, before pushing out a new version of the website, have somebody run through a checklist of features to test, including submitting forms.
  • Do usability testing, and do it often. I do this as often as possible, especially after implementing a new feature. My experience is that people are always very happy to help out and give you their advice! I’ll post more on usability testing soon, but it’s very easy to do and personally I find it very fun and rewarding.

I think that Codice Software has a potentially really great product, and I’ll definitely check out PlasticSCM again in a few more versions. I think the easy and very visual branching is fantastic, and I love the distributed features (although sadly I didn’t get to try them out). I wish them all the best.

Robert

PS.  The version I was trying to use is 2.8.163, and today’s date is 2009-12-08.

The PlasticSCM Evaluation: Not good so far, problems with ignore.conf

So I’ve downloaded PlasticSCM and I’m giving it a whirl, and sadly, so far I’m not impressed, and I don’t even have my files in it yet!

After dealing with Subversion bugs and corruption the other day, I thought things would be smooth sailing. Unfortunately, the PlasticSCM Subversion importer did not import everything correctly, even after all that work. So instead I just exported the current HEAD revision from Subversion directly into the Plastic workspace and then did an “Add (recursive)” to add everything.

Of course, it added all my bin, obj, publish, etc. folders too, which is no good, so I went looking in the documentation for how to ignore files and easily found the information. I followed the documentation, and it didn’t work.

I posted on their forum, I Googled, I phoned their support number, and I e-mailed their support. Nothing.

So I started to play with it, and I eventually got it working.

Apparently the exclusion patterns are not compared to the file name or to the folder name as described in the documentation; instead, the exclusions are applied to the entire path! That’s why I couldn’t just specify “desktop.ini”: the full path doesn’t match it! Instead, *desktop.ini works, because that will ignore any desktop.ini file on any path.

Powerful, yes; documented, no. This is very different from what the documentation and the release notes say!

If anybody out there cares, here’s my now-working ignore.conf file:

# RAM: The PlasticSCM ignore file, defining the paths to exclude
*\bin*
*\obj*
*\tmp*
*\publish*
*.pdb*
*\Performance Snapshots*
*\_ReSharper*
*\TestResults*
*.user
*desktop.ini
*~*.docx

Here is what really worries me:

  1. The documentation is dramatically wrong.
  2. Only one person replied to the forum thread.
  3. That person was wrong.
  4. When I called their support phone number I got an answering machine.
  5. When I e-mailed their support I got absolutely no response, not even an acknowledgement that they received my question. (I have still not received any response, more than 24 hours later.)
  6. Google only had links to the incorrect documentation and release notes.

Maybe I will not buy this after all, despite the cool videos…

So far this entire process has wasted about 5 hours of my time, all in an effort to save the 30 minutes every week I lose to Subversion problems. This is not good…

Switching to PlasticSCM & Dealing with Subversion corruption

So I’m switching over to PlasticSCM, and I’m trying to import my existing Subversion repository.

The PlasticSCM importer was easy enough to configure, and at first I pointed it at the VisualSVN Server bin directory. That didn’t work very well; I immediately got a null-pointer exception.

So I downloaded the latest Windows binaries from http://subversion.tigris.org/getting.html#binary-packages

That worked better, but after a little while the import reported an error. At first I thought it was a problem with PlasticSCM, but it wasn’t: the importer had hit a problem running the command:

svn log -v https://myserver/myproject

So I ran this command from the command line and eventually got this error back from Subversion:

Could not read chunk Size: Secure connection truncated

After some Googling demonstrated that other people had this problem and nobody had a solution, I configured VisualSVN Server to use a non-secure connection, which then led to this error:

Could not read chunk size: connection was closed by server

Okay… So then I got smart and tried something slightly clever, bypassing the server entirely:

svn log -v file:///d:/dev/Subversion/myproject

Now I finally got the REAL error message:

svn: Malformed file

So then I tried this:

svnadmin verify d:\dev\Subversion\myproject

But that reported that every revision verified just fine!? Yet when I run “svn log -v” again, revision 147 still bombs, so it looks like “svnadmin verify” is not really doing a very good job. The “svnadmin recover” command does nothing; it completes almost immediately, saying that everything is fine. (At this point I am really not happy with Subversion: how many other internal corruptions are there?!) I tried doing a dump of the repository:

svnadmin dump d:\dev\Subversion\myproject > d:\tmp\SubversionRepository.dump

but that also failed on revision 147.

Back to Google! Apparently people just hope that the corruption happened early in the project’s life and then drop all the revisions before that point: http://svn.haxx.se/users/archive-2006-10/0215.shtml. Sigh.

So then I ran:

svnadmin dump --revision 148:HEAD d:\dev\Subversion\myproject > d:\tmp\SubversionRepository.dump

and got some error messages saying that things referenced revision 145, which is earlier than where I’m now dumping from, so loading the dump into an empty repository would fail. What’s really odd is that it didn’t stop there; it just kept going, so if I hadn’t been watching I might not have noticed that the dump was bad. So I had to run it again, skipping even more revisions and starting at 153, a full 8 revisions AFTER the corruption.
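
For reference, the second attempt was the same dump command with the later starting revision:

svnadmin dump --revision 153:HEAD d:\dev\Subversion\myproject > d:\tmp\SubversionRepository.dump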

What good is a revision control system if the revision history silently becomes corrupted and the only solution is to drop all the revisions before the corruption!?!? Sadly, I’m not alone.

So with my partial dump completed I did a load into a new repository:

svnadmin create d:\dev\Subversion\myproject_fixed
svnadmin load d:\dev\Subversion\myproject_fixed < d:\tmp\SubversionRepository.dump

This command took a while to run, but it seemed to be successful. I can now run svn log -v against the repaired repository.

And the PlasticSCM importer is now working. Goodbye, Subversion!

Subversion says: I’ve just sucked 2.5 hours of your life away.
[Robert cries and moans in pain]