Tuesday, November 29, 2005

Do you check in your OSS source into trunk?

How should you use Open-Source Software? What I mean is: should you treat OSS source code as part of your project's source code, keeping it around and maintaining/updating it yourself, or should you just use the libraries in your project and let their authors maintain them? This is a question for a lot of development teams, because by now almost everyone knows the pros and cons of OSS. The bottom line is, you will use it at one point or another. The bigger question is how to use it to your project's benefit without carrying too much overhead.

For me, I want to maintain as few lines of code as possible. So my answer is: don't ever give me the OSS's source code. Give me your project's source code plus the OSS assemblies that are in use. I should be able to check out your project's trunk and go. Don't assume I have anything installed after I download your project's source code. If at some point in the future the code does not run because of an OSS bug, fix the bug and submit it back to the community. If this happens more than a few times, use something else.

This solves the problem of trying to maintain a chunk of code that the team knows nothing about. At a minimum, it spares any new dev from downloading a 100MB trunk of which 80MB is OSS source code (admittedly more a nuisance than a problem).
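To make the "check out and go" idea concrete, here is a hypothetical trunk layout (the names are made up for illustration): the OSS binaries live in the tree, but their source does not.

```text
trunk/
  build.bat             builds everything; no machine prerequisites assumed
  src/                  your project's source only
  lib/                  binary OSS dependencies, checked in as-is
    nunit.framework.dll
    NMock.dll
```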

One of the good things about OSS is that there is usually an abundance of alternatives out there. Take functional testing as an example: Selenium and WATIR. There are things Selenium is good at (cross-browser testing), and there are things WATIR is good at (more powerful script coding). Mock objects, anyone? NMock, EasyMock.NET, Rhino.Mocks, etc. Code coverage? There are even TWO NCovers out there that have the same name...

I think the pain of merging a tweaked, home-brewed version of an OSS project back into the latest version down the road is much greater than the pain of using multiple OSS libraries in your code base. For the latter, at least the breaking changes are documented.

If it hurts to use something in solving your problem, don't use it. Problem solved. The problem you are getting paid to solve is delivering business value.

Saturday, November 19, 2005

The best Visual Studio.NET blogging companion

Check this out: all I did in VS 2003 was right-click, select "Copy as HTML...", click OK, and Ctrl-V in Blogger. All of a sudden you get this stylish code colorization in a blog:

    public class Bootstrap : IDisposable {
        private IMutablePicoContainer picoContainer;

        public static void Main() {
            try {
                using (Bootstrap bootstrap = new Bootstrap()) {
                    IMainForm mainForm = bootstrap.BuildMainForm();
                    Application.Run((Form) mainForm);
                }
            } catch (Exception e) {
                // ...
            }
        }
    }

Awesome VS.NET Add-in! CopyAsHTMLSource

Web 2.0... the what?

Recently I have been hearing more and more about Web 2.0. So what the hell is it? It seems to me no one really knows exactly what it is, but all the opinions point in the same direction: enhanced web application user experience. I guess the industry is looking for the next buzzword after AJAX.

In my company's forum, a couple of web sites have been mentioned, and they really impressed me with what they mean by "user experience":

Man I am falling in love with the head-shaking textbox in script.aculo.us!

Monday, October 10, 2005

How to mock out event calls in NMock?

When it comes to .NET eventing, a lot of developers balk, not knowing how to test it. There are two things to test: one, that the event handler methods are correctly wired to the corresponding events; and two, that the event handler code does what it's supposed to. The first is hard to test, because the wiring of a method to an event is internal to the containing class. The second is easier, because you could stub out the event handler method and make sure it is getting called, but pure mockists would dislike this approach.

Consider the following example:

using System;

public interface IBattery {
    event EventHandler Low;
    event EventHandler Depleted;
}

public class Battery : IBattery {
    public event EventHandler Low;
    public event EventHandler Depleted;

    public void SomeMethodThatConsumesBattery() {
        if (IsLowBattery("10%")) {
            OnLowBattery(this, EventArgs.Empty);
        }
    }

    protected virtual void OnLowBattery(object sender, EventArgs e) {
        if (Low != null) {
            Low(sender, e);
        }
    }

    protected virtual void OnDepleted(object sender, EventArgs e) {
        if (Depleted != null) {
            Depleted(sender, e);
        }
    }

    private bool IsLowBattery(string level) {
        // Hardware check elided for the example.
        return true;
    }
}

public interface ILaptop { }

public class Laptop : ILaptop {
    private IBattery battery;

    public Laptop(IBattery battery) {
        this.battery = battery;
        battery.Low += new EventHandler(OnLowBattery);
    }

    protected virtual void OnLowBattery(object sender, EventArgs e) {
        // Count down 15 mins before shutdown!
    }
}
We want to unit test that the events are wired for our Laptop class, but we want to avoid changing any of the classes for the purpose of unit testing, since I am a big advocate of "never change your production code just for testing." You may argue that I have a "protected virtual void" event handler method there to leave room for stubbing, but as far as event handling methods go, I think it is actually a good idea to allow my subclasses to override and extend my default implementation. This is also the standard and encouraged way of declaring event handling methods when writing custom ASP.NET server controls. Check out Nikhil Kothari's book.

So, solution one: create a stub class for our Battery class for testing Laptop, and add extra methods to manually raise the events:

public class BatteryStub : Battery {
    private bool lowBatteryExecuted = false;

    public void RaiseLowBattery() {
        // Virtual dispatch lands in the override below.
        OnLowBattery(this, EventArgs.Empty);
    }

    protected override void OnLowBattery(object sender, EventArgs e) {
        lowBatteryExecuted = true;
        base.OnLowBattery(sender, e);   // actually raises the Low event
    }

    public void Verify() {
        if (!lowBatteryExecuted) throw new ApplicationException();
    }
}

public class LaptopTests {
    public void StartCountdownWhenLowBattery() {
        BatteryStub batteryStub = new BatteryStub();
        ILaptop laptop = new Laptop(batteryStub);

        batteryStub.RaiseLowBattery();

        // Assert laptop countdown started.
        batteryStub.Verify();
    }
}

That's a lot of code just to unit test that a single event is wired correctly. Also, this stub is really doing a lot of a mock's work. Look at the Verify() and the simplistic stubbed event handler method. The reason they are there, and are so simple, is that in testing we are not interested in what they do, but in the fact that they are getting called. Now, duplicate this kind of test stub class for every object that has events, and you will quickly lose your appetite once you see how many you have to write for all your domain objects.

Fortunately there is a second solution, if you use NMock 1.1:

(credit to my co-worker and talented friend Levi Khatskevitch)

using System;
using System.ComponentModel;
using NMock;

public class DynamicEventMock : DynamicMock {
    private const string ADD_PREFIX = "add_";
    private const string REMOVE_PREFIX = "remove_";

    private EventHandlerList handlers = new EventHandlerList();

    public DynamicEventMock(Type type) : base(type) { }

    public override object Invoke(string methodName, params object[] args) {
        if (methodName.StartsWith(ADD_PREFIX)) {
            handlers.AddHandler(GetKey(methodName, ADD_PREFIX), (Delegate) args[0]);
            return null;
        }
        if (methodName.StartsWith(REMOVE_PREFIX)) {
            handlers.RemoveHandler(GetKey(methodName, REMOVE_PREFIX), (Delegate) args[0]);
            return null;
        }
        return base.Invoke(methodName, args);
    }

    public void RaiseEvent(string eventName, params object[] args) {
        Delegate handler = handlers[eventName];

        if (handler == null) {
            if (mockedType.GetEvent(eventName) == null)
                throw new MissingMemberException("Event " + eventName + " is not defined");
            else if (Strict)
                throw new ApplicationException("Event " + eventName + " is not handled");
        } else {
            // Fire every handler wired to this event with the supplied arguments.
            handler.DynamicInvoke(args);
        }
    }

    private static string GetKey(string methodName, string prefix) {
        return string.Intern(methodName.Substring(prefix.Length));
    }
}

Now, in your test class:

public class LaptopTests {
    public void BatteryLowIsRaised() {
        DynamicEventMock mockBattery = new DynamicEventMock(typeof(IBattery));

        ILaptop laptop = new Laptop((IBattery) mockBattery.MockInstance);

        mockBattery.RaiseEvent("Low", mockBattery.MockInstance, EventArgs.Empty);

        // Assert the laptop instance's 15 mins count down started
    }
}

Should the Low event not be wired, this test fails. You have explicitly told the mock object to raise its event, and assuming your events are wired to the correct event handlers, you have control over when to fire them.

There are drawbacks, of course. Notice the event name is represented by a string, making event renaming a pain, just as it is for NMock method expectations. My only advice is to try EasyMock.NET if that is a real pain for you.

Ruby on Rails demo

A lot of buzz has been generated by Ruby, and given that Ruby on Rails pushes that hype to another level, it sure is worth a look. This video from rubyonrails.com is a great tutorial on what Ruby on Rails is and how it works.


Friday, September 23, 2005

Container framework in .NET - yes - 1.1

Haven't blogged for a good while... =P

I have been looking at the System.ComponentModel namespace. It is obviously an area I have constantly overlooked. What is in there? There is a ton in there that the Visual Studio .NET IDE uses extensively, so that means it is useless to us as developers, right? Maybe not.

Recently I have found myself interested in digging deeper into PicoContainer.NET. It allows you to design better OO business objects by decoupling your business objects from one another (through IoC). For those who aren't familiar with IoC (Inversion of Control), in this context it basically means that instead of having object A instantiate and use object B, object A asks for an instance of object B in its ctor at construction time. How this helps OO design is that object A no longer has an irreplaceable link to object B; they are decoupled, and we can test object A easily by either stubbing or mocking out object B.

In the above example, object A and object B are "Components". They are put in a "Container", and Pico will automagically instantiate object B when you request an instance of object A.
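A minimal sketch of what that looks like with PicoContainer.NET (the API names here are from my memory of the 1.x release, so treat them as assumptions; ObjectA and ObjectB are just the placeholder classes from the example):

```csharp
using PicoContainer;
using PicoContainer.Defaults;

public class ObjectB { }

public class ObjectA {
    private readonly ObjectB b;
    public ObjectA(ObjectB b) { this.b = b; }   // dependency asked for in the ctor
}

public class Example {
    public static ObjectA Wire() {
        IMutablePicoContainer container = new DefaultPicoContainer();

        // Register both components; no explicit wiring code needed.
        container.RegisterComponentImplementation(typeof(ObjectB));
        container.RegisterComponentImplementation(typeof(ObjectA));

        // Pico inspects ObjectA's ctor, sees it wants an ObjectB,
        // instantiates one, and passes it in.
        return (ObjectA) container.GetComponentInstance(typeof(ObjectA));
    }
}
```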

Back to the System.ComponentModel namespace: it also contains the interfaces IContainer and IComponent. Now things get interesting. Each IComponent is "sited" after being added to an IContainer (the Site is like the glue between the Container and the Component). A component can then call base.Site.GetService(typeof(AnotherComponent)) to access another component's functionality. In our example, object A's ctor can call base.Site.GetService(typeof(ObjectB)) to retrieve an instance of object B and create itself, without having to know how to "hard create" an instance of object B.

How does it help testing? For testing object A, one can stub or mock out its Site property and provide a testing implementation when its GetService() is called. This pattern (or good practice) is very important in designing decoupled OO systems.
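A hedged sketch of the Site-based flavor follows. Note one assumption: the stock System.ComponentModel.Container only hands out its own standard services, so a real design would likely need a custom container that overrides GetService to resolve sibling components. This only illustrates the shape of the pattern:

```csharp
using System;
using System.ComponentModel;

public class ObjectB : Component {
    public void Help() { /* ... */ }
}

public class ObjectA : Component {
    public void DoWork() {
        // Ask the container, through the Site glue, for a collaborator
        // instead of hard-creating it with "new".
        ObjectB b = (ObjectB) Site.GetService(typeof(ObjectB));
        b.Help();
    }
}
```

For testing, a stub ISite whose GetService returns a fake ObjectB is all object A needs.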

So when to use what? My take is that one should use Pico in the bootstrap class (public static void Main(args[])), and since it is considered harmful to use Pico anywhere deeper than the bootstrap class, I would consider using the System.ComponentModel stuff everywhere else I think a Container-Component pattern will help decouple my design.

Some excellent reads on the topic:
[Urban Potato] Geek: System.ComponentModel
[Daniel Cazzulino] Lightweight Containers and Plugin Architectures: Dependency Injection and Dynamic Service Locators in .NET

Friday, June 10, 2005

Want to whip up some quick UI Tree?

I had to produce for one of my clients some form of UI Tree to help them focus on how various pieces of user interaction with the system are going to take place from a UI perspective. It went from paper bubbles to Visio-lizing it, and the client still didn't like it. Finally, I stumbled across this product that saved the day.

It is basically a map of thoughts organized in a click-me-and-drill-down-as-you-go application. The point is, it provides a very easy drag-and-drop tree-creating interface to do the job very quickly.

At the end of the day this did wow the people who saw it, and it is extremely easy to use. It got the job done. I am, however, curious about the many ways other people create these so-called UI Trees.

Tuesday, May 24, 2005

Connecting a Pocket PC emulator to host internet in Whidbey

This tip is for a development environment with Whidbey Beta 2. I do not know if it will work (probably won't) in a Virtual PC environment.

If you are trying to connect your Pocket PC emulator to the internet, because you are developing/deploying perhaps a web service on your host system and you want your emulator to be able to consume it while debugging, here are a few things, in sequence, that I tried to get it to work:

- Restart your computer. Yes.
- Open your .sln by double-clicking it in Explorer, not by opening Whidbey and then selecting it from File/"Recent Projects". Apparently there is a funky difference.
- Run F5 to bring up the emulator.
- It will probably tell you "Cannot connect to device", but that's okay.
- In Whidbey, go to Tools/"Device Emulator Manager", set your emulating device to Cradle.
- Reset the state of your emulator.
- Close down the emulator.
- Run F5 again to bring it back up.
- In your emulator Connections/Advanced/"Select Networks", select "My Work Network" in the first droplist for programs that need access to the internet.
- Go to its Edit/"Proxy Settings", uncheck the box "This network connects to the internet". Notice this box will automatically be checked again by the emulator once it can connect to the internet.
- And finally, verify that you have turned your firewall software off.

This would work in the emulator IE whether you type "http://localhost/MyService.asmx" or the equivalent URL with your host machine's name. If I were you, I would first check that I could even reach Google in the emulator IE before testing the web service.

Tuesday, May 17, 2005

Dynamic deployment config settings

Using Visual Studio .NET 2003, there are different options for dynamically choosing app/web.config settings depending on the environment a build is deployed to. One is to use preprocessor directives like #if, #elif, and #endif in your code. I don't like this approach, period. It is really ugly. What if you have 5 deployment environments? You will end up with values under keys like "dbConnStringDev", "dbConnStringTest", "dbConnStringQA", "dbConnStringProd", etc.

Another option: inside your app/web.config file, create just one key-value pair for each setting regardless of how many deployment environments you might have, and then after each deployment, manually (or automated by some scripts) go and edit all these settings. Well... this is just as bad, as a single mistype would deem a deployment failed.
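A typical appSettings entry of this sort looks something like the following (key and value are placeholders for illustration):

```xml
<configuration>
  <appSettings>
    <!-- One key per setting, same key name across all environments. -->
    <add key="dbConnString" value="server=localhost;database=MyApp;" />
  </appSettings>
</configuration>
```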

A third option: if you have a separate set of physical machines per deployment environment, one could use each machine's machine.config as the definitive setting storage, and then override those settings with a local app/web.config in, for example, each developer's .NET projects. A question arises if, say, you have five different deployment environments (dev, test, QA, stage, and prod) but only two boxes for your web services server across all of them: how do you direct traffic for dev, test, and QA to box A while stage and prod go to box B? It's the same problem all over again.

The solution (well, not really) I have found so far is this. In the .csproj that houses an app/web.config file, you can find the following attributes for each solution configuration (ConfigurationOverrideFile is the interesting one):

<Config
    Name = "Release"
    AllowUnsafeBlocks = "false"
    BaseAddress = "285212672"
    CheckForOverflowUnderflow = "false"
    ConfigurationOverrideFile = "app.release.config"
    DefineConstants = "TRACE"
    DocumentationFile = ""
    DebugSymbols = "false"
    FileAlignment = "4096"
    IncrementalBuild = "false"
    NoStdLib = "false"
    NoWarn = ""
    Optimize = "true"
    OutputPath = "bin\Release\"
    RegisterForComInterop = "false"
    RemoveIntegerChecks = "false"
    TreatWarningsAsErrors = "false"
    WarningLevel = "4"
/>

If you create a new .config file for each deployment environment you have, create a new solution configuration for each (eg. Debug, Release, QA, Stage, Production), point each configuration's ConfigurationOverrideFile attribute at the appropriate .config file, then create a Setup project in your solution and build it with the solution configuration you want for your deployment, the output .msi will include *a* app.config file: the one you specified in ConfigurationOverrideFile. So in the example above, app.release.config will be used and renamed to app.config when the output .msi is deployed onto the environment.

This is so far the cleanest solution for me. But it does tie you into using a Setup project for your deployment. And from what I have heard, .msi has not been a good deployment strategy for complex application deployments (the fact that there is a relational database built into an .msi does not make it suited for the enterprise level if it is complex to use).

Still looking into alternatives... on a side note, the fact that Microsoft is switching from "no-touch deployment" to "one-click deployment" tells me the marketeers at MS are going one step too far in their marketing efforts =P

Thursday, May 05, 2005

Whidbey Beta 2 Visual J# Redistributable, cannot load package problem

I was having a tough time installing Whidbey Beta 2, not onto my Virtual PC instance, but onto my host OS, Windows XP Professional. I installed it three times in total; fortunately I had backed up my laptop before I acted.

The problem I was having was that during installation, the wizard told me that the component 'Microsoft Visual J# Redistributable Package 2.0 Beta 2' installation was unsuccessful. The Whidbey install went on, but when I opened a .NET 2.0 solution, it gave me the infamous package loading error:

Could not load type 'Microsoft.VisualStudio.Shell.Interop.IVsRunningDocumentTable2' from assembly 'Microsoft.VisualStudio.Shell.Interop.8.0, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'

and then I could not open the design view of a simple winform page in my Smart Device Compact Framework project.

The solution: I restored the image I took earlier, downloaded this cleaner tool from Aaron Stebner, and ran it. The next time I installed Whidbey it no longer gave me the J# Redistributable problem. My winform page now loads fine.

Thursday, April 28, 2005

Songs of the Extremos

My co-worker Shaun Jayaraj showed me the Songs of the Extremos today. This is funny stuff... Check this one out first:

Hey Dude (sing to the tune of - "Hey Jude" by the Beatles)

Hey dude
Your code smells bad
Go refactor and make it better
That tests are requirements
Then you can begin
To make it smell better

While we're talking funny about Agile, here is another good one, The Agile Manifesto (Hip-Hop Remix).

Individuals and interactions over processes and tools

translates into:

Peeps and tradin' rhymes ova' fake moves and bling-bling

Do you dig it?

Wednesday, April 27, 2005

The Guerrilla Guide to Interviewing

A co-worker of mine, Alex Pukinski, sent me this blog entry a few days ago about a guide to hiring the right software developers for your company. I find it very interesting and agree with much of what it says. Having done a few in-office interviews for ThoughtWorks and looking back at them, I could probably use this guide a little more in making my choices...

Friday, April 22, 2005

nunit.core.dll and GAC

I came across a useful tip on using NUnit in a purely XCOPY fashion. The NUnit download comes with an installer. During installation, it sneakily GACs nunit.core.dll for you. This may or may not cause you problems if your development source tree also contains your NUnit assemblies. By that I mean your build scripts might be structured so that they do not rely on each machine having NUnit installed before building, but instead use the tree's own source-controlled, XCOPY'ed NUnit exe and co.

If your machine does not have NUnit installed, but you get it through your source control system, when you run the test assemblies through the NUnit exe, you might get a nasty error (TDA: actual error message) from NUnit saying it cannot find nunit.core.dll or something.

One solution that my colleague and NUnit contributor Mike Two suggests is, in the projects where you use nunit.framework.dll, to also add a reference to the nunit.core.dll of your source tree. This guarantees that NUnit will be able to find it and load it into its AppDomain. This way you never need to install NUnit and rely on the assembly being GAC'ed.

Thursday, April 14, 2005

Lovely business words...

There are a few words in the business IT world I dislike. The beauty is that in whatever order and however you put these words together in a statement, they deliver the exact same message: a broad and meaningless one. Here are the candidates: enterprise, business, solution, architecture, infrastructure, project, model.

eg. Yesterday I worked on the enterprise business architecture of the infrastructure solution of the project.
eg. I am the architect of the business solution infrastructure team, managing the project's business model.

Thursday, April 07, 2005

Google Suggest with ASP.NET 2.0

This interesting blog post talks about how to do the Google Suggest way of communicating with the server without triggering a postback in ASP.NET 2.0. Pretty neat.

Tuesday, April 05, 2005

How to create build scripts for projects

Here is an article from Mike Roberts on how to set up build scripts for a .NET development project. I have had the pleasure of playing with .NET 2.0 on my current project, and having built the build scripts around .NET 2.0 and MSBuild, I can definitely say it is not exactly fun.

For CruiseControl.NET to display compile errors from executing "msbuild project.sln", an XmlLogger has to be written and supplied on the command line. The code for the XmlLogger no longer worked when I updated my Whidbey from the Dec CTP to the Feb CTP (which updated the .NET Framework to build 50110), and therefore it had to be rewritten.
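For the curious, an MSBuild logger is just a class implementing Microsoft.Build.Framework.ILogger. A stripped-down sketch of the kind of XmlLogger involved (my own illustration, not the actual code used on the project) might look like:

```csharp
using System;
using Microsoft.Build.Framework;

public class XmlLogger : ILogger {
    private LoggerVerbosity verbosity = LoggerVerbosity.Normal;
    private string parameters;

    public LoggerVerbosity Verbosity {
        get { return verbosity; }
        set { verbosity = value; }
    }

    public string Parameters {
        get { return parameters; }
        set { parameters = value; }
    }

    public void Initialize(IEventSource eventSource) {
        // Hook the build events we care about and emit them as XML
        // so CCNet's XSL stylesheets can render them on the build page.
        eventSource.ErrorRaised += new BuildErrorEventHandler(OnError);
    }

    private void OnError(object sender, BuildErrorEventArgs e) {
        Console.WriteLine("<error file=\"" + e.File + "\" line=\"" + e.LineNumber
            + "\">" + e.Message + "</error>");
    }

    public void Shutdown() { }
}
```

It would be supplied on the command line with something like "msbuild project.sln /logger:XmlLogger,XmlLogger.dll".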

After that, everything seems back to normal... at least I am able to show compilation and NUnit errors on my CCNet page.


I am starting this blog today to write about the pain, pleasure, hate, and love of the various forms of technology that I have encountered in my life. Working for ThoughtWorks has enabled me to mature into a professional developer, which requires constantly learning and experimenting with the latest and greatest technologies. I truly enjoy working for this company...

And this is where my initialization routine begins... :-D