Tuesday, January 24, 2006

Few findings about Presentation Model

Almost every project that I have been on strives for code testability. By increasing code testability one can improve the design of the solution as well as continue to change code without worry. However, when it comes to increasing testability of the view in classic MVC, it is always hard to do. Microsoft proclaims that they have achieved MVC (or Model-ASPX-CodeBehind), but it fails to address the testability of the code in V and C: CodeBehind logic is extremely hard to test against. Both the Presentation Model (PM) and the Model-View-Presenter (MVP) patterns address this issue.

Having used both patterns for my past few projects, I think I am seeing some pros/cons. Today let's look at the Presentation Model first:

Presentation Model:
I like using the Presentation Model (PM) when I am developing a web application. In web apps, there are a couple of major headaches when it comes to increasing code testability. The first is carrying state across requests. The second is that in ASP.NET, the Page class gets created and disposed of on every request, making it very difficult to use an intermediary class (such as a Presenter) to "push" content from the model to the view, and hence creating a directional referencing problem.

PM employs a "pull" model, meaning the consumer class (aka the code-behind) will ask for data that the PM has prepared for it. Because of this model, at each Page cycle step (Page_Init, Page_Load, etc.) the view is in the driving seat as far as controlling when to reload the controls and what to load them with. The PM simply sits there and waits for someone to tap its shoulder and ask it to produce something. Notice the directional reference: the view has a reference to the PMs. The PMs do not know about the view, and do not push content out to force the view's state to be refreshed.
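
To make the direction of the references concrete, here is a minimal sketch of the "pull" model. All class and control names here are made up for illustration; they are not from any particular project:

```csharp
// A hypothetical Page PM: it prepares flattened, view-ready data
// and simply waits to be asked.
public class OrderPagePM
{
    public string CustomerName
    {
        // In a real PM this would pull the Domain Model out of Session
        // and flatten it; hard-coded here to keep the sketch small.
        get { return "Acme Insurance Co."; }
    }
}

// The code-behind holds the reference and drives the interaction.
public class OrderPage : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.Label customerNameLabel;
    private OrderPagePM pm = new OrderPagePM();

    private void Page_Load(object sender, System.EventArgs e)
    {
        // The view decides when to reload and what to load;
        // the PM has no reference back to the view.
        customerNameLabel.Text = pm.CustomerName;
    }
}
```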

Usually you will have one PM per ASPX page, which I call a Page PM. When the page gets busy, your Page PM is likely to become a ginormous 1,000-line class containing many properties for the many dynamic view controls to consume. Therefore, use sub-PM classes to better organize your code and avoid code duplication. In my experience, the hierarchy of sub-PM classes rarely goes deeper than three levels.

Most of your Page PMs will have a reference to the session object, which is most probably where the aggregate root of your Domain Model is stored. For a more complex Domain Model, the PM's work of flattening out this 3-dimensional beast, starting from the aggregate root, will make your view look extraordinarily simple.

Because PMs usually flatten out your Domain Model into some easily readable grid or string format, your Domain Model objects may not be the ultimate data-holder objects that your view's controls bind to. For example, you will obviously need a new object for a grid showing an insurance policy's type, coverages, each insured entity, and each coverage's premiums. (Hopefully these live in separate objects in your Domain Model.) Create what I call "Data Item" objects to facilitate data binding to your grids or lists. Data Item objects are just for holding your diced and sliced Domain Model data, and have no behaviors. Think of them as classes that represent a row of data in every single grid on your page. In the above example, your PolicyInfoDataItem class will have public properties of [PolicyType :string, CoverageName :string, InsuredEntityName :string, CoverageAmount :double]. The inspiration comes from DataGridItem and the various ListItem variants in the .NET Framework.
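
A sketch of what such a Data Item might look like, using the property list above (the constructor shape is my own choice; any behavior-less holder works):

```csharp
// Behavior-less holder for one row of the policy grid.
public class PolicyInfoDataItem
{
    private string policyType;
    private string coverageName;
    private string insuredEntityName;
    private double coverageAmount;

    public PolicyInfoDataItem(string policyType, string coverageName,
                              string insuredEntityName, double coverageAmount)
    {
        this.policyType = policyType;
        this.coverageName = coverageName;
        this.insuredEntityName = insuredEntityName;
        this.coverageAmount = coverageAmount;
    }

    // Read-only properties for the grid to data-bind against.
    public string PolicyType { get { return policyType; } }
    public string CoverageName { get { return coverageName; } }
    public string InsuredEntityName { get { return insuredEntityName; } }
    public double CoverageAmount { get { return coverageAmount; } }
}
```

The PM exposes a collection of these, and the grid's DataSource binds straight to it.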

Unit testing your Page PMs in NUnit is never easy, because they use the Session object, which is not available in NUnit testing. I have seen people use reflection to create a fake HttpContext object that stores Session data on the current thread, and then during [SetUp] set the current context (yes, HttpContext.Current does have a setter). Then your PM unit test classes have access to HttpContext.Current.Session.
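
If you are on .NET 2.0, there is a way to avoid the reflection trick entirely using SessionStateUtility. This is a sketch I have not battle-tested against every setup; the page name and session id are placeholders:

```csharp
// Requires: System.Web, System.Web.SessionState, System.IO
[SetUp]
public void SetUp()
{
    // Build a throwaway HttpContext (HttpContext.Current really is settable).
    HttpRequest request = new HttpRequest(
        "test.aspx", "http://localhost/test.aspx", "");
    HttpResponse response = new HttpResponse(new StringWriter());
    HttpContext.Current = new HttpContext(request, response);

    // Attach an in-memory session so the PM can read
    // HttpContext.Current.Session without a web server.
    HttpSessionStateContainer container = new HttpSessionStateContainer(
        "test-session-id", new SessionStateItemCollection(),
        new HttpStaticObjectsCollection(), 20, true,
        HttpCookieMode.UseCookies, SessionStateMode.InProc, false);
    SessionStateUtility.AddHttpSessionStateToContext(
        HttpContext.Current, container);
}
```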

Speaking of unit testing PMs, I find it to be much more work to use mocking to unit test them. Especially if you love constructor injection: having your PM's ctor take all those IEverything classes, and mocking them all out during unit tests, is a lot of work, because PMs are much more dependent on object state than loaded with behavior. As a result, I prefer state-based testing (Stubs or the Object Mother pattern) for unit testing PMs. One must also manage the creation and customization of the domain model's state carefully, because with state-based testing this setup code tends to get duplicated across your test classes easily.

Your view also should not directly instantiate or reference any Domain Model objects (as a guideline, not a rule). Whenever possible, gather primitive-type user input off the Page and pass it directly to the Page PM for actions. This consolidates exception handling routines (instead of scattering them all over your views), and also decouples your view from your Model, which you might later split into separate assemblies.

Your Page PM can also handle behaviors that your view requests. For example, whenever a button is clicked, its code-behind event handler will make a call to the Page PM's "Save" method, passing the necessary information gathered off the controls, and then the Page PM will delegate the responsibility of the actual save to the Domain Model layer.
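
In code, that button handler might look something like this. The control names, the Policy class, its AddCoverage method, and the session key are all hypothetical, just to show the delegation chain:

```csharp
// Code-behind: gather primitives off the controls, hand them to the PM.
private void saveButton_Click(object sender, System.EventArgs e)
{
    pm.Save(insuredNameTextBox.Text, double.Parse(amountTextBox.Text));
}

// Page PM: knows where the Domain Model lives and delegates the real work.
public class PolicyPagePM
{
    public void Save(string insuredName, double coverageAmount)
    {
        Policy policy = (Policy) HttpContext.Current.Session["Policy"];
        policy.AddCoverage(insuredName, coverageAmount);
    }
}
```

The view never touches the Domain Model; it only ever speaks primitives to the PM.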

Saturday, January 14, 2006

Agile story tips

Stories in an Agile project tend to be less talked about than Test Driven Development, Continuous Integration, and Food :-). But in fact, when they are done well they can profoundly affect the project team's productivity and the end users' satisfaction with the software. The following are a few findings from my experience with story management that will drive better software. Their importance is rated on a scale of 1 to 5 asterisks:

  • Stories should be a thin-thread from the frontend all the way to the backend. (*****)

    This helps the features being developed each iteration to be completed more consistently; consistency drives predictability, which increases visibility and helps the business prioritize and plan given the available time and resources.

  • Business analysts (or the sponsor/end user) should own the stories. During the IPM (Iteration Planning Meeting), stories should be signed off, meaning no changes should be made once developers start developing them. (*****)

    By freezing the requirements, developers can be much more productive. Compare it to a 100m race between two runners: one can go all the way from start to finish without stopping, while the other has to stop-and-go every 10m because he has to worry that the next 10m of track will be changed. Also, when someone owns the stories, after a story is developed its owner has the responsibility to verify the correctness of the solution. This requires better-written story specifications (tying back to freezing requirements), and in the end the developed story becomes exactly what the business wants.

  • Stories should be measured in terms of difficulties or story points, instead of ideal development days (IDD). (****)

    There are two camps of people when it comes to this bullet. One camp uses IDD to estimate the difficulty of a story, then uses a Load Factor to measure how far off the original estimates were. The other camp uses a Level of Difficulty such as small/medium/large, or Story Points such as 1-5, so they can measure purely based on yesterday's weather, and not on a number that someone was uncomfortably forced to make up.

    IMHO, the first camp's glossaries were created at "post-mortem," literally. Consider the following conversation after a project failure:
    Business: Why did you guys fail to deliver? You only delivered half of what I want.
    Developers: Because we were not productive.
    Business: Why?
    Developers: Too many damn meetings.
    Business: Hm... so in "ideal time" if you had no meetings you would be able to deliver?
    Developers: Oh yea...
    Business: So in hindsight, the original estimates you guys gave me were off by a "load factor" of 1/2. Next time I want new software I had better take that into account in my budget...

    The problem is that all these numbers are only meaningful within the context of that one project. But people being people, they like to carry these numbers into whatever project they walk into... because apparently they have been burned before. Next time the business asks for a budget for a brand new project, guess what: they will bump up the budget by half, because of that "load factor."

    By using the terms of the second camp, one is implicitly forced to think of these measurements in the context of the current project. When I say this story's difficulty is a Large, one has to ask: Relative to what? Of course, the answer is relative to the other stories of this project. When I say, Story A is a 1 and Story B is a 3, again you are forced to think in the context of the current project.

    You might ask: if you don't bump up the budget by half, doesn't the business not get all of what they asked for? The answer is yes. But that's the beauty of short, iterative releases. In the end, without bumping up the original budget, the business gets exactly what they asked for according to their business priority, perhaps 6 out of their 10 features. But since we delivered at least some of the features in early iterations, not only has the business saved money from those features being rolled out early, but the business is also in better shape to reposition itself to face more real-world challenges, and thus will pump more budget into continuing to develop the software to give them what they want.

  • If stories are small enough, then there is no need to task them out during an IPM (Iteration Planning Meeting). If stories are to be tasked out, the tasks should be estimated. As each of them is completed, actuals should be measured.

    Admit it: small estimates are much more accurate than big estimates. Therefore, if each story is tasked out into small chunks of time estimates, and after story completion we have the corresponding actual time spent, we can then find out how much work it really takes to complete, say, Story A, a medium-difficulty story or one that has 3 Story Points.

    So what are these estimates and actuals for? They are actual proof (or tracked history) of how we complete our stories. Let's say in Iteration One we have a story "Public user login" that has 3 Story Points. In that iteration (two weeks), at the IPM the development team estimates a total of 10 tasks to be done to complete that story, and they busted their asses to complete that and only that story. Then, in Iteration Seven, a similar story "Restricted user login" shows up. Relatively speaking, it also has 3 Story Points, since the two are about equally difficult. However, since most of the one-time tasks have already been completed, the actual number of tasks to complete this story in Iteration Seven might be just 3. Now the team can use the rest of the time to build other stories, and thereby achieve more Story Points. If it turns out the team achieved a total of 7 points in the end, then we say the team is kicking some ass and is more productive than it was in Iteration One. You will notice that the total time spent on all tasks between the two iterations is about the same (assuming no resource changes), but completed Story Points increased. From the business's point of view, it rocks, because they are seeing more stuff churned out by the team, in exactly the manner they want it.

    This brings up another very important point, if you notice...

  • For story difficulty measurements, whatever you use (IDD, Small/Medium/Large, Story Points), you should always estimate it using the same scale as you estimate the entire story list. (*****)

    This is the only way to measure whether the team is improving over the course of the development.

    Using the example in the last bullet: in Iteration One the team thinks that the "User login story" is a 3-Story-Point story, and in Iteration Seven the team completes a similar, also 3-Story-Point, "Restricted login story". Now if in Iteration Ten the business comes back and says they want a brand new but similarly difficult "CEO only login story", then despite the fact that in Iteration Ten, after doing those two login stories, this new story requires very little work to complete, we must again give this story 3 Story Points.

    This way, the measurements will tell the business the following:
    In Iteration One, the development team completed 3 Story Points.
    In Iteration Seven, the development team completed 7 Story Points (because the tasks required for the "Restricted login story" were reduced).
    In Iteration Ten, the development team completed 14 Story Points (because even fewer tasks were required to complete the new 3-Point "CEO only login" story).
    To the business people, Story Points = functionality = business value. They know what they are getting on a consistent basis.

    Should there be a case where, say, in Iteration Eight the number of Story Points drops, then one has to figure out why. Here's where the task actuals come in handy. In Iteration Seven, 7 Story Points and a total of, say, 100 actual hours were needed to complete all tasks. In Iteration Eight, only 6.5 Story Points were completed. But if we look at the actuals, only 90 hours were recorded across all tasks. Now we know the time the team spent on the actual tasks for all stories was about the same; probably vacations or public holidays contributed to the drop in productivity.

Tuesday, November 29, 2005

Do you check in your OSS source into trunk?

How should you use Open-Source Software? What I mean is: should you treat OSS source code as part of your project's source code, keeping it around and maintaining/updating it, or should you just use the binaries in your project and wait for the authors to maintain them? This is a question for a lot of development teams, because by now almost everyone knows the pros and cons of OSS. The bottom line is, you will use it at one point or another. The bigger question is how to use it to your project's benefit, without carrying too much overhead.

For me, I want to maintain as few lines of code as possible. So my answer is: don't ever give me the OSS's source code. Give me your project's source code plus the OSS assemblies that are in use. I should be able to check out your project's trunk and go. Don't assume I have stuff installed after I downloaded your project's source code. If at some point in the future the code does not run because of an OSS bug, fix the bug and submit it back to the community. If this happens more than a few times, use something else.

This solves the problem of trying to maintain a chunk of code that the team has no idea about. At the minimum, it saves any new dev from having to download a 100MB trunk of which 80MB is OSS source code (admittedly more a nuisance than a problem).

One of the good things about OSS is that there is usually an abundance of alternatives out there. Take functional testing as an example: Selenium and WATIR. There are things Selenium is good at (cross-browser testing), and there are things WATIR is good at (more powerful script coding). Mock objects, anyone? NMock, EasyMock.NET, Rhino.Mocks, etc. Code coverage? There are even TWO NCovers out there that share the same name...

I think the pain of merging a tweaked, home-brewed version of an OSS project back into the latest version down the road is much greater than that of using multiple OSS projects in your code base. For the latter, at least the breaking changes are documented.

If it hurts to use something to solve your problem, don't use it. Problem solved. The problem you are getting paid to solve is delivering business value.

Saturday, November 19, 2005

The best Visual Studio.NET blogging companion

Check this out: all I did in VS 2003 was right-click, select "Copy as HTML...", click OK, and CTRL-V into Blogger. All of a sudden you get this stylish code colorization in a blog:

    public class Bootstrap : IDisposable
    {
        private IMutablePicoContainer picoContainer;
 
        [STAThread]
        public static void Main()
        {
            try
            {
                using (Bootstrap bootstrap = new Bootstrap())
                {
                    IMainForm mainForm = bootstrap.BuildMainForm();
                    Application.Run((Form) mainForm);
                }
            }
            catch (Exception e)
            {
                MessageBox.Show(e.ToString());
            }
        }

        // BuildMainForm() and the Dispose() implementation are omitted
        // from the pasted snippet.
    }

Awesome VS.NET Add-in! CopyAsHTMLSource

Web 2.0... the what?

Recently I have been hearing more and more about Web 2.0. So what the hell is it? It seems no one really knows exactly what it is, but all opinions point in the same direction: enhanced web application user experience. I guess the industry is looking for the next buzzword after AJAX.

In my company's forum, a couple web sites have been mentioned, and they really impressed me with what they mean by "user experience":
script.aculo.us
Rico

Man I am falling in love with the head-shaking textbox in script.aculo.us!

Monday, October 10, 2005

How to mock out event calls in NMock?

When it comes to .NET eventing, a lot of developers barf at it, not knowing how to test it. There are two things to test: one, that the event handler methods are correctly wired to the corresponding events; and two, that the event handler code does what it's supposed to. The first is hard to test, because the wiring of a method to the event is internal to the containing class. The second is easier, because you could stub out the event handler method and make sure that it is getting called, but pure mockists would dislike this approach.

Consider the following example:


public interface IBattery {
    event EventHandler Low;
    event EventHandler Depleted;
}

public class Battery : IBattery {
    public event EventHandler Low;
    public event EventHandler Depleted;

    public void SomeMethodThatConsumesBattery() {
        // ...
        if (IsLowBattery("10%")) {
            OnLowBattery(this, EventArgs.Empty);
        }
    }

    protected virtual void OnLowBattery(object sender, EventArgs e) {
        if (Low != null) {
            Low(sender, e);
        }
    }
}

public interface ILaptop { }

public class Laptop : ILaptop {
    private IBattery battery;

    public Laptop(IBattery battery) {
        this.battery = battery;

        battery.Low += new EventHandler(OnLowBattery);
    }

    protected virtual void OnLowBattery(object sender, EventArgs e) {
        // Count down 15 mins before shutdown!
    }
}

We want to unit test that the events are wired for our Laptop class, but we want to avoid changing any of the classes for the purpose of unit testing, since I am a big advocate of "never change your production code just for testing." You may argue that I have a "protected virtual void" event handler method there to leave room for stubbing, but as far as event handling methods go, I think it is actually a good idea to allow my subclasses to override and extend my default implementation. This is also the standard and encouraged way of declaring event handling methods when writing custom ASP.NET server controls. Check out Nikhil Kothari's book.

So, solution one: create a stub class of our Battery class for testing Laptop, and add extra methods to manually raise the events:

public class BatteryStub : Battery {
    private bool lowBatteryExecuted = false;

    public void RaiseLowBattery() {
        OnLowBattery(this, EventArgs.Empty);
    }

    protected override void OnLowBattery(object sender, EventArgs e) {
        lowBatteryExecuted = true;
        base.OnLowBattery(sender, e); // still raise the Low event for subscribers
    }

    public void Verify() {
        if (!lowBatteryExecuted) throw new ApplicationException();
    }
}

[TestFixture]
public class LaptopTests {

    [Test]
    public void StartCountdownWhenLowBattery() {
        BatteryStub batteryStub = new BatteryStub();
        ILaptop laptop = new Laptop(batteryStub);

        batteryStub.RaiseLowBattery();

        // Assert laptop countdown started.

        batteryStub.Verify();
    }
}

That's a lot of code just to unit test that a single event is wired correctly. Also, this stub is really doing a lot of a mock's work. Look at the Verify() and the simplistic stubbed event handler method. The fact that they are there, and are so simple, is because in testing we are not interested in what they do, but in whether they are getting called. Now, duplicate this kind of test stub class for every object that has events, and you will quickly lose your appetite when you see how many you have to write for all your domain objects.

Fortunately there is a second solution, if you use NMock 1.1:

(credit to my co-worker and talented friend Levi Khatskevitch)

public class DynamicEventMock : DynamicMock
{
    private const string ADD_PREFIX = "add_";
    private const string REMOVE_PREFIX = "remove_";

    private EventHandlerList handlers = new EventHandlerList();

    public DynamicEventMock(Type type) : base(type) {}

    public override object Invoke(string methodName, params object[] args)
    {
        if (methodName.StartsWith(ADD_PREFIX))
        {
            handlers.AddHandler(GetKey(methodName, ADD_PREFIX), (Delegate) args[0]);
            return null;
        }
        if (methodName.StartsWith(REMOVE_PREFIX))
        {
            handlers.RemoveHandler(GetKey(methodName, REMOVE_PREFIX), (Delegate) args[0]);
            return null;
        }
        return base.Invoke(methodName, args);
    }

    public void RaiseEvent(string eventName, params object[] args)
    {
        Delegate handler = handlers[eventName];

        if (handler == null)
        {
            if (mockedType.GetEvent(eventName) == null)
            {
                throw new MissingMemberException("Event " + eventName + " is not defined");
            }
            if (Strict)
            {
                throw new ApplicationException("Event " + eventName + " is not handled");
            }
            return; // event exists but nobody subscribed; nothing to raise
        }

        handler.DynamicInvoke(args);
    }

    private static string GetKey(string methodName, string prefix)
    {
        return string.Intern(methodName.Substring(prefix.Length));
    }
}

Now, in your test class:

[TestFixture]
public class LaptopTests {

    [Test]
    public void BatteryLowIsRaised()
    {
        DynamicEventMock mockBattery = new DynamicEventMock(typeof(IBattery));

        ILaptop laptop = new Laptop((IBattery) mockBattery.MockInstance);

        mockBattery.RaiseEvent("Low", mockBattery.MockInstance, EventArgs.Empty);

        // Assert the laptop instance's 15 mins count down started

        mockBattery.Verify();
    }
}

Should the Low event not be wired, this test fails. You have explicitly told the mock object to raise its event, and assuming your events are wired to the correct event handlers, you have control over when to fire them.

There are drawbacks, of course. Notice the event name is represented by a string, making event renaming a pain, just as it is in NMock for method expectations. My only advice is to try EasyMock.NET if that is a real pain for you.

Ruby on Rails demo

A lot of buzz has been generated by Ruby, and given that Ruby on Rails pushes that hype to another level, it is sure worth a look. This video from rubyonrails.com is a great tutorial on what Ruby on Rails is and how it works.

http://www.rubyonrails.com.nyud.net:8090/media/video/rails_take2_with_sound.mov

Friday, September 23, 2005

Container framework in .NET - yes - 1.1

Haven't blogged for a good while... =P

I have been looking at the System.ComponentModel namespace. It is obviously an area I have constantly overlooked. What is in there? There is a ton in there that the Visual Studio .NET IDE uses extensively, so that means it is useless to us as developers, right? Maybe not.

Recently I have found myself interested in digging deeper into PicoContainer.NET. It allows you to design better OO business objects by decoupling your business objects from one another (through IoC). For those who aren't familiar with IoC (Inversion of Control), in this context it basically means that instead of having object A instantiate and use object B, object A will ask for an instance of object B in its ctor at construction time. How this helps OO design is that object A no longer has an irreplaceable link to object B; they are decoupled, and it allows us to test object A easily by either stubbing or mocking out object B.
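
In code, the paragraph above looks roughly like this (IObjectB, ObjectB, and ObjectA are just the placeholders from the text):

```csharp
public interface IObjectB
{
    string DoWork();
}

public class ObjectB : IObjectB
{
    public string DoWork() { return "real work"; }
}

// Without IoC, ObjectA would "new up" an ObjectB internally and be welded
// to the concrete class. With constructor injection, it just asks for one:
public class ObjectA
{
    private IObjectB b;

    public ObjectA(IObjectB b)
    {
        this.b = b;
    }

    public string UseB() { return b.DoWork(); }
}
```

In a unit test you hand ObjectA a stubbed or mocked IObjectB; in production, Pico figures out the dependency and supplies a real ObjectB when you ask the container for an ObjectA.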

In the above example, object A and object B are "Components". They will be put in a "Container" and Pico will automagically instantiate object B when you request for an instance of object A.

Back to the System.ComponentModel namespace: it also contains the interfaces IContainer and IComponent. Now things get interesting: each IComponent is "sited" after being added to an IContainer (the Site is like the glue between the Container and the Component). A component can then call base.Site.GetService(typeof(AnotherComponent)) to access another component's functionality. For our example, in object A's ctor, it can call base.Site.GetService(typeof(ObjectB)) to retrieve an instance of object B, without having to know how to "hard create" one.
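
One caveat, as far as I can tell: out of the box, the standard Container only answers GetService for the built-in container services, so to resolve sibling components by type you have to override GetService yourself. A sketch of my own (not from the namespace docs, and the class names are made up):

```csharp
using System;
using System.ComponentModel;

// A container whose sited components can find each other by type.
public class ComponentContainer : Container
{
    protected override object GetService(Type serviceType)
    {
        // First, look for a sited component of the requested type...
        foreach (IComponent c in this.Components)
        {
            if (serviceType.IsInstanceOfType(c)) return c;
        }
        // ...then fall back to the standard container services.
        return base.GetService(serviceType);
    }
}

public class ObjectB : Component
{
    public string DoWork() { return "from B"; }
}

public class ObjectA : Component
{
    public string UseB()
    {
        // Ask the Site (valid once this component has been Add()-ed)
        // instead of hard-creating an ObjectB.
        ObjectB b = (ObjectB) Site.GetService(typeof(ObjectB));
        return b.DoWork();
    }
}
```

Usage: add both components to a ComponentContainer, then calling UseB() on the sited ObjectA resolves ObjectB through its Site.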

How does this help testing? For testing object A, one can stub or mock out its Site property and provide a testing implementation when GetService() is called. This pattern (or good practice) is very important for designing decoupled OO systems.

So when to use what? My take is that one should use Pico in the bootstrap class (public static void Main(args[])), and since it is considered harmful to use Pico anywhere deeper than your bootstrap class, I will consider using the System.ComponentModel stuff everywhere else I think a Container-Component pattern will help decouple my design.

Some excellent reads on the topic:
PicoContainer
[Urban Potato] Geek: System.ComponentModel
[Daniel Cazzulino] Lightweight Containers and Plugin Architectures: Dependency Injection and Dynamic Service Locators in .NET

Friday, June 10, 2005

Want to whip up some quick UI Tree?

I had to produce for one of my clients some form of UI Tree to help them focus on how various user interactions with the system are going to take place from a UI perspective. It went from paper bubbles to Visio-lizing it, and the client still didn't like it. Finally, I stumbled across this product that saved the day.

It is basically a map of thoughts organized in a click-me-and-drill-down-as-you-go application. The point is, it provides a very easy drag-and-drop tree-creating interface to do the job very quickly.

At the end of the day this did wow the people who saw it, and it is extremely easy to use. It got the job done. I am, however, curious about the many different ways other people create these so-called UI Trees.

Tuesday, May 24, 2005

Connecting a Pocket PC emulator to host internet in Whidbey

This tip is for a development environment with Whidbey Beta 2. I do not know if it will work (probably won't) in a Virtual PC environment.

If you are trying to connect your Pocket PC emulator to the internet because you are developing/deploying, say, a web service on your host system, and you want your emulator to be able to consume it while debugging, here are a few things, in sequence, that I tried to get it to work:

- Restart your computer. Yes.
- Open your .sln by double-clicking it in your Explorer, not by opening your Whidbey and then selecting it from your File/"Recent Projects". Apparently there is a funky difference.
- Run F5 to bring up the emulator.
- It will probably tell you "Cannot connect to device", but that's okay.
- In Whidbey, go to Tools/"Device Emulator Manager", set your emulating device to Cradle.
- Reset the state of your emulator.
- Close down the emulator.
- Run F5 again to bring it back up.
- In your emulator Connections/Advanced/"Select Networks", select "My Work Network" in the first droplist for programs that need access to the internet.
- Go to its Edit/"Proxy Settings", uncheck the box "This network connects to the internet". Notice this box will automatically be checked again by the emulator once it can connect to the internet.
- And finally, verify that you have turned your firewall software off.

This should work in the emulator IE whether you type "http://localhost/MyService.asmx" or use your machine's name in place of "localhost". If I were you, I would first see if I could even reach Google in the emulator IE before testing the web service.

Tuesday, May 17, 2005

Dynamic deployment config settings

Using Visual Studio .NET 2003, there are different options for dynamically using app/web.config settings depending on which environment the build is deployed to. One is to use preprocessor directives like #if, #elif, and #endif in your code. I don't like this approach, period. It is really ugly. What if you have 5 deployment environments? You will end up with values with keys like "dbConnStringDev", "dbConnStringTest", "dbConnStringQA", "dbConnStringProd", etc.

Another option is to have, inside your app/web.config file, just one key-value pair for each setting regardless of how many deployment environments you might have, and then after each deployment, manually (or via some scripts) go and edit all these settings. Well... this is just as bad, as a single mistype would doom a deployment.

A third option would be, if you have a separate set of physical machines per deployment environment, to use each machine's machine.config as the definitive setting storage, and then override them with a local app/web.config in, for example, your developers' .NET projects. A question arises if, for example, you have five different deployment environments (dev, test, QA, stage, and prod) but only two boxes for your web services server across all of them: how do you direct traffic for dev, test, and QA to box A and for stage and prod to box B? It's the same problem all over again.

The solution (well, not really) I have found so far is this. In a .csproj that houses an app/web.config file, you can find the following lines for each solution configuration:

<Config
Name = "Release"
AllowUnsafeBlocks = "false"
BaseAddress = "285212672"
CheckForOverflowUnderflow = "false"
ConfigurationOverrideFile = "app.release.config"
DefineConstants = "TRACE"
DocumentationFile = ""
DebugSymbols = "false"
FileAlignment = "4096"
IncrementalBuild = "false"
NoStdLib = "false"
NoWarn = ""
Optimize = "true"
OutputPath = "bin\Release\"
RegisterForComInterop = "false"
RemoveIntegerChecks = "false"
TreatWarningsAsErrors = "false"
WarningLevel = "4"
...

If you create a new .config file for each deployment environment that you have, then create a new solution configuration for each deployment environment (eg. Debug, Release, QA, Stage, Production), then point the ConfigurationOverrideFile line of each configuration at the appropriate .config file, then create a Setup project in your solution and build it with the solution configuration you want for your deployment, the output .msi will include *a* app.config file, with its contents replaced by the file you specified in ConfigurationOverrideFile. So in the example above, app.release.config will be used and renamed to app.config when the output .msi is deployed onto the environment.

This is so far the cleanest solution for me. But it does tie you to using a Setup project for your deployment configuration. And from what I have heard, .msi has not been a good deployment strategy for complex application deployment (yes, the fact that there is a relational database built into an .msi does not make it suited for the enterprise level if it is complex to use).

Still looking into alternatives... on a side note, the fact that Microsoft is switching from "no-touch deployment" to "one-click deployment" tells me that the marketers at MS are going one step too far in their marketing efforts =P

Thursday, May 05, 2005

Whidbey Beta 2 Visual J# Redistributable, cannot load package problem

I was having a tough time installing Whidbey Beta 2, not onto my Virtual PC instance, but onto my host OS, Windows XP Professional. I installed it three times in total; fortunately I had backed up my laptop before I acted.


The problem I was having was that during installation, the wizard told me that the component 'Microsoft Visual J# Redistributable Package 2.0 Beta 2' failed to install. The Whidbey install went on, but when I opened a .NET 2.0 solution, it gave me the infamous package loading error:

Could not load type 'Microsoft.VisualStudio.Shell.Interop.IVsRunningDocumentTable2' from assembly 'Microsoft.VisualStudio.Shell.Interop.8.0, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'

and then I could not open the design view of a simple WinForm page in my Smart Device Compact Framework project.

The solution: I restored the image I took earlier, downloaded this cleaner tool from Aaron Stebner, and ran it. The next time I installed Whidbey it no longer gave me the J# Redistributable problem, and my WinForm page now loads fine.

Thursday, April 28, 2005

Songs of the Extremos

My co-worker Shaun Jayaraj today showed me the Songs of the Extremos. This is funny stuff... Check this one out first:

Hey Dude (sing to the tune of - "Hey Jude" by the Beatles)

Hey dude
Your code smells bad
Go refactor and make it better
Remember
That tests are requirements
Then you can begin
To make it smell better

While we're being funny about Agile, here is another good one, The Agile Manifesto (Hip-Hop Remix).

Individuals and interactions over processes and tools

translates into:

Peeps and tradin' rhymes ova' fake moves and bling-bling

Do you dig it?

Wednesday, April 27, 2005

The Guerrilla Guide to Interviewing

A co-worker of mine, Alex Pukinski, sent me this blog entry a few days ago about a guide to hiring the right software developers for your company. I find it very interesting and agree with much of what it says. Having done a few in-office interviews for ThoughtWorks and looking back at them, I could probably use this guide a little more in making my choices...

Friday, April 22, 2005

nunit.core.dll and GAC

I came across a useful tip on using NUnit in a purely XCOPY fashion. The NUnit download comes with an installer, which sneakily GACs nunit.core.dll for you during installation. This may or may not cause you problems if your development source tree also contains your NUnit assemblies. By that I mean your build scripts might be structured so that they do not rely on each machine having NUnit installed before building, but instead use a source-controlled, XCOPY'ed NUnit executable and co.

If your machine does not have NUnit installed, but you get it through your source control system, then when you run the test assemblies through the NUnit executable, you might get a nasty error (TDA: actual error message) from NUnit saying it cannot find nunit.core.dll or something.

One solution, suggested by my colleague and NUnit contributor Mike Two, is that in the projects where you reference nunit.framework.dll, you also add a reference to the nunit.core.dll in your source tree. This guarantees that NUnit will be able to find it and load it into its AppDomain. This way you never need to install NUnit or rely on the assembly being GAC'ed.
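In a VS 2003-style .csproj, that pair of references might look like the sketch below (the tools\nunit path is just an example of a source-controlled location; adjust to wherever your tree keeps the NUnit binaries):

```xml
<References>
    <Reference
        Name = "nunit.framework"
        AssemblyName = "nunit.framework"
        HintPath = "..\..\tools\nunit\bin\nunit.framework.dll"
    />
    <Reference
        Name = "nunit.core"
        AssemblyName = "nunit.core"
        HintPath = "..\..\tools\nunit\bin\nunit.core.dll"
    />
</References>
```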

Thursday, April 14, 2005

Lovely business words...

There are a few words in the business IT world I dislike. The beauty is that in whatever order and combination you put these words together in a statement, they will deliver the exact same message - a broad and meaningless one. Here are the candidates:

business
enterprise
architecture
management
solution
infrastructure
model

eg. Yesterday I worked on the enterprise business architecture of the infrastructure solution of the project.
eg. I am the architect of the business solution infrastructure team, managing the project's business model.

Thursday, April 07, 2005

Google Suggest with ASP.NET 2.0

This interesting blog post talks about how to do the Google Suggest way of communicating with the server without triggering a postback in ASP.NET 2.0. Pretty neat.

Tuesday, April 05, 2005

How to create build scripts for projects

Here is an article from Mike Roberts on how to set up the build scripts for a .NET development project. I have had the pleasure of playing with .NET 2.0 on my current project, and having built the build scripts around .NET 2.0 and MSBuild, I can definitely say it is not exactly fun.

For CruiseControl.NET to display compile errors from executing "msbuild project.sln", an XmlLogger has to be written and supplied on the command line. The code for the XmlLogger no longer worked when I updated my Whidbey from the Dec CTP to the Feb CTP, which updated the .NET Framework to build 50110, and therefore it had to be rewritten.
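For reference, MSBuild accepts a custom logger through its /logger switch, which takes a logger class and the assembly it lives in; the class name and path below are placeholders for wherever your XmlLogger is built to:

```
msbuild project.sln /nologo /noconsolelogger /logger:XmlLogger,C:\build\tools\XmlLogger.dll
```

Suppressing the default console logger with /noconsolelogger keeps the output to just the XML that CCNet's stylesheet will transform.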

After that, everything seems to be back to normal... at least I am able to show compilation and NUnit errors on my CCNet page.

Initialize()...

I am starting this blog today to write about the pain, pleasure, hate, and love of the various technologies I have encountered in my life. Working for ThoughtWorks has enabled me to mature into a professional developer, which requires constantly learning and experimenting with the latest and greatest technologies. I truly enjoy working for this company...

And this is where my initialization routine begins... :-D