Wednesday, December 13, 2006

Specifying ruby 1.8.4 to install using DarwinPorts

Recently I had to install ruby 1.8.4 onto my Mac. I have always used DarwinPorts (v1.320) and it has served me well, but this time it gave me a little trouble. On the port tree (port list ruby), DarwinPorts gives me ruby 1.8.5, which as we all know breaks script/breakpointer for now. It does not give me the option to specify a previous ruby version for port install, as if 1.8.5 is the only ruby it recognizes.

I suppose you could compile ruby from source by hand, but we developers are just hardheads and refuse to give in...

Fortunately, there is a solution. (Thanks to Marc Selig)

1) Find out the svn revision number of the Portfile that has 1.8.4 by looking at:
http://trac.macosforge.org/projects/macports/log/trunk/dports/lang/ruby/Portfile
In my case it is 16709.

2) Set up a local port repository. In the file /opt/local/etc/ports/sources.conf, add the line:
file:///Users/Shared/dports

3) Install the port into your local repository.
cd /Users/Shared/dports
svn co --revision 16709 http://svn.macports.org/repository/macports/trunk/dports/lang/ruby/ lang/ruby/
portindex /Users/Shared/dports

4) Now you should be able to see ruby @1.8.4 in addition to @1.8.5 by running "port list". Run "port install ruby @1.8.4" and you are up and running.

Sunday, September 24, 2006

Enhance your irb experience on Windows

You can extend the functionality of irb on Windows by creating a file named .irbrc in your %userprofile% directory. Put some Ruby code in there and it will get loaded when you launch irb, as well as script/console. I have put a few code snippets in mine to aid my irb experience.

(You may have issues trying to create a file whose name begins with a period in Windows Explorer. Try running the command "notepad .irbrc" in cmd.exe.)

Tab Completion

require 'irb/completion'
ARGV.concat [ "--readline", "--prompt-mode", "simple" ]
Do you miss IntelliSense? With this in place, typing "text".s in irb and pressing Tab twice will give you a list of String methods that start with s.

Ruby Interactive (ri) reference in irb
def ri(*names)
  system(%{echo | ri #{names.collect { |name| name.to_s }.join(" ")}})
end
Then you can type ri 'String.strip' to see the ri documentation. The argument has to be a string, though.

Remember, on a Windows machine you have to put "echo | " in front of your ri line or else your irb will just return false, presumably because ri sits waiting for keyboard input to page its output. I know it's counterintuitive that it isn't at the end, but it works.

Clear screen
def cls
  system('cls')
end
I always wanted clear screen...

Console messed up with long irb lines
If you have a line of Ruby code that is way too long and you want to go back and make changes, the Windows console is not very friendly: it is almost guaranteed to mess up your edits and disorient your typing. The only fix I found by Googling is to take away the "--readline" switch from the Tab Completion tip above and replace it with "--noreadline". But then you lose Tab Completion, of course. I haven't found a better workaround yet, so in the meantime I will happily just use the Cygwin bash shell =)

Wednesday, September 13, 2006

Ruby constructor dependency injection... or not?

Dependency injection has proven to be something a black-belt unit tester must know about if you are serious about unit testing. If you have written some unit tests, would you be jealous if I told you that for some of us, running all of them takes under a second? In C# and Java, actively practising Dependency Injection makes mocking and stubbing out dependencies much easier, so tests become easy to write and run fast because they do not need to make time-consuming calls. In fact, constructor injection is one of my favourite design techniques:

    public class Laptop
    {
        private IBattery battery;
        private string powerStatus = "Off";

        public Laptop(IBattery battery)
        {
            this.battery = battery;
        }

        public string PowerStatus
        {
            get { return powerStatus; }
        }

        public void PowerOn()
        {
            if (battery.Meter >= 10)
            {
                // Booting Vista...
                powerStatus = "On";
            }
        }
    }



Then to unit test Laptop, you could use NMock like so:

        [Test]
        public void PowerOnFailsWhenBatteryIsTooLow()
        {
            Mockery mocks = new Mockery();
            IBattery mockBattery = mocks.NewMock<IBattery>();
            // Meter is a property, so expect a property read rather than a method call
            Expect.Once.On(mockBattery).GetProperty("Meter").Will(Return.Value(9));
            Laptop laptop = new Laptop(mockBattery);

            laptop.PowerOn();

            Assert.AreEqual("Off", laptop.PowerStatus);
            mocks.VerifyAllExpectationsHaveBeenMet();
        }



It may not be worth it to mock out Battery, but think about a lengthy web service class.

That's all true in C# and Java. In Ruby, though, I don't even need to constructor-inject my Battery instance to unit test my Laptop class. I can unit test it without injecting a mock Battery at all:

class Laptop
  def initialize
    @battery = Battery.new
    @status = :off
  end

  def power_on
    if @battery.meter >= 10
      # Booting Mac OS X...
      @status = :on
    end
  end

  def power_status
    @status
  end
end

class LaptopTest < Test::Unit::TestCase
  def test_power_on_fails_when_battery_too_low
    Battery.any_instance.expects(:meter).returns(9)
    laptop = Laptop.new
    laptop.power_on
    assert_equal :off, laptop.power_status
  end
end


Basically I am mocking using Ruby Stubba/Mocha, but I don't even need to write an extra constructor to inject the Class Under Test's dependencies. No IBattery interface, nothing! This is some cool trickery of programming in a dynamic language like Ruby, and I am discovering things like this every day with my colleagues!

I know you are going to say "well, I can use reflection to do the same thing...", and I will tell you: sure, try to do it in a readable manner and in one line of code. I didn't say you can't do it in C# or Java; I am just saying this is how I can do it with one line of highly readable Ruby. Happy programming.

Tuesday, September 05, 2006

The most revolutionary usernames and passwords, ever

Have you written the authentication piece of your app? Isn't it annoying that once it is implemented, every time you try to access your app's functionality during development you have to log in by typing some dev-only user id with a fake password? Having a good set of tests helps a lot in terms of not having to go through the app starting from login while new features are under development, but there are still many cases where you have to debug something starting from the first login page of your application.

Over time, I have developed one way to mitigate this nuisance: creating a few dev usernames and passwords following this pattern:

Users:
+ dave
+ fred
+ cat
+ red

Passwords:
+ create
+ database
+ case
+ ware
+ vase
+ card
+ garbage
+ beta

Now the question is, what is the pattern? Answer me in 10 seconds...

That's right: I can finish typing all of them with my left hand, while my right hand is still holding my mouse. No switching from mouse to keyboard to type the password, then back to the mouse to hit the 'Login' button.

Did you get the pattern in 10 seconds?

Thursday, August 24, 2006

Size does matter

When it comes to managing an Agile project team, one question that usually arises for planners and managers is how big a story should really be. For most people, a story should, at the very largest, be something that can be completed within an iteration by someone (or a programming pair). Each story is tagged with a difficulty level, whether you measure in 1-2-3 ideal development days or gummy bears, as a best-guess estimate of how long it will take to implement the solution for the story.

I think that while it looks trivial, story size is very important to an Agile team in managing the business's wishes for the application and the development team's delivery of business value.

On some projects, a story could be one small thing on one screen (eg. enter customer address). On others, it spans multiple screens (eg. enter, edit, and delete customer info). I would much prefer stories to be as small as they can be. For something like entering, editing, and deleting customer info, strive to break things out into something meaningfully small, like:
- enter customer name
- enter customer address
- enter customer billing info
- edit customer name
- edit customer address
- edit customer billing info
- delete a customer

Small stories have a few advantages:

More tailored to business needs
Suppose you have a specific need for a desktop computer, say you are a professional graphics designer or gamer. Would you be more satisfied with your purchase if you spent your budget at an online store on one of their 20 brand-name computers, or if you custom-built it from hardware parts you get to pick one by one to address your specific needs? Is 1GB RAM going to be enough? How about 2GB? Or is 1GB plus more video card memory good? If you purchase your computer part by part, chances are you will end up with something much more tailored to what you need now, while also paying for something that can accommodate tomorrow's change better (eg. a motherboard with more RAM slots than normal). The smaller your application's features are, the more flexibility you have. But don't go overboard and purchase your parts at the microscopic level (buying transistors), which will cause you nightmares.

You can need it later, or YAGNI at all
You are building your dream vehicle from scratch. You have a tight budget and a tight deadline. Do these constraints sound familiar? We all work in this competitive business environment every day. So if you are building this car you need, can it satisfy your need with only 2 wheels, like a motorbike? Does it still address your needs if it has three wheels? Do you need this car to have a 5th backup tire? Of course, all of this depends on what you use your car for. You might only need 2 wheels and as light a weight as possible for an endurance race, or you might absolutely need a 5th wheel if you use it in the African desert where there is no concrete. The tight budget and deadline do not change; what changes is the delivery and whether the users' needs are satisfied. So, between one story that says "build 5 wheels for the vehicle" and five stories that say "add 1 additional wheel to the vehicle", which would you pick? Prioritization is key.

More consistent and better measured velocity
As previously mentioned, stories should be sized as something that can be completed in one iteration. Say that, based on last iteration's performance, you plan 3 stories for the following iteration: a 1-point, a 2-point, and a 3-point story. If the team was 99% done with the 3-point story but could not complete it, your team's velocity drops from a 6 down to a 3. That is a pretty big swing in velocity. Now imagine an iteration planned with 8 smaller stories totaling 20 points, based on last iteration's velocity. If at the end of the iteration your team cannot deliver one 2-point story and one 3-point story (note: a proportionally much smaller miss), your team's velocity is 15. Simply by splitting your stories up, your tracking of the development team's progress becomes more accurate.

Code is better, analysis is easier
Smaller stories encourage team members to complete something faster, meaning they get a chance to go back and refactor code more easily and more often. They also allow a programming pair to switch partners more often and spread understanding of the code base. Smaller stories can also be defined in much more granular fashion: things the business does not understand yet can be split out into another story in order to keep things going. Better-tested code and a better understanding of the code and stories obviously make your application more robust and easier to change.

With smaller stories, one must understand there are drawbacks too. Here are a few I can think of:

Story explosion
Most people use an Excel spreadsheet to manage their day-to-day story progress. Having a spreadsheet of 60 stories is alright; now consider representing the same story list with 180 stories. If you are organizationally challenged, you will quickly run into issues with your tracking. How do you categorize such information? Well, in today's blogosphere world there is something called tagging. You tag a story with multiple tags, and later click on a tag topic (eg. UI) to bring up a list of everything you previously tagged with it. This lets someone quickly sort and filter the obviously larger story list. I don't know if there is a product out there that supports such sorting and filtering yet, but that is where I would start looking if I were to solve this issue. Del.icio.us is a good place to explore tagging.

Story selections
When there are more stories to choose from, playing the right stories becomes trickier. It is common to have groups of stories tied to an area that is under heavy business analysis and subject to change completely, and groups of stories that are fairly stable and not prone to change. But only picking stories that are stable and easy to complete is not the correct way to play stories. Which stories to play should depend on the complexity of the technical solution and the business value that can be recouped from that solution. The application is meaningless to the business if, out of 180 stories, 90 are complete, 60 are still under intense business analysis and subject to change, and 30 are ready to go, while the business value generated by those 90 completed stories is trivial. Get down to the dirty aspects of the problem the application is trying to solve, and start fleshing them out. Without a defined problem to solve, any application is way too expensive to build.

So should you go with diet-size stories or super-sized stories? This question is like asking how diversified your investment portfolio should be. It depends. Now that you know why size does matter, apply your thinker's head to see how it applies to the project you are on!

Friday, August 18, 2006

The "Not-Enough-Objects" anti-pattern

Definition:
This is an anti-pattern that every single programmer in the world has used at one point or another in their programming career. Classes become unnecessarily bloated and multi-talented in their ability to single-handedly accomplish almost anything and everything an application requires.

Root Causes:
Laziness about creating new classes, so new but unaligned behaviors get stuffed into existing objects; ignorance of good object communication and interactions; lack of concern for each object's reason for existence; failure to realize the importance of programming to interfaces; putting off code refactoring forever (the "kitchen sink" syndrome(1)); embracing code babies(2) as if they are the next American Idols; failure to effectively organize, categorize, and classify application functionality.

UML:
eg.

Customer
+ OrderSoda() ~20 LOC
+ Sleep() ~100 LOC
+ RPCCallToMom() ~250 LOC
+ PlayTVGames() ~500 LOC
+ WebServiceCallToWalkMyDog() ~3,000 LOC
+ RepeatRoutineEveryDay() ~275 LOC
.
.
.
(many more methods)

Symptoms:
When it appears in your code, it makes the code demoralizingly hard to read and comprehend; it becomes extremely difficult to touch the code without introducing new errors; the code becomes very brittle; the code starts to look like POOP (Procedural Object-Oriented Programming); and it exponentially increases your programmers' after-hours caffeine consumption, to levels that are life-threatening to others (e.g. "I will kill the xxx if I knew who wrote this").

How to Avoid:
Avoid 1,000-line classes and 200-line methods; avoid class behaviors that are mutually exclusive; avoid class states that are mutually exclusive; avoid giving classes multiple responsibilities in favor of delegating to new objects that the classes use; avoid too many if-else statements in favor of an object hierarchy (see Strategy, sketched below); effectively categorize and organize your classes/namespaces/files/folder structures in a meaningful and humane fashion; encourage collective code ownership; frequently communicate the intent of each object with team members; feverishly unit test the heck out of each class and object using mocks and stubs; use TDD as test-driven design to think about how other objects will use the object you are unit testing; pair-program with someone who reminds you that code quality is of utmost importance.
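As a minimal sketch of the if-else-to-Strategy advice above (all names are invented for illustration):

    // Before: mutually exclusive behaviors crammed into one method via if-else
    public class ShippingCalculator
    {
        public decimal Calculate(string method)
        {
            if (method == "Ground") { return 5m; }
            else if (method == "Air") { return 25m; }
            else { return 0m; }
        }
    }

    // After: each behavior lives in its own object and is chosen polymorphically
    public interface IShippingStrategy
    {
        decimal Calculate();
    }

    public class GroundShipping : IShippingStrategy
    {
        public decimal Calculate() { return 5m; }
    }

    public class AirShipping : IShippingStrategy
    {
        public decimal Calculate() { return 25m; }
    }

Adding a new shipping method now means adding a new class rather than yet another else-if branch.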

Short List of Skills Required to Avoid:
1. Programming to abstraction and not implementation
2. Object encapsulation
3. "Tell, Don't Ask" and the Law (or guideline) of Demeter (link), go back to #2 if you do not understand
4. Short methods/classes with well-intended names in favor of plethora of comments
5. Unit testing
6. All your unit tests run in sub-10 secs. because you understand Mock vs. Stub (link)
7. Dependency Injection (link)
8. Decoupling and cohesiveness (related to object-to-object communications)
9. Knowledge of design patterns of all types
10. Knowledge of anti-patterns

After-Thought:
Have we understood why good programmers are rare?

(1) kitchen sink syndrome: if the sink is clean, it stays clean. Once dirty dishes go in and no one bothers cleaning up, dishes start piling up and never get cleaned.
(2) code babies: a snippet or chunk of code, or a piece of application functionality, given birth by its genetically related programmer, that no one else on the planet understands or dares to understand.

Sunday, August 13, 2006

Time is money... Stop using your Quick Launch or Start menu

Do you have a list of 30+ icons of applications you use daily in your Quick Launch taskbar, with its auto-hide feature turned on? Do you actively manage your Start/Programs folders to make it easier to find and launch applications? Are you running out of keyboard shortcuts to launch your favourite applications? If you said yes to all these questions, chances are you are a power keyboard-driven user of your computer. It's simply a breeze when you can do everything on your computer without moving your hands away from the keyboard.

To add more ammunition for you keyboard freaks, download and try Colibri. This is an application that lets you launch your apps by simply typing a pattern that matches their names! I mapped it to the F12 key on my laptop. When I want to launch Firefox, I press F12, type "fox", hit enter, and it is launched; when I want to launch MSN Messenger, I press F12, type "mess", hit enter, and it is launched...

It's pretty similar to Google Desktop, but in GD, after typing I have to use the up and down arrows multiple times to navigate past other document types to get to the app's exe. Colibri is strictly for app launching. If I type "mess" for MSN Messenger, it will remember that I used it to launch MSN Messenger, so next time, rather than suggesting Y! Messenger, it will default to MSN Messenger, while still letting me arrow down to select Y! Messenger should I want to.

I rarely use my Quick Launch or my Start menu anymore...

Bonus Tip 1: Did you know shift-F10 opens up your mouse right-click menu?
Bonus Tip 2: Turn on your "Begin finding when you begin typing" feature in your Firefox. I use this to "type-and-hit-enter" to surf the web =P

Wednesday, July 19, 2006

Applying PicoContainer.NET with Presentation Patterns - Part II

(Part I)

Let's think about what Pico is good at in a practical sense. I think Pico is great at wiring dependencies together. That means it is good at wiring dependencies from your Presentation Model/Presenter layer on back, because those are all classes you have complete control over. For your Views, at least if you develop in Visual Studio .NET, the restriction on their default constructors conflicts with the way Pico works. There are ways around it (setter injection, as mentioned in Part I), but they are ugly.

But what if I don't register my Views in my Pico container like the other classes? Then the question becomes: how can my View get a reference to the Presentation Model/Presenter it depends on?

Before I show you my solution, I have to say one more thing. The method InitializeComponent() refers to controls as components for a reason: every control in .NET is a component. What is a component? It is something you put in a container, and every container holds many components. So yes, these controls/widgets are all being put in a container (not Pico) by the .NET control hierarchy framework. What does each control being a component in this container hierarchy enable? It gives a component access to functionality provided by other components in the same container. Let me rephrase: each View, being a component, can get access to other classes like a Presentation Model/Presenter, so long as they are all in the same .NET container hierarchy. Didn't I just say this container framework is not Pico? Yes, I did, but that doesn't mean we have to use only one or the other. We can use them both together, each to its strengths, to achieve the best of both worlds. Let me explain.

We have already said using Pico container to host everything starting from the Presentation Model layer is a good thing. So here is some example code on how to do this:

        private static IMutablePicoContainer createMutablePicoContainer()
        {
            IMutablePicoContainer pico = new DefaultPicoContainer();
            pico.RegisterComponentImplementation(typeof(IPageOnePM), typeof(PageOnePM));
            pico.RegisterComponentImplementation(typeof(IPageTwoPM), typeof(PageTwoPM));
            pico.RegisterComponentImplementation(typeof(IWebService), typeof(WebService));
            return pico;
        }



In order to allow the Views to have access to other components (Presentation Models/Presenters), we have to create a .NET container. It is just a class that subclasses System.ComponentModel.Container. I am going to call it ApplicationContainer to avoid confusion with our Pico container.

        internal class ApplicationContainer : System.ComponentModel.Container
        {
            // ...
        }



To put our Views into this ApplicationContainer, instantiate one and add the Form object that starts your application, like this:

        [STAThread]
        static void Main()
        {
            ApplicationContainer applicationContainer = new ApplicationContainer();

            MainForm form = new MainForm();
            applicationContainer.Add(form);

            Application.Run(form);
        }



From our Views, the method used to get access to other components is GetService(Type serviceType). When this method is called from within a View, and the View has been put in a System.ComponentModel.Container, by default the method asks its containing container to traverse all registered components and see which one can provide this "service". If such a component is found, the container returns it, and the requesting component now has a reference to that object. How does the container traverse its registered components and decide what to hand back? Interestingly, a System.ComponentModel.Container has its own GetService() method to do just that. Since we have our own subclass, ApplicationContainer, we can override GetService() so that when our ApplicationContainer receives a request for a service from any of its components, it also checks whether Pico can supply what the requesting component wants. More concretely, when a View calls GetService(typeof(MyPresentationModel)) to get its Presentation Model/Presenter dependency, ApplicationContainer will ask the already fully registered Pico container to return an instance of that type if one is found, like this:

        internal class ApplicationContainer : System.ComponentModel.Container
        {
            private IMutablePicoContainer _pico = createMutablePicoContainer();

            protected override object GetService(Type service)
            {
                object instance = _pico.GetComponentInstanceOfType(service);

                if (instance != null)
                {
                    return instance;
                }

                return base.GetService(service);
            }
        }



To sum it all up, you need to do the following to get things to work:
1. Create an ApplicationContainer class subclassing System.ComponentModel.Container.
2. Add your starting Form into the ApplicationContainer instance, prior to starting it.
3. Set up a Pico container within the ApplicationContainer instance.
4. Register all dependencies that your Presentation Model/Presenter classes will need in your Pico container. You do not need to register your Views.
5. Override the ApplicationContainer's GetService() method, making it look into its already set up Pico container for anything it should return.

Now, from your Views, you can gain access to their dependencies, in this case a Presentation Model/Presenter, by calling:

        private void PageOneView_Load(object sender, System.EventArgs e)
        {
            // Add this View to ApplicationContainer. Otherwise we have to instantiate
            // each and every view in public static void Main() and do the adding there.
            base.FindForm().Container.Add(this);

            // This GetService() call will now find what we need in ApplicationContainer.
            IPageOnePM service = (IPageOnePM)base.GetService(typeof(IPageOnePM));
        }



Then modify your Presentation Model's constructor to take an additional parameter for the web service class. Your class can start using the web service functionality, while in unit testing you can mock/stub it out!

    public interface IWebService
    {
    }

    public class WebService : IWebService
    {
    }

    public class PageOnePM : PresentationModel, IPageOnePM
    {
        private IWebService webservice;

        public PageOnePM(IWebService webservice)
        {
            this.webservice = webservice;
        }
    }

    // ApplicationContainer class
    private static IMutablePicoContainer createMutablePicoContainer()
    {
        IMutablePicoContainer pico = new DefaultPicoContainer();
        // ...
        pico.RegisterComponentImplementation(typeof(IWebService), typeof(WebService));
        // ...
        return pico;
    }



So now we have eliminated the problem of a child user control not knowing how to get its Presentation Model/Presenter, because a user control is also, well, a component in the same ApplicationContainer. We have also solved the ugly problem of setter-injecting child user controls' dependencies. And you did not modify a single View default constructor! We can now happily use the Visual Studio .NET IDE for WYSIWYG design while keeping good OO design, plus Inversion of Control for our dependencies.

After-thought: though the title of these two posts says Presentation Patterns, in my mind they are geared more towards Presentation Model, due to what I see as the difference in directional references between Presentation Model and Model-View-Presenter. I mentioned this briefly in my earlier post here.

By the way, note that each page can now have access to multiple Presentation Model objects, instead of the old 1-to-1 relationship between a page and its Presentation Model. So one can do something similar to Ruby on Rails, where each View can make calls to multiple Controllers which operate on various Models to complete the desired action! This makes code-sharing between Presentation Models much easier, and each Presentation Model can be named not after its page but after application functionality!

Applying PicoContainer.NET with Presentation Patterns - Part I

There are a couple of favourite presentation-layer design patterns that I have been consistently using to build .NET applications: Presentation Model and Model-View-Presenter. Both are extremely handy when it comes to making the code-behind of your View more testable by delegating its responsibilities to another layer of code. From that layer, you can start chipping in various flavors of dependency injection, stubs, and mocks, and begin an all-out unit testing assault on your code.

In many cases, on a per-View basis (a page or user control), there is a Presentation Model class or a Presenter class sitting behind it, ready to receive a call from its corresponding View and then execute the behavior called upon.

PicoContainer.NET (Pico from here on) encourages more decoupled, object-oriented class design by helping you manage your classes' dependencies. As a quick example, suppose you have a class iPod (you know what it is, right?) and a class Battery. Obviously an iPod depends on a battery. However, the iPod class at construction time creates its own Battery instance and cannot take any other battery type (to the dismay of iPod users). Now your iPod stops working and you have to figure out why. How do you know whether the problem lies in the iPod or in its battery? You would have to be extremely creative to find out. This is why good code minimizes dependencies. Suppose, however, your iPod received a Battery instance through its constructor. Then in the unit tests of your iPod class, you can mock/stub the battery out and start exclusively interrogating and verifying the behaviors of the iPod class itself. The iPod can be injected with a mock Battery instance, so the iPod class is more testable, and its tests are more "unit" and do not touch other parts of your code. You can then easily start writing more automated unit tests that run in milliseconds.

The question now is who is responsible for passing a Battery instance into the iPod. That's where Pico shines. If, as mentioned, your iPod has a constructor that takes in its Battery dependency, and you register both the iPod and Battery classes into a Pico container, then when you ask the container for an iPod instance, the Battery instance (since it is also registered) is instantiated automagically, and you can start using your iPod object without having hard-coded your iPod class to instantiate its own Battery.
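Here is a minimal sketch of that wiring, reusing the Pico registration and lookup calls shown in Part II above (the iPod/Battery types are invented for illustration):

    public interface IBattery
    {
        int Meter { get; }
    }

    public class Battery : IBattery
    {
        public int Meter { get { return 100; } }
    }

    public class iPod
    {
        private IBattery battery;

        // Pico sees this constructor and satisfies the IBattery dependency for us
        public iPod(IBattery battery)
        {
            this.battery = battery;
        }
    }

    public class Wiring
    {
        public static iPod CreatePlayer()
        {
            IMutablePicoContainer pico = new DefaultPicoContainer();
            pico.RegisterComponentImplementation(typeof(IBattery), typeof(Battery));
            pico.RegisterComponentImplementation(typeof(iPod), typeof(iPod));

            // Pico instantiates Battery and passes it into iPod's constructor
            return (iPod)pico.GetComponentInstanceOfType(typeof(iPod));
        }
    }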

Because Pico is such a great tool, I have used it on various .NET projects of mine. Since Pico uses a registration scheme, and whatever you put into a Pico container gets its dependency-wiring functionality for free, many things tend to get put into it. After all, Pico encourages good design, right? As a result, Views, Presentation Models, Presenters, etc. all get registered in the Pico container. The Pico container is set up in public static void Main(), and when you run Application.Run(form) you pass the container into your form object. From there, all of your Views can programmatically ask for their dependencies. Life is good.

Why are Views being put into the container? Because every View needs a delegating Presentation Model or Presenter to handle its behavior; in other words, every View "depends" on a Presentation Model or Presenter. As a result, they all get registered into Pico in order for the form to navigate the user to the correct View.

Life is good - until stuff happens. Since constructor injection is the preferred way to inject dependencies, if your View "depends" on something, then your View needs a constructor that takes its dependencies before Pico can start wiring them up on your behalf. Every .NET programmer knows that every View (whether a Form or a User Control) has a default parameterless constructor containing a default call to InitializeComponent(). The Visual Studio .NET IDE uses this constructor to support WYSIWYG editing at design time. Tampering with or getting rid of this constructor and/or the InitializeComponent() call gets your IDE in trouble when editing your View at design time.

A second problem: consider a User Control. It is also a View, albeit one used and instantiated by its parent View (maybe another User Control). Since the child user control instance is created by auto-generated code in the parent View's InitializeComponent() call, you cannot modify the child user control's default constructor to take its "dependency" Presentation Model/Presenter and then hope the parent Form can somehow inject the child control's dependencies inside InitializeComponent(). Even if you could modify that auto-generated code, your IDE design-time support would now be in jeopardy. Because of this, some Pico programmers buck the constructor-injection coding consistency: they pass the child user control's dependencies into the parent View's constructor and use setter injection to inject them after the parent Form's InitializeComponent() call. That way, at least, you can leave the child user control's default constructor alone and keep VS's strong IDE support while still managing the child control's dependencies. A sketch of that workaround follows.
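Roughly, the workaround looks like this (the pageOneView control and its PresentationModel property are hypothetical names for illustration):

    public class ParentForm : System.Windows.Forms.Form
    {
        // The parent takes the child user control's dependency in its own constructor...
        public ParentForm(IPageOnePM pageOnePM)
        {
            InitializeComponent(); // designer-generated code creates the child control

            // ...then setter-injects it after InitializeComponent() has run
            pageOneView.PresentationModel = pageOnePM;
        }
    }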

What to do? That brings us to Part II: my attempt to get the best of all worlds, using Pico to manage dependencies while keeping VS.NET IDE WYSIWYG support.

Wednesday, July 12, 2006

Ruby sugar you can't find in C#

Coming to Ruby as a C# programmer, journeying from a compiled language to a dynamically typed one, there are of course concepts in Ruby that are easier (or harder) for me to grasp than others. I want to think out loud about some of these interesting bits and pieces for others who are also interested in trying out Ruby.

String vs. Symbol
In Ruby a string can be single-quoted or double-quoted (eg. "foo"). A symbol is an identifier prefixed with a colon (eg. :foo). Symbols are sometimes used pretty much interchangeably with strings. The reason they exist is that a symbol lives in memory as a single copy no matter how many times you refer to it in your code, whereas a string literal creates a new copy in memory every time you refer to it, even if the contents are exactly the same. The closest C# analogy is string interning, and it is related to why you reach for a StringBuilder instead of regular string concatenation. So don't be baffled when you see symbols next time.
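Since the comparison above is to C#, here is a quick string-interning sketch in C# (a rough parallel to Ruby symbols, not an exact equivalent):

    using System;

    class InternDemo
    {
        static void Main()
        {
            string a = "foo";                           // string literals are interned automatically
            string b = new string("foo".ToCharArray()); // forces a distinct copy on the heap

            Console.WriteLine(object.ReferenceEquals(a, b));                // False: two separate objects
            Console.WriteLine(object.ReferenceEquals(a, string.Intern(b))); // True: one shared, interned copy
        }
    }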

Methods vs. Messages
In Ruby, everything is an object, including numbers and strings. Ruby programmers like to think that when you call a method on an object, you are sending a message to that object in the hope that it will do something for you. One of the cool features of Ruby is that you can call methods that were not defined at programming time; you can define and call them at runtime. For example:

class Foo
  def programming_time_method
    puts "programming time method"
  end

  def define_more_methods
    self.class.class_eval do
      define_method(:new_born_method) { puts "i was born!" }
    end
  end
end

Now if you do:
f = Foo.new
f.programming_time_method
puts f.respond_to?(:new_born_method) #Does f have method by that name?
f.define_more_methods
puts f.respond_to?(:new_born_method) #Does f have method by that name, now?
f.new_born_method

then you get results:

>ruby test.rb
programming time method
false
true
i was born!

One can do much fancier stuff with Ruby too. Read on.


Class Methods vs. Static Methods
In C#, most developers I have worked with frown upon static methods. They are flat-out evil: you cannot effectively mock calls to them, and they encourage non-OO, procedural-style programming. Pretty much every argument that they are good for something is trumped by the hard-to-test argument. Introducing class methods in Ruby:

class Customer
  def self.class_method
    puts "class method getting called"
  end
end

Customer.class_method
>ruby test.rb
class method getting called


At first glance they act just like static methods: you do not need an instance, just the class, and you can start making calls on them. However, they are not un-mockable. Mock frameworks for Ruby, such as the Stubba/Mocha library mentioned in an earlier post, can mock class methods. All of a sudden, class methods, or should I say methods associated with a class, deserve another look. In fact, Ruby on Rails uses class methods extensively for some of its magical operations. For example, if my Customer class inherits from ActiveRecord::Base and has an id and a name in the database, I automatically get these magic methods for free - yes, free!

Customer.find_all                # returns the list of all Customer instances from the database
Customer.find_by_id(2)           # returns the Customer instance with id=2 in the database
Customer.find_by_name('STEPHEN') # returns the first Customer instance whose name is STEPHEN


When you add a column to your table in the database, you get a free find method without coding. That's Rails.

Method-missing
Any message (aka method call) you send to an object that does not have the method defined goes to its method_missing() method, which you can override. From that point, one can be fairly creative, given the dynamic nature of what Ruby allows a programmer to do. In fact, method_missing() plays a pretty important role in constructing a DSL (Domain Specific Language).

class Customer
  def method_missing(method_symbol, *args)
    puts method_symbol.to_s
    return self
  end
end

customer = Customer.new
customer.pays.me.five.dollars

>ruby test.rb
pays
me
five
dollars

More Ruby sugar will follow. Stay tuned.

Thursday, June 22, 2006

How to use GMail SMTP server to send emails in Rails ActionMailer

Recently I had to use Ruby to send emails. I didn't want to go through the hassle of setting up a mail server on my build box, and since I use GMail's hosted webmail service, I figured I might be able to use their SMTP server to do it.

Rails' ActionMailer was simply the automatic choice, since I am building a Rails app.

It turns out GMail only supports secure SMTP, meaning that if you cannot create an SSL/TLS connection to its SMTP server, you cannot send email through it. My versions of Rails and Ruby (1.8.4) do not yet support creating a secure SMTP connection through Net::SMTP. DHH writes about how to do it by installing msmtp here, but we developers just obviously love options =)

The dynamic nature of Ruby allows me to enhance the functionality of that class. I found the following code in a couple of Japanese posts here and here (fairly low Google ranking). Paste it into /vendor/plugins as follows:


$ cat vendor/plugins/action_mailer_tls/init.rb
require_dependency 'smtp_tls'


$ cat vendor/plugins/action_mailer_tls/lib/smtp_tls.rb
require "openssl"
require "net/smtp"

Net::SMTP.class_eval do
  private

  def do_start(helodomain, user, secret, authtype)
    raise IOError, 'SMTP session already started' if @started
    check_auth_args user, secret, authtype if user or secret

    sock = timeout(@open_timeout) { TCPSocket.open(@address, @port) }
    @socket = Net::InternetMessageIO.new(sock)
    @socket.read_timeout = 60 #@read_timeout
    @socket.debug_output = STDERR #@debug_output

    check_response(critical { recv_response() })
    do_helo(helodomain)

    raise 'openssl library not installed' unless defined?(OpenSSL)
    starttls
    ssl = OpenSSL::SSL::SSLSocket.new(sock)
    ssl.sync_close = true
    ssl.connect
    @socket = Net::InternetMessageIO.new(ssl)
    @socket.read_timeout = 60 #@read_timeout
    @socket.debug_output = STDERR #@debug_output
    do_helo(helodomain)

    authenticate user, secret, authtype if user
    @started = true
  ensure
    unless @started
      # authentication failed, cancel connection.
      @socket.close if not @started and @socket and not @socket.closed?
      @socket = nil
    end
  end

  def do_helo(helodomain)
    begin
      if @esmtp
        ehlo helodomain
      else
        helo helodomain
      end
    rescue Net::ProtocolError
      if @esmtp
        @esmtp = false
        @error_occured = false
        retry
      end
      raise
    end
  end

  def starttls
    getok('STARTTLS')
  end

  def quit
    begin
      getok('QUIT')
    rescue EOFError
    end
  end
end

So now, if you have this in your ActionMailer::Base server settings:

ActionMailer::Base.server_settings = {
  :address => "smtp.gmail.com",
  :port => 587,
  :domain => "mycompany.com",
  :authentication => :plain,
  :user_name => "username",
  :password => "password"
}
then when you call your ActionMailer::Base's deliver method (perhaps from a custom subclass), it will send the email through GMail. Mission accomplished.

Thursday, May 25, 2006

Some unspoken tips on managing an Agile project

Want an easier time managing an Agile project, avoiding common pitfalls, and having some fun along the way? Read on...

Food
Believe it or not, food is not a luxury item on an Agile project. Food is *essential*. Agile practices encourage as much open-air communication as quickly as possible, be it computer-to-human (through tests) or human-to-human (short iterative cycles). Yet it is hard for a Project Manager to force people to communicate more effectively. With good food/snacks around, people who are not co-located will come to the room much more often than usual, and a casual "how are you guys doing" while shoveling popcorn into their mouths is often the starter for conversations the development team needs. And food, of course, improves morale immeasurably =)

Team co-location
This is key to being able to communicate more effectively, which is one of the tenets of a successful Agile project. By more effectively, I mean the project can execute in less time because people make more informed decisions based on an abundance of open-air information. For whatever reason, people are generally good at filtering useless information but not good at carrying information to the people who need it. By bringing the team together in a relatively open area with maximum face-to-face communication, you encourage team members to make more informed decisions before they waste time working on the wrong thing based on false assumptions about ongoing changes within the development team and the business.

CI build sound
If you use CruiseControl.NET, try using a sound clip that recalls some embarrassing moment of a development team member as the build success/failure sound in your cctray settings. This can get quite amusing at times. Another morale booster.

Quote sheet
During the course of development, there is bound to be someone who inadvertently says something hilarious about someone or something. Keep a list of these on a wiki on the build server that team members have access to. This is a must as part of the team-building process. At ThoughtWorks's annual national gathering day, we collect great and funny quotes from ThoughtWorkers on projects everywhere, print them on T-shirts, then auction them off. The money collected goes to charity. Funny and meaningful.

My funny quote of my current project? Nah. You have to ask me personally for that =P

Second story wall for issues after 1.0 release
An XP story wall, which represents the progress of stories planned for the current iteration, usually contains columns ("DEV Ready", "DEV In Progress", "QA Ready", "QA Complete") that mark the stages of a story from ready-for-development to QA-complete. Now how does the wall change when your application reaches its 1.0 release?

Generally, after a 1.0 release, the team creates a new branch of the source code for urgent production defect fixes. The problem comes when you have a large team of developers split across urgent production defect fixes and release 1.1 development, or many small and frequent 1.x releases coming after your initial release, or, for some uncommon reason, more than one branch of source code (after you have determined that is ultimately unavoidable). Then managing the merging of code from your release branch to your active development branch becomes tricky, because if developers forget to merge, your active development branch now has bugs.

How about this: at the 1.0 release, create a new story/issue wall next to the original story wall, containing only the columns "Issue Verified", "Issue In Progress", and "Issue Resolved", and have developers merge their source code before they can move any issue card from that wall's "Issue Resolved" column to the other wall's "QA Ready". This physical movement of an issue card from one wall to the other reminds developers to merge their changes. If that doesn't help the busy and forgetful developers, have them move issue cards to a new column called "Issue Merged".

Stand-up token
A good stand-up meeting is not an avenue for identifying "solutions" to the problems participants face. It should be a quick meeting that identifies individuals who need to hold face-to-face conversations after the stand-up, and then moves on. But people being people, conversations sometimes get out of hand during a stand-up. Bring a stand-up token, like a football, so that only the person holding the token can talk. When you see the token exchanged too many times in one conversation, perhaps it's time to ask them to take it offline until after the stand-up.

Thursday, April 27, 2006

Stacking "using" blocks together

I had always wanted to use a single using block to instantiate multiple disposable objects, so I don't have to remember to explicitly call Close() or Dispose() on them:

            using (IDisposable one = new DisposableOne(), 
                   IDisposable two = new DisposableTwo())
            {
                // Compile time error
            }


But unfortunately C# does not support that. So as a workaround, one would nest using statements to do the trick:

            using (IDisposable one = new DisposableOne())
            {
                using (IDisposable two = new DisposableTwo())
                {
                    // Oops. Code is now indented
                }
            }


But now you have messed up the pretty-looking indentation of your code. Remember back in academia when your programming class instructor told you that your if-else/for/while statements work without curly brackets? Well, surprise: the same goes for using blocks:

            using (IDisposable one = new DisposableOne())
            using (IDisposable two = new DisposableTwo())
            {
                // This works!                
            }


Time to show off to your pairing partner that you can write prettier code than him/her =D

Friday, April 21, 2006

Multi-threading applications tips

Recently I have been involved with a multi-threaded application, and throughout development I have been collecting tips and gotchas here and there. One of the fun things about such applications is that you get to play with some sort of almighty powerful server box. In my case, a cool 8-processor dual-core box with 8 GB RAM... How much? $70,000. =)

Test your application in a comparable multi-processor testing server. Better yet, test it on your production box
Yes, you heard me. Multi-threaded applications are usually performance critical; otherwise, how else can you justify a pricey machine and more complicated code? If you are not able to test on a box comparable to the actual production box, chances are the users will ultimately find the problems. That means developers, QA, and project managers all working OT hours to fix them and push fixes to production, on top of the phone calls and help desk tickets. When you add up those hours and that stress, the justification for having a comparable testing server before going into production suddenly doesn't look too pricey anymore.

Server GC vs. Workstation GC
.NET garbage collection behaves differently on a single-processor machine versus a multi-processor machine. On a single-processor box, it runs in Workstation GC mode, meaning there is at most one thread doing garbage collection. On a multi-processor box, it can run in Server GC mode, meaning there is one GC thread per CPU. I strongly advise that the environment you deploy to before your production environment be a multi-processor machine with Server GC mode turned on; that way you are testing much closer to the real environment. In performance-intensive applications, garbage collection, albeit automatic, is usually worth monitoring as well.
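If you are on .NET 2.0, a quick way to confirm which mode a process is actually running under is the GCSettings class (on 2.0 you can also request Server GC with the gcServer element under runtime in app.config); a minimal sketch:

    using System;
    using System.Runtime;

    class GCModeCheck
    {
        static void Main()
        {
            // IsServerGC (new in .NET 2.0) is true when the runtime uses one GC thread per CPU
            Console.WriteLine(GCSettings.IsServerGC ? "Server GC" : "Workstation GC");
        }
    }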

Raise an event thread-safe
In C#, many of us have raised an event this way. In fact, this is the way most books/lectures/tutorials demonstrate it.

    1 public class Battery
    2 {
    3         public const int LOW_BATTERY_LEVEL = 20;
    4 
    5         public event EventHandler LowBattery;
    6         public event EventHandler Depleted;
    7 
    8         private int remainingBattery = 100;
    9 
   10         private void OnLowBattery()
   11         {
   12                 if (LowBattery != null)
   13                 {
   14                         LowBattery(this, EventArgs.Empty);
   15                 }
   16         }
   17 
   18         private void OnDepleted()
   19         {
   20                 if (Depleted != null)
   21                 {
   22                         Depleted(this, EventArgs.Empty);
   23                 }
   24         }
   25 }


Did you know that this is not thread-safe? Thread A executes this code and passes the null check at line 12; just as it is about to raise the event, Thread B unregisters/unwires the subscriber. When Thread A resumes execution, it tries to raise an event no one is subscribed to. Null reference exception.

In order to make the code thread-safe, we can take advantage of the fact that delegate instances are immutable: copying one into a local variable gives you a snapshot of the current invocation list that cannot be unwired from under you.

   10 private void OnLowBattery()
   11 {
   12         EventHandler handler = LowBattery;
   13         if (handler != null)
   14         {
   15                 handler(this, EventArgs.Empty);
   16         }
   17 }


Use of lock(this) and lock(typeof(Foo))
Every C# programmer is aware of the lock keyword. It prevents multiple threads from entering a critical section at the same time while a shared resource is being modified. When you want to modify a member variable in a thread-safe way, you can lock on the object that contains it:

public Foo()
{
    lock (this)
    {
        _memberVariable = "SOMETHING";
    }
}


But what about static variables? Well, interestingly, the lock statement allows you to lock on a type object as well:

public class Foo
{
    public static string StaticVariable = null;

    public void SetUpStatic()
    {
        lock (typeof(Foo))
        {
            StaticVariable = "SOMETHING";
        }
    }
}


Give your threads names
Yes. Threads in the old days had only a thread id, generated whenever a new thread was created, which made debugging very painful. In .NET you can programmatically give each thread your application creates a name via its Name property. Use them; then in VS.NET's Debug/Windows/Threads (Ctrl-Alt-H while debugging) you will see each thread with its name, looking much friendlier. Better yet, name your threads after each project manager you have worked with =)
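A two-line sketch (ProcessOrders is a placeholder for whatever work method your thread runs):

    using System.Threading;

    // ...wherever you set up your worker threads:
    Thread worker = new Thread(new ThreadStart(ProcessOrders));
    worker.Name = "OrderProcessor"; // this name shows up in VS.NET's Threads window
    worker.Start();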

Use .NET synchronization classes
Don't tell people you know how to write multi-threaded applications if the only thing you know is lock(). If you don't know Monitor.Wait() and Monitor.Pulse() and the shopping bag of other synchronization classes .NET provides, you are missing out on a lot. As a starter, try this excellent article here. It's a must-read for anyone who programs C# and multi-threading.
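For a taste, here is a minimal blocking queue built on Monitor.Wait()/Monitor.Pulse(): consumers sleep until a producer pulses them (a sketch, not production code):

    using System.Collections;
    using System.Threading;

    public class BlockingQueue
    {
        private readonly Queue queue = new Queue();
        private readonly object padlock = new object();

        public void Enqueue(object item)
        {
            lock (padlock)
            {
                queue.Enqueue(item);
                Monitor.Pulse(padlock); // wake up one thread waiting in Dequeue()
            }
        }

        public object Dequeue()
        {
            lock (padlock)
            {
                // loop, not if: another consumer may grab the item before we re-acquire the lock
                while (queue.Count == 0)
                {
                    Monitor.Wait(padlock); // releases the lock and blocks until pulsed
                }
                return queue.Dequeue();
            }
        }
    }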

Wednesday, April 19, 2006

Checking duplicate Sql statements in Excel

In my current project we are doing database Continuous Integration (CI). This means every developer has their own database instance for development, and at every check-in the build server kicks off the CI cycle: get the latest source code, compile, rebuild the database from scratch by running SQL scripts, re-insert the reference data these tables should have, test, etc. When done right, this gives developers the much-needed freedom to develop against the database without interfering with other developers' databases, and it significantly reduces the time spent asking the DBAs to modify a database table for you.

When you acquire from the customer/business analysts a new set of reference data to insert into these tables, I have seen many cases where the new data conflicts with data that already exists due to duplication (mostly human error). Since most of this reference data is scripted in text files, and there are usually hundreds if not thousands of rows, finding that one duplicate line can be like trying to find a needle in the sea.



Fortunately, Excel has a function to the rescue: =COUNTIF(range, criteria).

Usage: counts how many times "criteria" occurs in "range".

So to quickly find the offending PK-constraint INSERT line, put this next to the first data row and fill it down the whole column:
=COUNTIF(E3:E11, E3)
Any row with a count greater than 1 is a duplicate.



And as you can see, you will quickly find where that awful INSERT statement is... Time to bug the customer =)

Thursday, March 16, 2006

Pragmatic NAnt scripting

Why am I talking about NAnt when the rest of the world is in love with Ruby and Rake? Well, I still think NAnt has its value. Rake is still pretty raw (it doesn't even have an NUnit task yet), and until it matures there is a huge group of developers out there not wanting to spend a great deal of time learning Ruby/Rake. Plus, most companies that have already adopted NAnt are stuck with their NAnt code. This article provides some insights on how to give such companies business value by making NAnt scripts more maintainable.

Based on my experiences with NAnt, I have concluded the following are true:

  1. NAnt scripts are very hard to maintain, for reasons such as:

    • Target dependencies are all intermingled into a big unmanageable web.
    • No consistency across NAnt scripts; there are no NAnt coding conventions.
  2. The scripts have a lot to handle if you think about it. They have to handle...
    • Pre-Build (like build artifact directory creations)
    • The CI (Continuous Integration) cycle like GetLatest, Compile, UnitTests, OtherTests.
    • Post-Build (like package, stage, distribute, deploy, blah...)
  3. Build scripts are nasty and no one wants to inherit them; that's why names like "build monkey" (and even more provocative ones) get coined and stick to whoever initially started the scripts until they leave the project. It is a thankless job.
  4. Build scripts are, nevertheless, crucial to any project. A good set of build scripts helps developers write higher quality code, helps BAs verify story functionality quicker, helps QA mitigate environment nightmares, and streamlines the deployment team's deployment process.

And here are some tips that I have found to be useful to extend the maintainability and lifespan of your build scripts.

Have multiple build script files
I would break down the one monolithic projectname.build file I see on many projects into several smaller build files named after their build function. For example, of the two files below, the second is Sesame.build; it has one and only one purpose: to build the project and test it. I put these two build functions together in one build file because the targets involved in testing the build output are usually few and usually go hand in hand with the build; conveniently co-locating them in one file is, I think, better than creating a small and isolated test.build file. But another build discipline, such as package & deploy, could by itself be a big deal, so if complexity warrants I will give it a separate file. (The first file, cruise.build, is the CI entry point; its build.build target delegates to Sesame.build.) The only encapsulation we can really apply to build scripts is separating them into files.

<?xml version="1.0" ?>
<project name="cruise" default="all">

    <target name="all" depends="get, build, tag" description="Target executed by CCNet." />

    <target name="get" depends="get.get_latest" description="Gets the latest source code from Subversion." />
    <target name="build" depends="build.build" description="Compile and build the source code." />
    <target name="tag" depends="tag.tag" description="Tag the successful build by the naming convention tags\\CRUISE-B999" />

    <target name="get.get_latest">
        
    </target>

    <target name="build.build">
        <nant buildfile="Sesame.build" target="all" />
    </target>

    <target name="tag.tag">
        
    </target>

</project>


<?xml version="1.0" ?>
<project name="Sesame" default="all">

    <target name="all" depends="build, test" description="Compile, build, and test the Sesame project." />

    <target name="build" depends="build.compile, build.database" description="Compiles the .NET source code and setup local database instance." />
    <target name="test" depends="test.unit_test, test.other_test" description="Runs unit tests and functional tests." />

    <target name="build.compile">
            ...
    </target>

    <target name="build.database">
            ...
    </target>

    <target name="test.unit_test">
            ...
    </target>

    <target name="test.other_test">
            ...
    </target>

</project>


NAnt target categorizations and naming convention
The ultimate sin of unmaintainable build scripts is, more often than not, the over-proliferation of target dependencies. Once targets form a complex web of dependencies, repairing the build scripts takes an enormous amount of courage and time.

By breaking all targets in a single NAnt build file into three categories, and by sticking to coding consistency and naming conventions, build scripts can last a very long time. I find the following categorization of NAnt targets useful.

<?xml version="1.0" ?>
<project name="Sesame" default="all">

    <!-- Level 1 -->
    <target name="all" depends="build, test" description="Compile, build, and test the Sesame project." />

    <!-- Level 2 -->
    <target name="build" depends="build.compile, build.database" description="Compiles the .NET source code and setup local database instance." />
    <target name="test" depends="test.unit_test, test.other_test" description="Runs unit tests and functional tests." />

    <!-- Level 3 -->
    <target name="build.compile">
        // Compile using the build.solution_configuration property value...
    </target>

    <target name="build.database">
        // Rebuild database using the database_server property value...
    </target>

    <target name="test.unit_test">
            
    </target>

    <target name="test.other_test">
            
    </target>

</project>


Points of interests:

  • Level 1: These are targets that orchestrate the order of execution of the various Level 2 targets (and never Level 3 targets). They only contain depends and never have a target body. They are the common entry points into the build function the script file represents (eg. target "all" in cruise.build is what CruiseControl.NET calls to kick off the CI build process). They must have descriptions. I prefer their names to be underscore-delimited.

  • Level 2: These are targets that group Level 3 targets together into a cohesive unit of work. For example, a "clean" target might altogether clean a few things for a build: build artifact directories, build results directories, VS.NET bin/obj/VSWebCache, etc. They, again, never have a target body. These targets also have descriptions, and again I prefer their names underscored.

  • Level 3: These targets *never* contain depends. They *only* do one small, well-defined piece of work. In addition, their names should be namespaced with their Level 2 target name followed by a period and then their function, so that they are easily distinguishable from other targets. This helps newcomers recognize them and treat them differently. They never have descriptions (think of them as private methods in a class, each doing one very specific job for you).
The namespacing convention can be extended to properties as well.

Use properties to consolidate paths
Have a file (eg. common_paths.nant) that uses NAnt properties extensively to consolidate your source tree's folder structure. If you intend to keep your build scripts around for a while, this will save a lot of code duplication in the long run.



<?xml version="1.0" ?>
<project name="common_paths.nant" default="all">
    <property name="trunk.dir" value="." />

    <property name="build_output.dir" value="${trunk.dir}\build_output" />
    <property name="build_results.dir" value="${trunk.dir}\build_results" />
    <property name="source.dir" value="${trunk.dir}\source" />
    <property name="tools.dir" value="${trunk.dir}\tools" />

    <property name="dotnet.dir" value="${source.dir}\dotnet" />
    <property name="dts.dir" value="${source.dir}\dts" />
    <property name="sql.dir" value="${source.dir}\sql" />

    <property name="nunit.dir" value="${tools.dir}\nunit" />
    <property name="nant.dir" value="${tools.dir}\nant" />
    <property name="nantcontrib.dir" value="${tools.dir}\nantcontrib" />

</project>
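
To see the payoff, here is a minimal sketch of a consuming build file; the NUnit console location and test assembly name are made-up examples:

<?xml version="1.0" ?>
<project name="Sesame" default="test.unit_test">

    <include buildfile="common_paths.nant" unless="${property::exists('trunk.dir')}" />

    <target name="test.unit_test">
        <!-- no hard-coded paths: both locations come from common_paths.nant -->
        <exec program="${nunit.dir}\bin\nunit-console.exe">
            <arg value="${build_output.dir}\Sesame.Tests.dll" />
        </exec>
    </target>

</project>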


Use the target description attribute
NAnt provides a -projecthelp command line switch to list all of the targets in a given build file. When you give targets a description, they get first-class recognition in the listing as "Main Targets".



Combining this tip with the Level 3 naming convention can be a very powerful technique for improving the readability of your build scripts. As a bonus tip, consider also implementing a target named "help" to display this -projecthelp target listing.
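Here is a minimal sketch of such a help target. I am assuming NAnt's built-in functions nant::get-base-directory() and project::get-buildfile-path(), and that NAnt.exe lives in that base directory:

    <target name="help" description="Lists the main targets of this build file.">
        <!-- re-invoke NAnt on the current build file with -projecthelp -->
        <exec program="${nant::get-base-directory()}\NAnt.exe">
            <arg value="-buildfile:${project::get-buildfile-path()}" />
            <arg value="-projecthelp" />
        </exec>
    </target>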

Use depends or call, but not both
If you start using both, you will impair the readability of your build scripts. They more or less do the same thing anyway; the notable difference is that a target reached via depends executes at most once per build, while <call> forces it to execute again. I would pick depends over call, because newcomers to NAnt are much more likely to learn about depends before call.
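
A quick sketch to illustrate the difference (relying on the standard NAnt behavior that a depends target executes at most once per build):

<?xml version="1.0" ?>
<project name="depends_vs_call" default="demo">

    <target name="shared">
        <echo message="shared executed" />
    </target>

    <!-- "shared" echoes only once here, even though both targets depend on it -->
    <target name="first" depends="shared" />
    <target name="second" depends="shared" />
    <target name="demo" depends="first, second" />

    <!-- replacing those depends with <call target="shared" /> inside each
         target body would make "shared" echo twice -->

</project>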

Know your property inheritance
NAnt's property inheritance is a commonly overlooked and under-valued convenience. Many people choose the <call> task to preserve locally declared property values rather than experimenting with the various kinds of property inheritance. I am publishing my findings here.

<?xml version="1.0" ?>
<project name="example" default="example">

    <include buildfile="common_paths.nant" unless="${property::exists('trunk.dir')}" />

    <target name="example" depends="calling_script.properties">
        <nant buildfile="example_two.build" inheritall="true">
            <properties>
                <property name="solution.configuration" value="DEBUG" />
                <property name="build_output.dir" value="c:\modified_build_output" />
            </properties>
        </nant>
    </target>

    <target name="calling_script.properties">
        <property name="calling_script.depends_inherited" value="Inherited from depends clause." />        
    </target>

</project>


<?xml version="1.0" ?>
<project name="example_two">

    <include buildfile="common_paths.nant" unless="${property::exists('trunk.dir')}" />

    <echo message="1) Explicitly inheriting property from nant task tag: solution.configuration='${solution.configuration}'" />
    <echo message="2) Explicitly overriding property in calling script's nant task tag: build_output.dir='${build_output.dir}'" />
    <echo message="3) Implicitly inheriting property from calling depends clause: calling_script.depends_inherited='${calling_script.depends_inherited}'" />

    <echo message="Is (1) readonly? ${property::is-readonly('solution.configuration')}" />
    <echo message="Is (2) readonly? ${property::is-readonly('build_output.dir')}" />
    <echo message="Is (3) readonly? ${property::is-readonly('calling_script.depends_inherited')}" />

</project>




1 & 2) Inherit properties through <nant> task
You can call the <nant> task with a <properties> set inside the tags, setting the inheritall attribute to true to pass those properties to the build script being invoked. In addition, as shown in (2), you can override an already loaded property ("build_output.dir") in the properties section before passing it into the callee build script.

If you use this style of property passing, it is a very good idea to mark the properties readonly="true" so that your customizations of script behavior cannot be reset by the callee script.
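
For example, a sketch of the same <nant> call with the properties locked down:

        <nant buildfile="example_two.build" inheritall="true">
            <properties>
                <property name="solution.configuration" value="DEBUG" readonly="true" />
                <property name="build_output.dir" value="c:\modified_build_output" readonly="true" />
            </properties>
        </nant>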

3) It's important to know that properties declared in an earlier target of a depends clause can be used downstream in later depends targets, and even inside another callee build script. This technique is useful when you do not want to load all properties for all targets up front. (The side-effect is that the last target in your execution has access to every property declared by every previously executed target, but I am okay with that; I run into property nightmares far less often than build script nightmares.) It makes the following readable script possible:

<?xml version="1.0" ?>
<project name="Sesame" default="all">

    <target name="all" depends="build, test" description="Compile, build, and test the Sesame project." />
    
    <target name="build" depends="build.properties, build.compile, build.database" description="Compiles the .NET source code and setup local database instance." />

    <target name="build.properties">
        <property name="build.solution_configuration" value="DEBUG" unless="${property::exists('build.solution_configuration'}" />
        <property name="build.database_server" value="localhost" unless="${property::exists('build.database_server'}" />
    </target>

    <target name="build.compile">
        // Compile using the build.solution_configuration property value...
    </target>

    <target name="build.database">
        // Rebuild database using the database_server property value...
    </target>


I am sure there are lots of other tips as well, but that's it for this post. Comments, feedback welcome.