vendredi 9 avril 2010

Blog Moved


Go to Geoffrey Vandiest blog

lundi 11 mai 2009

Starting with F#

Today I took the important decision to learn a new language. I didn't make this decision based on actual need; at the moment my customers all work in VB.NET or C#, two languages I master pretty well. I made it because I firmly believe that learning a new language will open my mind to new concepts and ultimately make me a better developer.
I selected F# because:
- It's a .NET language, so I don't have to relearn an entire new framework
- Functional languages are "hot"
- It's full of declarative constructs, which will help me better understand the new C# features
- It has a strong mathematical orientation, and I'm working in the trading sector at the moment, so it could be applicable after all
I started by watching Luca Bolognese's introduction on Channel 9 -> http://channel9.msdn.com/pdc2008/TL11/ , really interesting if you want an introduction based on real code samples without too much blabla…
I installed the September CTP from http://www.microsoft.com/downloads/details.aspx?FamilyID=61ad6924-93ad-48dc-8c67-60f7e7803d3c&displaylang=en
I'm also following the book Expert F#. I ordered it from Amazon, but it hasn't been delivered yet, so I've started reading it and practicing the examples on Google Books: http://books.google.be/books?id=NcrMkjVxahMC&printsec=frontcover&hl=en#PPA23,M1

lundi 16 février 2009

ASP.NET MVC RC1 - ActionLink cannot be used with type arguments

I know I've been a little lazy. It's been a while since I posted something on my blog; you know how it is, I was busy: new customer, new project…

But this time I couldn't resist taking a moment to write up how I resolved this stupid bug that cost me a lot of frustration when migrating a test site from ASP.NET MVC preview 5 to RC1.
When I launched my site after migrating, I discovered that I couldn't use ActionLink with type arguments anymore.
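A hedged reconstruction of the kind of call that broke (the controller and action names here are illustrative, not the ones from my actual site):

```aspx
<%-- Strongly typed link using the generic ActionLink extension --%>
<%= Html.ActionLink<HomeController>(c => c.Index(), "Home") %>
```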

I got this error:
The non-generic method 'System.Web.Mvc.Html.LinkExtensions.ActionLink(System.Web.Mvc.HtmlHelper, string, string, string, string, string, string, System.Web.Routing.RouteValueDictionary, System.Collections.Generic.IDictionary)' cannot be used with type arguments

I discovered that the generic Html.ActionLink method is in fact an extension method located in another assembly, Microsoft.Web.Mvc. I found the source code of this assembly inside the sources of the MVC project on CodePlex:
http://www.codeplex.com/aspnet/Release/ProjectReleases.aspx?ReleaseId=22467

So to solve this bug you need to compile that assembly, add a reference to it, and not forget to also register the namespace in the namespaces section of your web.config:
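A sketch of what the web.config registration looks like (under system.web/pages; the surrounding sections of the file are omitted here):

```xml
<pages>
  <namespaces>
    <add namespace="System.Web.Mvc" />
    <add namespace="System.Web.Mvc.Html" />
    <!-- the assembly containing the generic ActionLink extension -->
    <add namespace="Microsoft.Web.Mvc" />
  </namespaces>
</pages>
```

With the reference and the namespace in place, the generic ActionLink compiles again.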

Hopefully this post will help you, because it took me a couple of hours to find the solution to this stupid problem…

mercredi 17 décembre 2008

What's the difference between just testing & unit testing

A customer asked me today: what is the difference between unit testing and just testing?

I thought this was a damn good question, one that every agile developer should ask himself at least once, because every developer does testing. I don't know any developer who doesn't check whether his code is working before delivering it to his customer. I suppose neither do you?
So what differentiates a regular developer who tests his code manually from one who knows better, one who actually practices unit testing?

Based on my experience I would say that the basic characteristics of unit tests are:

- Written by the developer: The test is materialized as a piece of code. Other types of tests can be performed manually or with a third-party tool, but unit tests are programs written by developers. They mostly use the same language as the one used for the SUT (the system under test).

- Tests a unit of code: The test verifies the units (methods or classes) of another program. Normally a unit test verifies only one unit of code (a method) at a time. A unit test verifies that a certain piece of code works, but it does not test whether several pieces of the program integrate well.

- Uses assertions: The correctness of the method is verified by checking predefined assumptions. These assertions are predicates written by the developer to verify the correctness of the program.

These three characteristics are really the ones that differentiate plain automated tests from unit tests, but I suppose other characteristics could be found: characteristics that differentiate good unit tests from bad ones!
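To make these three characteristics concrete, here is a minimal sketch in C# using NUnit (the Calculator class is an invented example for illustration, not code from a real project):

```csharp
using NUnit.Framework;

// The system under test (SUT): a deliberately trivial class.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[TestFixture]
public class CalculatorTests
{
    // Written by the developer, in the same language as the SUT,
    // and exercising exactly one unit: the Add method.
    [Test]
    public void Add_TwoPositiveNumbers_ReturnsTheirSum()
    {
        var calculator = new Calculator();

        int result = calculator.Add(2, 3);

        // The assertion: a predicate that decides pass or fail.
        Assert.AreEqual(5, result);
    }
}
```

Note that this test says nothing about how Add integrates with the rest of an application; checking that would be the job of an integration test.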

vendredi 18 mai 2007

The good, the bad and the ugly

The bad old days

Long ago I strongly believed in deterministic approaches to software engineering. At school I learned the classical way to build software. It's based on the experience of how things are built in the concrete world. In much of the literature you find the analogy between building a house and building a piece of software. First you need an architect who will draw the detailed plan of the house. Every aspect of what will be in the house has to be described in the plan. When the plan is ready the contractor can start building the house. The architect regularly inspects the house, making sure that everything is as described in his plan. If something is found to be wrong it has to be fixed before building can continue, because fixing it afterwards will cost a lot more money, if it can be fixed at all.

For a large majority of software professionals this is the model that inspires the way they build software. A big upfront design has to take place, and the result of it is a detailed analysis. Only when the analysis is complete can we start developing. The developer just has to follow what is specified, and at the end you should get the application your customer desires. I remember the time when I learned to engineer software this way. It was at university; I had already graduated, but I decided to specialise further in what was called IT management. At that time I was already working in the IT industry, so because I couldn't attend the daily courses I had made a deal with my professors to see them once a week to review what I had studied during the week. This gave me the opportunity to interact more closely with my teachers, and I remember a talk with my professor of software engineering.

During that week I had reviewed the famous waterfall model of software engineering. This model is completely based on the classical approach: you start with an analysis, you design the architecture, you build the software, after that you test every piece, and finally you integrate the whole. I found the concept appealing, but I couldn't recognize this way of working in the organization I was working for at the time. So I asked my professor if he had ever really seen this type of model working in real life. This was the only time I saw the man, for whom I had and still have huge respect, show signs of contempt. He overreacted; basically he answered that in his organization he conducted projects based on this model, but that lots of organizations didn't because they were unprofessional.

I remember that in the beginning of my career I was constantly frustrated because I never encountered projects where this recipe actually worked. Especially when working for smaller organizations, the customer was not willing to play this type of game. I thought the reason was that they didn't understand what software engineering was and how complex it should be. Now I realize that we were the ones who were wrong. We are all actors playing the software engineering game, fooling ourselves and the customer.

Usually we begin with an interview with the customer where we try to understand what his problem is and how we could help him solve it. Because we are self-confident and the customer has a business to run, this type of interview mostly remains short. We go back to our offices and write documents describing what should be built. Writing this type of document is fastidious, time- and resource-consuming. When, months later, we present our customer with this document describing in detail what we intend to build, he's not willing to spend days reading it. In fact I never had a customer who actually read and understood what was described in the analysis, simply because they had a business to run and couldn't afford to spend hours reading a document as big as a phonebook, full of incomprehensible technical terms. That's why we make PowerPoint presentations summarizing what is described in the analysis. What usually happens is that the developers working on the project don't read the analysis either. Partially for the same reason as the customer: they have work to do. But the main reason developers don't read the document is simply that they know it isn't valid anymore. Between the time the analysis was written and the time the developers start coding, things have changed.

The ugly rules of software engineering

This is the first rule I discovered: customers change their minds all the time, especially when you show them the final product! But why do customers change their minds? Simply because when they are confronted with the actual product they suddenly have more information about it than when it was only a mental representation. Changes also occur because you realize that some aspects of the application should be improved. When you are building the product piece by piece you are confronted with aspects of reality you hadn't anticipated. You are also getting more and more information from the customer, and your understanding of the problem domain constantly evolves. I remember projects where we only began to truly understand what we were building once we had already built a substantial part of the application.
The point is that we are already late on the project, because all this analysis has taken a lot of time, and the time we spent on it will not pay off as it should. When the project finally arrives in the development team, the expert developers try to figure out what has to be built. The system is divided into the technical core concepts, the user interface and the backend. Code skeletons are generated and assigned to the developers for implementation. It soon becomes apparent that the modelled solution cannot be implemented without problems. The result is that what is implemented isn't what is described in the analysis, and the analysis document becomes completely obsolete. In the end the project is not completed on time, and after months of pressure we produce an inconsistent system that contains many errors. Subsequent development is chaotic. Users find errors in the application and establish that many of their requirements were not implemented adequately. The developers are informed of this and try to implement the changes as quickly as possible. In this process the internal structure of the application degenerates and the cost of future changes increases further. In the end the maintenance of the software becomes a nightmare and its cost rises exponentially, until we decide to rebuild the software from scratch.

Another well-known problem we constantly encountered is that analysts tend to increase the complexity of the system in order to foresee every possible problem. Recently I participated in a project that started without a real customer and without clear business objectives. The analysis was full of technical requirements describing how the portal should be built, but there was nearly no reference to what the portal should actually contain or what it should be used for. The result was that the analysts and developers increased the complexity of the model in order to foresee every possible problem. This leads to unused technology, which simply means a waste of money. So the second rule of software engineering is: deciding what to build is more important than how.

The classical process of software engineering also leads to a separation of people. Customer, analysts and developers work not with each other but separated by place and time. In general, it is bad when analysis, design and construction are separated. This separation widens the gap between the vision of what should be built and how. It is not unusual that the people who initiated the project, and so dictated the vision, are no longer available when the actual implementation occurs. This leads to misunderstandings about the actual goals that should be fulfilled. This is the third rule of software engineering: most of the problems that arise in software projects can be traced back to a lack of communication.

Adopting good practices
People who are interested in software engineering know that these kinds of problems were detected long ago, and people have begun to think about other methods of making software. Other development methods have come to my attention over the last few years, once I realized that building a house is not the same as building software. The classical rational approaches are very good at dealing with complex but immutable things like constructions or mathematics. But when dealing with rapidly changing environments like software engineering, these methods tend to be ineffective. When working on things that change constantly, you need more agility. This is why agile methodologies have arisen and are now nearly becoming the norm. Agile methodologies strive for a slimmer, more lightweight development process. In my opinion, when it comes to time- and resource-consuming factors like processes, the principle that should be honoured is that every process should have a clear justification for everything it incorporates. Even agile methodologies sometimes go against this rule, but they are nevertheless a breath of fresh air. They help us better manage change. They definitely enforce good communication between everybody on the team, and so help us build what the customer wants.

Agile methodologies come in many variants; one of them is XP. XP is certainly one of the most used agile development processes these days. The father of XP is Kent Beck; he published the first book on XP, but XP is not the invention of one man. A number of very respected software engineers like Ron Jeffries, Martin Fowler and Robert C. Martin started to use XP, and a community has grown up around it.
Advantages of agile methodologies
Agile methodologies encourage rapid feedback from the customer and simple practices that lead to better quality and rapid discovery of defects. Some of these practices are:
- Pair programming: developers always work in pairs, so that each developer can check the work of the other and help discover errors early.
- Continuous integration: in the classical approach every module is built by a developer and at the end all modules are assembled, which usually leads to lots of integration problems because the pieces don't fit each other. In an agile project the modules making up the solution are constantly integrated, so that integration problems are detected immediately.
- Test-driven development: programmers constantly write tests that continuously exercise the software.

Many studies suggest that agile practices bring the following advantages:


- Cost effective: One of the major advantages of agile methodologies over traditional ones is that they are less costly. The practices that agile methodologies promote all have a common goal: reducing the cost of change. Instead of trying to foresee everything in advance, we use our energy to make things easier to change.

[Figure: feedback and the cost of change. Source: http://www.ambysoft.com/essays/whyAgileWorksFeedback.html]
- Higher customer satisfaction: Agile methodologies favour an incremental approach. The increments are determined by the customer: the customer prioritises the requirements and the developers are responsible for estimating the cost.
- Rapid time to market: Instead of spending months on documents that will never be used, the stakeholders simply describe their requirements; then the developers spend several hours or days implementing the feature, which can rapidly be shown to the stakeholders.
- Lower financial risk: Rapid time to market leads to a shorter payback period.


I'm convinced that nobody should ever blindly follow any process. Each team and each project is unique and therefore needs its own custom-made process. A process must be adapted to the local situation and continuously improved. It requires constant reflection to improve and adapt a local process. That's why it's so important to be aware of your environment and of the processes used elsewhere. Post-mortems are meetings where everybody on the team can speak about what went wrong during the project. These meetings can help us continuously improve the development process and avoid making the same mistakes twice. Finally, I should stress that in the end it's the people that make the difference, not the process. This is why every good process should tend to make itself irrelevant.

jeudi 17 mai 2007

Web 2.0 or the self organizing web

Till recently I thought that Web 2.0 didn't mean anything, but last year I had the opportunity to participate in the SAF summit in Redmond. There I talked to people like Michael Platt, who is primarily focused on everything connected with the Web 2.0 hype. Now I'm convinced that we should (re-)think our business and techniques to be part of the revolution that Web 2.0 will be for our industry. We are at the beginning of a new era, a time of rapid evolution in the IT industry that will cause considerable changes. This disruption is underway, but the outline of what will happen in the future, and of who the new leaders will be, has not yet emerged. Online media companies like Skynet in particular are facing big opportunities, but before realizing these opportunities we have to rethink our business if we want to survive.

I believe that, moving forward, advertising-based companies like ours will be one of the pillars of the new revenue models, with great profit potential. I also believe that the market will localize and that the future belongs to local companies, because advertising in the future will grow horizontally. Advertising-based companies will attract more and more small companies, and local players like Skynet have a big advantage over big multinational players.

Web 2.0 is mostly associated with new technologies like Ajax, REST and mashups, but in fact that is the least relevant part of it. I have to confess that I hate the Ajax hype. For me Web 2.0 is more about the web organizing the knowledge generated by the web. It is partially about organizing content and communication in accordance with a new paradigm. But what are the components of this new paradigm? I believe that tagging is one of them. Tagging is a great example of how Web 2.0 enables the web to organize itself.

Tagging could change the way we organize our advertising and could be an incredible value differentiator for our customers. By allowing users to bookmark our content, we also get a way to know ourselves, and the content we own, better. Tagging could be a way to generate knowledge around our content and make invisible connections appear. We should think about different ways of letting users tag all types of content on our site (pages, RSS feeds, music, videos). This will improve the user experience, because users will be able to retrieve content more easily through the way they organized it for themselves. It will also give users the ability to retrieve content through the way other users have tagged it. This intelligence could be an immense asset for our customers. Think about how we could organize their advertising campaigns using the intelligence that users have created for us. We could also re-unify people and content by making links between tags describing people and tags describing content.

Tagging could also mean a revolution in the way we organize new direct-marketing services. It seems obvious to me that we should re-invent the way we qualify our users: let them decide how they describe themselves; don't format the answer like we do now!