I recently came across an excellent post by Wil Shipley about making code small versus making it very flexible. We've all had to make that decision at some point or another, right? You have to add a piece of functionality somewhere, and then you think: it might be useful to others too. So why not make it generic enough that it serves not only your purpose but also the legions of developers who might need this feature decades down the line!
Though the goal above is not wrong by any means, we tend to overestimate the usefulness of this strategy. It is a fine practice in an ideal world, perhaps when you are writing software for a doctorate. Probably not when you are writing software that needs to ship, or when you have a customer breathing down your neck for a release that is already one sprint behind schedule!
Let's face it: it is difficult to conceive of every possible way a method, class, or feature is going to be used, design for all of it, and code it in the shortest possible time. It is much easier to bang out a solution that fits your current needs and then sit down and make it 'flexible'. And that time is rarely available. The perfectionist may argue, 'That time is time well spent. It will actually result in a net gain over time.' Point well taken. But are you really sure? What if you spent a week adding the 'genericness' and no one uses it in the next seven and a half months?
I have had enough experiences at Effigent and at Grene where I have seen overenthusiastic developers, in the name of making a framework 'future-proof', make things so fucking complicated that they screw up many people's presents. Take, for instance, the colleague who decided to represent the customer ID in hexadecimal. I asked, "Pray, why hexadecimal?" He said, "We really need to make this future-proof. Tomorrow, when we implement this solution across the country, we may have so many customers that decimals alone will not be enough!" I wanted to say that by the time we have that many customers, you and I will be dead and long gone, in no position to worry about it. In theory it was a noble goal, but it made things unnecessarily complicated (for want of space, I will not go down that road now)!
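For perspective, a quick back-of-the-envelope calculation (in Python, purely illustrative) shows why the worry was misplaced: hexadecimal is only a different way of printing the same number, so it buys no extra range at all, and even a plain 64-bit integer ID written in decimal is absurdly more than any customer base will need.

```python
# A 64-bit signed integer allows 2**63 - 1 distinct customer IDs.
max_id = 2**63 - 1
print(f"Max 64-bit ID: {max_id:,}")  # 9,223,372,036,854,775,807

# Hexadecimal does not change the range -- it is merely a different
# textual rendering of the very same integer:
assert int("7fffffffffffffff", 16) == max_id

# Versus a world population of roughly 8 billion, that is more than
# a billion IDs for every living human being:
print(f"IDs per human: {max_id // 8_000_000_000:,}")
```

In other words, the day "decimals are not enough" is not a day either of us needs to plan for.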
Shipley has some really sensible rules at his company about this, which I reproduce here:
- We don't add code to a class unless we actually are calling that code.
- We don't make a superclass of class 'a' until AFTER we write another class 'b' that shares code with 'a' AND WORKS. E.g., first you copy your code over and get it working; THEN you look at what's common between 'a' and 'b', and THEN you can make an abstract superclass 'c' for both of them.
- We don't make a class flexible enough to be used multiple places in the program until AFTER we have another place we need to use it.
- We don't move a class into our company-wide "Shared" repository unless it's actually used by two programs.
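The second rule in particular is worth a sketch. The idea, shown here in Python with hypothetical `Invoice` and `Receipt` classes of my own invention (not from Shipley's post), is that the abstract superclass appears only after two concrete, working classes have revealed what they actually share:

```python
# Steps 1 and 2: Invoice ('a') and Receipt ('b') were written
# independently, duplication and all, and both WORKED first.
# Step 3: only then was the common code pulled up into
# Document ('c'), the abstract superclass extracted after the fact.

class Document:
    """Superclass 'c' -- extracted from working code, not designed up front."""
    def __init__(self, customer_id: int):
        self.customer_id = customer_id  # a plain decimal int; no hex heroics

    def header(self) -> str:
        return f"{type(self).__name__} for customer {self.customer_id}"

class Invoice(Document):  # class 'a'
    def total(self, items: list[float]) -> float:
        return sum(items)

class Receipt(Document):  # class 'b'
    def mark_paid(self) -> str:
        return f"{self.header()} -- PAID"
```

The point is the ordering, not the classes themselves: `Document` did not exist until `Invoice` and `Receipt` both worked and their overlap was visible, so the abstraction is guaranteed to cover real needs rather than imagined ones.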
Ignore them at your own peril; they make a lot of sense to me, for sure. The developer I referred to has probably learnt his lesson, though: in the next version of the same framework, he quietly switched to decimals! I asked him, "Why not hexadecimal?" He had a sheepish grin on his face. I did not probe further.