Thinking about a development philosophy

I’ve been thinking a lot about my “development philosophy” recently thanks to some great input from a new team member at ESH. It’s really impressed upon me again the importance of having a diversity of experience and opinions on a team, and I’ve enjoyed thinking about some of the broader assumptions that we were previously making without much thought.

The main thing we’ve been discussing is the philosophy we previously had of ‘moving fast’. Basically, I was of the opinion that it’s better to have a naive implementation of a feature done faster than it is to have a better implementation of that same feature done slower. But it’s actually more complicated than that, and in my thinking I’ve been able to give some more specificity to that belief (which I still hold).

Basically, it boils down to this: most of the code that we design really well is designed that way for easier (and faster) maintenance. I’m a little fuzzy about where I heard this and who said it, but I remember it was someone I admired, and they said that the only thing that matters when you’re designing a system is maintainability. The real cost in software isn’t in creating something new, but in maintaining and extending things that are already there. I happen to agree with this, but I also see the cost of creating something new as non-trivial and worth saving where possible. I think there’s a middle ground here that can be explored.

If maintainability is the only thing that matters, then what is the value of something that is never maintained? And wouldn’t there be significantly more value in refactoring a class that is frequently changed than in spending time making a class that never changes really great the first time you write it? To me, that seems like a much better allocation of resources when you’re creating your systems. The problem there, though, is knowing what will change and what won’t.

So, here’s my proposal - when creating something new, do it fast. But every time you touch something, you should not only do what you’re in there to do (fixing a bug, updating something, etc.), but also spend considerable time refactoring to make that code more maintainable. If you follow this approach, you will inevitably be spending more time making code that changes frequently more adaptable to that change, and you will avoid spending unnecessary time on relatively static code. To me, this is a big win. So, for someone who follows TDD (which I do most of the time), the familiar development pattern becomes bifurcated:

For new things, “Red, Green, Done”

For existing things, “Red, Green, Refactor”
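To make the two loops concrete, here’s a minimal sketch of the “new thing” half in Ruby (the Invoice class and its test are hypothetical examples, not code from our actual system):

```ruby
require "minitest/autorun"

# Red: write a failing test for the new behavior first.
class InvoiceTest < Minitest::Test
  def test_total_sums_line_items
    invoice = Invoice.new([10, 20, 12])
    assert_equal 42, invoice.total
  end
end

# Green: the most direct implementation that passes.
class Invoice
  def initialize(line_items)
    @line_items = line_items
  end

  def total
    @line_items.sum
  end
end

# Done: because Invoice is brand new, we stop here. The refactor
# step is deferred until a bug fix or change request brings us
# back into this class - at which point it becomes "Red, Green, Refactor".
```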

But then we run into another problem - what is “new”? Would a new controller action or a new method count as “new”, and thus something we can move through quickly, or is it a change to an existing “thing”, since it’s an addition to an object? I’ve done a bit of thinking about this as well, and I’ve decided that, for me, adding or altering a public method on an existing object doesn’t qualify as “new” - it means you should also do some refactoring on that object and its other methods.

And here’s where we run into the second big benefit of this philosophy - the avoidance of premature optimization and premature abstraction. In theory, the more something has changed, the better a picture you’ll have of the uses for that feature or object. When you’re first developing something, there is a nearly infinite number of use cases that haven’t even been considered, and optimizing and refactoring your code around only the use cases you currently know about would be a mistake - especially if you’re just building an MVP of a particular feature.

One particular thing that I like to leave in when I’m creating “new” things is a small amount of duplication. When I’m creating something new, even if I feel confident that this object is going to change in the future, I don’t know how it’s going to change. I don’t know if, say, the two methods that I’m extracting into a shared abstraction will both change, or if only one of them will change the next time we get feedback from our users. I strongly agree with Sandi Metz’s belief that it’s easier to recover from duplication than it is to recover from the wrong abstraction, so that’s why I err on the side of more duplication for new objects. I just can’t predict how it’s going to change in the future, so I’ll leave that decision until I know how it’s going to change - which is only when I’m actually changing it!
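Here’s a hypothetical sketch of what that looks like in practice (the report classes are invented purely for illustration):

```ruby
# Two brand-new report objects with similar-looking formatting logic.
# The temptation is to extract a shared base class or module now, but
# if only one of them changes after the next round of user feedback,
# that abstraction will have been the wrong one. Leaving the
# duplication in place keeps both objects free to evolve independently.
class WeeklyReport
  def initialize(entries)
    @entries = entries
  end

  def summary
    @entries.map { |e| "#{e[:name]}: #{e[:hours]}h" }.join("\n")
  end
end

class MonthlyReport
  def initialize(entries)
    @entries = entries
  end

  # Nearly identical to WeeklyReport#summary today - I'll only extract
  # a shared abstraction once a real change shows how the two differ.
  def summary
    @entries.map { |e| "#{e[:name]}: #{e[:hours]}h" }.join("\n")
  end
end
```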

Plus, the reason duplication is bad is that it’s more difficult to maintain - but if you never actually have to maintain that object, then the duplication does no real damage. You might think that this is pretty rare, but if you do a good job designing your system to be composed of small, single-responsibility objects, you will find (in my experience) that there is actually a lot you end up getting right the first time in terms of functionality. And if you don’t, you can (and should) refactor the object the next time you’re in there to fix whatever you missed the first time.

I’m the first to admit that this is a somewhat risky way to go about development. You’re kind of leaving little traps throughout your code that you might run into later, and I also agree that it’s easier to refactor something while you’re familiar with it than to come back to it cold a few weeks or months after you first wrote it. But I also see the benefit of coming to a piece of code with fresh eyes. Even though it might be “easier” to refactor something right when you’ve first written it, I find that I don’t see some of the really good patterns and refactorings that are possible until I’ve had some time away from a particular piece of code. It’s kind of like how you don’t see your own typos in something you’ve just written, but if you sleep on it they’ll jump out at you in the morning.

So, is this approach perfect? Far from it. Is it for everyone? No way. But I don’t think a perfect option exists, and after spending a considerable amount of time examining the pros and cons, I really think this approach offers a good balance of benefits and drawbacks, and I think it’s for me. I really like the feeling of productivity, and this approach offers me a way to be exceedingly productive. Moving slower and more carefully just doesn’t feel as rewarding when I’m coding. I know there’s a need for it, and I do like finding a really great abstraction that makes my code cleaner and easier to reason about, but for me this is definitely the philosophy that works - and maybe others will feel the same.