Or, "Bigger is not Better".
I’ve learned to use Ruby and Rails over the last year and a half. Along the way I’ve been watching how the Rails core team evolves their project, and how Matz evolves Ruby. As a result, I’ve become more convinced of something I first started thinking about — but was always too timid to fight for — when I worked on Windows: every once in a while, you need to just throw out the old stuff and start afresh.
On Ruby, and even more so on Rails, the development team is willing to not just deprecate smelly old crap, but actively remove it. New versions of the Ruby language and the Rails platform are generally expected to remove obsolete or badly implemented features. This happens when code contributors come up with a significantly better replacement, or the team realizes the features are no longer essential to the project. Non-core stuff gets either recast as an optional plugin, or binned. For a good example of this, see the Rails 2.0 announcement and scroll down to "Active Record: Shedding some weight".
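The cycle looks something like this in practice. Here's a minimal sketch in plain Ruby of the deprecate-then-remove pattern — the class and method names (`Toolkit`, `parse_legacy`, `parse`) are hypothetical, not real Rails APIs, and Rails itself uses richer machinery (ActiveSupport's deprecation helpers) to do the same job:

```ruby
class Toolkit
  # Version N: the old method still works, but warns loudly
  # so callers have a full release cycle to migrate.
  def parse_legacy(input)
    warn "[DEPRECATION] `parse_legacy` is deprecated; use `parse` instead."
    parse(input)
  end

  # The replacement the team actually wants people calling.
  def parse(input)
    input.strip.split(/\s+/)
  end
end

# In version N+1, `parse_legacy` is deleted outright. Callers who
# ignored the warning break at that point -- and the platform stays lean.
```

The key move is that the warning phase is temporary by design: the old method is a bridge, not a permanent fixture.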
I’m sure this approach makes most old-school developers out there cringe, or scoff, or both. "Are those Rails guys crazy? What about backwards compatibility? Don’t they care about their customers? I guess they don’t have any customers, or not any real ones, at least."
I’ve wondered those things too. Especially right before I decided to make a bet on Rails for building 5 Blocks Out. But at the same time, I’ve worked on Windows. I’ve seen up close and personal what happens when you commit a product to backwards compatibility at all costs, and it ain’t pretty. Let’s look at the outcomes.
First, the benefits:
1) Apps built for platform version N can run on version N+1.
2) App developers can use their knowledge of platform-and-tools version N to write apps for version N+1.
OK. That’s about it for benefits.
Really. We’re done with the upside.
Now for the downsides:
1) The platform bloats in on-disk size and resource usage (RAM, CPU cycles) as new features get layered atop old. Obsolete technologies, instead of being removed, stick around forever, and… s l o w… e v e r y t h i n g. . . d o w n. It’s like the nightmare guests that overstay their welcome at a party: drinking all the booze they can find, gobbling up everything good in the fridge, and then staggering around for hours smashing into things and making a nuisance of themselves before crashing somewhere horribly inconvenient. What a drag.
2) Life gets worse for application developers. Yes, those who learned version N can apply all their learning to N+1. But let’s be realistic here: N+1 is big, and N+2 is even bigger, and N+3 is, whooo, geez, immense. Eventually, no mere mortal can comprehend the entire platform API. And that means they won’t take advantage of new features properly. What’s worse, they’ll probably be embarrassed about that, so don’t expect to hear them admit it.
3) Life gets worse for platform developers and testers, too. Not only do they have to deal with platform bloat, but more and more of their jobs become — sorry, I have to say the dreaded ‘m’ word — maintenance. Eventually they are forced to perform truly unnatural acts. Like this: "Hey, does anyone know what this SuperDuperComplexJujuMagic library does? This thing is seriously fugly, and all of our code is linked into it like spaghetti." "Oh, that’s been around since version 1. Bob wrote that. It’s part Prolog, part assembler, part Esperanto. Unfortunately, nobody else understands the code. And Bob left last week to go work for Google. He’s buying a yacht and a new Hummer! Isn’t that great? So, uhh… just work around it."
Sound familiar? Old features become stale and mysterious, especially when the creators and maintainers leave the building for greener pastures. New features must be implemented in bizarre ways to work around old cruft. And new platform versions must remain not only compatible, but bug-for-bug compatible with old versions. So the poor devs end up having to write ugly, smelly code. Code so ugly and smelly you’d be ashamed to tell your mama about it.
4) Customer resistance to upgrades increases over time. "My developers want to skip version N+1. We have no problem waiting for N+2." "The sales team can’t afford to buy upgrades for all of our apps this year. We’ll just wait." "Our IT department takes 18 months to test your OS for bugs before rolling it out, so we only want to do that at most once every 3 years." This is real. This is the conversation Microsoft sales reps have with their big customers every time a new version of Office or Windows rolls out. It is the reason marketing teams argue about whether a release should be positioned as a "dot" release or a "major" one. It’s why Microsoft has been trying for years to move to a service-and-subscription model, where customer payments look like an annuity stream instead of once-every-few-years-big-bang. It is also the reason they are belatedly getting serious about running their apps in hosted server environments.
Imagine what would happen if innovation in general worked this way. Picture, if you will, Honda telling their designers to create a car that customers would love, with the caveat that they would have to keep all the anachronisms from every previous car model they had ever made. "Sorry, I know the gas tank in the ’02 was prone to explosion on impact, and it only gets 3 miles to the gallon, but we have to keep it around because of our commitment to backwards compatibility." Sounds crazy, doesn’t it? But this sort of conversation happens in big software shops every day.
You could argue Ford and GM have been doing this for years with their trucks and SUVs: selling ever-more-glitzy and ever-more-hulking body designs atop inefficient and increasingly archaic platforms. (The platform is car innards in this case… the engine, chassis, and other nasty bits). Now, as gas prices are imposing a resource cap, reality bites. The lipstick-on-a-chicken strategy is finally failing.
Seemingly unlimited resources have a lot to do with this sort of piggish behavior. Resource scarcity, in contrast, is actually a good thing. But that’s another whole topic.
Imagine how much better Windows would be if the people working on it were allowed to do spring cleaning once in a while and throw out smelly old stuff. Yes, there would be a cost, both internal and external, but the benefits would be legion. App developers would appreciate writing code atop a more modern, constantly rejuvenated platform. Platform developers and testers (and their mothers) would be far happier, because they’d be writing elegant and clean code, code worth writing home about. And most importantly, customers would enjoy running fast, reliable, and cleanly designed products, instead of cluttered unstable resource pigs.
Software innovators need to get into the habit of throwing old stuff out, just like people who design other types of products and services do. And software companies need to avoid — or extricate themselves from — business models that prevent it.
Remember the old days of personal computing, when there was a very real cap on memory and disk space? This was the case for the Commodore 64, and the old Apple computers, and most video game consoles and mobile phones created thus far. You simply couldn’t offer backwards compat, because resource constraints wouldn’t permit the bloat. These days, on PCs, the resource constraint is gone, but the reality is we have hit a bunch of other constraints: the size of app developers’ brains, the depth of app purchasers’ wallets, and the limited desire of customers to roll out big bang product releases.
My comments aren’t directed solely at Windows. It’s highly likely that MacOS, Solaris, and other mainstream commercial OSes suffer from exactly the same problem. But I’ll stick to what I know best here.
Sorry, I’m feeling a little harsh today. Anyone want to debug my Toshiba Portege, which slows to a crawl and locks up? Or perhaps Katrin’s MacBook, which refuses to go into standby?