

   



Reinventing the Software Development Strategy
continued... page 2


If the design calls for a bolt with a tensile strength of 10 kN, you test the bolts you get from the supplier to make sure they have the required strength. It's so simple that it sounds obvious - you wouldn't test the bolts by building the car and seeing if it falls apart. But you could design a car that wasn't testable in this way. Imagine if the bolts, instead of being standalone, were actually cast as part of the engine block, and instead of having a threaded nut, they had a special hook fastener that only mated with mountings on the car frame. Now you'd have to build most of the car to know whether or not a little five-cent part was going to fail. Bad design, right? But most software looks like this.
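The software analogue of the cast-in bolt is tight coupling. As a rough illustration (the classes and names here are hypothetical, invented for this sketch, not taken from any real product), compare a component that constructs its own dependencies internally with one that accepts them from outside, so the small part can be tested without building the whole car:

```python
# Hypothetical example: the same piece of logic, written two ways.

class Gateway:
    """Stand-in for a slow external service -- the 'rest of the car'."""
    def charge(self, amount):
        raise NotImplementedError("talks to a real network in production")

# Coupled: constructs its own Gateway internally. To test the validate-and-
# charge logic you must stand up the real service -- like building the whole
# car to test one bolt.
class CoupledProcessor:
    def pay(self, amount):
        return Gateway().charge(amount)

# Decoupled: the dependency is passed in, so the logic is testable in isolation.
class TestableProcessor:
    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

# The test swaps in a standalone fake 'bolt' and exercises just this part.
class FakeGateway:
    def charge(self, amount):
        return "charged %s" % amount

processor = TestableProcessor(FakeGateway())
assert processor.pay(10) == "charged 10"
```

The decoupled version costs one extra constructor argument, and in exchange the five-cent part can be verified on its own.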

Everybody knows that modular software with independently testable pieces is good. The trick is building it that way. The real point of the new strategy is bringing every resource available to bear on the problem (just as Toyota tried to figure out every possible way to eliminate inventory and eliminate defects). Getting the right architecture is important, but there are many other elements.

You’re going to need the ability to write tests that exercise your system components, and then run those tests quickly even as the number of tests climbs into the tens or hundreds of thousands. You’re going to need to train developers on how to write effective tests and how to design testable code. This is a discipline that is still poorly understood. You’re going to need development tools that let you rapidly create tests and restructure your code base as needed to support the goal of provability. And, perhaps most importantly, you’re going to need to craft an organizational structure that supports kaizen – the ability to critically self-examine all your systems, tools, architecture and process and improve them as each realized improvement opens up new potential improvements.
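To make the test-writing discipline concrete, here is a minimal sketch (the function and test names are hypothetical, chosen only for illustration) of the kind of small, fast, self-contained test that stays quick to run even when you have thousands of them, using Python's standard unittest module:

```python
import unittest

# Hypothetical code under test: a pure function with no external dependencies,
# so thousands of tests like these can run in seconds on every build.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically; a real build would invoke a test runner
# over the whole code base instead.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test is independent, needs no setup beyond the code itself, and finishes in microseconds - the properties that let a suite scale into the tens of thousands without slowing the team down.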

The New Strategy
I’m not going to get into the details here, but developing a strategy that delivers this level of provability requires changes to every aspect of a software company, and that’s why I say it involves an interlocking system of tools, process, architecture, and organizational change. The specifics are beyond the scope of this article (you can learn more by reading any of the excellent books on agile development or extreme programming), but there is a growing awareness in the industry of the potential of this kind of strategy. As you would expect, the charge is being led by small, nimble companies that can implement radical change without fighting their own bureaucracy, but even some larger companies are starting to experiment with new, agile ways of writing software.

If it were a simple matter of making small, incremental changes to the standard “code-and-fix” model, then I think we would already have much wider adoption. But there is a fundamental stumbling block that stops most organizations from successfully implementing the new strategy.

To see why, think about the training wheels on my three-year-old son’s new bicycle. He loves it! He can go anywhere, and it's much faster than his tricycle. Now, you know, and I know, that training wheels are actually a big disadvantage on a bike. They're heavy, they add tons of rolling resistance, they rob the rear wheel of traction, and they make the bike flip over if you go around a corner too quickly. Which is why, sooner or later, he's going to give them up. But when he does, things are going to get worse before they get better (we’re already stockpiling Band-Aids). He’ll actually have to go slower for a while in order to go faster in the end.

The same thing is true of the new strategy. There's a big upfront investment that lets you go faster - eventually. Among other things, you need to teach the team how to work in the new style. You need to invest in the testing infrastructure to support test creation and execution. You need to spend a lot of time writing tests. You almost certainly need to re-architect the product to break it up into testable modules. These are all big costs, and you don't see any benefit for months or years.

Most software teams are stuck in a vicious cycle. The last release was late, because it was buggy, which means that there's no time in the next release to do the prep work to make it less buggy. But because you pile on more features, the next release is even worse, which means you have even less time to invest. Bulking up the team—particularly with offshore resources—just adds to the noise level and increases the rate at which you add problems.

Pretty soon you've descended into total chaos - which is where most software teams live. This state of disorder works directly against the kind of disciplined system building that supports the new strategy. On the other hand, if you can muster the will to make the investments, you can run this cycle in reverse: improving the quality of the product and the systems that produce it reduces bugs, which then frees up time to make more improvements, which then reduces bugs even more. If you can get over the initial hump, you can build tremendous competitive advantage this way - while your competitors are stuck in their vicious cycle, you can be reaping the ever-increasing benefits of your virtuous cycle.

In my company, we have been energetically pursuing agile development and the goal of full provability for more than four years. We’re still far from where we’d like to be (just as Toyota was far from the goal of no-inventory, zero-defect manufacturing in 1965), but we’re definitely over the hump.

With much of the architecture work done, the support systems in place, and the team oriented towards the goal of provability, we now see the benefits in the form of a greater ability to respond to customer requests, more predictable releases, and far fewer defects found in manual testing and in the field. The initial investment is paying off handsomely. Our customers love it when we share solid pre-release builds with them, months before we’ve done any manual QA. We also reap the benefits of high developer morale and pride in the work we turn out.

We also realize that the road ahead is much, much longer. Each time we make an improvement, we see new opportunities to improve that were hidden from us before. We believe that many of the greatest benefits are still to come. We imagine the day when we’ll be able to completely inline the QA effort, such that we can keep adding new features up to the last day of the release, run an overnight automated test, and ship to production customers the next day.

We dream of being able to expect that no bugs will be found in the field. Impossible? Maybe. But we’re convinced that there’s an actionable strategy that can get us as close to those ideals as we want to go – if we’re willing to put in the hard work and creative thinking required.

If you're an investor in Silicon Valley (and if you work and live here, you’re an investor, whether you invest your money or your time) you need to understand the potential impact of these changes. I don’t think that the new strategy, per se, can turn a bad business model into a good one (neither can AJAX, but that’s a different story!).

But in a competitive market, it can make a huge difference to product quality, time to market, and development costs. As Toyota found, the cumulative effect of lots of small improvements can lead to enormous competitive advantages. And when the changes reach a tipping point, they can cause a restructuring of the entire competitive environment.



John Seybold is Chief Technology Officer, Guidewire Software, Inc. He has a successful track record designing large-scale, mission-critical enterprise applications. As a co-founder of Guidewire, he designed much of the core platform infrastructure that provides performance, scalability, flexibility, and ease of integration across the product suite. John has also been instrumental in establishing Guidewire's test-driven development strategy, which allows the company to deliver provably reliable and high-performing software on a highly predictable schedule. Before Guidewire, he worked at Kana Software as a software architect. John has an A.B. from Harvard University and a D.Phil. from Oxford University, where he was a Rhodes Scholar. For article feedback, contact John at jseybold@guidewire.com

     




© 2006 The Sterling Report. All rights reserved.