Top 10 Reasons Why Software Projects Go Bad

Reliable data about overall software industry performance is not easy to come by, but recent surveys suggest that as many as 50 percent of companies are dissatisfied with how their software performed against its original objectives. Of those same companies, as many as 70 percent reported at least one project failure in the most recent twelve-month period.

Complaints

The most common complaints from companies about software projects going bad are:

  • poor integration
  • disappointing performance (including unmet design objectives)
  • overly optimistic development- and completion-time estimates
  • budget inflation

Here is a stripped-down and highly generalized summary of what you need to know about why software projects fail. The reasons are listed below in no particular order of importance, and the list is by no means exhaustive. It’s a start.

All the time in the world

Remember the meek-as-a-mouse, bespectacled Burgess Meredith character from The Twilight Zone who survived an atomic blast and, as the last man on Earth, had “all the time in the world” to read his beloved books, only to break his only pair of glasses? When it comes to software projects, we never have as much time as we think.

Underestimating the time required to complete a project is the most common mistake in software development. Why? First, the shorter the time frame, the happier the client. Second, developers don’t want to come across as defeatists or as amateurs in over their heads. “Pssh. No problem.” The reality? “Uhhh . . . yeah. Hmmm.”

The same argument applies to how budgets are established and who creates them. At the start the budget seems like enough. It usually isn’t. Either you as the client will be asked to ante up at some point downstream, or the developer will attempt to economize (see “Robbing Peter to pay Paul” below).

It’s fine. I checked it. Sort of . . .

It’s called software development for a reason: what developers really like to do, and what they want to spend as much time as they can on, is writing code. It’s a challenge and the ultimate thrill. Testing, however, is time-consuming and mind-numbingly tedious. It’s a development buzzkill: “This isn’t working. There’s a problem. You need to go back and redo it.” Testing is what developers need to do; it isn’t what they want to do. How good can anyone be at something they don’t really want to do?

“I have this fantastic idea!”

Most million-dollar ideas aren’t worth two cents. Remember that inspired idea you had that night at the local tavern, the one you sketched out with a felt-tip pen on a damp, crumpled cocktail napkin and excitedly passed on to your bewildered CIO with instructions to green-light it ASAP? The inspired idea you can’t quite recall this afternoon, but that has since been contracted out to an IT firm with a budget of ten million dollars?

Be realistic about your expectations. Be clear about objectives. And make sure those objectives and expectations are clearly understood by everyone involved in the project. In short, never assume they know what you mean when you ask, “Everyone understand what I mean?”

Where should I put this?

More than likely, the system you have is not really one “off the shelf” system at all but many generations of systems and upgrades arranged layer upon layer upon layer, like a gigantic pot of tangled-up spaghetti. How do you know where one integration ends and another begins?

Every time a system undergoes an upgrade, you are adding to a pre-existing system that has its own unique rules and biases about how it behaves. Integration issues are plentiful and can be difficult to find, and when they aren’t found, the results can be catastrophic. Regression testing re-exercises the existing system after an upgrade to make sure the new work hasn’t inadvertently introduced functional issues into the old system.
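
As a minimal sketch of the idea (in Python with pytest; the apply_discount function and its capping rule are hypothetical, invented purely for illustration), a regression suite pins down existing behavior so every upgrade can be checked against it:

```python
# Minimal regression-test sketch (run with pytest). The function and its
# business rule are hypothetical stand-ins for pre-existing system behavior.

def apply_discount(price: float, percent: float) -> float:
    """Pre-existing rule: discounts are capped at 50 percent."""
    return round(price * (1 - min(percent, 50) / 100), 2)

def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_discount_is_capped():
    # The old rule that new code must not silently break.
    assert apply_discount(100.0, 75) == 50.0

def test_zero_discount_is_identity():
    assert apply_discount(19.99, 0) == 19.99
```

Re-running a suite like this after every upgrade is what catches a new layer quietly breaking an old rule.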

Robbing Peter to pay Paul: the time here versus time there paradox

Because of the sunny-day paradox (when it isn’t raining, a hole in the roof isn’t a concern), developers often come to terms with an overly optimistic project schedule by looking elsewhere in the process for opportunities to “make up” the difference. The testing budget is a tempting target. Developers don’t like testing anyway, so what better excuse not to test as much as they should than not having enough time or money in the budget for it?

“Good fences make good neighbors”

With all due respect to Robert Frost, building higher walls around your network does not make it, or you, safer. If hackers want in, they’ll get in. Your security vulnerabilities are probably not where you think they are anyway; it’s everything inside the wall you need to worry about. Now, technically, security is not, properly speaking, a developer fail. It’s not news to you, however, that security consumes an enormous portion of your overall IT budget. The answer isn’t necessarily spending more on security but spending smarter: not all vulnerabilities are the same, so don’t treat them as if they are. Deal first, and most aggressively, with the vulnerabilities you and your business can least afford to risk.
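
As a toy sketch of that triage (in Python; every name and score below is hypothetical), one common approach is to score each vulnerability by likelihood times business impact and work the backlog from the top:

```python
# Toy risk-based vulnerability triage. All entries and scores are
# hypothetical; real values would come from your own risk assessment.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str
    likelihood: int  # 1 (rare) to 5 (expected)
    impact: int      # 1 (nuisance) to 5 (business-critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

backlog = [
    Vulnerability("Outdated TLS on marketing microsite", likelihood=3, impact=2),
    Vulnerability("Unpatched internal file server", likelihood=4, impact=5),
    Vulnerability("Shared admin credentials", likelihood=5, impact=5),
]

# Fix the highest-risk items first.
for v in sorted(backlog, key=lambda v: v.risk, reverse=True):
    print(f"risk={v.risk:2d}  {v.name}")
```

The scoring model is deliberately crude; the point is only that ranking vulnerabilities beats treating them all the same.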

Bad project management

Bad project management is like a mysterious disease: it produces lots of symptoms but can be hard to pin down. It isn’t so much a single answer as a long list of questions. What are your expectations, and whom have you hired to manage them? How confident are you that he or she has the leadership experience to effectively manage a big software project? If you have contracted with a big IT firm, can you trust your people to tell you exactly what you need to know (not what you want to hear) to make sure the project is moving ahead on time and on budget? What are your performance metrics? Do you know where your software testing is being done? The best (and only) antidote for bad management is effective top-to-bottom accountability.

The Olga Korbut problem

A trained gymnast can bend into crazy shapes, and her flexibility is a wonder. Software projects are not built like gymnasts, however, no matter what you hear about development methodologies (such as Agile) whose promises of flexibility seem to invite improvisation: adding details, additional features, or new requirements to a software project anywhere downstream in its development cycle is costly, time-consuming, and raises the probability of making more mistakes.

You want to add some cool new feature? Sure, it can be done. But new code needs to be written, and it needs to be integrated into what already exists, and that means more time, more money, and a lot more testing. If you are okay with all that, well, brilliant; but if not . . .

“Hello? Yes. I have a Mr. Disaster on the other line?”

When disaster calls, will you pick up and listen to what he has to say, or hide under your desk and hope he stops calling? A properly budgeted, scoped, designed, engineered, and well-managed software development project that never jumps the rails is a fantasy. The technology is just too complex for military-style efficiency and precision. Warning bells at some point in the development process, however, are not necessarily heralds of Armageddon. Expect bumps in the road. Most problems that are reported in a timely fashion can be fixed in a timely fashion. Warning signs that are ignored or downplayed, however, are like viruses that lie dormant for a while before erupting with far greater virulence later in the process.

Inadequate testing

We fibbed a bit when we said there was no order of importance to this list. Inadequate testing belongs near the top, and it makes sense: for every reason software fails (bad or faulty code, inexperienced project management, unrealistic expectations, poor project communication, time and/or budget pressures, and so on), testing is the ultimate reckoning. Testing will always tell you what you need to know.

It’s the last outpost between you and the barbarians who menace you from the frontier beyond. But that brings us to the Rubicon of professional-quality testing: inadequate testing. That adjective, inadequate, is critical. We would also add the criterion “experienced” to the mix: inadequate and inexperienced testing.

As we will discover in the next chapter, it is almost never the case that software goes entirely untested. What has happened, and what seems to be happening with increasing regularity, is that software either has not been tested enough or testing goals have not been properly prioritized. The problem is both objective and subjective: what is being tested and how much testing is enough are the objective questions; who is doing the testing and how well it is being done are the subjective ones.

Deal objectively

Trying to deal objectively with the subjective component of the process (“This outfit will conduct the same system test as these other guys for half the cost”) or subjectively with the objective component (“Two weeks for the testing phase should be fine”) is a source of many software project failures.

Stay tuned for more thoughts on testing challenges and solutions.

Please reach out to me by email (arodov at qacstaging.wpengine.com) or via LinkedIn.

Alex Rodov is the Founder and Managing Partner at QA Consultants.