Development Process: On Defect Tracking Systems

Last night I read an article about bug tracking that I think could help us. Here is a summary of the author's points:

1. Problems found during an iteration do not count as bugs. Only the Product Owner has the authority to call something a "bug", and in a healthy agile team there should be no need for any bug tracking system at all.

2. In an agile context, a bug is behavior of the product that conflicts with the Product Owner's valid expectations.

2.1 Before the product is Done, behavior that does not match the Product Owner's expectations does not count as a "bug"; the only action to take is to fix it immediately. (Zero tolerance for bugs: since anything found gets fixed right away, there is no need to name these issues, no need to prioritize them, and no need to track them in a bug tracking system. We just take care of them immediately.)

2.2 After the product is Done, the software's behavior may still conflict with the Product Owner's expectations. That is when we have a bug, and only then is a defect tracking system needed.

2.3 Minimize the number of recorded bugs and fix problems as soon as they are found; ranking bugs by severity brings no benefit.


Here are my own thoughts:

1. Tracking defects is still necessary (though not necessarily via a defect tracking system). Product testing happens between the programmer and the tester (currently 豆丁 is the browser developer and eric is the tester), so if problems are found but never recorded (even ones they fix quickly), the project lead (阿雅) has no way to assess the current project's defect density and risk.

2. The author is right that ranking defects and writing piles of documentation brings no benefit, so our defect system should also be as simple and convenient as possible. One approach: every defect has only two states, resolved and unresolved.
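The two-state idea above can be sketched as a minimal record type. This is only an illustration of the proposed simplification, not any existing tool; the field names are assumptions:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class Defect:
    """A minimal defect record: just a description, a reporter, and a flag."""
    description: str
    reporter: str
    opened: date = field(default_factory=date.today)
    resolved: bool = False  # the only two states: resolved / unresolved


def unresolved(defects: List[Defect]) -> List[Defect]:
    """Everything still open -- this list is the entire 'triage' view."""
    return [d for d in defects if not d.resolved]
```

With no severity, priority, or workflow fields, the only question the system ever answers is "what is still open?", which matches the zero-tolerance principle.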

3. We currently have no defect tracking system, so for now I can only create a test project in which every team member is a project owner. Anyone who finds a problem can create a task describing it, and I will be responsible for seeing that these problems get resolved.

4. Based on the zero-tolerance principle, could we improve the test project slightly: if a problem found today is not resolved the same day, send the responsible programmer an RTX reminder once a day until it is fixed.
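The daily-reminder rule could look something like the sketch below. The RTX messaging call is not shown here, so `notify` is a placeholder callback standing in for whatever API actually sends the message; the dictionary keys are assumptions:

```python
from datetime import date
from typing import Callable, Dict, List


def remind_overdue(defects: List[Dict], today: date,
                   notify: Callable[[str, str], None]) -> int:
    """Send one reminder per defect that is still unresolved and was opened
    before today. `notify(assignee, message)` is a placeholder for the real
    RTX send call. Returns the number of reminders sent."""
    sent = 0
    for d in defects:
        if not d["resolved"] and d["opened"] < today:
            notify(d["assignee"],
                   f"Unresolved since {d['opened']}: {d['description']}")
            sent += 1
    return sent
```

Run from a daily scheduled job, this nags only about defects that survived past the day they were found, which is exactly the case the zero-tolerance rule says should not happen.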

Original article:

Handling Bugs in an Agile Context

March 13th, 2009 
Filed under Agile

I was honored to be included on the lunch and learn panel at the Software Quality Association of Denver (SQuAD) conference this week. One of the questions that came up had to do with triaging bugs in an Agile context. Here’s my answer, in a bit more detail than I could give at the panel.

The short answer is that there should be so few bugs that triaging them doesn’t make sense. After all, if you only have 2 bugs, how much time do you need to spend discussing whether or not to fix them?

When I say that, people usually shake their head. “Yeah right,” they say. “You obviously don’t live in the real world.” I do live in the real world. Truly, I do. The problem, I suspect, is one of definition. When is a bug counted as a bug?

In an Agile context, I define a bug as behavior in a “Done” story that violates valid expectations of the Product Owner.

There’s plenty of ambiguity in that statement, of course. So let me elaborate a little further.

Let’s start with the Product Owner. Not all Agile teams use this term. So where my definition says “Product Owner,” substitute in the title or name of the person who, in your organization, is responsible for defining what the software should do. This person might be a Business Analyst, a Product Manager, or some other Business Stakeholder.

This person is not anyone on the implementation team. Yes, the testers or programmers may have opinions about what’s a bug and what’s not. The implementation team can advise the Product Owner. But the Product Owner decides.

This person is also not the end user or customer. When end users or customers encounter problems in the field, we listen to them. The Product Owner takes their opinions and preferences and needs into account. But the Product Owner is the person who ultimately decides if the customer has found something that violates valid expectations of the behavior of the system.

Yes, that does put a lot of responsibility on the shoulders of the Product Owner, but that’s where the responsibility belongs. Defining what the software should and should not do is a business decision, not a technical decision.

Speaking of expectations, let’s talk about that a little more.

When the Product Owner defines stories, they have expectations about what the story will look like when it’s done. The implementation team collaborates with the Product Owner on articulating those expectations in the form of Acceptance Criteria or Acceptance Tests.

It’s easy to tell if the software violates those explicit expectations. However, implicit expectations are a little more difficult. And the Product Owner will have implicit expectations that are perfectly valid. There is no way to capture every nuance of every expectation in an Acceptance Test.

Further, there are some expectations that cannot be captured completely. “It should never corrupt data or lose the user’s work,” the Product Owner may say, or “It should never jeopardize the safety of the user.” We cannot possibly create a comprehensive enough set of Acceptance Tests to cover every possibility. So we attend to both the letter of the Acceptance Tests and the spirit, and we use Exploratory Testing to look for unforeseen conditions in which the system misbehaves.

Finally, let’s talk about “Done.” Done means implemented, tested, integrated, explored, and ready to ship or deploy. Done doesn’t just mean coded, Done means finished, complete, ready, polished.

Before we declare a story “Done,” if we find something that would violate the Product Owner’s expectations, we fix it. We don’t argue about it, we don’t debate or triage, we just fix it. This is what it means to have a zero tolerance for bugs. This is how we keep the code base clean and malleable and maintainable. That’s how we avoid accumulating technical debt. We do not tolerate broken windows in our code. And we make sure that there are one or more automated tests that would cover that same case so the problem won’t creep back in. Ever.

And since we just fix them as we find them, we don’t need a name for these things. We don’t need to prioritize them. We don’t need to track them in a bug tracking system. We just take care of them right away.

At this point someone inevitably asks, “But don’t we need to track the history of the things we fix? Don’t we want to collect metrics about them?” To that I answer “Whatever for? We’ve caught it, fixed it, and added a test for it. What possible business value would it have to keep a record of it? Our process obviously worked, so analyzing the data would yield no actionable improvements.”

If we are ever unsure whether something violates the Product Owner’s expectations we ask. We don’t guess. We show the Product Owner. The Product Owner will say one of three things: “Wow, that’s a problem,” or “That’s outside the scope of this story, I’ll add it to the backlog,” Or “Cool! It’s working exactly as I want it to!” If the Product Owner says it’s a problem, we fix it.

If the Product Owner says “Technically, that’s a bug, but I would rather have more features than have you fix that bug, so make a note of it but leave it alone for now” then we tell the Product Owner that it belongs on the backlog. And we explain to the Product Owner that it is not a bug because it does not violate their current expectations of the behavior of the software.

Someone else usually says at this point, “But even if the Product Owner says it’s not a problem, shouldn’t we keep a record of it?” Usually the motivation for wanting to keep a record of things we won’t fix is to cover our backsides so that when the Product Owner comes back and says “Hey! Why didn’t you catch this?” we can point to the bug database and say “We did too catch it and you said not to fix it. Neener neener neener.” If an Agile team needs to keep CYA records, they have problems that bug tracking won’t fix.

Further, there is a high cost to such record keeping.

Many of the traditional teams I worked with (back before I started working with Agile teams) had bug databases that were overflowing with bugs that would never be fixed. Usually these were things that had been reported by people on the team, generally testers, and prioritized as “cosmetic” or “low priority.”

Such collections of low priority issues never added value: we never did anything with all that information. And yet we lugged that data forward from release to release in the mistaken belief that there was value in tracking every single time someone reported some nit picky thing that the business just didn’t care about.

The database became more like a security blanket than a project asset. We spent hours and hours in meetings discussing the issues, making lists of issues to fix, and tweaking the severity and priority settings, only to have all that decision making undone when the next critical feature request or bug came in. If that sounds familiar, it’s time to admit it: that information is not helping move the project forward. So stop carrying it around. It’s costing you more than it’s gaining you.

So when do we report bugs in an Agile context?

After the story is Done and Accepted, we may learn about circumstances in which the completed stories don’t live up to the Product Owner’s expectations. That’s when we have a bug.

If we’re doing things right, there should not be very many of those things. Triaging and tracking bugs in a fancy bug database does not make sense if there are something like 5 open issues at any given time. The Product Owner will prioritize fixing those bugs against other items in the product backlog and the team will move on.

And if we’re not doing things right, we may find out that there are an overwhelming number of the little critters escaping. That’s when we know that we have a real problem with our process. Rather than wasting all that time trying to manage the escaping bugs, we need to step back and figure out what’s causing the infestation. Stop the bugs at the source instead of trying to corral and manage the little critters.
