Development Process: On Defect Tracking Systems

Last night I read an article about bug tracking that I think could help us. Here is a summary of the author's points:


1. Problems found during an iteration don't count as bugs; only the Product Owner has the authority to call something a "bug". In a healthy agile team, no bug tracking system should be needed at all.

2. In an agile context, a bug is behavior of the product that conflicts with the Product Owner's valid expectations.

2.1 Before the product is Done, behavior that doesn't match the Product Owner's expectations isn't a "bug"; the only action to take is to fix it immediately. (Zero tolerance for bugs: since anything we find gets fixed right away, there's no need to name these things, no need to assign them priorities, and no need to track them in a bug tracking system. We just take care of them immediately.)

2.2 After the product is Done, the software may behave in ways that conflict with the Product Owner's expectations. That's when we have a bug, and only then is a defect tracking system needed.

2.3 Minimize the number of bugs that get recorded: fix problems as soon as they are found. Ranking bugs by severity brings no benefit at all.


Here are my own thoughts:

1. Tracking defects is still necessary (though not necessarily with a defect tracking system), because product testing happens between the programmers and the testers (currently 豆丁 is the browser developer and eric is the tester). If problems are found but never recorded (even when they get fixed quickly), the project lead (阿雅) has no way to assess the project's current defect density and risk.

2. The author is right that ranking defects and writing lots of documentation brings no benefit, so our defect system should also be kept as simple and convenient as possible. For example: every defect has exactly two states, resolved and unresolved.
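As a minimal sketch of what such a two-state record could look like (all names here are hypothetical, not an existing tool):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Defect:
    """A minimal defect record: no severity, no priority fields,
    just resolved or unresolved."""
    title: str
    reporter: str
    reported_on: date = field(default_factory=date.today)
    resolved: bool = False

def unresolved(defects):
    """Return the defects that still need attention."""
    return [d for d in defects if not d.resolved]

# Example usage
bugs = [Defect("page crashes on reload", "eric"),
        Defect("toolbar icon misaligned", "eric")]
bugs[1].resolved = True
print([d.title for d in unresolved(bugs)])  # ['page crashes on reload']
```

The point of the sketch is what's absent: there is no priority field to argue about, so the only meaningful query is "what is still unresolved?".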

3. We don't have a defect tracking system at the moment, so all I can do is set up a test project in which every team member is a project owner: anyone who finds a problem can create a task describing it, and I'll be responsible for following up until those problems are resolved.

4. Based on the zero-tolerance-for-bugs principle, could we improve the test project slightly: if a problem isn't resolved the same day it's found, send a scheduled RTX reminder to the responsible programmer once a day until it is.
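The reminder rule above can be sketched as a simple daily filter. This is only an illustration under assumptions: defects are plain dicts, and `send_rtx` is a hypothetical stand-in for whatever RTX messaging interface is actually available.

```python
from datetime import date

def defects_to_remind(defects, today=None):
    """Defects reported before today that are still unresolved
    get a reminder. Each defect is a dict with 'resolved',
    'reported_on', 'assignee', and 'title' keys."""
    today = today or date.today()
    return [d for d in defects
            if not d["resolved"] and d["reported_on"] < today]

def run_daily_reminders(defects, send_rtx, today=None):
    """Meant to run once a day (e.g. from a scheduled task);
    send_rtx(recipient, message) is a hypothetical messaging hook."""
    for d in defects_to_remind(defects, today):
        send_rtx(d["assignee"], f"Unresolved defect: {d['title']}")
```

Problems found and fixed the same day (`reported_on == today`) never trigger a message, which matches the "only nag after the first day" intent.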


Original article:

Handling Bugs in an Agile Context

March 13th, 2009 
Filed under Agile

I was honored to be included on the lunch and learn panel at the Software Quality Association of Denver (SQuAD) conference this week. One of the questions that came up had to do with triaging bugs in an Agile context. Here’s my answer, in a bit more detail than I could give at the panel.

The short answer is that there should be so few bugs that triaging them doesn’t make sense. After all, if you only have 2 bugs, how much time do you need to spend discussing whether or not to fix them?

When I say that, people usually shake their head. “Yeah right,” they say. “You obviously don’t live in the real world.” I do live in the real world. Truly, I do. The problem, I suspect, is one of definition. When is a bug counted as a bug?

In an Agile context, I define a bug as behavior in a “Done” story that violates valid expectations of the Product Owner.

There’s plenty of ambiguity in that statement, of course. So let me elaborate a little further.

Let’s start with the Product Owner. Not all Agile teams use this term. So where my definition says “Product Owner,” substitute in the title or name of the person who, in your organization, is responsible for defining what the software should do. This person might be a Business Analyst, a Product Manager, or some other Business Stakeholder.

This person is not anyone on the implementation team. Yes, the testers or programmers may have opinions about what’s a bug and what’s not. The implementation team can advise the Product Owner. But the Product Owner decides.

This person is also not the end user or customer. When end users or customers encounter problems in the field, we listen to them. The Product Owner takes their opinions and preferences and needs into account. But the Product Owner is the person who ultimately decides if the customer has found something that violates valid expectations of the behavior of the system.

Yes, that does put a lot of responsibility on the shoulders of the Product Owner, but that’s where the responsibility belongs. Defining what the software should and should not do is a business decision, not a technical decision.

Speaking of expectations, let’s talk about that a little more.

When the Product Owner defines stories, they have expectations about what the story will look like when it’s done. The implementation team collaborates with the Product Owner on articulating those expectations in the form of Acceptance Criteria or Acceptance Tests.

It’s easy to tell if the software violates those explicit expectations. However, implicit expectations are a little more difficult. And the Product Owner will have implicit expectations that are perfectly valid. There is no way to capture every nuance of every expectation in an Acceptance Test.

Further, there are some expectations that cannot be captured completely. “It should never corrupt data or lose the user’s work,” the Product Owner may say, or “It should never jeopardize the safety of the user.” We cannot possibly create a comprehensive enough set of Acceptance Tests to cover every possibility. So we attend to both the letter of the Acceptance Tests and the spirit, and we use Exploratory Testing to look for unforeseen conditions in which the system misbehaves.

Finally, let’s talk about “Done.” Done means implemented, tested, integrated, explored, and ready to ship or deploy. Done doesn’t just mean coded, Done means finished, complete, ready, polished.

Before we declare a story “Done,” if we find something that would violate the Product Owner’s expectations, we fix it. We don’t argue about it, we don’t debate or triage, we just fix it. This is what it means to have a zero tolerance for bugs. This is how we keep the code base clean and malleable and maintainable. That’s how we avoid accumulating technical debt. We do not tolerate broken windows in our code. And we make sure that there are one or more automated tests that would cover that same case so the problem won’t creep back in. Ever.

And since we just fix them as we find them, we don’t need a name for these things. We don’t need to prioritize them. We don’t need to track them in a bug tracking system. We just take care of them right away.

At this point someone inevitably asks, “But don’t we need to track the history of the things we fix? Don’t we want to collect metrics about them?” To that I answer “Whatever for? We’ve caught it, fixed it, and added a test for it. What possible business value would it have to keep a record of it? Our process obviously worked, so analyzing the data would yield no actionable improvements.”

If we are ever unsure whether something violates the Product Owner’s expectations we ask. We don’t guess. We show the Product Owner. The Product Owner will say one of three things: “Wow, that’s a problem,” or “That’s outside the scope of this story, I’ll add it to the backlog,” Or “Cool! It’s working exactly as I want it to!” If the Product Owner says it’s a problem, we fix it.

If the Product Owner says “Technically, that’s a bug, but I would rather have more features than have you fix that bug, so make a note of it but leave it alone for now” then we tell the Product Owner that it belongs on the backlog. And we explain to the Product Owner that it is not a bug because it does not violate their current expectations of the behavior of the software.

Someone else usually says at this point, “But even if the Product Owner says it’s not a problem, shouldn’t we keep a record of it?” Usually the motivation for wanting to keep a record of things we won’t fix is to cover our backsides so that when the Product Owner comes back and says “Hey! Why didn’t you catch this?” we can point to the bug database and say “We did too catch it and you said not to fix it. Neener neener neener.” If an Agile team needs to keep CYA records, they have problems that bug tracking won’t fix.

Further, there is a high cost to such record keeping.

Many of the traditional teams I worked with (back before I started working with Agile teams) had bug databases that were overflowing with bugs that would never be fixed. Usually these were things that had been reported by people on the team, generally testers, and prioritized as “cosmetic” or “low priority.”

Such collections of low priority issues never added value: we never did anything with all that information. And yet we lugged that data forward from release to release in the mistaken belief that there was value in tracking every single time someone reported some nit picky thing that the business just didn’t care about.

The database became more like a security blanket than a project asset. We spent hours and hours in meetings discussing the issues, making lists of issues to fix, and tweaking the severity and priority settings, only to have all that decision making undone when the next critical feature request or bug came in. If that sounds familiar, it’s time to admit it: that information is not helping move the project forward. So stop carrying it around. It’s costing you more than it’s gaining you.

So when do we report bugs in an Agile context?

After the story is Done and Accepted, we may learn about circumstances in which the completed stories don’t live up to the Product Owner’s expectations. That’s when we have a bug.

If we’re doing things right, there should not be very many of those things. Triaging and tracking bugs in a fancy bug database does not make sense if there are something like 5 open issues at any given time. The Product Owner will prioritize fixing those bugs against other items in the product backlog and the team will move on.

And if we’re not doing things right, we may find out that there are an overwhelming number of the little critters escaping. That’s when we know that we have a real problem with our process. Rather than wasting all that time trying to manage the escaping bugs, we need to step back and figure out what’s causing the infestation. Stop the bugs at the source instead of trying to corral and manage the little critters.

 
