Rating the Importance of Problems

Note: I made some significant changes to this essay in June 2003. The original version of the essay is archived, but I no longer agree with some of the points I made therein.

Determining the relative importance of the problems you find is an essential step in the quality assurance process. Typically, the quality assurance role is located between the user and the development team. The QA team usually doesn’t have the responsibility of fixing problems, but QA does have to usher problems through some kind of resolution track. Chances are, problems get into a queue that is served in order of priority, and a careful process for rating importance will help you focus on resolving problems rather than on defending your methodology.

A major stumbling block to the timely, systematic resolution of problems is the failure to separate a subjective judgement of a problem’s importance from an objective observation of its extent and its ramifications for the site’s structure or functionality. Allowing management to decide which problems get handled first, without any reference to the set of all problems or the pool of available resources, will interfere with productivity.

Different teams and different organizations will approach prioritization differently; I will outline a scheme that is simple, consistent, and able to handle a wide range of situations.


The first step is to determine the severity of the problem, that is, its scope or extent: how much of the site is affected? How many pages are broken? How important is the broken functionality? Severity should reflect a qualitative appraisal of the problem’s extent, without any reference to where the problem falls on the “to fix” lists.

Some Severity Guidelines

Severity 1: the widest scope of a problem, with the entire site affected.

  • infrastructure has failed (a server has crashed, the network is down, etc.)
  • a functionality critical to the purpose of the website is broken, such as the search or commerce engine on a commerce site
  • in some cases, a problem interfering with testing might be considered a sev 1, if you are in a phase where a deadline hinges on the completion of testing

Severity 2:

  • a major functionality is broken or misbehaving
  • one or more pages are missing
  • a link on a major page is broken
  • a graphic on a major page is missing

Severity 3:

  • data transfer problems (like an include file error)
  • browser inconsistencies, such as table rendering or protocol handling
  • page formatting problems, including slow pages and graphics
  • broken links on minor pages
  • user interface problems (users don’t understand which button to click
    to accomplish an action, or don’t understand the navigation in a
    subsection, etc.)

Severity 4:

  • display issues, like font inconsistencies or color choice
  • text issues, like typos, word choice, or grammar mistakes
  • page layout issues, like alignment or text spacing

Who manages severity assignments?

I strongly believe that the quality assurance team should manage severity assignments for logged problems. The QA team will tend to log most of the problems, so it needs to be careful and honest in its evaluation of severity. I can’t overemphasize the need to be unrelentingly objective when assigning severity; do not give anybody the chance to use sloppy severity assignments against your QA methodology.

The QA team should also review bugs logged by people outside the team, checking for accuracy, validity, reproducibility, and of course severity. If you do change somebody’s severity assignment on a bug, be sure to notify them and explain the reason for the change.


The second step is judging the priority of the problem. Priority is an assessment of the problem’s importance.

Some Priority Guidelines

Critical priority: the priority is so high that the problem must be fixed now. Critical items should be tackled first, because the effects of such a problem cascade down through the site’s functionality and infrastructure.

High priority: these problems are very important and are required before the next “big” phase; they must be solved before launch, before the grand opening, before the news conference, and so on. Any problem interfering with a major site functionality is a high priority. Any problem that will make you or your site look stupid, incompetent, or untrustworthy is a high priority.

Moderate priority: these are problems like a broken graphic or link on a minor page, or a page that displays badly in some browsers. Moderate problems can usually wait until the more important problems are cleaned up, a common approach during “crunch times”.

Low priority: these are display issues affecting a few pages, such as typos or grammatical mistakes, or a minor element that is wrong on many pages.

Every severity level has a corresponding default priority level, but priority allows for some input from human and business needs. Problems with low severity can easily be assigned a higher priority. For example, a simple typo on a page is sev 4 because of its low scope; but if that page is your home page and the typo is in your company name, you can bet that it is a high-priority problem.
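This default-plus-override scheme can be sketched in a few lines. The `Problem` class, the `DEFAULT_PRIORITY` mapping, and the example problem below are all illustrative assumptions, not part of any real bug tracker:

```python
# Hypothetical sketch: each severity has a default priority, but a
# business-driven override can replace it.
from dataclasses import dataclass
from typing import Optional

# One plausible default mapping: sev 1 -> critical, ..., sev 4 -> low.
DEFAULT_PRIORITY = {1: "critical", 2: "high", 3: "moderate", 4: "low"}

@dataclass
class Problem:
    title: str
    severity: int                   # objective scope, assigned by QA
    priority: Optional[str] = None  # subjective importance, assigned elsewhere

    def effective_priority(self) -> str:
        # Fall back to the severity-based default when no explicit
        # priority has been assigned.
        return self.priority or DEFAULT_PRIORITY[self.severity]

# A sev 4 problem (low scope) bumped to high priority for business reasons:
typo = Problem("Company name misspelled on home page", severity=4)
typo.priority = "high"
```

The key point the sketch captures is that the override changes priority only; the severity stays an objective record of scope.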

Carefully assigning values for severity and priority yields the following order:

	sev 1, critical
	sev 1, high
	sev 1, moderate
	sev 1, low
	sev 2, critical
	sev 2, high
	sev 2, moderate
	sev 2, low
	sev 3, critical
	sev 3, high
	sev 3, moderate
	sev 3, low
	sev 4, critical
	sev 4, high
	sev 4, moderate
	sev 4, low

The point of using these two scales of severity and priority is that problems are put into perspective with regard to your resolution process and resources. Problems of greater scope should be addressed before problems of lesser scope, regardless of priority.
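The ordering above amounts to a two-level sort: severity first (a smaller number means wider scope), then priority within each severity level. The sample problems below are hypothetical:

```python
# Hypothetical sketch: order a bug queue by severity, then by priority,
# matching the sev/priority table above.
PRIORITY_RANK = {"critical": 0, "high": 1, "moderate": 2, "low": 3}

problems = [
    ("broken link on minor page", 3, "low"),
    ("search engine down", 1, "critical"),
    ("home-page typo in company name", 4, "high"),
    ("missing page", 2, "moderate"),
]

# Sort key: (severity, priority rank). Python's sort is stable, so
# problems with identical ratings keep their logged order.
queue = sorted(problems, key=lambda p: (p[1], PRIORITY_RANK[p[2]]))

# Severity dominates: the sev 4, high-priority typo still lands after
# every sev 3 problem.
```

Because severity is the first element of the sort key, no priority assignment can move a narrow-scope problem ahead of a wider-scope one.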

Who manages priority assignments?

I believe that the QA team should not assign priority, for two reasons. First, priority is a subjective assessment. While QA folks will always have an opinion about how important a problem is, the QA team’s assignment of importance should be the objective severity ranking; stick with the objective, and keep subjective opinion out of any potential arguments about importance.

Second, priority is usually a tool for guiding development and maintenance work. The issues considered in deciding what gets built (or fixed) and when can range far outside the QA team’s focus. The QA team does have an implicit say in the decision, because severity plays a role, and the team can lobby for certain problems to be fixed sooner; the QA team’s opinion can inform what priority gets assigned, but the primary assignment of priority should not come from the QA team.