A test case is simply a list of actions that need to be executed to verify a particular functionality or feature of an application under test. Test cases can also be used to document and help understand a product’s functionality. For example, developers can examine test cases prior to developing a feature to understand what it should do, or execute test cases for existing functionality rather than reading a user manual. This document provides some guidelines for writing test cases. Depending on the context of your business, you may want to implement one, many, or all of these guidelines.

 

What Should Be In a Test Case?

Test Case Naming Convention

Although this seems simple and straightforward, many don’t practice this in a disciplined manner. It’s important to name your test cases in a clear and logical manner so that it’s apparent to others, who may refer to them in the future, what they are testing. For example:

For a project/product called “ERP” that has a functional area named “Login”, write a test case to verify that the user is able to log in using an email and password.

Rather than simply numbering tests like TC_01, use a naming convention that gives a brief idea of what the test is for just by looking at its name.

  • ERP-TC-Login-Success or ERP-TC-Login-Valid

If your test cases are issue types within an issue management system like Jira, then it’s helpful to have TC as part of the name, just as you might use the acronym US to indicate a user story. In both cases, make the name relevant to the project/module/functional feature under test. Additionally, try to make the names such that, just by looking at the test case ID or name, you know how the test fits into the overall hierarchical structure of the test cases and into the structure of the software. For a test case that verifies printing reports in landscape, you might use:

  • ERP-TC-PRT-Landscape-Valid

You can also make deliberate use of ALL CAPS and capitalization. The important thing is to apply the convention consistently across the application, especially if multiple testers are writing test cases. This way, everyone instantly knows what a test case is for just by looking at its name.
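
If your test cases also live in code, the same convention can carry over to automated test names. Below is a minimal pytest sketch, assuming a hypothetical ERP login module; the docstrings tie each automated test back to the managed test case ID.

  # Illustrative only: encode project, module, feature, and expected outcome
  # in the test name, mirroring the managed test case ID.
  class TestERPLogin:
      def test_login_valid_credentials_succeeds(self):
          """ERP-TC-Login-Valid: user can log in with a valid email and password."""
          ...

      def test_login_invalid_password_rejected(self):
          """ERP-TC-Login-Invalid: login is rejected when the password is wrong."""
          ...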

 

Test Case Description

The description provides the details about what you are going to test and the particular behavior or action being verified by the test. Typically, this includes:

  • Test to be carried out/behavior being verified
  • Preconditions and assumptions (any dependencies)
  • Test data to be used
  • Test environment details (if applicable)
  • Test tools to be used for the test (if applicable)

More information about assumptions, preconditions, post-conditions, and test data is provided below.
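
As an illustration only (not a required schema), the fields above can be captured as a simple structured record. Here is a minimal Python sketch; the field names are assumptions, not the format of any particular test management tool.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class TestCaseRecord:
      """Illustrative shape of a test case entry."""
      case_id: str                                             # e.g. "ERP-TC-Login-Valid"
      description: str                                         # behavior being verified
      preconditions: List[str] = field(default_factory=list)   # assumptions and dependencies
      test_data: List[str] = field(default_factory=list)       # values or a description of the data
      environment: str = ""                                    # test environment details, if applicable
      tools: List[str] = field(default_factory=list)           # test tools, if applicable

  # Example usage with the login scenario from earlier:
  login_case = TestCaseRecord(
      case_id="ERP-TC-Login-Valid",
      description="User can log in with a valid email and password.",
      preconditions=["User account exists and is active"],
      test_data=["email=user@example.com", "password=<valid password>"],
  )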

 

Assumptions, Preconditions, and Post-Conditions

Test cases should include all assumptions that apply to a test, along with any preconditions that must be met before the test can be executed. This may include, but is not limited to:

  • Any user data dependency (e.g. the page from which the user must start, initial data values, etc.)
  • Test environment dependencies
  • Special setup to be done before test execution
  • Dependencies on other test cases
  • Post-conditions that provide instructions for restoring the system to its original state, so as not to encumber later testing

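In automated form, preconditions and post-conditions map naturally onto setup and teardown. The following is a minimal pytest sketch; create_user and delete_user are hypothetical helpers standing in for your real setup API, and the teardown restores the system so later tests are not encumbered.

  import pytest

  # Hypothetical helpers standing in for your real test setup API.
  def create_user(email):
      return {"email": email}

  def delete_user(user):
      pass  # would remove the user and any data it created

  @pytest.fixture
  def login_user():
      # Precondition: a known user must exist before the test runs.
      user = create_user("user@example.com")
      yield user
      # Post-condition: clean up so later tests start from the original state.
      delete_user(user)

  def test_login_valid_credentials_succeeds(login_user):
      assert login_user["email"] == "user@example.com"  # placeholder verification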

 

Test Data

Identifying and preparing test data can sometimes take the most time in a testing cycle. In some cases, it’s easier to create test data than to take the time to identify and prepare actual data. When possible, provide the test data to be used within the test case description or with the specific test case step. A few pointers:

  • If you know the test data can be reused over time, mention the exact test data to be used for the test.
  • If the test verifies specific values, specify the value range or describe what values are to be tested for which field. Do this for negative scenarios as well.
  • Because testing with every value is impractical, choose a few values from each equivalence class to provide good coverage for your test.
  • In some cases, it may be most appropriate to describe the type of data that is required to run the test rather than the real test data value. This applies to projects where the test data keeps changing as multiple teams use it and may not be in the same form when used the next time.

Lastly, create a cleanup routine at the end of the test case or test cycle as too much or “messy” data can reduce the visibility of bugs as well as influence the success of your test automation scripts.
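
Here is a hedged sketch of the equivalence-class idea using pytest parametrization; validate_quantity and the valid range of 1 to 100 are assumptions made purely for illustration.

  import pytest

  # Hypothetical rule: an order quantity is valid between 1 and 100 inclusive.
  def validate_quantity(qty):
      return 1 <= qty <= 100

  # A few representative values per equivalence class rather than every value:
  # below range, lower boundary, typical, upper boundary, above range.
  @pytest.mark.parametrize(
      "qty, expected",
      [(0, False), (1, True), (50, True), (100, True), (101, False)],
  )
  def test_quantity_equivalence_classes(qty, expected):
      assert validate_quantity(qty) is expected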

 

Functional Area Covered

By having a field or label that contains the functional area covered, it is easy to extract all test cases pertaining to that function. Then, if you want to do targeted regression, you can test this specific area. For example, if you make an update to your accounts receivable function and the related test cases are labeled AR, you can make sure that you execute all AR-related tests.
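
If the same suite is automated, one way to make the functional-area label actionable is a marker or tag. A minimal pytest sketch follows; the ar marker is an assumed convention (register it in pytest.ini to avoid warnings), not a built-in.

  import pytest

  @pytest.mark.ar  # functional area: Accounts Receivable
  def test_ar_invoice_created_for_shipped_order():
      assert True  # placeholder verification

  # Targeted regression after an AR change: run only AR-labeled tests with
  #   pytest -m ar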

 

Test Case Priority

Including priority information in the test case will help you and your team test more effectively.

For instance, you can assign:

  • “Build Verification Test” as 1st priority test cases,
  • “Smoke Test” as 2nd priority test cases,
  • “Regression Test” as 3rd priority test cases.
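
The same tagging approach can express priority tiers in automated suites. A short sketch with assumed marker names (bvt, smoke, regression), again registered in pytest.ini:

  import pytest

  @pytest.mark.bvt          # 1st priority: build verification
  def test_application_starts():
      assert True  # placeholder verification

  @pytest.mark.smoke        # 2nd priority: smoke
  def test_login_page_loads():
      assert True  # placeholder verification

  @pytest.mark.regression   # 3rd priority: regression
  def test_report_totals_match_previous_release():
      assert True  # placeholder verification

  # Run the tiers in priority order, e.g.:
  #   pytest -m bvt && pytest -m smoke && pytest -m regression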

Automation

Utilize your test case management system to integrate and manage automation alongside manual tests. For example, with Gherkin/Cucumber, if QA has automated a test case in the platform and the product owner later changes that test case, the automated test should be updated to match; the system can send a notification via the Cucumber plugin. Even without such notifications for changes to test cases, denote which tests are automated and the connected user story or requirement. This helps you know whether the automation should be updated when a requirement changes.
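
Even a lightweight convention helps here. As an assumption for illustration (not a Cucumber feature), each automated test can carry a marker recording that it is automated and which user story it covers, so a changed story can be traced to the tests that need updating.

  import pytest

  @pytest.mark.automated
  @pytest.mark.story("ERP-123")  # hypothetical Jira user story key
  def test_login_valid_credentials_succeeds():
      assert True  # placeholder for the automated check

  # A report or a small plugin can read the "story" marker to map automated
  # tests back to requirements; pytest -m automated lists the automated tests.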

 

Expected Result

Test cases should clearly describe the expected result. Each test design step should clearly state what you expect as the outcome of that verification step. Denote in detail what page/screen you expect to appear after the test step and any updates you expect in back-end systems or databases (for example, changes to database tables that would affect other functionality). Screenshots or specification documents can be attached to the relevant step, noting how the system should work.
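
In an automated step this means asserting both the visible outcome and the back-end effect. A minimal sketch follows; submit_login and fetch_user_session are hypothetical stand-ins for your UI driver and database access layer.

  # Hypothetical stand-ins for a UI driver and a database query layer.
  def submit_login(email, password):
      return {"redirect": "/dashboard"}        # page expected after the step

  def fetch_user_session(email):
      return {"email": email, "active": True}  # back-end record expected after the step

  def test_login_expected_results():
      # Expected result 1: the user lands on the dashboard page.
      response = submit_login("user@example.com", "correct-password")
      assert response["redirect"] == "/dashboard"

      # Expected result 2: the back end records an active session for the user.
      session = fetch_user_session("user@example.com")
      assert session["active"] is True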

 

Use Reference Information or Artifacts When Appropriate

You and your team will need to decide when it is most appropriate to attach screenshots and other reference information. You don’t want to have a test case simply say, “Do as in the screenshot” or “Follow the specification”. On the other hand, sometimes a specific section on the screen or test step needs a reference point to ensure understanding. In these cases, attaching specification documents or screen designs can replace a few paragraphs of textual explanation.

 

Traceability

If possible, map test cases to user stories or functional requirements. This way, if your test case execution shows unexpected behavior or results, it can be traced back to the user story to see whether the story has changed or whether the test case is still valid. Also, when the user story or requirement changes, you can update the mapped test cases appropriately.
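
A hedged sketch of such a mapping outside any particular tool, using plain Python data; the story keys and test case IDs are illustrative.

  # Illustrative mapping of test cases to the user stories they verify.
  trace_matrix = {
      "ERP-TC-Login-Valid": ["ERP-101"],
      "ERP-TC-Login-Invalid": ["ERP-101"],
      "ERP-TC-PRT-Landscape-Valid": ["ERP-245"],
  }

  def tests_for_story(story_key):
      """Return the test cases to revisit when a user story changes."""
      return [case for case, stories in trace_matrix.items() if story_key in stories]

  print(tests_for_story("ERP-101"))  # ['ERP-TC-Login-Valid', 'ERP-TC-Login-Invalid']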

 

Writing Test Cases

Cover all Verification Points in Test Design

Well-written test cases have properly defined verification steps covering all the verification points for the feature under test. In the earlier example of printing in landscape, other print modes and functions would also need to be verified. This is particularly important in domains such as accounting, retail, and manufacturing, where printed output is used for verification and shipping. Examine your product artifacts (requirement documents, use cases, user stories, specifications, etc.) to ensure that your test cases cover all verification points within each function and within each user story.
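
One lightweight way to check this is to keep an explicit map from verification points to test case IDs and flag gaps. The sketch below uses assumed names built on the printing example.

  # Verification points for the (hypothetical) ERP print feature, mapped to test case IDs.
  verification_points = {
      "print report in landscape": ["ERP-TC-PRT-Landscape-Valid"],
      "print report in portrait": ["ERP-TC-PRT-Portrait-Valid"],
      "print preview matches printed output": [],  # gap: no test case yet
  }

  uncovered = [point for point, cases in verification_points.items() if not cases]
  print("Verification points without a test case:", uncovered)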

When working in an agile environment, it may not be practical to cover every single test as a test case. It will be up to your team to decide what level of granularity is needed for test cases depending on your culture and context. You may choose to cover some of these verification steps through targeted exploratory testing. Exploratory testing, when done properly, can serve as an excellent means for rooting out defects as well as a foundation for test design and developing new test cases.

 

Test Case Organization and Flags

For test cases that cover functionality that is not explicit (special scenarios such as browser-specific behaviors, cookie verification, usability testing, web service testing, and error checking), organize these into sets and subsets and flag them as special functional test cases. For instance, test cases that verify error conditions can be written separately from functional test cases, with steps to verify the error messages, a corresponding flag, and/or an indication within the test case naming convention.

When organizing scenarios into sets, keep in mind that a particular feature may have many input combinations; in this case, separate the test into sub-tests. As in the login example, to verify that login rejects invalid input, you can divide this negative testing into subtests such as the following (a sketch in code follows the list):

  • Use invalid email-id
  • Use invalid password
  • Use blank email-id field
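
Here is a minimal sketch of these subtests in pytest, flagged as negative scenarios; attempt_login is a hypothetical helper and the ids mirror the subtest names.

  import pytest

  # Hypothetical helper standing in for the real login call.
  def attempt_login(email, password):
      return bool(email) and "@" in email and password == "correct-password"

  @pytest.mark.negative  # flag: error/invalid-input scenario
  @pytest.mark.parametrize(
      "email, password",
      [
          ("not-an-email", "correct-password"),    # invalid email-id
          ("user@example.com", "wrong-password"),  # invalid password
          ("", "correct-password"),                # blank email-id field
      ],
      ids=["invalid-email", "invalid-password", "blank-email"],
  )
  def test_login_rejects_invalid_input(email, password):
      assert attempt_login(email, password) is False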

 

Concise Yet Understandable

When designing test cases, remember that test cases may not be executed by the one who designs them. Test cases that are only understood by the one who designed them have limited long-term value. Test cases should be:

  • Simple. Understood by everyone (future employees or those who may be temporarily assisting the QA effort)
  • Concise. If the Test Case has too many steps, break it down into sub-test cases as noted above.

 

Reusability and Maintenance

You should write test cases such that they can be reused in the future. Before writing a new test case, check whether test cases have already been written for the same function or feature. This can save time and avoid redundancy in your test management efforts. Also, if you have existing test cases for the same module, update those test cases when necessary.

 

End-User Viewpoint

When writing test cases, think from the end users’ perspective, and from an end-to-end point of view. Also, keep in mind the different roles that may use the same or similar functionality but in a different way.

 

Test Case Review

Test cases play an important role in documenting the knowledge of the organization and should be considered a company asset. As such, making sure they are updated, correct, and conform to your company standard should be a priority. To ensure their continued value, consider a test case review process whereby peer testers, business analysts, consultants, developers, product owners, and other relevant stakeholders regularly examine test cases to determine how they can be improved and remain valuable and relevant for maintaining and improving software quality.