Weighted Scoring Model: A Technique for Comparing Software Tools
This post is a response to one of my readers who wanted to know how to compare different software solutions. In her case, she wanted to compare bug-tracking tools and recommend an option to her team members.
Since this is a situation that most analysts will face at one point or another, I decided to share this technique for comparing software.
It’s not the only approach, and bear in mind that the outcome of the comparison depends heavily on your understanding of the software features (criteria), the amount of information you have on each tool, and how important each feature is perceived to be by different stakeholder groups.
The approach is based on the weighted scoring model. Using a bug-tracking tool comparison as a case study, let’s examine how it works.
1. Identify your requirements. It’s important for the bug-tracking tool to support your process, and not impose its own workflow on your team. By asking yourself what you need from the software, you can get a sense of the criteria needed to guide your evaluation. Also, gather input from everyone on your team (including potential users). This way, the results of your evaluation will be considered fair, balanced and acceptable to everyone.
The list below contains examples of criteria to consider. Each criterion should be weighted based on its importance to the business. For example, you can assign a weight of 30%, 20% or 10% to each criterion, as long as all the percentages add up to 100% (the short sketch after this list shows one way to record these weights).
- Customization: You would want to choose a tool that developers in your organization can customize. For example, if your organization only has developers with Java expertise and you select a solution written in C++, you are likely to encounter problems down the line.
- Ease of Use: If there are too many fields requiring data entry, or the application is difficult to understand, people may work around the system rather than use it.
- Email Notifications/Alerts: Team members may need to be notified of activity on issues. For example, managers may need to be informed when new bugs are submitted, while developers may want to know when bugs have been assigned to them.
- Reports & Searches: This is used to gather information on how the process works, resource allocation and other useful information on existing bugs. The search functionality will also make it easy to find information over time.
- Bug Change History: Do you need to record changes made to a bug from inception? Some tools provide an audit trail/time stamping feature that records changes made to issues in the revision history section.
- Security: You may want to set different permissions based on user accounts or groups. Some tools allow you to define who can edit what in the application.
- Workflow: At the very least, the tool should be configurable to your process and you should be able to specify the steps of the process and their order.
- Cost: Have you considered the total cost of maintenance, training and hardware?
- O/S and Hardware Platform: If your organizational standard is the Mac and the software only runs on Windows and Linux, you may have a problem.
- Integration: Will you need to integrate the system with other products or tools? You may want to integrate it with other test management tools, project management tools, help desk applications, requirement management tools or configuration management software. How easy will integration be?
- Adaptability: The system should be able to track more than just bugs. You may also want it to track other types of issues, such as support calls, change requests and new feature requests. The software should provide different templates for tracking different issue types; fields, reports, notifications and workflows should be customizable, at a minimum.
- Software Version Management: You'll want to keep careful track of versions so that testers can test the exact version of the software in which a bug was fixed. You may also want to consider what support comes with the tool: how does the provider charge for support, and what are the options for upgrading?
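Recording the weights in a short script can help keep the arithmetic honest. Below is a minimal sketch in Python; the criteria chosen and their percentages are purely illustrative, and the only check performed is that the weights add up to 100%.

```python
# Step 1 sketch: criteria weights expressed as percentages.
# The criteria and percentages below are illustrative, not prescriptive.
weights = {
    "Ease of Use": 25,
    "Workflow": 20,
    "Integration": 15,
    "Reports & Searches": 15,
    "Security": 10,
    "Cost": 15,
}

# The weights must cover the whole decision, i.e. add up to 100%.
assert sum(weights.values()) == 100, "Criteria weights must add up to 100%"
```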
Once you’ve assigned weights (or priorities) to these criteria, you can proceed to the next stage.
2. Rank each software option based on these criteria. This is where the bulk of your work lies: identifying how well each tool fulfills each criterion. If evaluation copies are available, download them so you can get a feel for each tool and generate some useful insights. Don’t try to compare too many tools at a go; a maximum of 5 options is a good start. Score each option against each criterion using a numeric value (e.g. 50 out of 100).
3. Calculate the weighted scores. Multiply each numeric value assigned to a software option by the corresponding criterion weight, then add up the resulting values to arrive at a total for each solution. The software with the highest total score is the way to go.
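As a worked illustration of steps 2 and 3, here is a minimal Python sketch. The tool names, criteria and scores are hypothetical, chosen only to show the arithmetic; the weights match the earlier sketch.

```python
# Steps 2 and 3 sketch: score each tool per criterion (0-100),
# multiply by the criterion weight, and sum to a weighted total.
# Tool names, criteria and scores are hypothetical, for illustration only.
weights = {
    "Ease of Use": 25,
    "Workflow": 20,
    "Integration": 15,
    "Reports & Searches": 15,
    "Security": 10,
    "Cost": 15,
}

scores = {
    "Tool A": {"Ease of Use": 80, "Workflow": 60, "Integration": 70,
               "Reports & Searches": 90, "Security": 50, "Cost": 40},
    "Tool B": {"Ease of Use": 60, "Workflow": 85, "Integration": 80,
               "Reports & Searches": 70, "Security": 75, "Cost": 65},
}

def weighted_total(tool_scores, weights):
    # Scale each score by its criterion's percentage weight, so a perfect
    # 100 on every criterion yields a weighted total of exactly 100.
    return sum(tool_scores[criterion] * weight / 100
               for criterion, weight in weights.items())

totals = {tool: weighted_total(s, weights) for tool, s in scores.items()}
for tool, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{tool}: {total:.1f}")   # Tool B: 71.8, Tool A: 67.0

best = max(totals, key=totals.get)
print(f"Highest weighted score: {best}")
```

Dividing by 100 keeps the weighted totals on the same 0-100 scale as the raw scores, which makes the final comparison easy to read at a glance.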
This model can help in any situation where you need to evaluate different options; it is not limited to software comparisons. It will help you present your findings with confidence and provide figures to back up your choice.