Simple Test Estimation Model

Gokul,

Came across this simple test estimation technique while browsing for some other content. The website where I found it claims reasonable success using it. Now, to the technique:

Test Estimation
================
Testing Days = Dev Days / 3

Testing People = Dev People / 2

Testing Cost = Testing Days * Testing People * Billing per day
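These rules of thumb are easy to put into a small function. Here is a minimal sketch in Python (the function name and parameters are mine, not from the original website):

```python
def estimate_testing(dev_days, dev_people, billing_per_day):
    """Rule-of-thumb test estimation from the post:
    testing takes a third of the dev days and half the dev people."""
    testing_days = dev_days / 3
    testing_people = dev_people / 2
    testing_cost = testing_days * testing_people * billing_per_day
    return testing_days, testing_people, testing_cost

# Example: a 30-day project with 4 developers billed at 500 per day
days, people, cost = estimate_testing(30, 4, 500)
print(days, people, cost)  # 10.0 2.0 10000.0
```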

Test Effort Estimation

For a long time, I have wondered about effort estimation in software testing. Recently I read a piece of information on this.

How do managers estimate the effort needed for software testing? There may be many in-house, gut-feeling methods, but more formally, the following methods are used:

- 'Percentage-of-Development' approach
- Implicit Risk-Context approach
- Metrics based approach
- Test work breakdown approach
- Iterative approach


The practical 'gut-feeling' approach may or may not indirectly follow the approaches above. None of the methods is recommended as the best.

Metrics in Agile

Since everyone is turning to agile, software testers definitely have to translate their traditional methods into agile terms.

In non-agile projects, there are some metrics used in testing.

For example, to measure the efficiency of test cases, we can say

[No. of test cases that can be mapped to defects] / [Total no. of test cases]

is one of the ratios used in traditional software testing.

How do we do this in an agile process, where the test case itself (at least, test case documentation) is not important?

Likewise, many ratios/metrics from the non-agile period may not be directly applicable in the agile era. I am looking for the conversion, or in simple terms:

"What are the ratios and metrics that can be useful in agile testing?"


BlackBox Vs Agile

For quite some time, I have been hearing a lot about agile development and, in turn, agile testing. Lots and lots about testing frameworks, unit testing, early testing etc.

All these methods are trying to kill black box testing and the role of the black box tester. None of these methodologies emphasizes the role of the black box tester. All of them talk about early testing, extensive unit testing, automating unit testing etc.

But is this true? Will this testing achieve results without black box testing? I do not think so, the primary reason being that many 'ities' are missing from the above-said agile methodologies:
-Usability
-Portability
-Compatibility
etc.
These are the 'ities' taken care of by the black box tester, which automated unit testing cannot achieve.


But agile development, and so-called agile testing, is blown out of its 'deserved' proportion and tries to occupy the land of black box testing, a move that black box testers should fight back against. But alas, I couldn't find proper propaganda from black box testers against this agile wave.

Guidelines to follow before filing defects

I created this for my team but felt it would be useful for others if I made it public.


General Guidelines
Be precise
Be clear - explain it so others can reproduce the bug
One bug per report
No bug is too trivial to report - small bugs may hide big bugs
Clearly separate fact from speculation
If a bug already exists in the same area, add yours as an annotation to the existing bug instead of filing a new one
Search the bug tracking system and ensure that the bug does not exist already
Explain the bug in a quick and understandable summary
Mention the browser, operating system, printer and other relevant details clearly
Take a snapshot of the issue if required and add it to the bug report as an attachment. Try to attach a JPEG, as the size will be small and it can be opened easily
Mention the steps to reproduce, expected result and actual result clearly
Mention the component/module where the error occurred
If you are unable to reproduce the bug consistently, please mention that in the remarks/notes section
Have a word with a peer just to check whether they are facing the same issue.
Have the right service packs in place for your browser, operating system etc
If the bug is pertaining to look and feel, usability etc test it on multiple browsers before filing it
Clearly categorize the testing level where the bug was found such as unit/integration/system/acceptance
Mention the type of testing you did to unearth the bug such as smoke/regression/ad hoc etc

Build Guidelines
If the bug is identified in an older build, reproduce your bug using a recent build of the software, to see whether it has already been fixed.
Mention the build release date and build number clearly before filing the bug
Make sure that you are using the right build
Read build release notes carefully before filing a defect since the bug that you have identified could be a known issue.

Enhancement Request and Defects
An enhancement request is a case where the tester feels that the existing feature can be either enhanced with a new functionality or completely replaced with a new functionality to give the end user a better product experience. One example is to replace the option of choosing countries using multiple checkboxes by providing a user with a combo box.
A defect is a case where the expected behavior of the system does not match the actual behavior during testing.


Priority/Severity Guidelines
Severity refers to how bad or critical the bug is from a functional/technical viewpoint. Severity of a bug is defined by the tester
Priority refers to the importance of fixing the bug from a customer viewpoint. Usually this is decided by the development team/manager in discussion with the QA manager/team

When assigning priority/severity for a bug, please follow the guidelines below.
P1 - Fix the bug ASAP
P2 - Bug can be fixed before release
P3 - Bug can be fixed even after release
P4 - The bug is very minor, can be fixed anytime
S1 - Blocker
S2 - Critical
S3 - Major
S4 - Trivial/Minor

Types and Levels of Testing


I have often found testers getting confused between types and levels of testing. Levels of testing are associated with the SDLC, whereas types of testing are not. To put it straight, there can be only four levels of testing: Unit, Integration, System and Acceptance. The classic "V" model is a good example of levels of testing.
Types of testing are more detailed and differ based on context. For instance, Smoke, Sanity, Ad hoc, Exploratory and other similar types are performed mostly in the functional category based on the application's need. For the benefit of everyone, I am sharing my knowledge on the types of testing using the picture above. I request visitors to correct me in case I am wrong.

Pairwise Test case Generator

In many testing engagements, especially product testing, testers come across situations like executing test cases on


3 browsers (Chrome, Firefox, IE)
2 databases (Oracle, SQL Server)
4 operating systems (Linux, Windows, Solaris, ...)
etc.

So the total number of combinations is 3 x 2 x 4 = 24.

If we have even 100 test cases, that results in an exhaustive 2,400 test executions across these combinations.


To avoid this, we use pairwise testing, which removes the redundancy in the 24 combinations of the example above.

How to reduce the redundancy is important but slightly cumbersome knowledge; www.testersdesk.com provides a free online tool that produces an optimal pairwise result.

Refer: http://www.testersdesk.com/pairwse_testersdesk.html

You have to register to use this tool.
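For the curious, the idea behind such tools can be sketched with a simple greedy all-pairs generator in Python. This is only an illustration of the technique, not the algorithm testersdesk.com actually uses, and for brevity it uses three operating systems rather than four:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise generator: pick full combinations one at a time,
    each time choosing the one that covers the most not-yet-covered
    value pairs, until every cross-parameter pair is covered."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}
    cases = []
    while uncovered:
        best_case, best_cover = None, -1
        # Brute-force scan of all full combinations; fine for small inputs.
        for combo in product(*(params[n] for n in names)):
            case = dict(zip(names, combo))
            cover = sum(((a, case[a]), (b, case[b])) in uncovered
                        for a, b in combinations(names, 2))
            if cover > best_cover:
                best_case, best_cover = case, cover
        cases.append(best_case)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best_case[a]), (b, best_case[b])))
    return cases

suite = all_pairs({
    "browser": ["Chrome", "Firefox", "IE"],
    "database": ["Oracle", "SQLServer"],
    "os": ["Linux", "Windows", "Solaris"],
})
print(len(suite))  # far fewer than the 18 exhaustive combinations
```

Every pair such as (Firefox, Oracle) or (SQLServer, Solaris) still appears in at least one generated case; only the redundant full combinations are dropped.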

Test Coverage - Contd

Vasu,

Let's assume that in your project, customers are finding many production defects. If you ask me, I would say:

1. Your test strategy is wrong.

2. Whenever a production bug is found by a user, analyse whether the bug they found (and you missed) is due to the carelessness/negligence of your testers, or due to the nature of your testing, i.e. your test strategy.

If the first is the case, we need to make sure it won't happen again; we may practice techniques like pair testing (two testers in the same test area) or tighter supervision of test execution. If the second is the case, you have to revisit the strategy.

3. If you keep getting production bugs, get the list of production bugs and do a gap analysis against your test cases. It is not simply adding the defects as test cases; you do the 'gap analysis'. For example, take X defects found in production and look at the test cases in that area; the question to ask is 'Why weren't these added as test cases before?'. Finding a pattern between {Z defects found by production users} and {Y (probably ineffective) test cases} will also throw some light.

4. Diversify the testing techniques. This is one of the important areas to concentrate on. Change the testing techniques often during the test life cycle. According to the context-driven testing school, there are four dimensions of testing:
-People oriented (beta testing, UAT etc.)
-Risk oriented (risk-based testing)
-Coverage based (documentation-based testing, testing page by page of the product documentation, combination testing etc.)
-Activity based (regression testing, long-sequence testing etc.)

There are many techniques in each section, but one important technique I would like to quote is "Map and test all the ways to edit a field". You can change a field value in different ways, and depending on the context, this technique may prove powerful. You have to test all the possible ways.

5. Use gray box testing: do black box testing while applying white box techniques like decision coverage and condition coverage.

6. Clear out the old test cases. See which existing test cases are obsolete and delete them or mark them non-executable. This saves time and energy that can be spent on new, powerful, attacking test cases.

7. Get powerful test data. This depends on the project; in some projects, real-world data will act as a powerful agent.

8. Get the help of an end user (subject matter expert/business analyst). Schedule a meeting and go with important questions about patterns of usage, patterns of real-world data, more use cases/user stories etc., based on which we can create more powerful test cases. If it is a product, ask the product manager or a marketing person to gain this knowledge of common usage.

9. Make use of the support team; apply their knowledge of support calls in your testing.

Iteration by iteration, in this way we can reduce production bugs.
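The gap analysis in point 3 can be sketched as a small script. This is my own illustration; the area names and the threshold are arbitrary assumptions:

```python
from collections import Counter

def coverage_gaps(production_defect_areas, test_case_areas):
    """Compare production defects per area against existing test cases
    per area; flag areas where defects appear but coverage is thin."""
    defects = Counter(production_defect_areas)
    cases = Counter(test_case_areas)
    gaps = {}
    for area, found in defects.items():
        covered = cases.get(area, 0)
        # An area with defects but no (or proportionally few) test cases
        # is a candidate for the question "why wasn't this covered?"
        if covered == 0 or found / covered > 0.5:
            gaps[area] = {"production_defects": found, "test_cases": covered}
    return gaps

gaps = coverage_gaps(
    ["login", "login", "reports"],      # areas of production defects
    ["login"] * 10 + ["search"] * 5,    # areas of existing test cases
)
print(gaps)  # only "reports" shows up: defects found, zero test cases
```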

How to increase Test Coverage?

I have often been asked by my clients to increase application test coverage through test cases and reduce customer defects, and I wish I had a magic wand to help them.

In my perspective, increasing test coverage depends on various factors. How knowledgeable the testers are about the application, how effective they are at understanding new requirements and creating test cases for them, and whether they step into the customer's shoes during test design are some of the key aspects that help increase test coverage.

Test case effectiveness is a metric that describes how effective a test case is at finding defects; any test case that is executed repeatedly over a period of time without identifying defects should be reviewed for its effectiveness. And to design an effective test case, the tester's knowledge and approach to the application from the customer's viewpoint definitely matter.

I have seen products with more than 10,000 test cases where, after every release, customers still file many defects that ideally should have been caught by QE (Quality Engineering/testing team). And needless to say, all these customer defects are converted into test cases after every release and added to the test execution bucket for the next release, to ensure the same defects are not repeated and to increase test coverage. This eventually results in a lot of test cases being created, but somehow the end goal of "coverage" never gets fulfilled.

So, the question is, how do we apply the 80/20 rule here? Twenty percent of the test cases should find eighty percent of the defects in the application. Unless we do that, execution will be a laborious task with no goal, and at the end of the day testers will still say "I have not tested this part of the application well, and hence it could be a risk". So, how can testers deliver customers a quality product by testing as much as possible within the given execution timeframe?

P.S.: I am definitely not looking for 100% test coverage.

Excel in Testing

As Excel is the de facto tool of any tester, its usage spans test planning, test case creation, test execution and, most importantly, test reporting. Even if we use other tools like MS Word, web forms, rich-text email, or other customized, sophisticated technologies, testers may directly or indirectly use MS Excel in test reporting; thus, knowing Excel, or excelling in Excel, is one of the key skills of a tester. Even commercial tools usually have the option of exporting test execution status to spreadsheets.

Test reporting without functions or conditions: this is just copy-paste or data-entry work, entering pass, fail or blocked against tests and filling in the corresponding bug numbers against failed test cases. Though this method is simple, it is time consuming and definitely not the smarter way of working.

Test reporting with functions and conditions: in this way, we can write macros to copy cell data across worksheets in a workbook, change cell colour based on test status, refresh the whole workbook periodically, and summarize test results using aggregate functions and IF conditions, all with a single macro click.

Macro programming starts with entering simple formulas in cells and goes all the way up to complex function calls using VBA.

Personally, I have used formulas like:

=COUNTIF(Test_Area_Login!C3:C100,"P")

Here Test_Area_Login is the name of the worksheet (which obviously contains the login-related test cases). This formula counts the number of 'P' entries between cells C3 and C100 (98 rows, rather a lot for login :-)). So if the tester passes 40 test cases by entering 'P', fails 50 by entering 'F' and marks 7 as blocked (entered as 'B'), the formula above gives 40.

In the same way, failed test cases can be found by

=COUNTIF(Test_Area_Login!C3:C100,"F")

Now suppose some test cases are marked 'N/A', which stands for Not Applicable. When calculating the total number of test cases, we should subtract these, which can be done by:

=COUNTA(Test_Area_Login!C$3:C$100)-COUNTIF(Test_Area_Login!C$3:C$100,"N/A")

Moving on to VBA, we can access and control cell values:

Worksheets("Test_Area_Login").Range("C1").Value = 10    'Assigns 10 to cell C1

We can also write to cells relative to the active cell:

ActiveCell.Offset(1, 0).Value = 1    'Places 1 in the next row

ActiveCell.Offset(0, 1).Value = 2    'Places 2 in the next column

We can also change the background colour of a cell based on our own condition (green if passed, magenta if failed, etc.):

If Worksheets("Test_Area_Login").Range("C1").Value = "P" Then
    Worksheets("Test_Area_Login").Range("C1").Interior.ColorIndex = 4    'Green in the default palette
End If

The Interior property controls the colour and style of the shading; it uses a colour index rather than an actual colour. Which index is magenta? I don't know offhand, but we can find out with the following VBA procedure, which paints the whole palette:

Sub DisplayPallet()
    Dim N As Long
    For N = 1 To 56
        Cells(N, 1).Interior.ColorIndex = N
    Next N
End Sub

The options available through macros and VBA are countless; what I have touched on here is just the tip of the iceberg. Using macros and VBA programming will definitely raise the bar in test reporting and make it more sophisticated and smart.

Welcome

Welcome to our blog, yet another blog on software testing. We know there are numerous software testing blogs discussing an abundance of topics.

In this blog, our intention is to share our testing experiences, the challenges we have met and the questions we have (and you have), while continuously getting input from you.

Well, there are many grey areas in software testing which, though explored by many of the industry's most powerful testers, still remain puzzles:

-Are testers non-technical people in the midst of 'techie' developers?
-Self-respect of the tester in a development team
-Measuring the capacities of testers in a testing team
-Precise test estimation
-Power testing
-Effective automation
-Using open source tools and scripting languages in software testing
-Evolution of testing and the tester in the agile period

are a few of them...

There are many areas in the outside world which are not (yet) completely explored by software testers, at least by you and me: mathematical subjects (like mathematical modeling, permutations and combinations), statistics (sampling and probability) and logic (complex truth tables), to name a few.

Still, we do not know the precise answers to questions like 'How did you miss this bug?', 'Why is the automation framework not robust across builds?', 'Can we release the product?' etc.

This blog will continuously explore these grey areas and gather questions (as well as answers) from friends: ways to solve our testing problems, to improve our testing, to make ourselves powerful testers and to attain international standards (see, we should be a bit enthusiastic :-)).

So, START MUSIC...

-Vasu & Gokul