Tuesday, August 7, 2012

Why not to create Test Documentation


Before you read further, I would like you to do a small exercise. For each product/release that you have tested, calculate an approximate figure for:
  1. The total number of test documents you have written to date.
  2. The total number of test cases you have written.
  3. Assuming it takes 5 minutes to write one test case along with its script, the total time consumed.
  4. The last time those scripts were updated.



Once you are over the excitement of the numbers in front of you (and for those who could imagine how large they would be and didn't bother to calculate), ask yourself: what usually happens to all of it once the product/feature is released?

Usually, the important/happy-path cases are folded into the regression pack, which in many cases is automated. From then on, the feature test documentation is never used again. So why do we spend so much time on it? Let me try to answer this.

  • The most obvious reason is that management has asked us to, because the documentation earns the company money, i.e. "bills the client". We all like to earn money, but is that the end goal?
  • We might be writing it because this is what testers have been doing for so many years. The problem with this is that no one questions the practice; they gladly accept what their seniors taught them.



What do we usually achieve when we try to follow the above process?

  1. Missed deadlines, because we were so busy writing test scripts and getting them reviewed by the customer that we barely had time to test the product.
  2. Heated discussions with the developers about why something was not mentioned in the FSD in the first place.
  3. A buggy product.
  4. An unhappy end user and customer.


If the above points don't seem convincing to you, then ask yourself, "Do I like to write test documentation?" If your answer is still yes, then I hope God is with the products you have tested.


What could you do instead of wasting time on test documentation?
  • Take time to understand the requirements. Create a mind map.
  • Have a question-and-answer round with the requirements analyst.
  • Have a debate with the developers.
  • Test to explore the product. Yes, I clearly mean it: practice exploratory testing.


I don't have a goal to change the world, but if you are passionate about testing and want to test the product instead of wasting your time writing about how to test the product, then do think about it.



Friday, April 27, 2012

Evidence of Testing

Recently the client I am testing for asked me whether our team records any evidence. I didn't understand what they were trying to say, so I asked them what they meant by evidence. They told me that every test case should have a screenshot of the system under test at the time of execution, so that they know we have actually tested the product. Perhaps they thought of us as 'checkers' rather than 'testers'. They also might simply have wanted to see whether we were performing the tests correctly.
Who would confirm that? Would it be the dev team? This seems unnecessary, because the test cases/scripts we write are reviewed by the client, so any incorrect test case/script would have been rectified at that point.

We usually record pass/fail against each test case, and I thought that was more than enough 'evidence' that we did the execution. Why did they require more? Did they not believe that we were actually testing the system and not just looting them?
On second thought, it is understandable from their perspective. Imagine your product being sent thousands of miles away to be tested by people you have never met in your life; you would definitely have doubts. Even if you can't imagine it, you have to give them the benefit of the doubt.
So we can say that there was a trust deficit between the client and the testing team, which is not good at all. Trust comes only from experience, not from evidence.

The point I want to make is not just about trust; it's more about gathering evidence itself. Should testers actually be gathering evidence? Isn't pass/fail enough?
I don't think evidence is necessary. Firstly, it's a boring, time-consuming and tedious task; proving that they actually perform the tests is not the reason testers became testers. In the time a tester devotes to gathering evidence, you could perform more tests, scripted or not, and actually benefit the product.

Let's assume that you do gather evidence and attach screenshots to the test cases. What benefit do we get from this exercise? The only answer I can think of is: none. Even if a bug does come up in UAT or at a customer site, what will you do with the evidence? The bug has to be fixed anyway. The blame will fall on the testing team, but isn't the testing team blamed every time a bug is raised? Even if the test cases had passed in SIT, the trust would still vanish, and trust was the primary reason the evidence was collected in the first place.

The other disadvantage of gathering evidence is maintainability. In every execution cycle you would have to maintain screenshots for every version, which again is a tedious task.


Let's say that capturing evidence is a necessary evil. What are the possible ways to trim it down and not make it a tedious exercise?

1) Get a screen recorder which can record the steps you are executing, or automate the capture entirely (see the sketch below this list).
2) If you can't afford one, then capture evidence only for the high-priority test cases. (Still a fairly useless process.)
3) Ask the client for more time. They would definitely think about it again.
4) Do it only in the final test execution cycle before the release.
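
If your suite drives the application through Selenium WebDriver and runs under pytest, the capture can be automated so that it costs the tester no time at all. Here is a minimal sketch of that idea; the evidence directory, the file-naming scheme and the 'browser' fixture name are my own illustrative choices, not from any particular project. It would live in conftest.py:

    # conftest.py -- save a screenshot after every test as a side effect,
    # so 'evidence' is gathered without any manual effort.
    import datetime
    import pathlib

    import pytest
    from selenium import webdriver

    EVIDENCE_DIR = pathlib.Path("evidence")
    EVIDENCE_DIR.mkdir(exist_ok=True)

    @pytest.fixture
    def browser():
        driver = webdriver.Firefox()
        yield driver
        driver.quit()

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        outcome = yield
        report = outcome.get_result()
        # Only capture the actual test phase, not setup/teardown.
        if report.when == "call":
            driver = item.funcargs.get("browser")
            if driver is not None:
                stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
                name = f"{item.name}-{report.outcome}-{stamp}.png"
                driver.save_screenshot(str(EVIDENCE_DIR / name))

Every test that uses the browser fixture then leaves behind a time-stamped, pass/fail-labelled screenshot, which is exactly the kind of evidence the client asked for, minus the tedium.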


Tuesday, November 22, 2011

How Not to Test


For the past 3-4 months I have been bombarded with new functionality to test. I have been asked to create test cases, write test scripts and also execute them. My focus here is on the test execution part; writing test cases and test scripts deserves its own space.

Test execution is the process of following test scripts. Yikes, that doesn't sound interesting, or cool. Following steps, seriously? Why on earth would I want to follow the scripts I wrote? I'm better off without them; in fact, any tester is better off without scripts. That's why I didn't do it. Testing is an intriguing process and should be left that way.

I was very excited for the build to arrive so I could start off with the testing. Finally it arrived, and what happened next was a pure learning experience for me as a tester.

I got over-excited and started to test everything, and I mean everything, in the first couple of days.

I was testing everything, but it was all ad hoc. It was like starting from Z, jumping to E, back to Z, then to A, and so on. There was no sequence. That did not mean I was not finding bugs; I was. So where was the problem? The problem appeared when I scrutinized the bugs that had been raised. They were all found while executing complex scenarios; not a single low-complexity use case or test case had been covered. This is not right. This is not how you should approach early testing.

A tester needs to first understand the system that he/she is going to test: take a tour of the functionality, understand what the developers are trying to achieve and what the stakeholders actually want.

Keeping this in mind, I've started creating mind maps. Mind-mapping tools help you organise your thoughts, i.e. map the understanding that is forming in your brain.

Once you've created the mind maps, the next step is to analyse them and then design your tests. Not only will this give you a better understanding of the functionality, it will also lead to better analysis and consequently better tests. (Read Cem Kaner's slides on function testing in the BBST course.)

To conclude, don’t be over excited when testing. Use the excitement constructively and then test.


PS: Over-excitement is just one way of approaching testing badly. There are plenty of others, which I will definitely post about once I experience them.

Saturday, October 22, 2011

First WT Experience-WT 01

I had my first Weekend Testing session today. For those of you who don't know what Weekend Testing is, go to http://weekendtesting.com.

I've always wanted to participate, and when I got the opportunity I didn't let go of it. When WT69 was announced, I mailed in my participation right away. There were still two days left before the session, and I was so excited that I even set a reminder on my mobile phone. Sounds crazy, but I didn't want to miss out on it.


At 15:45, Skype informed me that Weekend Testing had added me as a contact. :) Ajay briefed all the testers on the mission.


Mission

Today's session is about Time Zone Conversion
http://www.timezoneconverter.com/cgi-bin/tzc.tzc

http://wwp.greenwichmeantime.com/gmt-converter/index.htm

Compare and contrast these two time zone converters.

1. How useful is each link to the user, tester?
2. What are the advantages/disadvantages in each of the converters?
3. If you had to design only ten tests for each of the converters, what would they be?



Experience
This was my first weekend testing session and it was a great experience. Even though the constant pinging on Skype didn't help, I managed to keep my focus on the testing and not get distracted. When I first analysed the mission I thought one hour would not be enough for it, and that's why I wasn't able to complete the mission: not because the time really wasn't enough, but because I had convinced my brain that it wasn't.

After analysing the mission, I decided on an approach, which actually helped me with time management.

Approach

  • Divided the time for each converter:
    • 15 mins for Time Zone
    • 15 mins for GMT
    • 10 mins to compare & contrast
    • 15 mins to design tests (one such test is sketched below)
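
Designing tests for a converter goes a lot faster when you have a trusted oracle to compare against. As a hypothetical example of one of those ten tests, Python's standard zoneinfo module can supply the expected answer; the function name and the sample instant below are my own illustration, not from the session:

    # An oracle for time-zone converter tests: zoneinfo (stdlib since
    # Python 3.9) gives a trusted result to compare a converter's output to.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    def expected_conversion(naive_dt, src, dst):
        # What a correct converter should display for naive_dt in zone src.
        return naive_dt.replace(tzinfo=ZoneInfo(src)).astimezone(ZoneInfo(dst))

    # The session started at 15:45 IST; a correct converter should map
    # this to 10:15 GMT. DST boundaries make equally good test inputs.
    print(expected_conversion(datetime(2011, 10, 22, 15, 45),
                              "Asia/Kolkata", "Etc/GMT"))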


Throughout the session there was some sort of 'rush' going through me, as if I had something to complete or achieve. I wasn't able to figure out what or why.


Learning

  • Wasted the initial 15 mins by getting excited and not focusing on the mission.
  • Didn't analyse the mission properly: when the session was about to end, I couldn't answer the first question. I should have analysed it and asked questions at the beginning of the session itself.
  • Didn't have a proper format for the report, so I just wrote it in whatever shape came to mind; not tidy.


The one thing I believe I did right was keeping the mission in mind. I didn't wander off and just play with the testable.

I am definitely looking forward to the upcoming sessions.



Sunday, October 2, 2011

Tester Vs Developer – Is this the worst it could get?

We all know the kind of relationship testers and developers share. It's not good; there is bitterness on both sides.

Take a look at the conversation below, which happened between a developer and a tester after the tester raised an invalid defect against the developer's module.

<Start>

Developer: You raised a defect?

Tester: Which one are you talking about?

Dev: The one you found in my module. Defect ID: xxxxxxx

Tester: Oh yeahh...What about it??

Dev: Can you repro this on your kit once again and show me where the error comes up?

The tester, with a big smile on his face, shows the developer the error.

Dev: This is an invalid defect. You are missing out on some pre-requisites.

Tester: Well, there is no Help or Readme file here which mentions any kind of prerequisites, or is there?

Dev: No, there isn't, but the users are aware. Go and read the release notes.

Tester: The release notes are not for me to read; they're for the customer, aren't they? If you have any problem with this defect, then please mention it in the defect management tool.

Dev: Why would I? It's not assigned to me!!!

Tester: OK, no probs.

Dev: And please get an understanding of this functionality and the processes. You don't know how to test!!

The last statement really got my blood boiling. I really wanted to kick him!!!!

Tester: If you have so many problems with my understanding, then give me a KT on this; I'll ask your manager to arrange it.

Dev: Who am I to give you a KT !!!

<End>

Now, while this conversation was happening, the dev manager was there listening to every word, but he didn't bother to stop the developer; instead he told me that the developer is kind of moody. Apart from the developer questioning my integrity, this was the other sad moment of that day. I don't know if this is the worst head-on encounter I will have with a developer or whether there is more to come, but I really hope no other tester faces this kind of situation.

There is a very thin line between the tester and the developer which should never be crossed. A good tester should never think that a bug reflects a developer's lack of skill, or lack of "anything". A good developer should never think that the tester is raising defects on purpose (well, that's only an assumption :P).

It only means that the tester and developer have different understandings. A valid defect means the developer had a different understanding while coding it, and an invalid defect means the same for the tester while testing.

The "fight" starts when people start taking these things personally. Why do they take it personally??? It's not your pet project; you're just working for a company which pays you 0.x% of its revenue.

Now let me ask you one question. If you are married and have a kid, and somebody points out a flaw in your kid, say that his forehead is too big, you'd obviously want to rip that person's head off. The same is the case with developers. Developers think of their written code as their baby; well, some of them do.

Lessons learnt that day:

  1. Test twice and discuss with a peer tester before raising a defect, i.e. if you are not sure about it.
  2. Don't get too involved with the developer.

Sunday, September 4, 2011

Gmail Filters are buggy !!!

I have a habit of testing open source software in my free time, because, as the saying goes, 'practice makes perfect'. So I download software from the internet and simply test it whenever I am free. I do this to improve my testing skills, get new test ideas, and simply because I enjoy breaking software much more than I love creating it!!!

So, this weekend, I downloaded a couple of applications to test but ended up not testing them. Instead, I gave myself a challenge: test the most popular free email service provider, Gmail. Gmail is too big to explore in a couple of hours, so I narrowed it down to a feature of Gmail which I hardly use, 'Filters'. I didn't expect to find many bugs, but I did find one lurking around which should have been identified by "test cases"... hehehe...


So here's the bug report, and for the developers, the story writing:

One day, a user subscribes to some very popular testing magazines, forums and clubs. Subscribing means he'll receive loads of mail from them, so he wants all of those mails to be placed in one location and not mixed with the rest of the mail in his inbox.
So he logs into his Gmail account, and at the top of the page he sees a link named 'Create a Filter'. It makes sense to the user that he should create a filter for his testing mails.

He clicks on the link to create the filter, and is redirected to the following page.

The user populates the 'From' field with 'Testing', so that any mail containing the word 'Testing' will be caught by this filter. He doesn't need any other fields, so he clicks on the 'Next Step' button, and the following page appears.

Out of all the options available, he decides to apply a label to all of these mails by ticking the 'Apply the Label' checkbox. However, he has to create a new label, and therefore chooses the option 'New Label'.

He enters the name of the new label as 'Testing' and clicks on the 'Create' button.

A small piece of blue text appears, stating that the new label has been created. :D

Excitedly, he goes back to the 'Create Filters' grid at the top of the screen and clicks on the drop-down list, but to his shock, the 'Testing' label is not there!! :(

What??? How can this happen... Where has it gone? I just created it!! Gmail just told me the new label had been created, so why the hell doesn't it show it!!!!
The user thinks that maybe it has been created and selected by default. Gmail wouldn't mislead its users... or would it, hmm?


Anyhow, the user decides to go ahead with creating the filter and clicks on the 'Create a Filter' button.

Ahhhhhhhhhhhhhhhh !!!!!!!!!!!!! WTF !!!!

It's asking me to choose a label, but it wouldn't let me choose the label I just created. God, Gmail should keep their testers on their toes!!!!


Through this example, there are two points I want to emphasise:
  1. Bug reports: a tester should not be judged by the number of defects he has raised but by the manner in which he reports them to the developers. How much does the bug report help the developer understand the problem, not just from the logic point of view but from the user's point of view?
  2. No software is defect-free. Whether it's Microsoft Vista or Gmail or any other popular product, every one of them has defects. They might not be of high severity, but a bug is a bug. So do not hesitate to test software just because it is popular, assuming it won't contain any defects.

Happy Testing !!!


PS: If you actually do come across this bug, the workaround is to click on the 'Back' button and then on 'Create a Filter' again; the label you created will then appear.

Tuesday, August 2, 2011

Testers should have access to the code

Many developers and even testers believe that testers should not be able to look at the code, on the grounds that the tester might identify defects just by looking at it. I very much agree that this can happen, and frankly it would suit me: if I am not able to find defects during system testing, people might get the perception that I am not a "good" tester or that I am not dedicated to my job, and I would certainly not want that perception of me. But this is not my point when I say that testers should have access to the code.

Both the tester and the developer have one thing in common: the functional specification. However, the developers also have the technical specification, and usually their main focus is on the technical part. This leads developers to miss out on the whole purpose of what the code is actually supposed to do, and it is one of the reasons why lots of defects end up clustered in one area. It doesn't matter how efficient the code is if it isn't processing what it should.

There will be times when defects keep being found in one common area. This happens because the developer has not understood the functionality correctly and has a different understanding of it.
At this stage, instead of simply increasing the bug count against the developer's name, the tester should look into the code and tell the dev, "This is where you're going wrong; the logic should be this."
But beware: this will not go down well with the developers at all!!!! Only tell them that the logic behind the code is incorrect.
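
To make the idea concrete, here is a hypothetical illustration; the function and the business rule are invented for this example, not taken from any real project. Suppose the functional specification says that orders of 1000 units or more get a 10% discount:

    # Hypothetical example: perfectly "efficient" code implementing
    # the wrong business rule.
    # Spec: "orders of 1000 or more units get a 10% discount".
    def order_total(units, unit_price):
        total = units * unit_price
        if units > 1000:      # Bug: spec says 1000 *or more*, i.e. >= 1000
            total *= 0.9
        return total

    print(order_total(1000, 5.0))  # spec expects 4500.0; this returns 5000.0

Every order of exactly 1000 units will fail in testing, and a tester who can read the code can point at that single comparison instead of raising defect after defect against the same area through the UI.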

Now one might ask: how would the tester understand the written code??? My suggestion (and Cem Kaner's, in Lessons Learned in Software Testing) for such testers is to first go and learn a programming language. I am not saying you should become a master of that language; just get used to the whole feel of coding. Writing code is not tough; it's the logic behind the code that is challenging. The logic is all the tester needs to care about, not how efficiently the code has been written.