Testing in the Real World!

Quite often I receive mail from friends asking for testing exercises. In my view, if you are alert enough, you can find plenty of testing exercises in your day-to-day life. Don’t agree with me? Read on…

Today I got an SMS (a forward, of course) from a friend. The message read:

“If you are forced to draw money by a robber in an ATM, then just enter your PIN number in reverse order. By doing so, you will be allowed to withdraw money from your a/c and at the same time the cop will be informed! So the cop will reach the ATM in a short while and rescue you.”

At first sight, this might seem like a very useful message. But a tester is trained to be skeptical about everything, and I am no exception. So how could I accept this piece of information as true without further observation and investigation?

So I put on my tester’s shoes and tried to analyze it. And here are my observations:

1. If this were true, I should have known about it already. The bank should have told me about such a facility when I opened my a/c and was issued my ATM card. How could they fail to pass on such an important instruction?

2. There are hundreds of banks the world over, but the SMS never mentioned which bank provides this facility. So the information was at best incomplete (if not incorrect).

3. Now for the weak link in the message. The SMS claims that entering your PIN in reverse order activates some security system. At first sight this sounds like a brilliant method, doesn’t it? But think for a moment, and you will see it can’t be right. If it were true, what about PINs like 1001, 2002, 1221, 2332, 1111, 2222 and so on (palindromic numbers — these are my test data)? If my PIN is one of those palindromes, it reads the same in reverse, so how could that security mechanism ever be triggered? One workaround would be to disallow palindromic numbers as PINs, but that idea sounded silly: there are plenty of palindromes in the possible PIN range (0000 to 9999), and I have never seen an ATM restrict me from choosing one. Still, I did not want to accept my own argument without actually seeing it (executing my test case with my pre-set test data). So I rushed to my nearest ATM counter and tested it. I found that there is no such restriction on palindromic PINs (test case passed!). I then repeated the same test with the ATMs of two other banks (regression testing!), and as expected those test cases passed too. This test left me almost certain that the SMS was inaccurate.

4. As an additional argument to strengthen my point, I looked at the SMS again. And there it is: even if we were to accept the message as true, do you really think that “the cop will reach the ATM in a short while and rescue you”, keeping in mind that this is India?
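The palindrome argument in point 3 can even be checked with a few lines of code. Here is a quick sketch (in Python — not part of my original ATM test) counting the 4-digit PINs that read the same in reverse:

```python
# Count the 4-digit PINs (0000-9999) that read the same in reverse.
# For these PINs the reversed PIN is identical to the normal one,
# so the claimed "reverse PIN alarm" could never be distinguished
# from an ordinary withdrawal.
palindromes = [f"{n:04d}" for n in range(10000) if f"{n:04d}" == f"{n:04d}"[::-1]]
print(len(palindromes))   # 100 such PINs (e.g. 1001, 1221, 2332)
```

One in every hundred possible PINs is a palindrome — far too many for the scheme in the SMS to work.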

There is still plenty of information in the message which proves it is a hoax. I would like to leave those clues for my readers, and see how they use their testing skills to find them.

Hint: always use the three basic weapons of a tester: observation, analysis and skepticism.

There are lots of testing exercises lying around in your own life too. Try to identify them, and test them using your very own testing skills.

 

Contact:

Kamali Mukharjee

The A-Z of Usability

A is for Accessibility

Accessibility — designing products for disabled people — reminds us of two fundamental principles in usability. The first is the importance of “Knowing thy user” (and this is rarely the same as knowing thyself). The second is that management are more likely to take action on usability issues when they are backed up by legislation and standards.

B is for Blooper

Each user interface element (or “widget”) is designed for a particular purpose. For example, if you want users to select just one item from a short list, you use radio buttons; if they can select multiple items, checkboxes are the appropriate choice. Some developers continue to use basic HTML controls inappropriately, and these user interface bloopers prevent people from building a mental model of how the controls behave.

C is for Content is (still) king

As Jakob Nielsen has said, “Ultimately, all users visit your Web site for its content. Everything else is just the backdrop.” Extending this principle to all interfaces, we could say that it is critical that your product allows people to achieve their key goals.

D is for Design patterns

Design patterns provide “best of breed” examples, showing how interfaces should be designed to carry out frequent and common tasks, like checking out at an e-commerce site. Following design patterns leads to a familiar consistency in user interaction and ensures your users won’t leave your site through surprise or confusion.

E is for Early prototyping

Usability techniques are really effective at detecting usability problems early in the development cycle, when they are easiest and least costly to fix. For example, early, low-fidelity prototypes (like paper prototypes) can be mocked up and tested with users before a line of code is written.

F is for Fitts’ Law

Fitts’ Law teaches us two things. First, it teaches us that the time to acquire a target is a function of the distance to and size of the target, which helps us design more usable interfaces. Second, it teaches us that we can derive a lot of practical design guidance from psychological research.
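As a rough illustration (with made-up constants a and b — in practice these are fitted from experimental data for a particular device and user population), the Shannon formulation of Fitts’ Law can be computed like this:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target, using the Shannon
    formulation of Fitts' Law: MT = a + b * log2(D/W + 1).
    The constants a and b here are illustrative placeholders."""
    return a + b * math.log2(distance / width + 1)

# Doubling a target's size at the same distance reduces the predicted time:
print(fitts_movement_time(400, 20))   # small target, far away
print(fitts_movement_time(400, 40))   # same distance, bigger target
```

The practical design lesson follows directly: bigger, closer targets are faster to hit.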

G is for Guidelines

Guidelines and standards have a long history in usability and HCI. By capturing best practice, standards help ensure consistency and hence usability for a wide range of users. The first national ergonomics standard was DIN 66-234 (published by the German Standards body), a multi-part ergonomics standard with a specific set of requirements for human-computer interaction. This landmark usability standard was followed by the hugely influential international usability standard, ISO 9241.

H is for Heuristic Evaluation

Heuristic evaluation is a key component of the “discount usability” movement introduced by Jakob Nielsen. The idea is that by assessing a product against a set of usability principles (Nielsen has 10), usability problems can be spotted cheaply and eradicated quickly. Several other sets of principles exist, including those in the standard ISO 9241-110.

I is for Iterative design

Rather than a “waterfall” approach to design, where a development team move inexorably from design concept through to implementation, usability professionals recommend an iterative design approach. With this technique, design concepts are developed, tested, re-designed and re-tested until usability objectives are met.

J is for Jakob Nielsen

Recently promoted from “the king of usability” (Internet Magazine) to “the usability Pope” (Wirtschaftswoche Magazine, Germany), Jakob Nielsen has done more than any other person to popularise the field of usability and get it on the agenda of boardrooms across the world. As well as writing the best usability column on the internet, he’s also a very nice chap: he recently bought my lapsed domain name usabilitybook.com and when I pointed out my mistake to him he kindly repointed it to the E-Commerce Usability book web site.

K is for Keywords

In our web usability tests we find that the old adage, “A picture paints a thousand words”, just doesn’t apply to the way people use web sites. No amount of snazzy graphics or icons can beat a few well chosen trigger words as a call to action. Similarly, poor labelling sounds the death knell of a web site’s usability as reliably as any other measure.

L is for Layout

That’s not to say that good visual design doesn’t have a role to play in usability. A well designed visual layout helps people understand where they are meant to focus on a user interface, where they should look for navigation choices and how they should read the information.

M is for Metrics

Lots of people usability test but not many people set metrics prior to the test to determine success or failure. Products in usability tests should be measured against expected levels of task completion, the expected length of time on tasks and acceptable satisfaction ratings. You can then distinguish usability success from usability failure (it is a test after all).

N is for Navigation

The great challenge in user interface design is teaching people how your “stuff” is organised and how they can find it. This means you need to understand the mental models of your users (through activities like card sorting), build the information architecture for the site, and use appropriate signposts and labels.

O is for Observation

Jerome K. Jerome once wrote, “I like work: it fascinates me. I can sit and look at it for hours.” To really understand how your users work you need to observe them in context using tools like contextual inquiry and ethnography. Direct observation allows you to see how your product is used in real life (our clients are continually astonished at how this differs from the way they thought their products would be used).

P is for Personas

A persona is a short description of a user group that you use to help guide decisions about product features, navigation, interactions, and visual design. Personas help you design for customer archetypes — neither an “average” nor a real customer, but a stereotypical one.

Q is for Questionnaires

Questionnaires and surveys allow you to collect data from large samples of users and so provide a statistically robust background to the small-sample data collected from activities like contextual inquiry and ethnography. Since people aren’t very good at introspecting into their behaviour, questionnaires are best used to ask “what”, “when” and “where” type questions, rather than “why” type questions.

R is for Red Route

Red Routes are the critical user journeys that your product or web site aims to support. Most products have a small number of red routes and they are directly linked to the customer’s key goal. For example, for a ticket machine at a railway station a red route would be, “buy a ticket”. For a digital camera, a red route would be “take a photo”.

S is for Screener

The results of user research are valid only if suitable participants are involved. This means deciding ahead of time the key characteristics of those users and developing a recruitment screener to ensure the right people are selected for the research. The screener should be included as an appendix in the usability test plan and circulated to stakeholders for approval. For more detailed guidance, read our article, “Writing the perfect participant screener”.

T is for Task scenarios

Task scenarios are narrative descriptions of what the user wants to do with your product or web site, phrased in the language of the user. For example, rather than “Create a personal signature” (a potential task for an e-mail package) we might write: “You want your name and address to appear on the bottom of all the messages you send. Use your e-mail program to achieve this.” Task scenarios are critical in the design phase because they help the design team focus on the customers and prospects that matter most and generate actionable results.

U is for Usability testing

A usability test is the acid test for a product or web site. Real users are asked to carry out real tasks and the test team measure usability metrics, like success rate. Unlike other consumer research methods, like focus groups, usability tests almost always focus on a single user at a time. Because a usability test uses a small number of participants (6-8 are typically enough to uncover 85% of usability problems) it is not suited to answering market research questions (such as how much participants would pay for a product or service), which typically need larger test samples.
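The often-quoted discovery figures come from a simple model, 1 − (1 − p)^n, where p is the average proportion of problems a single participant uncovers. A sketch (a textbook approximation, not a guarantee for any particular product; p = 0.31 is the average rate reported by Nielsen and Landauer):

```python
def problems_found(n_users, p=0.31):
    """Expected proportion of usability problems uncovered by n test
    participants, using the discovery model 1 - (1 - p)^n.
    p = 0.31 is an often-cited average per-user discovery rate;
    your product's actual rate may differ."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 8):
    print(n, round(problems_found(n), 2))
```

With these assumptions, five participants already uncover over 80% of problems, and each additional participant adds less and less — which is why small iterative tests beat one big test.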

V is for Verbal protocol

A verbal protocol is simply the words spoken by a participant in a “thinking aloud” usability test. Usability test administrators need to ensure that participants focus on so-called level 1 and level 2 verbalisations (a “stream of consciousness” with minor explication of the thought content) and avoid level 3 verbalisations (where participants try to explain the reasons behind their behaviour). In other words, usability tests should focus on what the participant attends to and in what order, not participant introspection, inference or opinion.

W is for Writing for the web

Writing for the web is fundamentally different to writing for print. Web content needs to be succinct (aim for half the word count of conventional writing), scannable (inverted pyramid writing style with meaningful sub-headings and bulleted lists) and objective (written in the active voice with no “marketese”).

X is for Xenodochial

Xenodochial means friendly to strangers and this is a good way of capturing the notion that public user interfaces (like kiosk-based interfaces or indeed many web sites) may be used infrequently and so should immediately convey the key tasks that can be completed with the system.

Y is for Yardstick

Most people carry out usability tests to find usability problems but they can also be used to benchmark one product against another using statistics as a yardstick. The maths isn’t that complicated and there are calculators available. The biggest obstacle is convincing management that these measures need to be taken.
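As a sketch of the kind of maths involved — one common choice for small usability-test samples is the adjusted Wald confidence interval for a task completion rate (an illustration, not a full statistical treatment):

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a task completion rate,
    using the adjusted Wald method often recommended for the small
    samples typical of usability tests. Illustrative sketch only."""
    n_adj = n + z * z
    p_adj = (successes + z * z / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 7 of 8 participants completed the task:
low, high = adjusted_wald_ci(7, 8)
print(f"{low:.2f} - {high:.2f}")
```

The wide interval from only eight participants is itself a useful talking point with management: benchmarking claims need either more participants or honest error bars.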

Z is for Zealots

With the advent of fundamentalism, zealots get a bad press these days. But to institutionalise usability, you need usability zealots within your team who will carry the torch for usability and demonstrate its importance and relevance to management and the design team.

 

Contact:

 Anupama Verma 

5 Steps of Web Accessibility Testing

Anyone can test a web page or even an entire site for accessibility. The necessary knowledge isn’t PhD-level, nor even especially vast. It does require familiarity with HTML and CSS, the ability to appreciate the unique challenges faced by users with various disabilities, and an understanding of the W3C Accessibility Guidelines. Beyond that, all you need is the desire and time.

Step 1 – Validate HTML and CSS
This step may come as a surprise to many. After all, wouldn’t invalid code either not work or leave a visible bug? Actually, the answer is not necessarily.

The reason is that some WYSIWYG editors generate invalid code, and hard-core programmers who write their code by hand can easily omit some bit of HTML or CSS “grammar”. This doesn’t mean the code won’t function; it just means it doesn’t meet the standards. I won’t go into specifics here; just think of it as similar to formal collegiate writing. There is a particular standard which is expected. A paper could be written differently, more “free form”; it could contain all the ideas and arguments, and it could be just as well thought out – but because it doesn’t meet the standard it would not get a top grade.

Validating your code has a number of advantages. It decreases the probability of cross-browser problems, it tends to eliminate or reduce so-called code bloat, and valid code tends to be easier to maintain, as well as being compatible with a broader range of assistive technologies used by people with disabilities.
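As a toy illustration of what “invalid but still functioning” markup looks like, here is a minimal well-formedness check in Python. It only spots tags that are opened but never closed — nothing like the full grammar checks a real validator such as the W3C service performs:

```python
from html.parser import HTMLParser

# Void elements that legally have no closing tag in HTML.
VOID = {"br", "img", "input", "hr", "meta", "link", "area", "base", "col"}

class TagBalanceChecker(HTMLParser):
    """A toy check reporting tags opened but never closed. NOT a
    substitute for a real validator; it only illustrates the kind of
    "grammar" slip a hand coder can make without visible breakage."""
    def __init__(self):
        super().__init__()
        self.open_tags = []
    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.open_tags.append(tag)
    def handle_endtag(self, tag):
        if tag in self.open_tags:
            self.open_tags.remove(tag)

checker = TagBalanceChecker()
checker.feed("<html><body><p>Hello<div>world</div></body></html>")
print(checker.open_tags)   # ['p'] - the <p> was never closed
```

Browsers render this page without complaint, which is exactly why validation is a separate step.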

Step 2 – Automated Accessibility Testing
Automated accessibility testing is an often misunderstood step in the overall process. To some it is everything that needs to be done: “My site is Bobby compliant. Doesn’t that mean it’s accessible?” To others it’s a red herring and should be avoided altogether. My take is that it’s an invaluable step. When writing an article I rely on the spellchecker to catch my typos, even though I know I still need to go through the copy myself to make sure I have written “Dave” and not “Cave”, for instance. Automated testing finds many issues which could easily be missed by reading the code, so I always begin with it.

Depending on the scale of your project you might be able to use one of several free web based validators or you may opt to buy one of the testing packages available on the market.

The report you get will also flag tests which the validator cannot run itself but which require manual examination. Make sure to go through these as well. Most tools describe each issue well enough for someone with the above-mentioned prerequisites to test it.

And lastly, make sure any issues raised are fixed before continuing. Doing so will greatly reduce the time required for the remainder of testing.

And please, I cannot emphasize enough that automated testing alone cannot assure accessibility. Please continue with the steps below.

Step 3 – Keyboard Testing
This is a simple but very important step. Hide your mouse and navigate your web site using only your keyboard. If you have never done this, then you are likely to learn something.

Various groups of people can’t or don’t want to use a mouse. For some it’s just confusing or difficult, especially those with certain motor control problems, or sometimes seniors. For others, like blind web users, it’s impossible. Making sure every link, form field, button, or any other functionality on the page is accessible via the keyboard is a basic necessity of web accessibility – but you may also find that to get to the main content or primary form on the page you need to press the Tab key many times. Though technically accessible, this is extremely inconvenient.

Again, be sure to make any changes required by this phase of testing before continuing.

Step 4 – Screen Reader Testing
To conduct screen reader testing you will need to install the necessary software. It will take some time to get used to and configure your screen reader, so be patient. Begin by simply turning off your monitor and listening to your page. Does it make sense? Many web designs depend on visual cues and can become close to unintelligible when those cues aren’t available.

Next, try to carry out one or more of the tasks your website was built for. If it’s an online store, find a product and make a purchase. If it’s an informational site, then find key information. Remember – this is the reason you built the site, and it is the reason you are making it accessible. If its core functionality depends on a complex form, can you tell which fields are required? If it’s a shopping cart, can you see how much you have spent before making the purchase?

Step 5 – Target Audience Testing
Various conventions of web design have emerged in the course of the World Wide Web’s short existence which we have grown used to and even depend on to help us navigate a new site. Links appear in a different color (often blue) and underlined. Site-wide or global navigation is usually found along the top of the page. Small pictures can often be clicked to get a bigger one. Similarly, there are conventions used in quality accessible design, but naturally those of us who aren’t dependent on accessible design may not be aware of them. These might include links, sometimes invisible, along the top of a page which allow the user to skip to various parts of the page, colors with high contrast values, or just consistent design throughout the site.

Web accessibility isn’t just fulfilling a set of requirements or validating against predefined checkpoints. It also means quality design. And just as it’s best to leave questions of browser-based user interface design to an expert, it’s best to have your site checked over by an expert in screen reader user interface design when considering accessibility. And though in theory there is no reason a sighted specialist couldn’t become such an expert, one who is dependent on screen readers is more likely to be intimate with their functions and use, the frustrations of poor web site design, and the solutions which ease or eliminate those frustrations in practice, not just in theory.

 

Contact:

Menakshi Kumari

Checklist for Web Application Testing

1. FUNCTIONALITY
1.1 LINKS

1.1.1 Check that the link takes you to the page it said it would.
1.1.2 Ensure there are no orphan pages (a page that no other page links to)
1.1.3 Check all of your links to other websites
1.1.4 Are all referenced web sites or email addresses hyperlinked?

1.1.5 If pages have been removed from the site, set up a custom 404 page that redirects visitors to the home page (or a search page) when they try to access a page that no longer exists.
1.1.6 Check all mailto links and verify that they reach the correct address
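The link checks above are routinely automated. A minimal sketch of the first step — extracting the link targets from a page; a real checker would then request each URL and flag broken links, orphan pages and mis-typed mailto addresses:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from a page - the extraction step of a
    simple link checker. Requesting each URL and reporting failures
    (items 1.1.1-1.1.6) would be the next step; this sketch only
    shows how the links are gathered."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<a href="/about.html">About</a> <a href="mailto:info@example.com">Mail</a>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)   # ['/about.html', 'mailto:info@example.com']
```

(The page markup and addresses here are invented for illustration.)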

1.2 FORMS

1.2.1 Acceptance of invalid input
1.2.2 Optional versus mandatory fields
1.2.3 Input longer than field allows
1.2.4 Radio buttons
1.2.5 Default values on page load/reload (the terms-and-conditions acceptance should also be disabled by default)
1.2.6 Can command buttons be used for hyperlinks and Continue links?
1.2.7 Are all items inside combo/list boxes arranged in a logical (e.g. chronological) order?
1.2.8 Are all of the parts of a table or form present? Correctly laid out? Can you confirm that selected texts are in the right place?
1.2.9 Does a scrollbar appear if required?

1.3 DATA VERIFICATION AND VALIDATION

1.3.1 Is the Privacy Policy clearly defined and available for user access?
1.3.2 At no point of time the system should behave awkwardly when an invalid data is fed
1.3.3 Check to see what happens if a user deletes cookies while in site
1.3.4 Check to see what happens if a user deletes cookies after visiting a site

2. APPLICATION SPECIFIC FUNCTIONAL REQUIREMENTS

2.1 DATA INTEGRATION

2.1.1 Check the maximum field lengths to ensure that there are no truncated characters?
2.1.2 If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
2.1.3 If a particular set of data is saved to the database check that each value gets saved fully to the database. (i.e.) Beware of truncation (of strings) and rounding of numeric values.

2.2 DATE FIELD CHECKS

2.2.1 Assure that leap years are validated correctly & do not cause errors/miscalculations.
2.2.2 Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/ miscalculations.
2.2.3 Is copyright for all the sites includes Yahoo co-branded sites are updated

2.3 NUMERIC FIELDS

2.3.1 Assure that lowest and highest values are handled correctly.
2.3.2 Assure that numeric fields with a blank in position 1 are processed or reported as an error.
2.3.3 Assure that fields with a blank in the last position are processed or reported as an error an error.
2.3.4 Assure that both + and – values are correctly processed.
2.3.5 Assure that division by zero does not occur.
2.3.6 Include value zero in all calculations.
2.3.7 Assure that upper and lower values in ranges are handled correctly. (Using BVA)

2.4 ALPHANUMERIC FIELD CHECKS

2.4.1 Use blank and non-blank data.
2.4.2 Include lowest and highest values.
2.4.3 Include invalid characters & symbols.
2.4.4 Include valid characters.
2.4.5 Include data items with first position blank.
2.4.6 Include data items with last position blank.

3. INTERFACE AND ERROR HANDLING

3.1 SERVER INTERFACE

3.1.1 Verify that communication is done correctly, web server-application server, application server-database server and vice versa.
3.1.2 Compatibility of server software, hardware, network connections

3.2 EXTERNAL INTERFACE

3.2.1 Have all supported browsers been tested?
3.2.2 Have all error conditions related to external interfaces been tested when external application is unavailable or server inaccessible?

3.3 INTERNAL INTERFACE

3.3.1 If the site uses plug-ins, can the site still be used without them?
3.3.2 Can all linked documents be supported/opened on all platforms (i.e. can Microsoft Word be opened on Solaris)?
3.3.3 Are failures handled if there are errors in download?
3.3.4 Can users use copy/paste functionality?Does it allows in password/CVV/credit card no field?
3.3.5 Are you able to submit unencrypted form data?

3.4 INTERNAL INTERFACE

3.4.1 If the system does crash, are the re-start and recovery mechanisms efficient and reliable?
3.4.2 If we leave the site in the middle of a task does it cancel?
3.4.3 If we lose our Internet connection does the transaction cancel?
3.4.4 Does our solution handle browser crashes?
3.4.5 Does our solution handle network failures between Web site and application servers?
3.4.6 Have you implemented intelligent error handling (from disabling cookies, etc.)?

4. COMPATIBILITY

4.1 BROWSERS

4.1.1 Is the HTML version being used compatible with appropriate browser versions?
4.1.2 Do images display correctly with browsers under test?
4.1.3 Verify the fonts are usable on any of the browsers
4.1.4 Is Java Code/Scripts usable by the browsers under test?
4.1.5 Have you tested Animated GIFs across browsers?

4.2 VIDEO SETTINGS

4.2.1 Screen resolution (check that text and graphic alignment still work, font are readable etc.) like 1024 by 768, 600×800, 640 x 480 pixels etc
4.2.2 Colour depth (256, 16-bit, 32-bit)

4.3 CONNECTION SPEED

4.3.1 Does the site load quickly enough in the viewer’s browser within 8 Seconds?

4.4 PRINTERS

4.4.1 Text and image alignment
4.4.2 Colours of text, foreground and background
4.4.3 Scalability to fit paper size
4.4.4 Tables and borders
4.4.5 Do pages print legibly without cutting off text?

 

Contact:

Kamali Mukharjee

Advantage and Disadvantage of QTP over WinRunner

Hope you guys are familiar with new advanced product named QTP for Automation. Please find below few good comments on it.

I want to add some advantages and disadvantages on Quick Test Pro.

Advantages:
1. Lot easier than winrunner to record a script.
2. Records mouse over functionality.
3. Identifies double clicks
4. Uses programming language “VBScript”.
5. Check points and data driven tests can be implemented easily.
6. Can enhance the script without the Applicaion under test being opened using Active window functionlity.
7. Integrates with winrunner and testdirector.
8. Supports .NET environment.
9. Supports XML based web sites.

Disadvantages:
1. We do not have sufficient resources on QT pro.
2. Does not support mouse drag funcitonality as winrunner does.
3. Must know VBscipt in order to program.
4. In order to implement advanced futures of QT pro you must be a VBScript developer.
5. the “Object Repository” is not user friendly. You cannot work with object repository as you do with Winrunner.

Author:

Kiran

Winrunner Database Functions

1. Compares current database data to expected database data.

db_check (checklist, expected_results_file [ , max_rows [ , parameter_array ] ] );

Checklist: The name of the checklist specifying the checks to perform.
Expected_results_file: The name of the file storing the expected database data.
Max_rows: The maximum number of rows retrieved in a database. If no maximum is specified, then by default the number of rows is not limited. If you change this parameter in a db_check statement recorded in your test script, you must run the test in Update mode before you run it in Verify mode.

Parameter_array: The array of parameters for the SQL statement. For information on how to use this advanced feature, refer to the “Checking Databases” chapter in the WinRunner User’s Guide.

The db_check function captures and compares information about a database. It is inserted into your script when you create a database checkpoint. During a test run, WinRunner checks the query of the database with the checks specified in the checklist. WinRunner then checks the information obtained during the test run against the expected results contained in the expected_results_file. Note that when you use a Create > Database Checkpoint command to create a database checkpoint, only the first two (obligatory) parameters are included in the db_check statement (unless you parameterize the SQL statement from within Microsoft Query). You can use the max_rows parameter to specify the maximum number of rows retrieved in a database.

Note: If you change the max_row parameter in a db_check statement recorded in your test script, you must run the test in Update mode before you run it in Verify mode.

2. Creates a new database session and establishes a connection to an ODBC database.

db_connect (session_name, connection_string);

Session_name: The logical name or description of the database session.
Connection_string: The connection parameters to the ODBC database.

The db_connect function creates the new session_name database session and uses the connection_string to establish a connection to an ODBC database.

Notes: You can use the Function Generator to open an ODBC dialog box, in which you can create the connection string. If you try to use a session name that has already been used, WinRunner will delete the old session object and create a new one using the new connection string.

3. Disconnects from the database and ends the database session.

db_disconnect (session_name );

Session_name: The logical name or description of the database session.

The db_disconnect function disconnects from the session_name database session.
Note: You must use a db connect statement to connect to the database before you can use this function.

4. Executes the query based on the SQL statement and creates a record set.

db_execute_query ( session_name, SQL, record_number );

Session_name: The logical name or description of the database session.
SQL: The SQL statement. For information on this advanced feature, refer to the “Checking Databases” chapter in the WinRunner User’s Guide.
Record_number: An out parameter returning the number of records in the result query.

The db_execute_query function executes the query based on the SQL statement and creates a record set.
For information on this advanced feature, refer to the “Checking Databases” chapter in the WinRunner User’s Guide.
Note: You must use a db connect statement to connect to the database before you can use this function.

5. Returns the value of a single field in the database.

db_get_field_value ( session_name, row_index, column );

Session_name: The logical name or description of the database session.
Row_index: The index of the row written as a string: “# followed by the numeric index. (The first row is always numbered “#0”.)
Column: Either the name of the field in the column, or the index of the column within the database written as a string: “# followed by the numeric index. (The first column is always numbered “#0”.)

The db_get_field_value function returns the value of a single field in the specified row_index and column in the session_name database session.
In case of an error, an empty string will be returned.
Notes: You must use a db connect statement to connect to the database before you can use this function. You must use a db execute query statement to execute a query before you can use this function.

6. Returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs.

db_get_headers (session_name, header_count, header_content);

Session_name: The logical name or description of the database session.
Header_count: The number of column headers in the query.
Header_content: The column headers concatenated and delimited by tabs. Note that if this string exceeds 1024 characters, it is truncated.

The db_get_headers function returns the header_count and the text in the column headers in the session_name database session.
Notes: You must use a db connect statement to connect to the database before you can use this function. You must use a db execute query statement to execute a query before you can use this function.

7. Returns the last error message of the last ODBC or Data Junction operation.

db_get_last_error (session_name, error );

Session_name: The logical name or description of the database session.
Error: The error message.

The db_get_last_error function returns the last error message of the last ODBC or Data Junction operation in the session_name database session.
Note: When working with Data Junction, the session_name parameter is ignored.
If there is no error message, an empty string will be returned.

Note: You must use a db_connect statement to connect to the database before you can use this function.

8. Returns the content of the row, concatenated and delimited by tabs.

db_get_row ( session_name, row_index, row_content );

Session_name: The logical name or description of the database session.
Row_index: The numeric index of the row. (The first row is always numbered “0”.)
Row_content: The row content as a concatenation of the fields values, delimited by tabs.

The db_get_row function returns the row_content of the specified row_index, concatenated and delimited by tabs in the session_name database session.
Notes: You must use a db_connect statement to connect to the database before you can use this function. You must use a db_execute_query statement to execute a query before you can use this function.
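The connect → query → fetch sequence these notes keep repeating (db_connect, then db_execute_query, then db_get_field_value or db_get_row) can be illustrated with an analogous flow in Python. This is only a hedged sketch using the standard sqlite3 module, not WinRunner's TSL; the table and data are invented for the example:

```python
import sqlite3

# Hypothetical stand-in for db_connect: open a session to a database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "pen"), (2, "book")])

# Stand-in for db_execute_query: run the query and note the record count.
rows = conn.execute("SELECT id, item FROM orders").fetchall()
record_number = len(rows)

# Stand-in for db_get_field_value("query1", "#0", "#1"):
# row and column indices are zero-based, as in TSL.
field_value = rows[0][1]

# Stand-in for db_get_row: one row concatenated and delimited by tabs.
row_content = "\t".join(str(v) for v in rows[0])

print(record_number, field_value, row_content)
```

The zero-based "#0" indexing convention of the TSL functions maps directly onto Python's list indexing here.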

9. Compares information that appears in the application under test during a test run with the current values in the corresponding record(s) in your database.

db_record_check ( ChecklistFileName , SuccessConditions, RecordNumber );

ChecklistFileName: A file created by WinRunner and saved in the test’s checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Checkpoint wizard.
SuccessConditions: Contains one of the following values:
DVR_ONE_OR_MORE_MATCH – The checkpoint passes if one or more matching database records are found.

DVR_ONE_MATCH – The checkpoint passes if exactly one matching database record is found.

DVR_NO_MATCH – The checkpoint passes if no matching database records are found.

RecordNumber: An out parameter returning the number of records in the database.

The db_record_check function compares information that appears in the application under test during a test run with the current values in the corresponding record(s) in your database.
Note: You insert db_record_check statements by using the Runtime Record Checkpoint wizard. For more information, refer to the WinRunner User’s Guide.

10. Writes the record set into a text file delimited by tabs.

db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );

Session_name: The logical name or description of the database session.
Output_file: The name of the text file in which the record set is written.
Headers: An optional Boolean parameter that will include or exclude the column headers from the record set written into the text file.
Record_limit: The maximum number of records in the record set to be written into the text file. A value of NO_LIMIT (the default value) indicates there is no maximum limit to the number of records in the record set.

The db_write_records function writes the record set of the session_name into an output_file delimited by tabs.
Notes: You must use a db_connect statement to connect to the database before you can use this function. You must use a db_execute_query statement to execute a query before you can use this function.
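As a rough illustration of the output db_write_records describes, here is a hedged Python sketch that writes a record set to a tab-delimited text file, honoring a headers flag and a record limit. The function name, file name, and data are invented for the example:

```python
# Hypothetical sketch of a tab-delimited record-set writer, in the spirit
# of db_write_records(session_name, output_file, headers, record_limit).
NO_LIMIT = -1

def write_records(output_file, headers, rows, include_headers=True,
                  record_limit=NO_LIMIT):
    # Apply the optional record limit, as record_limit does in TSL.
    if record_limit != NO_LIMIT:
        rows = rows[:record_limit]
    with open(output_file, "w") as f:
        if include_headers:
            f.write("\t".join(headers) + "\n")
        for row in rows:
            f.write("\t".join(str(v) for v in row) + "\n")

# Write only the first two of three records, with column headers.
write_records("query1.txt", ["id", "item"],
              [(1, "pen"), (2, "book"), (3, "ink")], record_limit=2)
```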

11. Sets a value in the current row of the data table.

ddt_set_val ( data_table_name, parameter, value );

Data_table_name: The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.
Parameter: The name of the column into which the value will be inserted.
Value: The value to be written into the table.

The ddt_set_val function sets a value in a cell of the current row of a data table.

Note: You can only use this function if the data table was opened in DDT_MODE_READWRITE (read/write mode).
Note: You must use a ddt_open statement to open the data table before you can use any other ddt_ functions.

Note: To save the new or modified contents of the table, add a ddt_save statement after the ddt_set_val statement. At the end of your test, use a ddt_close statement to close the table.
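The set-then-save pattern described above can be sketched in Python. This is a hypothetical analogy, not TSL: the table layout mirrors the one ddt_set_val expects (row 0 holds the parameter names), and all names and values are invented:

```python
# Hypothetical in-memory data table: row 0 holds the parameter names,
# mirroring the layout ddt_set_val expects.
table = [["name", "amount"],   # row 0: parameter names
         ["alice", "10"],
         ["bob", "20"]]
current_row = 1                # the active row, selected beforehand

def set_val(table, row, parameter, value):
    """Set a value in a cell of the current row (sketch of ddt_set_val)."""
    col = table[0].index(parameter)   # locate the parameter's column
    table[row][col] = value

set_val(table, current_row, "amount", "99")

# Sketch of the ddt_save step: persist the table as a tab-delimited file.
with open("datatable.txt", "w") as f:
    for row in table:
        f.write("\t".join(row) + "\n")
```

Without the save step at the end, the modification exists only in memory, which is exactly why the note above insists on ddt_save.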

12. Imports data from a database into a data table.

ddt_update_from_db ( data_table_name, file, out_row_count [ , max_rows ] );

Data_table_name: The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.
File: Either an *.sql file containing an ODBC query or a *.djs file containing a conversion defined by Data Junction.
Out_row_count: An out parameter containing the number of rows retrieved from the database.
Max_rows: An in parameter specifying the maximum number of rows to be retrieved from a database. If no maximum is specified, then by default the number of rows is not limited.

The ddt_update_from_db function imports data from the database into the data_table_name data table.

Author:

Jessica Newton

TSL Functions

1) set_window

This function specifies the active window in the AUT to receive subsequent input.
arg1 is the description.
arg2 is the time.

2) button_check_info

This function checks the value of a button property.
arg1 is the button name.
arg2 is the property to check.
arg3 is the property expected value.

3) button_check_state

This function checks the state of a check or radio button.
arg1 is the button name.
arg2 is the expected state: 0(OFF), 1(ON) or 2(DIMMED).

4) button_get_info

This function returns the value of a button property
arg1 is the button name
arg2 is the requested property
arg3 is the returned value.

5) button_get_state

This function returns the current state of a check or radio button
arg1 is the button name
arg2 is the returned state.

6) button_press

This function activates the specified push button
arg1 is the button name.

7) button_set

This function sets the specified radio or check button to the requested value
arg1 is the button name
arg2 is the value.

8) button_wait_info

This function waits for a specified value of a button property
arg1 is the button name
arg2 is the requested property
arg3 is the expected value
arg4 is the timeout (in seconds).

9) db_check

This function captures and compares data from a database.
Note that the checklist file (arg1) can be created only
during recording.
arg1 – checklist file.
arg2 – name of file storing the captured data.
arg3 (optional) – max num of rows to retrieve (default – no limit).
arg4 (optional) – array of parameters for the SQL statement.
Notice: The order of the parameters (as in the Watch List) is important!

10) db_connect

This function creates a new connection session with a database.
arg1 – the session name (string)
arg2 – a connection string
for example "DSN=SQLServer_Source;UID=SA;PWD=abc123"

11) db_disconnect

This function disconnects from the database and deletes the session.
arg1 – the session name (string)

12) db_dj_convert

This function executes a Data Junction conversion export file (djs).
arg1 – the export file name (*.djs)
arg2 – an optional parameter to override the output file name
arg3 – an optional Boolean parameter indicating whether to
include the headers (the default is TRUE)
arg4 – an optional parameter limiting the
number of records (-1, the default, means no limit)

13) db_execute_query("query1","",record_number);

This function executes an SQL statement.
Note that a db_connect for (arg1) should be called before this function
arg1 – the session name (string)
arg2 – an SQL statement
arg3 – an out parameter to return the records number.

14) db_get_field_value("query1","#0","#0");

This function returns the value of a single item of an executed query.
Note that a db_execute_query for (arg1) should be called before this function
arg1 – the session name (string)
arg2 – the row index number (zero based)
arg3 – the column index number (zero based) or the column name.

15) db_get_headers("query1",field_num,headers);

This function returns the fields headers and fields number of an executed query.
Note that a db_execute_query for (arg1) should be called before this function
arg1 – the session name (string)
arg2 – an out parameter to return the fields number
arg3 – an out parameter to return the concatenation
of the fields headers delimited by TAB.

16) db_get_last_error("query1",error);

This function returns the last error message of the last ODBC operation.
arg1 – the session name (string)
arg2 – an out parameter to return the last error.

17) db_get_row("query1",0,row_content);

This function returns a whole row of an executed query.
Note that a db_execute_query for (arg1) should be called before this function
arg1 – the session name (string)
arg2 – the row number (zero based)
arg3 – an out parameter to return the concatenation
of the fields values delimited by TAB.

18) db_record_check("",DVR_ONE_MATCH,record_number);

This function checks that the specified record exists in the
database. Note that the checklist file (arg1) can be created
only using the Database Record Verification Wizard.
arg1 – checklist file.
arg2 – success criteria.
arg3 – number of records found.

19) db_write_records("query1","c:\\query1.txt",TRUE,NO_LIMIT);

This function writes the records of an executed query into a file.
Note that a db_execute_query for (arg1) should be called before this function
arg1 – the session name (string)
arg2 – the output file name
arg3 – an optional Boolean parameter indicating whether to
include the headers (the default is TRUE)
arg4 – an optional parameter limiting the
number of records (-1, the default, means no limit).

20) dbl_click("Left",0);

This function performs a double-click of the specified mouse button.
arg1 is the mouse button.
arg2 is the time.

21) ddt_export(table,new_table);

This function saves the table as a new file.
arg1 is the name of the existing table
arg2 is the name of the new file.

22) ddt_get_current_row(table,row);

This function retrieves the active row number.
arg1 is the table name.
arg2 is the active row.

23) ddt_get_parameters(table,params_list,params_num);

This function returns a list of all parameters in the table.
Table file – name of data table.
Params list (out) – list of parameters separated by TAB.
Params num (out) – number of parameters in list.

24) ddt_get_row_count(table,RowCount);

This function retrieves the number of rows in the table.
arg1 is the table name.
arg2 is the number of rows.

25) ddt_set_val_by_row(table,"","","");

This function sets a value for the cell indicated by its row
and column.
arg1 is the table name.
arg2 is the row number.
arg3 is the field.
arg4 is the value to be entered.

26) ddt_update_from_db(table,"",out_row_count,NO_LIMIT);

This function updates the table with data from a database.
arg1 – table name.
arg2 – query or conversion file (*.sql ,*.djs).
arg3 (out) – num of rows actually retrieved.
arg4 (optional) – max num of rows to retrieve (default – no limit).

27) edit_check_info("","value","");

This function checks the value of an edit property.
arg1 is the edit name.
arg2 is the property to check.
arg3 is the property expected value.

28) GUI_map_get_desc("","",desc,buffer);

This function returns the description of an object in the GUI map
arg1 is name of the window containing the object
arg2 is the name of the object
arg3 is the returned description
arg4 is the output buffer containing the description

29) GUI_map_get_logical_name("","",object,buffer);

This function returns the logical name of an object in the GUI map
arg1 is the object description
arg2 is name of the window containing the object
arg3 is the output object name
arg4 is the output buffer containing the description

30) obj_check_bitmap("","",0,"","","","");

This function captures and compares an object bitmap.
arg1 is the logical name of the object.
arg2 is a string that identifies the captured bitmap.
arg3 indicates the time.
args 4,5 are the coordinates of the upper left corner (optional).
arg6 is the width of the bitmap (optional).
arg7 is the height of the bitmap (optional).

31) obj_check_gui("","","",0);

This function captures and compares GUI data for an object.
arg1 is the logical name of the object.
arg2 is the name of the checklist for the captured GUI data.
arg3 is the name of the file storing the GUI data.
arg4 indicates the time.

32) obj_check_info("","enabled","",10);

This function checks the value of an object property.
arg1 is the object name.
arg2 is the property to check.
arg3 is the property expected value
arg4 is the timeout (in seconds).

33) obj_click_on_text("","","","","","",FALSE,LEFT);

This function moves the mouse pointer to the location of text in
the specified object and enters mouse button clicks.
arg1 is the object name
arg2 is the requested string expression
args 3,4 are object-relative x,y of the rectangle’s upper-left
corner (optional)
args 5,6 are object-relative x,y of the rectangle’s lower-right
corner (optional)
arg7 TRUE – search for any string, FALSE – search only for a
complete word.(optional)
arg8 is mouse button (optional)

34) obj_mouse_click("","","",LEFT);

This function performs a mouse click within the specified object
arg1 is the object name
arg2 is the x position
arg3 is the y position
arg4 is mouse button (optional)

35) obj_wait_bitmap("","",0);

This function waits for a GUI object bitmap to be drawn.
arg1 is the logical name of the object.
arg2 is a string that identifies the captured bitmap.
arg3 indicates the time.

36) obj_wait_info("","enabled","",10);

This function waits for a specified value of an object property
arg1 is the object name
arg2 is the requested property
arg3 is the expected value
arg4 is the timeout (in seconds).

37) set_class_map("",object);

This function associates a custom class with a standard class.
arg1 is the custom class.
arg2 is the standard class.

38) set_window("",0);

This function specifies the window that will receive subsequent input
arg1 is the window name

39) win_activate("");

This function activates a window
arg1 is the window

40) win_check_bitmap("","",0,"","","","");

This function captures and compares a window bitmap.
arg1 is the logical name of the window.
arg2 is a string that identifies the captured bitmap.
arg3 indicates the time.
args 4,5 are the coordinates of the upper left corner.
arg6 is the width of the bitmap.
arg7 is the height of the bitmap.

41) win_check_gui("","","",0);

This function captures and compares GUI data for a window.
arg1 is the logical name of the window.
arg2 is the name of the checklist for the captured GUI data.
arg3 is the name of the file storing the GUI data.
arg4 indicates the time.

42) win_check_info("","label","",10);

This function checks the value of a window property.
arg1 is the window name.
arg2 is the property to check.
arg3 is the property expected value
arg4 is the timeout (in seconds).

43) win_mouse_click("","","",LEFT);

This function performs a mouse click within the specified window
arg1 is the window
arg2 is the x position
arg3 is the y position
arg4 is mouse button (optional)

44) win_wait_bitmap("","",0,"","","","");

This function waits for a window bitmap.
arg1 is the logical name of the window.
arg2 is a string that identifies the captured bitmap.
arg3 indicates the time.
args 4,5 are the coordinates of the upper left corner.
arg6 is the width of the bitmap.
arg7 is the height of the bitmap.

45) win_wait_info("","enabled","",10);

This function waits for a specified value of a window property
arg1 is the window name
arg2 is the requested property
arg3 is the expected value
arg4 is the timeout (in seconds).

Author:

Jessica Newton

An Overview of Mobile Testing

Introduction

Most general software testing principles apply equally well to mobile solutions, although the number of tools available for mobile testing is much smaller, and there are many extra potential problems your users can encounter that you have to test for.

Many mobile solutions involve a significant hardware element in addition to the PDA, such as scanners, mobile telephony, GPS and position-based devices, telemetry, etc. These extra hardware elements place additional demands on the tester, particularly in terms of isolating a bug to hardware or software.

Mobile applications are often intended to be used by people with no technical or IT background, such as meter readers, milkmen, and insurance sales people, on devices that have small screens and either no keyboards or awkward keyboards. Good usability testing, carried out in conjunction with key users in their own environment, is essential. I have seen a number of hand-held projects fail because the end user could not come to terms with the technology, even though the application was robust and met the functional spec.

Many hand-held operating systems come in even more flavors than their desktop counterparts. I can think of seven flavors of Windows CE alone. Add to this that many enterprise PDA manufacturers OEM the operating system and update it regularly, and you start to see the testing problems. Remember also that we don't have our faithful automation tools for regression testing here.

Let’s start with some Mobile Testing Basics.

Mobile Testing Basics
Mobile Device Testing is the process of assuring the quality of mobile devices such as mobile phones and PDAs. The testing is conducted on both hardware and software.

Viewed by procedure, the testing comprises R&D Testing, Factory Testing, and Certificate Testing.

R&D Testing:

R&D testing is the main test phase for a mobile device, and it happens during the development of the device. It contains hardware testing, software testing, and mechanical testing.

Factory Testing:

Factory Testing is a kind of sanity check on mobile devices. It is conducted automatically to verify that no defects were introduced by manufacturing or assembly.

Certificate Testing:

Certificate Testing is the check before a mobile device goes to market. Many institutes and governments require a mobile device to conform to specifications and protocols, to make sure the device will not harm users' health and is compatible with devices from other manufacturers. Once the device passes the checking, a certificate is issued for it.

Unique Challenges in Testing

Unlike the PC based environment, the mobile environment is constituted by a plethora of devices with diverse hardware and software configurations and communication intricacies.

This diversity in mobile computing environments presents unique challenges in application development, quality assurance and deployment, requiring unique testing strategies.

Mobile business applications can be classified into stand-alone applications and enterprise applications. Stand-alone applications run entirely on the device; enterprise applications, on the other hand, are built to perform resource-intensive transactions that are typical of corporate computing environments. Enterprise applications also interface with external systems through Wireless Application Protocol (WAP) or Hyper Text Transfer Protocol (HTTP). The unique challenges in testing mobile applications, arising from the diversity of the device environment, hardware and networking considerations, and Rapid Application Development (RAD) methodologies, are explained below:

Diversity of the Device Environment

The realm of mobile computing is composed of various types of mobile devices and underlying software (hundreds of device types, over 40 mobile browsers). Some of the unique challenges involved in mobile testing as a result of this condition are:

Rendering of images and positioning of elements on screen may be unsuitable on some devices due to the difference in display sizes across mobile devices and models.

Exhaustive testing of user interfaces is necessary to ensure compatibility of the application.

Mobile devices have different application runtimes. For example, Binary Runtime Environment for Wireless (BREW), Java, and the embedded Visual Basic runtime are some of the runtimes commonly available on mobile devices. Applications should be tested exhaustively for runtime-specific variations.

Hardware Configuration and Network-related Challenges

The mobile environment offers less memory and processing power than the traditional PC environment. Unlike the network landscape of the PC environment, the network landscape of a mobile device may include gateways (access points between the wireless internet and the cable internet). Some of the challenges posed by the diverse hardware configurations and network landscape of mobile devices are:

Limitations in processing speed and memory size of mobile devices lead to variations in performance of applications across different types of devices.

Testing programs should ensure that the applications deliver optimum performance for all desired configurations of hardware.

Some devices communicate through WAP while some others use HTTP to communicate. Applications should be tested for their compatibility with WAP enabled as well as HTTP enabled devices.

The network latency (time taken for data transfer) will be unpredictable when applications communicate over network boundaries, leading to inconsistent data transfer speeds. Testing should measure the performance of applications for various network bandwidths.

Gateways in a wireless network may act as data optimizers that deliver content more suitable for specific devices. This data optimization process of gateways may result in decreased performance under heavy traffic. Testing should determine the network traffic at which the gateway capabilities will impact performance of the mobile application.
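The bandwidth-dependent performance testing described above often starts with a back-of-envelope calculation of ideal transfer times. The following is a hypothetical Python sketch (the payload size and bandwidth figures are illustrative assumptions, not measurements from any real network):

```python
# Hypothetical harness: compute the ideal transfer time of a payload so
# the same test can be repeated for various assumed network bandwidths.
def measure_transfer(payload_bytes, bandwidth_bps):
    """Return the ideal transfer time in seconds at a given bandwidth."""
    return (payload_bytes * 8) / bandwidth_bps

# Compare the same 100 KB payload over GPRS-like vs 3G-like bandwidths.
for name, bps in [("GPRS ~40 kbps", 40_000), ("3G ~384 kbps", 384_000)]:
    t = measure_transfer(100 * 1024, bps)
    print(f"{name}: {t:.2f} s")
```

In field testing, the measured times will exceed these ideal figures; the gap between measured and ideal is one way to quantify the latency and gateway overheads discussed above.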

Rapid Application Development (RAD)

In order to deliver the benefits of faster time to market, RAD environments are used for mobile application development. Since the time taken for development is reduced by the introduction of RAD tools, builds will be available for testing much earlier. Therefore, RAD methodology imposes an indirect pressure on testing teams to reduce the testing cycle time without compromising on the quality and coverage.

Critical Success Factors
The critical factors that determine the success of a mobile testing program are:

(i) Use of Test Automation

(ii) Use of emulators and actual devices

(iii) Testing for mobile environment and application complexity

Use of test automation

Testing of mobile applications is traditionally done by manual execution of test cases and visual verification of results, but this is an effort-intensive and time-consuming process. Automating the appropriate areas of a testing program can yield quantifiable benefits.

Use of emulators and actual devices
Emulators can be beneficial for testing features of the application that are device independent. However, actual devices should be used for validating the results.

Testing for mobile environment and application complexity
Due to diversity in mobile hardware and platforms, testing programs need to incorporate GUI and compatibility tests in addition to the standard functionality tests. Enterprise applications are more complex in both functionality and architecture. Such applications require performance testing, security testing and synchronization testing in addition to the standard functionality testing.

Guidelines for Testing Mobile Applications
(i) Understand the network landscape and the device landscape before venturing into testing, and identify bottlenecks.
(ii) Conduct testing in uncontrolled, real-world conditions (field-based testing); this is necessary especially for a multitier mobile application.
(iii) Select the right automation test tools for the success of the testing program.

Rules of thumb for an ideal test tool are:

(i) One tool should support all desired platforms.
(ii) The tool should support various screen types, resolutions, and input mechanisms such as touchpad and keypad.
(iii) The tool should be connectable to the external system to carry out end-to-end testing.

Further recommendations:
(a) Check the end-to-end functional flow on all possible platforms at least once.
(b) Conduct performance testing, GUI testing, and compatibility testing using actual devices. Even though this testing can be done using emulators, testing with actual devices is recommended.
(c) Measure performance under realistic conditions of wireless traffic and user load.

Author:

Kiran

About Telecom Testing?

The telecom domain is one of the hottest domains around, but most domains share a similar testing culture, and the basics carry over. Testing in telecom mostly revolves around connections: IP-based connections like FR, ATM, DSL, PL, and IPL; data transfers and their respective speeds; hardware devices; and so on. You would not need to test all of this: you may be assigned to one subject like FR or ATM and work on it as per requirements. To be more specific, depending upon the protocols or applications your company is into, you would probably be asked to master a particular feature or protocol.

In general, Telecom testing is an automated, controlled method of verifying operation of your products before they go to market. Any product that connects to the PSTN (public switched telephone network) or a telecom switch (PBX) can be tested with a telephone line simulator, bulk call generator, or similar telecom test platform. Telecom testing is ideal for all telephony applications and equipment, including:
a) IVR systems
b) Switching systems
c) CTI applications
d) VoIP gateways
e) IADs

Why use a telecom testing solution?
A telecom test platform minimizes costs and simplifies engineering, QA, and production testing, as well as integration and pre-installation testing. A test solution can simulate telephony protocols and functions for:
a) Feature and performance testing
b) Load and stress testing
c) Bulk call generation
d) Quality of service testing
e) Equipment demos and product training

An automated telecom test solution provides comprehensive, consistent testing that can be customized for your specific application. In addition, thorough testing will provide peace-of-mind for you and guaranteed reliability for your customers.


Various types of telecom testing:
1) Conformance means ensuring that a product obeys the protocol (e.g. ITU-T or PNO-ISC) at the physical interface. Once this phase is passed, the product can go forward to interconnect testing.

2) Interconnection Testing: Interconnect testing typically involves testing the connection of two separate entities, usually two networks or network elements. Interconnects in the fixed/mobile network environment will have regulatory requirements or standards if BT is involved. Basic interconnect testing is concerned with the robustness and integrity of the interface.

3) Conformance Testing: The following tests are done here:
a) Electrical interface compatibility, e.g. (G703).
b) Conformance of protocol, e.g. ITU-T spec.
c) Conformance of transport layers (MTP2/3). It is important to ensure agreement on the relevant data standards for the two networks/elements, and to identify any differences in operating procedures (e.g., disaster recovery).

4) IVR Testing
Test your IVR system to verify proper operation, voice and DTMF response, and eliminate dead-end menu branches.

An IVR (interactive voice response) system can be a complicated maze of menus, branches, and choices. Complex systems of this type require in-depth testing to ensure that customers are not confused or lost.

IVR manufacturers, systems integrators, and companies that own an IVR, all need to test the functionality of their system before it goes live to the outside world. An automated test platform enables you to verify IVR operations via:
DTMF entries, Detection of voice energy, Broadband audio tones, Extensive conditional branching sequences, Interactive test scenarios

Comprehensive testing ensures that your IVR system is ready for customer use. Testing provides peace-of-mind and reliable operation of your voice system. In addition, testing all IVR menu branches manually is time consuming, error prone, and inefficient.
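The "dead-end menu branch" problem mentioned above can be checked mechanically before a single call is placed. The following is a hypothetical Python sketch (the menu tree and node names are invented): it walks an IVR menu modeled as a nested dict and flags DTMF paths that lead to a menu with no options.

```python
# Hypothetical IVR menu tree: each node maps DTMF digits to either a
# child menu (dict) or a terminal action (string). An empty dict is a
# dead-end branch a caller could get stuck in.
ivr_menu = {
    "1": {"1": "check_balance", "2": "transfer_funds"},
    "2": {"1": {}},              # dead end: submenu with no options
    "0": "operator",
}

def find_dead_ends(node, path=""):
    """Walk the menu and return the DTMF paths that lead nowhere."""
    dead = []
    for digit, branch in node.items():
        p = path + digit
        if isinstance(branch, dict):
            if not branch:
                dead.append(p)
            else:
                dead.extend(find_dead_ends(branch, p))
    return dead

print(find_dead_ends(ivr_menu))   # -> ['21']
```

A real test platform would drive the live system with DTMF entries along each discovered path; this static walk only shows why exhaustive branch coverage is feasible to automate.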

What skill sets do you need as a Telecom Tester?
1) Experience testing telecom solutions (Nortel or other vendors)
2) Experience with testing VoIP line devices (SIP soft and hard clients, ATA)
3) Experience with the Nortel environment/tools
4) Familiarity with traffic tools (Nortel in-house Hurricane tool, Ameritec Crescendo/Fortissimo, Navtel)
5) Automation skills in telephony call processing services and/or OAM (Nortel has its own in-house tools)
6) Experience with Solaris and Linux
7) Experience with PBXs; Succession CS2K knowledge; large system test experience
8) Experience with IP tools (sniffers, voice quality testing, automated fax/modem testing)
9) IP Telephony (VoIP) knowledge (SIP/H.323, MEGACO/H.248, NCS, MGCF)
10) IMS standards knowledge (802.11, Cable V2)
11) IMS architecture and network topology; IP networking experience/understanding

Author:

Kiran

Risk-based testing

Risk-based testing (RBT) is a type of software testing that prioritizes the features and functions to be tested based on their priority/importance and the likelihood or impact of their failure. In theory, since there is an infinite number of possible tests, any set of tests must be a subset of all possible tests. Test techniques such as boundary value analysis and state transition testing aim to find the areas most likely to be defective. So by using test techniques, a software test engineer is already selecting tests based on risk.

Types of Risks
This section lists some common risks.
Business or Operational
* High use of a subsystem, function or feature
* Criticality of a subsystem, function or feature, including unacceptability of failure
Technical
* Geographic distribution of development team
* Complexity of a subsystem or function
External
* Sponsor or executive preference
* Regulatory requirements
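Prioritizing by likelihood and impact, as described above, is often reduced to a simple score per feature. This is a hedged sketch; the feature names and the 1-to-5 ratings are invented for illustration:

```python
# Hypothetical risk-based prioritization: risk = likelihood x impact,
# both rated 1-5, then test the riskiest features first.
features = [
    ("login",         5, 5),   # (name, likelihood, impact)
    ("report_export", 2, 3),
    ("payment",       3, 5),
    ("help_screen",   1, 1),
]

prioritized = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, likelihood, impact in prioritized:
    print(f"{name}: risk={likelihood * impact}")
```

The ordering, not the absolute numbers, is what matters: when the schedule is cut short, the untested items at the bottom of the list are the least risky ones.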

Risk and Requirements Testing
There are four important principles about testing a product against requirements.
1. Without stated requirements, no testing is possible.
2. A software product must satisfy its stated requirements.
3. All test cases should be traceable to one or more stated requirements, and vice versa.
4. Requirements must be stated in testable terms.

When we think in terms of risk, however, a richer set of ideas emerges.

Testing in the absence of stated requirements:
If it is very important to satisfy a requirement, and it is the job of the tester to evaluate the product against that requirement, then clearly the tester must be informed of that requirement. So there are situations where this statement is basically true.

The deeper truth is that stated requirements are not the only requirements. Because of incompleteness and ambiguity, testing should not be considered merely as an evaluative process. It is also a process of exploring the meaning and implications of requirements. Thus, testing is not only possible without stated requirements, it’s especially useful when they’re not stated. Tremendous value comes from testers and developers collaborating. Skilled testers evaluate the product against their understanding of unstated requirements and use their observations to challenge or question the project team’s shared understanding of quality.

A good tester stays alert for unintentional gaps in the stated requirements, and works to resolve them to the degree justified by the risks of the situation.

Testing and satisfying stated requirements:

The idea that a software product must satisfy its stated requirements is true if we define product quality as the extent to which we can reasonably claim that each stated requirement is a true statement about the product. But that depends on having a very clear and complete set of requirements. Otherwise, you’re locked in to a pretty thin idea of quality.

The deeper truth is that while quality is defined by requirements, it is not defined as the mere sum of “satisfied” stated requirements. There are many ways to satisfy or violate requirements. Requirements are not all equal in their importance, and often they are even in conflict with each other. It unnecessarily limits us to think about requirements as disconnected ideas, subject to a Boolean evaluation of true or false.

A broader way to think about satisfying requirements is to turn our thinking around and consider the risk associated with violating them. Good testers strive to answer the question, “What important problems are there in this product?”

Traceability of test cases to requirements
To the extent that requirements matter, there should be an association between testing and requirements. For each requirement ID, list the test case IDs that relate to it; for each test case ID, list the requirement IDs that relate to it. The completeness of testing is then presumably evaluated by noting that at least one test is associated with each requirement. This is a pretty idea, yet we have seen projects where this checkbox traceability was achieved by defining a set of test cases consisting of the text of each requirement preceded by the word “verify.”
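The checkbox form of traceability is mechanical enough to sketch in a few lines. The requirement and test IDs below are invented; note that this check can only reveal *missing* associations, not whether an association is meaningful, which is exactly its limitation:

```python
# Checkbox traceability sketch. Requirement and test IDs are invented.
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],          # no test is associated with this requirement
}
all_tests = {"TC-1", "TC-2", "TC-3", "TC-4"}  # TC-4 traces to nothing

traced_tests = {t for tests in req_to_tests.values() for t in tests}
uncovered_requirements = [r for r, tests in req_to_tests.items() if not tests]
untraced_tests = sorted(all_tests - traced_tests)

print("Requirements with no test:", uncovered_requirements)
print("Tests with no requirement:", untraced_tests)
```

Both gaps are worth investigating, but as the text argues, an empty gap list says nothing about how well each test actually exercises its requirement.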

If the intent of the traceability principle is to demonstrate that the test strategy has validated the product against requirements, then we have to go deeper than checkbox tracing. We should be ready for our clients to ask the question, “How do you know?” We should be able to explain the relationship between our tests and the requirements. The fact that a requirement is merely associated with a test is not interesting in and of itself. The important thing is how it is associated, and that importance grows in pace with product risk.

Requirement specification in testable terms
It’s important that requirements be meaningful. However, “testable” in this context is usually defined as something like “conducive to a totally reliable, noncontroversial, and observer-independent measurement that results in a true-or-false determination of compliance.” Sometimes this point is emphasized with a comment that unless we are able to measure success, we will never know that we’ve achieved it.

To penetrate to the deeper truth, first recognize that testers, far from being drones, are blessed with normal human capabilities of discernment and inductive reasoning. A typical tester is capable of exploring the meaning and potential implications of requirements without necessarily being fed this information from an eyedropper like some endangered baby condor. In fact, attempts to save testers the trouble of interpreting requirements by simplifying requirement statements to a testable scale may make matters worse. Here’s a real-life example: “The screen control should respond to user input within 300 milliseconds.” A test designer once fretted and pondered over this requirement. She thought she would need to purchase a special tool to measure the performance of the product down to the millisecond level. She worried about how transient processes in Windows could introduce spurious variation into her measurements. Then she realized something: with a little preparation, an unaided human can measure time on that scale to a resolution of plus or minus 50 milliseconds. Maybe that would be accurate enough. It further occurred to her that perhaps this requirement was specified in milliseconds not to make it more meaningful, but to make it more objectively measurable. When she asked the designer, it turned out that the real requirement was that the response time “not be as annoyingly slow as it is in the current version of this product.” Thus we see that the pragmatics of testing are not necessarily served by unambiguous specification, though testing is always served by meaningful communication.
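For comparison, here is what a literal, automated check of the 300 ms requirement might look like. The sketch substitutes a simulated call for the real screen control; even when such a check passes, it says nothing about whether the response *feels* annoyingly slow, which was the real requirement in the story above.

```python
# Timing sketch for a "responds within 300 ms" requirement.
# simulated_control_response is a stand-in for the real UI call.
import time

THRESHOLD_MS = 300

def simulated_control_response():
    time.sleep(0.05)  # pretend the control takes roughly 50 ms

start = time.perf_counter()
simulated_control_response()
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{elapsed_ms:.0f} ms:", "PASS" if elapsed_ms <= THRESHOLD_MS else "FAIL")
```

A single measurement like this is also statistically naive; a real performance test would repeat the call and examine the distribution of response times.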

Requirements, Testing and Challenging Software
Let’s reformulate the principles above into the following less quotable but more robust guidelines:
1. Our ability to recognize problems in a product is limited and biased by our understanding of what problems there could be. A requirements document is one potential source of information about problems. There are others.
2. We incur risk to the extent that we deliver a product that has important problems in it. The true mission of testing is to bring that risk to light, not merely to demonstrate conformance to stated requirements.
3. Especially in high-risk situations, the test process will be more persuasive if we can articulate and justify how test strategy relates to the definition of quality. This goes beyond having at least one test for each stated
requirement.
4. The test process will be more effective if requirements are specified in terms that communicate the essence of what is desired, along with an idea of the risks, benefits, and relative importance of each requirement. Objective measurability may be necessary in some cases, but is never enough to foster robust testing.

As risks and complexities increase, participation by testing in the requirements dialogue becomes more important if the test process is going to achieve its mission. More testing skill is needed, as is a better rapport with the development and user communities. In the dialogue about what we want, testers should seek multichannel
communication: multiple written sources, diagrams, demos, chalk talks, and use cases. In the dialogue about what can be built, testers should be familiar with the technologies being used, and work with development to build testability enhancing facilities into the product.

Throughout the process, the tester should raise an alarm if the risks and complexities of a project exceed his or her capability to test.

Author:

Lisa
