About Telecom Testing

The telecom domain is one of the hottest domains around, but most domains share a similar testing culture, so the basics carry over. Testing in telecom mostly revolves around connections (IP-based connections such as FR, ATM, DSL, PL, and IPL), data transfers and their respective speeds, hardware devices, and so on, though you would not need to test all of this. You may be assigned one subject, such as FR or ATM, and work on it as per requirements. To be more specific, depending on the protocols or applications your company works with, you would probably be asked to master a particular feature or protocol.

In general, Telecom testing is an automated, controlled method of verifying operation of your products before they go to market. Any product that connects to the PSTN (public switched telephone network) or a telecom switch (PBX) can be tested with a telephone line simulator, bulk call generator, or similar telecom test platform. Telecom testing is ideal for all telephony applications and equipment, including:
a) IVR systems
b) Switching systems
c) CTI applications
d) VoIP gateways
e) IADs

Why use a telecom testing solution?
A telecom test platform minimizes costs and simplifies engineering, QA, and production testing, as well as integration and pre-installation testing. A test solution can simulate telephony protocols and functions for:
a) Feature and performance testing
b) Load and stress testing
c) Bulk call generation
d) Quality of service testing
e) Equipment demos and product training

An automated telecom test solution provides comprehensive, consistent testing that can be customized for your specific application. In addition, thorough testing will provide peace-of-mind for you and guaranteed reliability for your customers.


Various types of telecom testing:
1) Conformance testing means ensuring that a product obeys the protocol (e.g. ITU-T or PNO-ISC specifications) at the physical interface. Once this phase is passed, the product can go forward to interconnect testing.

2) Interconnection testing: Interconnect testing typically involves testing the connection of two separate entities, usually two networks or network elements. Interconnects in the fixed/mobile network environment will have regulatory requirements or standards if BT is involved. Basic interconnect testing is concerned with the robustness and integrity of the interface.

3) Conformance testing in detail: The following checks are performed here:
a) Electrical interface compatibility, e.g. G.703.
b) Conformance to the protocol, e.g. the relevant ITU-T specification.
c) Conformance of the transport layers (MTP2/MTP3). It is important to ensure agreement on the relevant data standards for the two networks/elements and on any operating procedures that may differ between them (e.g. disaster recovery).
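Whatever the protocol, the core of a conformance check is comparing decoded message fields against the values the specification allows. The sketch below is purely illustrative: the field names and value ranges are invented, not taken from any ITU-T document.

```python
# Toy conformance check: every field of a decoded message must hold an
# allowed value. Field names and ranges are invented for illustration.

SPEC = {
    "message_type": {"SETUP", "CONNECT", "RELEASE"},
    "channel": range(1, 31),   # e.g. E1 timeslots 1-30 (assumed)
}

def conforms(msg):
    """Return the list of fields that violate the (made-up) spec."""
    return [field for field, allowed in SPEC.items()
            if msg.get(field) not in allowed]

print(conforms({"message_type": "SETUP", "channel": 5}))   # []
print(conforms({"message_type": "PING", "channel": 99}))   # ['message_type', 'channel']
```

A real conformance suite would drive the device under test through standardized test scripts and decode live signalling, but the pass/fail logic reduces to checks of this shape.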

4) IVR testing
Test your IVR system to verify proper operation and voice and DTMF response, and to eliminate dead-end menu branches.

An IVR (interactive voice response) system can be a complicated maze of menus, branches, and choices. Complex systems of this type require in-depth testing to ensure that customers are not confused or lost.

IVR manufacturers, systems integrators, and companies that own an IVR, all need to test the functionality of their system before it goes live to the outside world. An automated test platform enables you to verify IVR operations via:
a) DTMF entries
b) Detection of voice energy
c) Broadband audio tones
d) Extensive conditional branching sequences
e) Interactive test scenarios

Comprehensive testing ensures that your IVR system is ready for customer use. Testing provides peace-of-mind and reliable operation of your voice system. In addition, testing all IVR menu branches manually is time consuming, error prone, and inefficient.
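A dead-end menu branch can be found mechanically if the IVR is modelled as a graph of menus and DTMF choices. This is a minimal sketch with an invented menu layout; a real test harness would drive the live system with DTMF tones rather than walk a dictionary.

```python
# Model an IVR as {menu: {dtmf_key: next_menu}}, with "ACTION" marking
# a terminal outcome (payment taken, agent reached, etc.).
# The menu layout below is made up for illustration.

def find_dead_ends(menu):
    """Return menu nodes from which no terminal action is reachable."""
    def reaches_action(node, seen):
        if menu.get(node) == "ACTION":
            return True
        if node in seen:
            return False            # loop with no way out
        branches = menu.get(node, {})
        return any(reaches_action(nxt, seen | {node})
                   for nxt in branches.values())
    return sorted(n for n in menu if not reaches_action(n, set()))

menu = {
    "main":     {"1": "billing", "2": "support"},
    "billing":  {"1": "pay_bill", "9": "main"},
    "support":  {"9": "support"},   # loops on itself: a dead end
    "pay_bill": "ACTION",
}
print(find_dead_ends(menu))  # ['support']
```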

What skill sets do you need as a Telecom tester?
1) Experience testing telecom solutions (Nortel or other vendors)
2) Experience testing VoIP line devices (SIP soft and hard clients, ATAs)
3) Experience with the Nortel environment/tools
4) Familiarity with traffic tools (Nortel's in-house Hurricane tool, Ameritec Crescendo/Fortissimo, Navtel)
5) Automation skills, particularly in telephony call-processing services and/or OAM
6) Experience with Solaris and Linux
7) Experience with PBXs; Succession CS2K knowledge; large-system test experience
8) Experience with IP tools (sniffers, voice quality testing, automated fax/modem testing)
9) IP telephony (VoIP) knowledge (SIP/H.323, MEGACO/H.248, NCS, MGCF)
10) IMS standards knowledge (802.11, Cable V2)
11) IMS architecture and network topology; IP networking experience/understanding



Risk-based testing

Risk-based testing (RBT) is a type of software testing that prioritizes the features and functions to be tested based on their priority/importance and the likelihood or impact of failure. In theory, since there is an infinite number of possible tests, any set of tests must be a subset of all possible tests. Test techniques such as boundary value analysis and state transition testing aim to find the areas most likely to be defective, so by using test techniques, a software test engineer is already selecting tests based on risk.
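Prioritizing by risk is often reduced to a simple score: likelihood of failure multiplied by impact of failure. A minimal sketch, with made-up features and ratings on a 1-5 scale:

```python
# Rank test areas by risk exposure = likelihood x impact (both 1-5).
# The feature names and scores below are invented for illustration.

def prioritize(areas):
    """Sort (name, likelihood, impact) tuples by descending risk score."""
    return sorted(areas, key=lambda a: a[1] * a[2], reverse=True)

areas = [
    ("login",         4, 5),   # heavily used, failure unacceptable
    ("report_export", 2, 2),
    ("billing",       3, 5),
    ("help_pages",    1, 1),
]
for name, likelihood, impact in prioritize(areas):
    print(name, likelihood * impact)
# login 20 / billing 15 / report_export 4 / help_pages 1
```

The ranking then drives how much test effort each area gets, rather than a yes/no decision about whether to test it at all.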

Types of Risks
This section lists some common risks.
Business or Operational
* High use of a subsystem, function or feature
* Criticality of a subsystem, function or feature, including unacceptability of failure
* Geographic distribution of development team
* Complexity of a subsystem or function
* Sponsor or executive preference
* Regulatory requirements

Risk and Requirements Testing
There are four important principles about testing a product against requirements.
1. Without stated requirements, no testing is possible.
2. A software product must satisfy its stated requirements.
3. All test cases should be traceable to one or more stated requirements, and vice versa.
4. Requirements must be stated in testable terms.

When we think in terms of risk, however, a richer set of ideas emerges.

Testing in the absence of stated requirements:
If it is very important to satisfy a requirement, and it is the job of the tester to evaluate the product against that requirement, then clearly the tester must be informed of that requirement. So there are situations where the first principle is basically true.

The deeper truth is that stated requirements are not the only requirements. Because of incompleteness and ambiguity, testing should not be considered merely as an evaluative process. It is also a process of exploring the meaning and implications of requirements. Thus, testing is not only possible without stated requirements, it’s especially useful when they’re not stated. Tremendous value comes from testers and developers collaborating. Skilled testers evaluate the product against their understanding of unstated requirements and use their observations to challenge or question the project team’s shared understanding of quality.

A good tester stays alert for unintentional gaps in the stated requirements, and works to resolve them to the degree justified by the risks of the situation.

Testing and satisfying stated requirements:

The idea that a software product must satisfy its stated requirements is true if we define product quality as the extent to which we can reasonably claim that each stated requirement is a true statement about the product. But that depends on having a very clear and complete set of requirements. Otherwise, you’re locked in to a pretty thin idea of quality.

The deeper truth is that while quality is defined by requirements, it is not defined as the mere sum of “satisfied” stated requirements. There are many ways to satisfy or violate requirements. Requirements are not all equal in their importance, and often they are even in conflict with each other. It unnecessarily limits us to think about requirements as disconnected ideas, subject to a Boolean evaluation of true or false.

A broader way to think about satisfying requirements is to turn our thinking around and consider the risk associated with violating them. Good testers strive to answer the question, “What important problems are there in this product?”

Traceability of test cases to requirements
To the extent that requirements matter, there should be an association between testing and requirements. For each requirement ID, list the related test case IDs; for each test case ID, list the related requirement IDs. The completeness of testing is then presumably evaluated by noting that at least one test is associated with each requirement. This is a pretty idea, yet there are projects where this checkbox traceability was achieved by defining a set of test cases consisting of the text of each requirement preceded by the word "verify."
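Checkbox traceability in both directions is easy to automate, which is exactly why it proves so little by itself. A sketch with invented requirement and test IDs:

```python
# Two-way traceability check: which requirements have no test, and
# which tests trace to no requirement. All IDs here are made up.

def trace_gaps(req_to_tests, all_reqs, all_tests):
    """Return (untested requirement IDs, untraced test IDs)."""
    untested = sorted(r for r in all_reqs if not req_to_tests.get(r))
    covered = {t for tests in req_to_tests.values() for t in tests}
    untraced = sorted(t for t in all_tests if t not in covered)
    return untested, untraced

reqs = ["REQ-1", "REQ-2", "REQ-3"]
tests = ["TC-10", "TC-11", "TC-12"]
mapping = {"REQ-1": ["TC-10", "TC-11"], "REQ-2": ["TC-12"]}
print(trace_gaps(mapping, reqs, tests))  # (['REQ-3'], [])
```

Passing this check says nothing about how well each test exercises its requirement; it only rules out the grossest gaps.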

If the intent of the traceability principle is to demonstrate that the test strategy has validated the product against requirements, then we have to go deeper than checkbox tracing. We should be ready for our clients to ask the question, "How do you know?" We should be able to explain the relationship between our tests and the requirements. The fact that a requirement is merely associated with a test is not interesting in and of itself. The important thing is how it is associated, and that importance grows in pace with product risk.

Requirement specification in testable terms
It’s important that requirements be meaningful. However, “testable” in this context is usually defined as something like “conducive to a totally reliable, noncontroversial, and observer-independent measurement that results in a true-or-false determination of compliance.” Sometimes this point is emphasized with a comment that unless we are able to measure success, we will never know that we’ve achieved it.

To penetrate to the deeper truth, first recognize that testers, far from being drones, are blessed with normal human capabilities of discernment and inductive reasoning. A typical tester is capable of exploring the meaning and potential implications of requirements without necessarily being fed this information from an eyedropper like some endangered baby condor. In fact, attempts to save testers the trouble of interpreting requirements by simplifying requirement statements to a testable scale may make matters worse.

Here's a real-life example: "The screen control should respond to user input within 300 milliseconds." A test designer fretted and pondered over this requirement. She thought she would need to purchase a special tool to measure the performance of the product down to the millisecond level. She worried about how transient processes in Windows could introduce spurious variation into her measurements. Then she realized something: with a little preparation, an unaided human can measure time on that scale to a resolution of plus or minus 50 milliseconds. Maybe that would be accurate enough. It further occurred to her that perhaps this requirement was specified in milliseconds not to make it more meaningful, but to make it more objectively measurable. When she asked the designer, it turned out that the real requirement was that the response time "not be as annoyingly slow as it is in the current version of this product." Thus we see that the pragmatics of testing are not necessarily served by unambiguous specification, though testing is always served by meaningful communication.

Requirements, Testing and Challenging Software
Let’s reformulate the principles above into the following, less quotable but more
robust, guidelines:
1. Our ability to recognize problems in a product is limited and biased by our understanding of what problems there could be. A requirements document is one potential source of information about problems. There are others.
2. We incur risk to the extent that we deliver a product that has important problems in it. The true mission of testing is to bring that risk to light, not merely to demonstrate conformance to stated requirements.
3. Especially in high-risk situations, the test process will be more persuasive if we can articulate and justify how the test strategy relates to the definition of quality. This goes beyond having at least one test for each stated requirement.
4. The test process will be more effective if requirements are specified in terms that communicate the essence of what is desired, along with an idea of the risks, benefits, and relative importance of each requirement. Objective measurability may be necessary in some cases, but is never enough to foster robust testing.

As risks and complexities increase, participation by testing in the requirements dialogue becomes more important if the test process is going to achieve its mission. More testing skill is needed, as is a better rapport with the development and user communities. In the dialogue about what we want, testers should seek multichannel
communication: multiple written sources, diagrams, demos, chalk talks, and use cases. In the dialogue about what can be built, testers should be familiar with the technologies being used, and work with development to build testability enhancing facilities into the product.

Throughout the process, the tester should raise an alarm if the risks and complexities of a project exceed his or her capability to test.



Loadrunner Interview Questions

1. What is load testing?
Load testing is testing whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and determining whether it can handle peak usage periods.

2. What is Performance testing? –
Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.

3. Did you use LoadRunner? What version? –
Yes, version 7.2.

4. Explain the Load testing process? –
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In a manual scenario, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve; LoadRunner automatically builds the scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.

5. When do you do load and performance Testing? –
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold many hundreds and thousands of users, etc. This is when we do load and performance testing.

6. What are the components of LoadRunner? –
The components of LoadRunner are the Virtual User Generator, the Controller and the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.

7. What Component of LoadRunner would you use to record a Script? –
The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

8. What Component of LoadRunner would you use to play Back the script in multi user mode? –
The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.

9. What is a rendezvous point? –
You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
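Outside LoadRunner, the same idea can be emulated with a synchronization barrier: each simulated user blocks until all have arrived, then all fire their transaction at nearly the same instant. A sketch in Python (in a real Vuser script this is a single `lr_rendezvous` call):

```python
# Emulate a rendezvous point: N simulated users block on a barrier,
# then perform their "transaction" together. Illustrative only.
import threading

N_VUSERS = 5
rendezvous = threading.Barrier(N_VUSERS)
results = []
lock = threading.Lock()

def vuser(user_id):
    # ... per-user setup (login, navigation) would happen here ...
    rendezvous.wait()            # wait until all Vusers arrive
    with lock:
        results.append(user_id)  # stands in for the timed transaction

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(N_VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 5
```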

10. What is a scenario?
A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

11. Explain the recording mode for a web Vuser script? – We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web-based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: monitor the communication between the application and the server; generate the required function calls; and insert the generated function calls into a Vuser script.

12. Why do you create parameters? – Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. Parameters help better simulate the usage model for more accurate testing from the Controller: one script can emulate many different users on the system.
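The effect of parameterization can be pictured as cycling through a data table so that each iteration sends different input to the server. A toy sketch with invented user names, mimicking sequential row selection:

```python
# Each iteration pulls the next row from a data table, so the server
# sees varying input instead of the same recorded value every time.
# User names and the request format are invented for illustration.
import itertools

usernames = ["alice", "bob", "carol"]   # stands in for a parameter file
next_user = itertools.cycle(usernames).__next__

# Five "iterations" of the script, each with a different parameter value:
requests = [f"login?user={next_user()}" for _ in range(5)]
print(requests)
# ['login?user=alice', 'login?user=bob', 'login?user=carol',
#  'login?user=alice', 'login?user=bob']
```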

13. What is correlation? Explain the difference between automatic correlation and manual correlation? – Correlation is used to obtain data that are unique for each run of the script and that are generated by nested queries. Correlation provides the value, avoiding errors arising out of duplicate values, and also optimizes the code (avoiding nested queries). Automatic correlation is where we set rules for correlation; it can be application-server specific. Here values are replaced by data created by these rules. In manual correlation, we scan for the value we want to correlate and use Create Correlation to correlate it.

14. How do you find out where correlation is required? Give few examples from your projects? – Two ways: First we can scan for correlations, and see the list of values which can be correlated. From this we can pick a value to be correlated. Secondly, we can record two scripts and compare them. We can look up the difference file to see for the values which needed to be correlated. In my project, there was a unique id developed for each customer, it was nothing but Insurance Number, it was generated automatically and it was sequential and this value was unique. I had to correlate this value, in order to avoid errors while running my script. I did using scan for correlation.

15. Where do you set automatic correlation options? – Automatic correlation from web point of view can be set in recording options and correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for database can be done using show output window and scan for correlation and picking the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value to be created.

16. What is a function to capture dynamic values in the web Vuser script? – Web_reg_save_param function saves dynamic data information to a parameter.
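web_reg_save_param captures the text between a left and a right boundary in the server response. A rough Python analogue of that boundary-based capture, with an invented response body and parameter name:

```python
# Correlation in essence: capture a server-generated value from one
# response and replay it in the next request. The HTML snippet and the
# session_id parameter below are invented for illustration.
import re

response_1 = '<input type="hidden" name="session_id" value="A93F02">'

def save_param(body, left_boundary, right_boundary):
    """Rough analogue of web_reg_save_param's LB/RB capture."""
    pattern = re.escape(left_boundary) + r"(.*?)" + re.escape(right_boundary)
    m = re.search(pattern, body)
    return m.group(1) if m else None

session = save_param(response_1, 'value="', '"')
request_2 = f"POST /transfer?session_id={session}"
print(request_2)  # POST /transfer?session_id=A93F02
```

Without the captured value, the replayed script would send the session ID that was hard-coded at recording time and the server would reject it; that failure pattern is how you usually discover a value needs correlating.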

17. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs? – Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log option: when you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging; disable this option for large load-testing scenarios. Extended Log option: select Extended log to create an extended log, including warnings and other messages; disable this option for large load-testing scenarios as well. We can specify which additional information should be added to the extended log using the Extended Log options.

18. How do you debug a LoadRunner script? – VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

19. How do you write user-defined functions in LR? Give me a few functions you wrote in your previous project? – Before we create a user-defined function, we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we can invoke the user-defined function. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). Examples of user-defined functions used in my earlier project are GetVersion, GetCurrentTime, and GetPlatform.

20. What are the changes you can make in run-time settings? – The run-time settings we can make are: a) Pacing – this has the iteration count. b) Log – under this we have Disable Logging, Standard Log, and Extended Log. c) Think Time – here we have two options: Ignore think time and Replay think time. d) General – under the General tab we can set whether Vusers run as a process or as multithreaded, and whether to define each step as a transaction.

21. How do you perform functional testing under load? – Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.

22. What is ramp-up? How do you set this? – This option is used to gradually increase the number of Vusers or the load on the server. An initial value is set, and a wait time between intervals can be specified. To set ramp-up, go to 'Scenario Scheduling Options'.
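A ramp-up schedule is fully determined by the initial Vuser count, the batch size added per step, and the interval between steps. A back-of-envelope sketch with arbitrary numbers:

```python
# Compute a ramp-up schedule: start with `initial` Vusers and add
# `batch` more every `interval_s` seconds until `target` is reached.
# The numbers in the example call are arbitrary.

def ramp_up_schedule(target, initial, batch, interval_s):
    """Return (time_offset_s, active_vusers) steps up to the target."""
    steps, t, active = [], 0, initial
    while True:
        steps.append((t, min(active, target)))
        if active >= target:
            return steps
        t += interval_s
        active += batch

print(ramp_up_schedule(target=10, initial=2, batch=4, interval_s=30))
# [(0, 2), (30, 6), (60, 10)]
```

Ramping gradually, rather than starting all Vusers at once, lets you see at which load level the server's response time starts to degrade.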

23. What is the advantage of running the Vuser as thread? – VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

24. If you want to stop the execution of your script on error, how do you do that? – The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status “Stopped”. For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings.

25. What is the relation between Response Time and Throughput? – The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.

26. Explain the Configuration of your systems? – The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.

27. How do you identify the performance bottlenecks? – Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.

28. If web server, database and Network are all fine where could be the problem? – The problem could be in the system itself or in the application server or in the code written for the application.

29. How did you find web server related issues? – Using Web resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

30. How did you find database related issues? – By running the Database monitor with the help of the Data Resource Graph, we can find database-related issues. For example, you can specify the resource you want to measure before running the Controller, and then you can see database-related issues.

31. How did you plan the Load? What are the Criteria? – Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.

32. What does vuser_init action contain? – Vuser_init action contains procedures to login to a server.

33. What does vuser_end action contain? – Vuser_end section contains log off procedures.

34. What is think time? How do you change the threshold? – Think time is the time that a real user waits between actions. Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the Threshold: Threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of the Vugen.
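The threshold rule is simple to state in code: recorded delays below the threshold are simply dropped from the script, while longer pauses are kept as think time. A sketch with invented timings:

```python
# Apply the think-time threshold: recorded delays shorter than the
# threshold (default 5 s) are ignored and not emitted into the script.
# The recorded delay values below are invented for illustration.

def filter_think_times(delays_s, threshold_s=5):
    """Keep only recorded delays at or above the threshold."""
    return [d for d in delays_s if d >= threshold_s]

recorded = [0.4, 12.0, 2.1, 7.5]
print(filter_think_times(recorded))  # [12.0, 7.5]
```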

35. What is the difference between the standard log and the extended log? – The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script-execution messages to the output log. It is mainly used during debugging, when we want information about parameter substitution, data returned by the server, and advanced trace.

36. Explain the following functions: –
lr_debug_message – sends a debug message to the output log when the specified message class is set.
lr_output_message – sends notifications to the Controller Output window and the Vuser log file.
lr_error_message – sends an error message to the LoadRunner Output window.
lrd_stmt – associates a character string (usually a SQL statement) with a cursor; it sets a SQL statement to be processed.
lrd_fetch – fetches the next row from the result set.

37. Throughput – If the throughput scales upward as time progresses and the number of Vusers increase, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

38. Types of goals in a goal-oriented scenario – LoadRunner provides you with five different types of goals in a goal-oriented scenario:
1. The number of concurrent Vusers
2. The number of hits per second
3. The number of transactions per second
4. The number of pages per minute
5. The transaction response time that you want your scenario to achieve

39. Analysis scenario (bottlenecks): In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check-itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
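The breaking point described above can be estimated from the measurements themselves: scan the (Vusers, response time) series for the first sharp jump. A toy sketch with fabricated data points:

```python
# Find the Vuser count at which average response time first jumps by
# more than a chosen factor over the previous measurement.
# The sample data points are fabricated for illustration.

def breaking_point(samples, jump_factor=2.0):
    """samples: list of (vusers, avg_response_s), ordered by vusers."""
    for (v0, r0), (v1, r1) in zip(samples, samples[1:]):
        if r1 > r0 * jump_factor:
            return v1
    return None  # no sharp degradation observed

samples = [(10, 1.1), (25, 1.3), (40, 1.6), (56, 1.8), (60, 7.9)]
print(breaking_point(samples))  # 60
```

In practice the Analysis graphs make this jump visible by eye; automating the detection is only useful when comparing many runs.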

40. What is correlation? Explain the difference between automatic correlation and manual correlation? – Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizing the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation. It can be application server specific. Here values are replaced by data which are created by these rules. In manual correlation, the value we want to correlate is scanned and create correlation is used to correlate.

41. Where do you set automatic correlation options? – Automatic correlation, from the web point of view, can be set in the Recording Options under the Correlation tab. Here we can enable correlation for the entire script and choose either issuing online messages or performing offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done by using the Show Output window, scanning for correlations, picking the correlated query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we simply use Create Correlation for the value and specify how it should be created.

42. What is the function to capture dynamic values in a web Vuser script? – The web_reg_save_param function saves dynamic data to a parameter.



Learn QTP

1. What are the features and benefits of Quick Test Pro(QTP)?
1. Keyword-driven testing
2. Suitable for both client-server and web-based applications
3. VBScript as the scripting language
4. Better error-handling mechanism
5. Excellent data-driven testing features

2. How to handle the exceptions using recovery scenario manager in QTP?
You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered Events
2. Recovery steps
3. Post Recovery Test-Run

3. What is the use of Text output value in QTP?
Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus, by creating output values, we can capture the values that the application takes for each run and output them to the data table.

4. How to use the Object spy in QTP 8.0 version?
There are two ways to Spy the objects in QTP
1) Through the File toolbar: in the File toolbar, click the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: in the Object Repository dialog, click the “Object Spy…” button. In the Object Spy dialog, click the button showing a hand symbol. The pointer then changes into a hand symbol, and you point at the object whose state you want to spy. If the object is not visible, or its window is minimized, hold the Ctrl key, activate the required window, and then release the Ctrl key.

5. What is the file extension of the code file and object repository file in QTP?
File extensions:
Per-test object repository: filename.mtr
Shared object repository: filename.tsr
Code file: script.mts

6. Explain the concept of object repository and how QTP recognizes objects?
Object Repository: displays a tree of all objects in the current component, in the current action, or in the entire test (depending on the object repository mode you selected).
We can view or modify the test object description of any test object in the repository, or add new objects to the repository.
QuickTest learns the default property values and determines which test object class the object fits. If that is not enough to identify the object uniquely, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object’s location on the page or in the source code.
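That "add properties until the description is unique" process can be sketched roughly in Python. This is an illustrative model, not QTP's actual implementation; the object list, property names, and helper function are all invented for the example.

```python
# Hypothetical objects on a page, each described by a few properties.
objects = [
    {"class": "WebEdit", "name": "user", "html_id": "u1", "index": 0},
    {"class": "WebEdit", "name": "pass", "html_id": "p1", "index": 1},
    {"class": "WebButton", "name": "login", "html_id": "b1", "index": 2},
]

def build_description(target, mandatory, assistive):
    # Start from the mandatory (default) properties of the object class.
    desc = {p: target[p] for p in mandatory}
    matches = [o for o in objects if all(o.get(p) == v for p, v in desc.items())]
    # Add assistive properties one by one until the description is unique.
    for prop in assistive:
        if len(matches) == 1:
            return desc
        desc[prop] = target[prop]
        matches = [o for o in matches if o.get(prop) == desc[prop]]
    if len(matches) > 1:
        # Last resort: an ordinal identifier such as location on the page.
        desc["index"] = target["index"]
    return desc

desc = build_description(objects[0], ["class"], ["name", "html_id"])
print(desc)  # {'class': 'WebEdit', 'name': 'user'}
```

Note how "class" alone matched two objects, so one assistive property ("name") had to be added before the description became unique.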

7. What are the properties you would use for identifying a browser and page when using descriptive programming?
“name” would be another property apart from “title” that we can use. OR
We can also use the property “micClass”.
ex: Browser("micClass:=browser").Page("micClass:=page")

8. What are the different scripting languages you could use when working with QTP?
You can write scripts using following languages:
Visual Basic (VB), XML, JavaScript, Java, HTML

9. Tell some commonly used Excel VBA functions.
Common functions are:
Coloring a cell, auto-fitting a cell, setting navigation from a link in one cell to another, and saving.

10. Explain the keyword createobject with an example.
Creates and returns a reference to an Automation object.
Syntax: CreateObject(servername.typename [, location])
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.

11. Explain in brief about the QTP Automation Object Model.
Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-on-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements to design your program.

12. How to handle dynamic objects in QTP?
QTP has a unique feature called Smart Object Identification/recognition. QTP generally identifies an object by matching its test object and run-time object properties. QTP may fail to recognize dynamic objects whose properties change during run time. Hence it has an option of enabling Smart Identification, wherein it can identify objects even if their properties change during run time.
Check out this:
If QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, then QuickTest ignores the recorded description, and uses the Smart Identification mechanism to try to identify the object.
While the Smart Identification mechanism is more complex, it is more flexible, and thus, if configured logically, a Smart Identification definition can probably help QuickTest identify an object, if it is present, even when the recorded description fails.
The Smart Identification mechanism uses two types of properties:
Base filter properties – the most fundamental properties of a particular test object class; those whose values cannot be changed without changing the essence of the original object. For example, if a Web link’s tag was changed from <A> to any other value, you could no longer call it the same object.
Optional filter properties – other properties that can help identify objects of a particular class, as they are unlikely to change on a regular basis, but which can be ignored if they are no longer applicable.

13. What is a Run-Time Data Table? Where can I find and view this table?
In QTP there is a data table that is used at run time.
– In QTP, select View -> Data Table.
– This is basically an Excel file stored in the folder of the test that was created; its name is Default.xls by default.

14. How does Parameterization and Data-Driving relate to each other in QTP?
To data-drive, we have to parameterize; i.e., we make a constant value into a parameter so that in each iteration (cycle) it takes a value supplied in the run-time data table. Only through parameterization can we drive a transaction (action) with different sets of data. Running the script with the same set of data several times is not recommended, and it is also of no use.

15. What is the difference between Call to Action and Copy Action.?
Call to Action: changes made in a called action are reflected in the original action (from where the script is called), whereas in Copy Action, changes made in the copied script do not affect the original action.

16. Explain the concept of how QTP identifies object.
During recording, QTP looks at the object and stores it as a test object. For each test object, QuickTest learns a set of default properties called mandatory properties and checks whether these properties are enough to uniquely identify the object; if not, it considers further properties. During the test run, QTP searches for the run-time object that matches the test object it learned while recording.

17. Differentiate the two Object Repository Types of QTP.
An object repository is used to store all the objects in the application being tested.
Types of object repository: per-action and shared.
In a shared repository, there is one centralized repository for all tests, whereas with per-action repositories, a separate repository is created for each action.

18. What the differences are and best practical application of Object Repository?
Per-Action: for each action, one object repository is created.
Shared: one object repository is shared across all tests.

19. Explain what the difference between Shared Repository and Per Action Repository
Shared Repository: the entire suite of tests uses one object repository, similar to the Global GUI Map file in WinRunner.
Per-Action: for each action, one object repository is created, like the per-test GUI map file in WinRunner.

20. Have you ever written a compiled module? If yes tell me about some of the functions that you wrote.
Sample answer (describe modules you actually worked on; if your answer is yes, expect follow-up questions and be prepared to explain those modules): I used functions for capturing dynamic data during runtime, and functions for capturing the desktop, browser, and pages.

21. Can you do more than just capture and playback?
Sample answer (say yes only if you have actually done this): I have dynamically captured objects during runtime, with no recording, no playback, and no use of a repository at all.
– It was done through Windows scripting, using the DOM (Document Object Model).

22. How to do the scripting. Are there any inbuilt functions in QTP? What is the difference between them? How to handle script issues?
Yes, there is a built-in facility called “Step Generator” (Insert -> Step -> Step Generator, or F7), which generates the script as you enter the appropriate steps.

23. What is the difference between check point and output value?
An output value is a value captured during the test run and written to a specified location in the run-time data table, whereas a checkpoint compares a captured value against an expected value and reports pass or fail.
EX:-Location in Data Table[Global sheet / local sheet]

24. How many types of Actions are there in QTP?
There are three kinds of actions:
Non-reusable action – An action that can be called only in the test with which it is stored, and can be called only once.
Reusable action – An action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.
External action – A reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.

25. I want to open a Notepad window without recording a test and I do not want to use System utility Run command as well. How do I do this?
You can still open Notepad without recording or using the SystemUtil.Run command, by entering the path of Notepad (i.e., where notepad.exe is stored on the system) in the “Windows Applications” tab of the “Record and Run Settings” dialog.


Nirupama Raj

Some Basic Testing Concepts

Tests are Tools:

A test is simply a tool that is used to measure something. To narrow that definition a little – after all, we measure stuff all the time without having any interest in testing it – a test is usually formal, in the sense that it is created and applied intentionally and with a purpose. I may measure a television set because I have an idle curiosity about its size, but if I’m in the market for a new television, and I have a specific space to put the set, then I’m measuring that TV for a very definite reason…and I’m therefore testing that television for its ability to meet my space restrictions.

The “something” that a test is measuring can often be summarized with a question:

1. What are the subject’s characteristics or properties? This kind of measurement looks at the test subject itself.

2. Does the test subject pass or fail the test? This kind of measurement compares the subject, or the subject’s performance or behavior, against a concrete definition of what success means. The test evaluates the subject based on that definition; if the requirement is met, the subject passes the test.

3. How does the subject respond to the test? This kind of measurement evaluates some arena of performance or behavior, and is usually intended to improve understanding of the test subject.

4. How do multiple test subjects compare in characteristics, performance or behavior? This kind of measurement creates a matrix of compared elements, allowing comparisons across various axes and data points.
A test requires more than just asking one of these questions, however. Tests must be planned and thought out ahead of time; you have to decide such things as what exactly you are testing and testing for, the way the test is going to be run and applied, what steps are required, etc. A test is usually based on some kind of understanding of what a good result would be, or a specific definition of what “good” means. Using the example above, say I find a television that will fit, so I buy it based on the fact that it passed my measurement test. I get home and set it up and then realize – oops, it doesn’t come with a remote control – that I hadn’t specified all of my requirements, and as a result hadn’t tested for all of the things I needed.

A misunderstood or inadequately planned test can waste time and provide bad information, because the interpretation of the test results will be flawed and misleading. Oops again, I brought a metric tape measure, and I don’t know how to convert. Darn, was I supposed to measure with or without the antenna? Did my wife tell me to look for a projection TV, or a Web TV?

Before running any test, you should be able to answer the following rough questions:

1. What are you testing? Define the test subject, whether it is a thing, a process, a behavior, a threshold, etc. Define the scope of the test subject. For example, if you are testing a web site’s links, will you test every link, or only links to static pages, or only links to other pages as opposed to internal links, etc?

2. From what point-of-view are you testing? If your test is supposed to mimic the interaction of a specific agent or user, then you must have a strong understanding of that agent or user.

3. What are you testing for? Be as specific as possible. If you are going to test one aspect of the test subject, make that limitation clear.

4. How are you going to test? Define the test plan, the specific test(s), test methodologies, etc.

5. What are the limits to the test? Set expectations carefully, because if the test can only measure a part of the test subject or its behavior, the results must be interpreted with this limitation in mind.

Testing, Quality Control and Quality Assurance:

Testing is often confused with the processes of quality control and quality assurance. Testing is the process of creating, implementing and evaluating tests. If you are shopping for a new television, you can call that process “testing for the best TV for you”… it’s kind of pretentious, but that is what you’re doing as you compare prices and features to find what will work best for you. Testing usually has a limited scope and duration – you’re just looking at TVs, and only in your town, you’re not going to spend a year shopping, are you?

Testing is predicated on the use of a standard of quality: you have a specification of what’s allowable (no broken links? ALT tags have values? Maximum page weight is 10K?) and you compare the site to the standard, noting deviations and shortcomings. This seems simple, but your testing is only valuable if your standard of quality is comprehensive, well thought-out, and reasonable. If your standard has holes, then your testing process has blind spots.
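The point that testing is only as good as its standard of quality can be made concrete: if the standard is written down precisely, it can be checked mechanically. The Python sketch below encodes two of the example rules above (every image has ALT text, page weight under 10K) and reports deviations; the sample page and the 10K threshold are just illustrations.

```python
from html.parser import HTMLParser

# A tiny checker for one rule of the quality standard: every <img>
# must carry a non-empty ALT attribute.
class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0
    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.missing_alt += 1

page = '<html><body><img src="a.png" alt="logo"><img src="b.png"></body></html>'

checker = AltChecker()
checker.feed(page)

deviations = []
if checker.missing_alt:
    deviations.append(f"{checker.missing_alt} image(s) missing ALT text")
if len(page.encode()) > 10 * 1024:          # the 10K page-weight rule
    deviations.append("page weight exceeds 10K")
print(deviations)  # ['1 image(s) missing ALT text']
```

A rule the standard does not mention (say, broken links) is a rule this test will never flag, which is exactly the "blind spot" the paragraph above warns about.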

Quality control is a refinement of testing, involving the formal and systematic use of testing and a precise definition of what quality means for the purposes of the test. You aren’t just testing, you are testing and then doing something with the results. Quality control is used for testing a product or output of a process, with the test measuring the subject’s ability to meet a certain benchmark or threshold of quality. The tests usually take the form of “does this product meet requirement X?”, and are often pass-fail.

Effective quality control testing requires some basic goals and understanding:

1. You must understand what you are testing; if you’re testing a specific functionality, you must know how it’s supposed to work, how the protocols behave, etc.

2. You should have a definition of what success and failure are. In other words, is close enough good enough?

3. You should have a good idea of a methodology for the test, the more formal a plan the better; you should design test cases.

4. You must understand the limits inherent in the tests themselves.

5. You must have a consistent schedule for testing; performing a specific set of tests at appropriate points in the process is more important than running the tests at a specific time.
Any true attempt at quality control requires a great deal of planning before any tests are ever applied, and extensive documentation of quality standards, test plans, test scenarios, test cases, test results — anything that goes into the testing must be carefully tracked and written down. In fact, for companies that manufacture products, as well as for software companies, a series of formal accreditation programs exist to measure and certify the company’s adherence to some very strict standards, for example the ISO 9000 series of rules. No such certification systems exist for web sites, perhaps because sites are more experiences and resources than products to buy.

The distinctions between testing and quality control are important for an understanding of the roles and purposes of testing, but they are especially important to anyone involved in testing or creating a large web site. Based on my own experiences, I strongly recommend that testing for site quality be a priority for anyone who

1. works as part of a team that is building and/or maintaining a big web site, and whose responsibility is for testing, quality control, or quality assurance

2. delivers a site to a customer

3. receives site code from a contractor, agency, or technology partner
Testing – and by extension quality control — is reactive; that is, you test to find deviations from a standard. If you systematically employ a formal battery of tests on a consistent schedule, you will be able to pass a product with fairly stable quality. The shortcoming here is that this kind of testing does nothing to improve the quality of output; as far as user-experience is concerned, you’re just running in place. Testing and quality control do nothing to raise the level of quality beyond perhaps tweaking the standard to “raise the bar”.

Quality assurance goes beyond quality control to examine the processes that create and shape the product: quality assurance looks at the quality of output, as well as at the quality of the inputs.



Test Case Design For Software Testing

The design of tests for software and other engineered products can be as challenging as the initial design of the product itself. Yet, software engineers often treat testing as an afterthought, developing test cases that may “feel right” but have little assurance of being complete. Recalling the objectives of testing, we must design tests that have the highest likelihood of finding the most errors with a minimum amount of time and effort.

A rich variety of test case design methods have evolved for software. These methods provide the developer with a systematic approach to testing. More important, methods provide a mechanism that can help to ensure the completeness of tests and provide the highest likelihood for uncovering errors in software.

Any engineered product (and most other things) can be tested in one of two ways:

1. Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function.

2. Knowing the internal workings of a product, tests can be conducted to ensure that “all gears mesh,” that is, internal operations are performed according to specifications and all internal components have been adequately exercised.
The first test approach is called black-box testing and the second, white-box testing.

When computer software is considered, black-box testing alludes to tests that are conducted at the software interface. Although they are designed to uncover errors, black-box tests are used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., a database) is maintained. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.

White-box testing of software is predicated on close examination of procedural detail. Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. The “status of the program” may be examined at various points to determine if the expected or asserted status corresponds to the actual status.
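The two approaches can be shown on one tiny function. The function and its fee values below are invented for illustration: the black-box tests check only specified inputs against specified outputs, while the white-box tests are chosen by looking at the code so that every branch, including the boundary and the error path, is exercised.

```python
def shipping_fee(weight_kg):
    if weight_kg <= 0:
        raise ValueError("invalid weight")
    return 5 if weight_kg < 2 else 9   # two normal paths plus the error path

# Black-box: inputs and expected outputs from the specification only.
assert shipping_fee(1) == 5
assert shipping_fee(10) == 9

# White-box: inputs chosen from the code's structure, hitting the
# boundary of the weight_kg < 2 branch and the error path.
assert shipping_fee(2) == 9
try:
    shipping_fee(0)
except ValueError:
    pass
print("all paths exercised")
```

Notice that a purely black-box suite could easily miss the boundary case at exactly 2 kg, since nothing in the input/output specification draws attention to it.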

At first glance it would seem that very thorough white-box testing would lead to “100 percent correct programs.” All we need do is define all logical paths, develop test cases to exercise them, and evaluate the results; that is, generate test cases to exercise program logic exhaustively. Unfortunately, exhaustive testing presents certain logistical problems. For even small programs, the number of possible logical paths can be very large. For example, consider a 100-line program in the language C. After some basic data declarations, the program contains two nested loops that execute from 1 to 20 times each, depending on conditions specified at input. Inside the interior loop, four if-then-else constructs are required. There are approximately 10^14 possible paths that may be executed in this program!

To put this number in perspective, we assume that a magic test processor (“magic” because no such processor exists) has been developed for exhaustive testing. The processor can develop a test case, execute it, and evaluate the results in one millisecond. Working 24 hours a day, 365 days a year, the processor would work for 3170 years to test the program. This would, undeniably, cause havoc in most development schedules. Exhaustive testing is impossible for large software systems.
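The arithmetic behind the 3170-year figure is easy to verify: 10^14 test cases at one millisecond each, run around the clock.

```python
# 10**14 paths, one millisecond per develop/execute/evaluate cycle.
paths = 10 ** 14
seconds = paths / 1000                       # 1 ms per test case
years = seconds / (365 * 24 * 60 * 60)       # running 24 h/day, 365 days/year
print(round(years))  # 3171, i.e. roughly 3170 years
```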

White-box testing should not, however, be dismissed as impractical. A limited number of important logical paths can be selected and exercised. Important data structures can be probed for validity. The attributes of both black- and white-boxing can be combined to provide an approach that validates the software interface and selectively ensures that the internal workings of the software are correct.



What you need to know about BVT (Build Verification Testing)?

What is BVT?
A build verification test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases cover core functionality and ensure that the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to a developer for a fix.

BVT is also called smoke testing or build acceptance testing (BAT).

New Build is checked mainly for two things:
• Build validation
• Build acceptance
Some BVT basics:
• It is a subset of tests that verify the main functionalities.
• BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a new build is released after the fixes are done.
• The advantage of a BVT is that it saves the test team the effort of setting up and testing a build whose major functionality is broken.
• Design BVTs carefully enough to cover basic functionality.
• Typically a BVT should not run for more than 30 minutes.
• BVT is a type of regression testing, done on each and every new build.
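The gate described above can be sketched in a few lines of Python. This is a toy model, not a real BVT framework: the two "critical checks" and the build dictionary are invented stand-ins for whatever automated smoke tests a project actually runs.

```python
# Hypothetical critical checks; in practice these would drive the
# application under test rather than read flags from a dict.
def login_works(build):
    return build["login"]

def homepage_loads(build):
    return build["homepage"]

BVT_SUITE = [login_works, homepage_loads]

def run_bvt(build):
    """Run all critical tests; reject the build on any failure."""
    failures = [t.__name__ for t in BVT_SUITE if not t(build)]
    verdict = "accepted" if not failures else "rejected"
    return (verdict, failures)

print(run_bvt({"login": True, "homepage": True}))   # ('accepted', [])
print(run_bvt({"login": True, "homepage": False}))  # ('rejected', ['homepage_loads'])
```

The key property is the binary outcome: a rejected build goes back to development with the failure list, and the test team never spends setup effort on it.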

BVT primarily checks project integrity and whether all the modules are integrated properly. Module integration testing is very important when different teams develop project modules. I have heard of many cases of application failure due to improper module integration. In the worst cases, the complete project gets scrapped due to a failure in module integration.

What is the main task in build release?
Obviously, file check-in, i.e., including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e., to check whether all the new and modified files are included in the release, all file formats are correct, and every file’s version, language, and associated flags are right.
These basic checks are worthwhile before releasing the build to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in BVT?
This is a very tricky decision to make before automating the BVT task. Keep in mind that the success of the BVT depends on which test cases you include in it.

Here are some simple tips to include test cases in your BVT automation suite:
• Include only critical test cases in BVT.
• All test cases included in BVT should be stable.
• All the test cases should have a known expected result.
• Make sure all included critical functionality test cases are sufficient for application test coverage.

Also, do not include modules in the BVT that are not yet stable. For some under-development features you can’t predict the expected behavior, as these modules are unstable and you might already know of failures in these incomplete modules. There is no point in using such modules or test cases in a BVT.
You can make this critical-functionality test-case inclusion task simple by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing major project features and scenarios.

Example: Test cases to be included in BVT for Text editor application (Some sample tests only):
1) Test case for creating text file.
2) Test cases for writing something into text editor
3) Test case for copy, cut, paste functionality of text editor
4) Test case for opening, saving, deleting text file.

These are some sample test cases that can be marked as ‘critical’; for every minor or major change in the application, these basic critical test cases should be executed. This task can easily be accomplished by BVT automation.
BVT automation suites need to be maintained and modified from time to time, e.g., by including new test cases in the BVT when new stable project modules become available.

What happens when the BVT suite runs:
Say the build verification automation test suite is executed after any new build.
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result of the BVT.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the cause of the failure is a defect in the build, all the relevant information, along with failure logs, is sent to the respective developers.
5) The developer, based on an initial diagnosis, replies to the team about the failure cause: is it really a bug, and if so, what will the bug-fixing plan be?
6) Once the bug is fixed, the BVT suite is executed again, and if the build passes the BVT, it is passed to the test team for further detailed functionality, performance, and other tests. This process is repeated for every new build.

Why does a BVT or build fail?
A BVT breaks sometimes. This doesn’t mean there is always a bug in the build. There are other reasons for a build to fail, such as test case coding errors, automation suite errors, infrastructure errors, hardware failures, etc.
You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.

Tips for BVT success:
1) Spend considerable time writing BVT test case scripts.
2) Log as much detailed information as possible to diagnose the BVT pass or fail result. This will help the developer team debug and quickly find the failure cause.
3) Select stable test cases for the BVT. For new features, if a new critical test case passes consistently on different configurations, promote it into your BVT suite. This reduces the probability of frequent build failures due to new, unstable modules and test cases.
4) Automate the BVT process as much as possible – from the build release process to the BVT result, automate everything.
5) Have some penalty for breaking the build: some chocolates or a team coffee party from the developer who broke the build will do.

BVT is nothing but a set of regression test cases that are executed each time for a new build. This is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. The BVT can be run by a developer or a tester; the BVT result is communicated throughout the team, and immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in the BVT, and these test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. It saves significant time, cost, and resources, and, after all, spares the test team the frustration of an incomplete build.



Automation Testing – Capabilities and Areas

Why Automated Testing?

The purpose of automated testing is to increase the flexibility of time and resources, avoid redundancy in test execution, and increase test coverage, thus increasing the quality and reliability of the software. Software applications require complex pre-deployment testing of mission-critical business processes.

Regression testing of mission-critical applications can require thousands of test cases that need to be executed and re-executed on demand. Automated test scripts need to be designed from the ground up to address this demand by providing a maintainable automated testing solution for pre-release and post-defect-fix testing of the product or application under test.

Automation Testing – Capabilities and Areas

Automation testing services consist of different categories, wherein the following testing types and processes are taken into consideration for automation:
• Functionality Testing
• Regression Testing
• Performance Testing
• Requirements Management
• Test Management
• Test Case Preparation
• Test Execution
• Defect Tracking and Management

Benefits of automation include:
• Accelerated test time.
• Relieved constraints on resources – particularly cost and time.
• Improved reliability and quality of the mission-critical product / application.
• Better test path coverage in terms of length and breadth.
• A tried and tested onsite–offshore automation delivery model.
• Effective iterative regression testing during new releases / defect fixes.



Introduction to Test Case Writing

Test case writing may refer to the full process of case development, from the decision to use a case through the release of the case for use in class. The entire sequence of steps in the process is set forth in Figure 1. However, the suggested activities for case writing that follow have been established to assist instructors or case writers in organizing and presenting information in the case format. The focus is on the writing process.

The Test Case Writing Process
Step 1: Case Origin: Identify the needs

Step 2: Establishing the Needs: The search for specific issue ideas and for individuals or organizations that might supply the case information

Step 3: Initial Contact: The establishment of access to material on the case subject

Step 4: Data Collection: The gathering of the relevant information for the case

Step 5: The Writing Process: The organization and presentation of the data and information

Step 6: Release: Obtaining permission from the appropriate individuals to use the case for educational purposes.

1. A case should appear authentic and realistic. The case must develop the situation in real life terms. Reality must be brought into the case. Use as much factual information as possible. In the case, quotes, exhibits and pictures can be included to add realism and life to the case. The problem scenario in the case should be relevant to the real world so that students can experience and share the snapshot of reality.

2. Use an efficient and basic case structure in writing. First, open up the case with the broadest questions, and then face the specific situation. Close with a full development of the specific issues. The presentation of a case should be primarily in a narrative style, which is a story-telling format that gives details about actions and persons involved in a problem situation.

3. There must be a fit of the case with students' educational needs and the needs in practice. The topics and content of the case should be appropriate and important to the particular students with whom the case is used. Moreover, case ideas should be relevant to the learning objectives.

4. A case should not propound theories, but rather pose complex, controversial issues. There are no simple or clearly bounded issues. The controversy of a case can entail debate or contest. It creates learning at many levels – not only substantive learning, but learning also with respect to communication and persuading others. The relationship between issues and the theories should be dealt with through the discussion or instruction.

5. There should be sufficient background information to allow students to tackle the issue(s). Include not only the events that happened, but also how the people involved perceive them. There should be enough description in the prose of the case itself for students to be able to situate the case problem, understand the various issues that bear on the problem, and identify themselves with the decision-maker's position. Also, good cases need descriptions of the people involved, since understanding an individual's predisposition, position, and values is an important part of the decision making.

6. Write the case in a well-organized structure and in clear language. A case should be easy to read or access. Make sure that you prepare an outline of the case and use it to organize your materials. Also ensure the clarity and refinement of your presentation of the case.

Use cases are a popular way to express software requirements. They are popular because they are practical. A use case bridges the gap between user needs and system functionality by directly stating the user intention and system response for each step in a particular interaction.

Step One: Identify Classes of Users
The first step in writing use cases is understanding the users, their goals, and their key needs. Not all users are alike. Some users will expect to walk up to the system and accomplish one goal as quickly as possible.

Step Two: Outline the Use Case Suite
The second step in our breadth-first approach to writing use cases is to outline the use case suite. A use case suite is an organized table of contents for your use cases: it simply lists the names of all use cases that you intend to write. The suite can be organized several different ways. For example, you can list all the classes of users, and then list use cases under each.
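A use case suite like the one described above can be sketched as a simple mapping from user classes to use case names. This is only an illustrative sketch: the user classes and use case names below are hypothetical examples, not taken from any real system.

```python
# A minimal sketch of a use case suite: an organized table of contents
# mapping each class of user to the names of its use cases.
# All user classes and use case names here are hypothetical examples.
use_case_suite = {
    "Walk-up user": [
        "Look up order status",
        "Print a receipt",
    ],
    "Administrator": [
        "Create a user account",
        "Reset a password",
        "Export audit logs",
    ],
}

def list_use_case_names(suite):
    """Flatten the suite into a single list of use case names."""
    return [name for names in suite.values() for name in names]

print(len(list_use_case_names(use_case_suite)))  # 5 use cases in total
```

Organizing the suite this way makes step three (naming use cases) easier, because each user class becomes a small, concrete naming subtask.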

Step Three: List Use Case Names
If you did step two, this step will be much easier to do well. Having an organized use case suite makes it easier to name use cases because the task is broken down into much smaller subtasks, each of which is more specific and concrete.

Step Four: Write Some Use Case Descriptions
In step three, you may have generated ten to fifty use case names on your first pass. That number will grow as you continue to formalize the software requirements specification. That level of completeness in the specification is very desirable because it gives more guidance in design and implementation planning, it can lead to more realistic schedules and release-scoping decisions, and it can reduce requirements changes later.

Step Five: Write Steps for Selected Use Cases
When choosing which use cases to write detailed steps for first, prioritize those that:

1) Enable users to achieve the key benefits claimed for your product

2) Determine a user's first impression of the product

3) Challenge the user's knowledge or abilities

4) Affect work flows that involve multiple users

5) Explain the usage of novel or difficult-to-use features

Each use case step has two parts: a user intention and system response:

1. User Intention

The user intention is a phrase describing what the user intends to do in that step. Typical steps involve accessing information, providing input, or initiating commands. Usually the user intent clearly implies a UI action. For example, if I intend to save a file, then I could probably press Control-S. However, “press Control-S” is not written in use cases. In general, you should try not to mention specific UI details: they are too low-level and may change later.
2. System Response

The system response is a phrase describing the user-visible part of the system’s reaction to the user’s action. As above, it is best not to mention specific details that may change later. For example, the system’s response to the user saving a file might be “Report filename that was saved”. The system response should not describe an internal action. For example, it may be true that the system will “Update database record”, but unless that is something that the user can immediately see, it is not relevant to the use case.
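One way to keep use case steps honest about this two-part structure is to represent each step explicitly as an intention/response pair. The sketch below uses the "save a file" example from the text; the class name is a hypothetical illustration, not a standard notation.

```python
from dataclasses import dataclass

@dataclass
class UseCaseStep:
    """One step of a use case: what the user intends, and the user-visible
    system response. UI details (e.g. 'press Control-S') are deliberately
    left out, as are internal actions the user cannot see."""
    user_intention: str
    system_response: str

# The 'save a file' example from the text, kept UI- and implementation-free.
step = UseCaseStep(
    user_intention="Save the current file",
    system_response="Report the filename that was saved",
)

print(step.user_intention)   # Save the current file
print(step.system_response)  # Report the filename that was saved
```

Keeping steps in this form makes it easy to spot violations: any step mentioning a keystroke or a database update doesn't fit either field.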

Step Six: Evaluate Use Cases
An important goal of any requirements specification is to support validation of the requirements. There are two main ways to evaluate use cases:
1. Potential customers and users can read the use cases and provide feedback.

2. Software designers can review the use cases to find potential problems long before the system is implemented.

You can perform a more careful evaluation of your use cases and UI mockups with cognitive walk-throughs. In the cognitive walk-through method, you ask yourself these questions for each step:
• Will the user realize that he/she should have the intention listed in this step?

• Will the user notice the relevant UI affordance?

• Will the user associate the intention with the affordance?

• Does the system response clearly indicate that progress is being made toward the use case goal?



The Basics of Automated Testing

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Over the past few years, tools that help programmers quickly create applications with graphical user interfaces have dramatically improved programmer productivity.

This has increased the pressure on testers, who are often perceived as bottlenecks to the delivery of software products. Testers are being asked to test more and more code in less and less time. Test automation is one way to cope, as manual testing is time-consuming.

As different versions of software are released, the new features would otherwise have to be tested manually time and again. But there are now tools available that help testers automate GUI testing, which reduces test time as well as cost; other test automation tools support the execution of performance tests.

Many test automation tools provide record-and-playback features that allow users to record user actions interactively and replay them any number of times, comparing actual results to those expected.
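The record-and-playback idea can be sketched in a few lines: record user actions as a list of events, then replay them against the application and compare the actual result to the expected one. This is only an illustrative sketch; FakeApp, its event names, and the expected dialog text are all hypothetical stand-ins, not a real tool's API.

```python
# A minimal sketch of record-and-playback: user actions are recorded as a
# list of (action, target) events, then replayed against the application
# any number of times, comparing the actual result to the expected one.

class FakeApp:
    """A hypothetical stand-in for the application under test."""
    def __init__(self):
        self.log = []

    def perform(self, action, target):
        self.log.append((action, target))
        if action == "click" and target == "Submit":
            return "Login dialog opened"
        return None

def replay(app, recording):
    """Replay a recorded script and return the last visible response."""
    response = None
    for action, target in recording:
        result = app.perform(action, target)
        if result is not None:
            response = result
    return response

recording = [("type", "username"), ("type", "password"), ("click", "Submit")]
actual = replay(FakeApp(), recording)
expected = "Login dialog opened"
print(actual == expected)  # True: the test passes
```

Real record-and-playback tools capture far richer events (mouse coordinates, timing, window context), but the replay-then-compare loop is the same.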

Test automation is an important subject, since repetitive checking makes up a large part of any tester's job. It's clearly not the best use of time to click on the same button looking for the same dialog box (expected result) every single day. Part of smart testing is delegating those kinds of tasks away so we can spend time on harder problems. And computers are a great place to delegate repetitive work.

That's really what automated testing is about: we try to get computers to do our job for us. One way a tester might describe his goal is to put himself out of a job – meaning, automate the entire job. This is, of course, unachievable, so we don't worry about losing our jobs. But it's a good vision!

Our short term goal should always be to automate the parts of our job we find most annoying with the selfish idea of not having to do annoying stuff any more!!!

With people new to automated testing, that's always how we frame it. Start small: pick an easy task that you have to do all the time, then figure out how to have a computer do it for you. This has a great effect on your work, since automating that first task frees up more of your time to automate more annoying, repetitive tasks. With all this time you can then focus on testing the more interesting parts of your software.

That last paragraph makes it sound like writing automated tests is easy, when in fact it’s typically quite hard!!!

There are some fundamentally hard problems in this space, and a lot of test tools try to help with them in different ways. Understanding these problems is valuable both as a way to better understand automated testing and as a way to help choose your test tools. As a side note, implementing automated tests for a text-based or API-based system is really pretty easy, so let us focus on a full UI application – which is where the interesting issues are.

Automated testing can be broken into two big pieces:
• Running the automated tests
• Validating the results

Running the Automated Tests

This concept is pretty basic: if you want to test the submit button on a login page, you can override the system and programmatically move the mouse to a set of screen coordinates, then send a click event. There is another, much trickier way to do this: you can directly call the internal API that the button's click event handler calls. Calling into the API is good because it's easy – calling an API function from your test code is a piece of cake, just add a function call. But then you aren't actually testing the UI of your application. Sure, you can call the API for functionality testing, then every now and then click the button manually to be sure the right dialog opens.

Rationally this really should work great, but a lot of testing exists outside the rational space. There might be lots of bugs that happen when the user goes through the button instead of directly calling the API. And here’s the critical part – almost all of your users will use your software through the UI, not the API. So those bugs you miss by just going through the API will be high exposure bugs. These won’t happen all the time, but they’re the kind of things you really don’t want to miss, especially if you were counting on your automation to be testing that part of the program.

If your automation goes only through the API, you're getting no testing coverage on your UI, and you'll have to cover that by hand.

Simulating the mouse is good because it's exercising the UI the whole time, but it has its own set of problems. The real issue here is reliability. You have to know the coordinates that you're trying to click beforehand. This is doable, but lots of things can make those coordinates change at runtime. Is the window maximized? What's the screen resolution? Is the start menu on the bottom or the left side of the screen? Did the last guy rearrange the toolbars? And what if the application is used by Arabic-language users, where the display runs from right to left? These are all things that will change the absolute location of your UI.

The good news is that there are tricks around a lot of these issues. The first key is to always run at the same screen resolution on all your automated test systems (note: there are bugs we could be missing here, but we won't worry about that now – those are beyond the scope of our automation anyway). We also like to make maximizing the program our first automated test action. This takes care of most of the big issues, but small things can still come up.

The really sophisticated way to handle this is to use relative positioning. If your developers are nice, they can build in some test hooks for you so you can ask the application where it is. This even works for child windows: you can ask a toolbar where it is. If you know that the 'File -> New' button is always at (x, y) inside the main toolbar, it doesn't matter whether the application is maximized or whether the last user moved all the toolbars around. Just ask the main toolbar where it is, tack on (x, y), and click there.
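The relative-positioning arithmetic is simple enough to sketch directly. The toolbar origin and button offset below are hypothetical numbers, and the test hook that would report the toolbar's position is assumed, not a real API.

```python
# A minimal sketch of relative positioning, assuming the application
# exposes a test hook that reports where its main toolbar currently is.

def absolute_click_point(toolbar_origin, button_offset):
    """Translate a button's fixed offset inside the toolbar into absolute
    screen coordinates, regardless of where the window ended up."""
    tx, ty = toolbar_origin
    ox, oy = button_offset
    return (tx + ox, ty + oy)

# Suppose the 'File -> New' button always sits at (52, 14) inside the main
# toolbar (hypothetical offset). The toolbar itself may be anywhere.
toolbar_origin = (310, 120)   # as reported by the application's test hook
file_new_offset = (52, 14)
print(absolute_click_point(toolbar_origin, file_new_offset))  # (362, 134)
```

If the user later drags the toolbar somewhere else, only toolbar_origin changes; the recorded offset stays valid, which is exactly the point of the technique.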

So this has an advantage over just exercising the APIs, since you're using the UI too, but it has a disadvantage as well – it involves a lot of work.

Results Verification

So we have figured out the right way to run the tests, and we have this great test case. But after we have told the program to do stuff, we need a way to know whether it did the right thing. This is the verification step in our automation, and every automated script needs it.

We have many options:
• Verify the results manually
• Verify the results programmatically
• Use some kind of visual comparison tool

The first method is to do it ourselves, that is, by manually verifying the results and seeing that they meet our expectations.

The second way is, of course, to verify it programmatically. In this method, we can have a predefined set of expected results (a baseline), which is compared with the obtained results. The output would be whether a test case passed or failed. There are many ways to achieve this: we can hard-code the expected results in the program/script, or we can store the expected results in a file – a text file, a properties file, or an XML file – read the expected results from it, and compare them with the obtained results.
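Baseline comparison can be sketched in a few lines of Python. Here the baseline is stored as a JSON file mapping test case names to expected results; the file name, test case names, and expected strings are all hypothetical examples.

```python
# A minimal sketch of programmatic verification against a stored baseline.
import json

def verify(baseline_path, actual_results):
    """Compare actual results to the stored baseline and report
    PASS/FAIL per test case."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    return {
        name: ("PASS" if actual_results.get(name) == expected else "FAIL")
        for name, expected in baseline.items()
    }

# Write a small hypothetical baseline, then verify obtained results against it.
with open("baseline.json", "w") as f:
    json.dump({"login_ok": "Welcome page shown",
               "bad_password": "Error message shown"}, f)

actual = {"login_ok": "Welcome page shown",
          "bad_password": "Crash"}
print(verify("baseline.json", actual))
# {'login_ok': 'PASS', 'bad_password': 'FAIL'}
```

The same shape works whether the baseline lives in a text file, a properties file, or an XML file; only the parsing step changes.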

The other way is to grab a bitmap of the screen and save it off somewhere. Then we can use a visual comparison tool to compare it with the expected bitmap files. Using a visual comparison tool is clearly the best, but also the hardest. Here your automated test gets to a place where it wants to check the state, looks at the screen, and compares that image to a master it has stored away somewhere.
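The simplest possible version of bitmap verification is an exact byte-for-byte comparison of the captured screenshot against a stored master. Real visual comparison tools are much fuzzier (masking regions, tolerating minor pixel differences); this sketch shows only the exact-match case, and the file names and fake image bytes are hypothetical.

```python
# A minimal sketch of bitmap-based verification: an exact byte-for-byte
# comparison of a captured screenshot against a stored master image.

def bitmaps_match(captured_path, master_path):
    """Return True if the two image files are byte-identical."""
    with open(captured_path, "rb") as a, open(master_path, "rb") as b:
        return a.read() == b.read()

# Simulate a capture and a master with raw bytes (stand-ins for real images).
with open("master.bmp", "wb") as f:
    f.write(b"\x42\x4d" + b"\x00" * 16)   # fake bitmap payload
with open("captured.bmp", "wb") as f:
    f.write(b"\x42\x4d" + b"\x00" * 16)   # identical capture

print(bitmaps_match("captured.bmp", "master.bmp"))  # True
```

The hardness the text mentions comes from everything this sketch ignores: a one-pixel rendering difference, a blinking cursor, or a changed timestamp in the window would make an exact comparison fail even though the application behaved correctly.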


