Software Testing Dictionary

Acceptance Test. Formal tests (often performed by a customer) to determine whether a system satisfies its predetermined acceptance criteria. These tests are often used to enable the customer (internal or external) to decide whether to accept the system.

Ad Hoc Testing: Testing carried out using no recognized test case design technique. [BCS]

Alpha Testing: Testing of a software product or system conducted at the developer’s site by the customer.

Assertion Testing. (NBS) A dynamic analysis technique that inserts assertions about the relationships between program variables into the program code. The truth of the assertions is checked as the program executes.
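
A minimal sketch in Python of what this looks like in practice (the transfer function and its invariant are invented for illustration):

    # Assertion testing sketch: relationships between program variables
    # are asserted inline and checked while the program executes.
    def transfer(balance_from, balance_to, amount):
        total_before = balance_from + balance_to
        assert amount >= 0, "transfer amount must be non-negative"
        balance_from -= amount
        balance_to += amount
        # Invariant: money is moved, never created or destroyed.
        assert balance_from + balance_to == total_before
        return balance_from, balance_to

    transfer(100, 50, 30)   # passes silently
    transfer(100, 50, -5)   # raises AssertionError during execution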

Automated Testing. Software testing assisted by software technology that does not require operator (tester) input, analysis, or evaluation.

Background testing. The execution of normal functional testing while the SUT is exercised by a realistic workload. This workload is processed “in the background” as far as the functional testing is concerned. [Load Testing Terminology by Scott Stirling]

Bug: glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision [B. Beizer, 1990]; also defect, issue, problem.

Beta Testing. Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

Benchmarks. Programs that provide a performance comparison for software, hardware, and systems.

Benchmarking. A specific type of performance test whose purpose is determining performance baselines for comparison. [Load Testing Terminology by Scott Stirling]

Big-bang testing. Integration testing in which no incremental testing takes place before all the system’s components are combined to form the system. [BCS]

Black box testing. A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.

Boundary Value Analysis (BVA). BVA differs from equivalence partitioning in that it focuses on “corner cases”: values at or just outside the limits defined by the specification. For example, if a function accepts all values in the range -100 to +1000, test inputs would include -101 and +1001. Deriving such boundary values is also often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using the requirements specification and user documentation.
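
For the -100 to +1000 example above, a minimal sketch in Python (the accepts function is hypothetical):

    # Boundary value analysis: test on, just inside, and just outside
    # each boundary of the specified range -100..1000.
    LOW, HIGH = -100, 1000
    boundary_inputs = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

    def accepts(value):                     # hypothetical function under test
        return LOW <= value <= HIGH

    for value in boundary_inputs:
        expected = LOW <= value <= HIGH     # -101 and 1001 must be rejected
        assert accepts(value) == expected, f"boundary failure at {value}"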

Breadth test. A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail. [Dorothy Graham, 1999]

Cause Effect Graphing. (1) [NBS] Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2) A systematic method of generating test cases representing combinations of conditions. See: testing, functional. [G. Myers]

Clean test. A test whose primary purpose is validation; that is, a test designed to demonstrate the software’s correct working. (Syn. positive test) [B. Beizer, 1995]

Code Inspection. A manual [formal] testing [error detection] technique in which the programmer reads source code, statement by statement, to a group who ask questions, analyzing the program logic, analyzing the code against a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. [G. Myers/NBS] Syn: Fagan Inspection

Code Walkthrough. A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer’s logic and assumptions. [G.Myers/NBS] Contrast with code audit, code inspection, code review.

Coexistence Testing. Coexistence isn’t enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It’s probably an exponentially hard problem rather than a square-law problem. [From Quality Is Not The Goal. By Boris Beizer, Ph. D.]

Compatibility bug. A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code. [R. V. Binder, 1999]

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. Where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Composability testing – testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, ‘Easy’ and other lies, eWEEK, April 28, 2003]

Condition Coverage. A test coverage criterion requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage. [G. Myers]
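
A small Python sketch of the distinction (ignoring short-circuit evaluation; the grant function is invented): the two condition-coverage cases below make each condition take both outcomes, yet the decision itself only ever evaluates to false, which is why condition coverage does not imply decision coverage.

    def grant(a, b):                        # hypothetical function under test
        if a and b:                         # one decision, two conditions
            return "granted"
        return "denied"

    # Condition coverage: a is True then False, b is False then True,
    # but the decision "a and b" is False in both cases.
    condition_cases = [(True, False), (False, True)]

    # Decision coverage: the decision takes both a True and a False outcome.
    decision_cases = [(True, True), (False, False)]

    for a, b in condition_cases + decision_cases:
        grant(a, b)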

Conformance directed testing. Testing that seeks to establish conformance to requirements or specification. [R. V. Binder, 1999]

CRUD Testing. Build a CRUD matrix and test all object creation, reads, updates, and deletions. [William E. Lewis, 2000]

Data-Driven testing. An automation approach in which the navigation and functionality of the test script are directed through external data; this approach separates test and control data from the test script. [Daniel J. Mosley, 2002]
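
A minimal sketch in Python, assuming a hypothetical login function and a CSV fixture named login_cases.csv with columns user, password, expected:

    # Data-driven testing: the test data lives outside the script.
    import csv

    def login(user, password):              # hypothetical function under test
        return user == "admin" and password == "secret"

    with open("login_cases.csv", newline="") as f:
        for row in csv.DictReader(f):
            actual = login(row["user"], row["password"])
            # The script stays the same; adding cases means adding rows.
            assert actual == (row["expected"] == "pass"), row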

Data flow testing. Testing in which test cases are designed based on variable usage within the code. [BCS]

Database testing. Check the integrity of database field values. [William E. Lewis, 2000]

Defect. The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.

Defect. Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures. [Robert M. Poston, 1996]

Defect. A flaw in the software with potential to cause a failure. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Age. A measurement that describes the period of time from the introduction of a defect until its discovery. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel, 2002]

Defect Density. A metric that compares the number of defects to a measure of size (e.g., defects per KLOC). Often used as a measure of software quality. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel, 2002]

Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
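
A common formulation (the variable names are ours): DRE = E / (E + D), where E is the number of defects found by the activity and D is the number found later. For example, if system testing finds 90 defects and 10 more escape to production, DRE = 90 / (90 + 10) = 90%.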

Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
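
A commonly cited estimate based on seeding (often attributed to Harlan Mills; variable names are ours): if S defects are seeded and testing finds s of them along with n indigenous defects, the total indigenous defect count is estimated as N ≈ n × S / s, leaving roughly N − n still undiscovered. For example, seeding 10 defects and finding 8 of them plus 40 real defects suggests about 40 × 10 / 8 = 50 real defects, i.e., around 10 remaining.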

Defect Masked. An existing defect that hasn’t yet caused a failure because another defect has prevented that part of the code from being executed. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Depth test. A test case that exercises some part of a system to a significant level of detail. [Dorothy Graham, 1999]

Decision Coverage. A test coverage criterion requiring enough test cases such that each decision has a true and a false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage. [G. Myers]

Dirty testing. Negative testing. [Beizer]

Dynamic testing. Testing, based on specific test cases, by execution of the test object or running programs [Tim Koomen, 1999]

End-to-End testing. Similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: An approach in which classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value from each class. For example, a given function may have several classes of input that may be used for positive testing. If the function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input class other than an integer is provided, this is considered a negative test assertion or condition.
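
A minimal sketch in Python of the integer example above (the parse_age function is invented):

    # Equivalence partitioning: one representative value per input class.
    def parse_age(value):                   # hypothetical function under test
        if not isinstance(value, int):
            raise TypeError("age must be an integer")
        return value

    parse_age(30)                           # valid class: positive test assertion

    try:
        parse_age("thirty")                 # invalid class: negative test assertion
    except TypeError:
        pass                                # rejection is the expected outcome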

Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests. [Robert M. Poston, 1996.]

Errors: The amount by which a result is incorrect. Mistakes are usually the result of a human action. Human mistakes (errors) often result in faults in the source code, specification, documentation, or other product deliverables. Once a fault is encountered, the end result may be a program failure. The failure usually has some margin of error, either high, medium, or low.

Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by the engineer.

Error guessing. A test case design technique where the experience of the tester is used to postulate what faults might exist, and to design tests specifically to expose them. [from BS7925-1]

Error seeding. The purposeful introduction of faults into a program to test effectiveness of a test suite or other quality assurance program. [R. V. Binder, 1999]

Exception Testing. Identify error messages and exception handling processes and conditions that trigger them. [William E. Lewis, 2000]

Exhaustive Testing. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test. [James Bach]

Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The Causal Trail. A person makes an error that causes a defect that causes a failure. [Robert M. Poston, 1996]

Follow-up testing. We vary a test that yielded a less-than-spectacular failure, varying the operation, data, or environment, and asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances. [Measuring the Effectiveness of Software Testers, Cem Kaner, STAR East 2003]

Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

Free Form Testing. Ad hoc testing or brainstorming, using intuition to define test cases. [William E. Lewis, 2000]

Functional Decomposition Approach. An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as Framework Driven Approach. [Daniel J. Mosley, 2002]

Functional testing. Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Gray box testing. Tests involving inputs and outputs, but where test design is educated by information about the code or the program operation of a kind that would normally be out of view of the tester. [Cem Kaner]

Gray box testing. Tests designed based on knowledge of algorithms, internal states, architectures, or other high-level descriptions of the program behavior. [Doug Hoffman]

Gray box testing. Examines the activity of back-end components during test case execution. Two types of problems can be encountered during gray-box testing:
* A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
* The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.
[Elfriede Dustin, “Quality Web Systems: Performance, Security & Usability.”]

High-level tests. These tests involve testing whole, complete products [Kit, 1995]

Inspection. A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration. The process of combining software components, hardware components, or both into an overall system.

Integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.

Interface Tests. Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that may not currently be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control; simulation can therefore provide the characteristics or behaviors of a specific function.

Internationalization testing (I18N) – testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper- and lower-case handling, and so forth. [Clinton De Young, 2003]

Interoperability Testing. Testing which measures the ability of your software to communicate across the network with multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

Inter-operability Testing. True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn’t be done because it can’t be done. [From Quality Is Not The Goal. By Boris Beizer, Ph. D.]

Latent bug. A bug that has been dormant (unobserved) in two or more releases. [R. V. Binder, 1999]

Lateral testing. A test design technique based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]

Load testing: Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.

Load-stress test. A test designed to determine how heavy a load the application can handle.

Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.

Load-isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused problems in previous testing.

Master Test Planning. An activity undertaken to orchestrate the testing effort across levels and organizations. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Monkey Testing. (Smart monkey testing) Inputs are generated from probability distributions that reflect actual expected usage statistics, e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. Suppose a given test requires an input vector with five components: in low-IQ testing, these would be generated independently; in high-IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.
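
A minimal “smart monkey” sketch in Python; the actions and their weights stand in for a real user profile and are invented:

    # Inputs are drawn from a distribution reflecting expected usage.
    import random

    actions = ["open", "edit", "save", "close"]
    usage_weights = [0.15, 0.70, 0.10, 0.05]    # hypothetical user profile

    def next_input():
        return random.choices(actions, weights=usage_weights, k=1)[0]

    for _ in range(1000):
        action = next_input()               # feed to the application under test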

Maximum Simultaneous Connection testing. A test performed to determine the number of connections that the firewall or Web server is capable of handling.

Mutation testing. A testing strategy where small variations (mutants) are inserted into a program, followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is “retired.” If undetected, the test suite must be revised. [R. V. Binder, 1999]
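
A minimal sketch in Python of a single mutant (the is_adult functions are invented):

    # Original code and a seeded variation (">=" mutated to ">").
    def is_adult(age):
        return age >= 18

    def is_adult_mutant(age):
        return age > 18

    # A suite that only tests ages 30 and 10 cannot tell the two apart,
    # so the mutant survives and the suite must be revised. The boundary
    # case below detects ("retires") the mutant.
    assert is_adult(18) != is_adult_mutant(18)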

Multiple Condition Coverage. A test coverage criterion which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. [G. Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.

Negative test. A test whose primary purpose is falsification; that is, a test designed to break the software. [B. Beizer, 1995]

Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that this is an old and proven technique: orthogonal arrays were first introduced by Plackett and Burman in 1946 and were implemented by G. Taguchi, 1987.

Orthogonal array testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]
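
As a small illustration (the factor names are invented), an L4 orthogonal array covers three two-level factors in four test cases instead of 2^3 = 8; every pair of columns contains each of the four value combinations exactly once:

    # L4 orthogonal array: rows are test cases, columns are factors.
    L4 = [
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]
    browsers = ["Chrome", "Firefox"]        # hypothetical factor levels
    systems  = ["Windows", "Linux"]
    locales  = ["en", "de"]

    for b, s, l in L4:
        print(browsers[b], systems[s], locales[l])   # one combined test case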

Oracle. Test oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. [from BS7925-1]

Parallel Testing: Testing a new or alternate data processing system with the same source data that is used in another system, where the other system is considered the standard of comparison. Syn: parallel run. [ISO]

Penetration testing. The process of attacking a host from outside to ascertain remote security vulnerabilities.

Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]

Performance testing can be undertaken to: 1) show that the system meets specified performance objectives, 2) tune the system, 3) determine the factors in hardware or software that limit the system’s performance, and 4) project the system’s future load-handling capacity in order to schedule its replacement. [Software System Testing and Quality Assurance, Beizer, 1984, p. 256]

Preventive Testing. Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel, 2002]

Prior Defect History Testing. Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]

Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.

Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.

Quality Assurance (QA). Consists of planning, coordinating, and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).

Quality Control (QC). Consists of monitoring, controlling, and other tactical activities associated with the measurement of product quality goals.

Our definition of Quality: achieving the target (not conformance to requirements, as used by many authors) and minimizing the variability of the system under test.

Race condition defect. Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
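
A minimal Python sketch of such a data race (counts and names are illustrative; whether lost updates actually occur varies by interpreter and timing):

    # Two threads perform unsynchronized read-modify-write on a shared
    # variable; "counter += 1" is not atomic, so updates can be lost.
    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1                    # the racing write

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                          # may be less than 200000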

Recovery testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing. Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.

Regression Testing. Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program. [Glenford J. Myers, 1979]

Reengineering. The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).

Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]

Reliability testing. Verifying the probability of failure-free operation of a computer program in a specified environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified conditions, over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as the independent variable. Thus reliability is often written R(t), as a function of time t: the probability that the object will not fail within time t.
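
Under the common additional assumption of a constant failure rate λ, this becomes R(t) = e^(−λt). For example, λ = 0.01 failures per hour gives R(100) = e^(−1) ≈ 0.37, i.e., roughly a 37% chance of operating for 100 hours without failure.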

Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in — the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed — the software will appear to work properly. [Professor Dick Hamlet. Ph.D.]

Range Testing. For each input, identify the range over which the system behavior should be the same. [William E. Lewis, 2000]

Risk management. An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.

Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]

Sanity Testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.

Scalability testing is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling]

Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]

Skim Testing A testing technique used to determine the fitness of a new build or release of an AUT to undergo further, more thorough testing. In essence, a “pretest” activity that could form one of the acceptance criteria for receiving the AUT for testing [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]

Smoke test. Describes an initial set of tests that determine whether a new version of an application performs well enough for further testing. [Louise Tamres, 2002]

Specification-based test. A test whose inputs are derived from a specification.

Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; considered a type of load test. [Load Testing Terminology by Scott Stirling]


STEP (Systematic Test and Evaluation Process) Software Quality Engineering’s copyrighted testing methodology.

State-based testing: Testing with test cases developed by modeling the system under test as a state machine [R. V. Binder, 1999]

State Transition Testing. Technique in which the states of a system are first identified, and then test cases are written to exercise the triggers that cause a transition from one state to another. [William E. Lewis, 2000]
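
A minimal sketch in Python (states, triggers, and transitions are invented): the system is modelled as a transition table, and each test fires a trigger and checks the resulting state.

    # State transition testing: model the machine, then test each trigger.
    TRANSITIONS = {
        ("logged_out", "login_ok"): "logged_in",
        ("logged_in", "logout"):    "logged_out",
        ("logged_in", "timeout"):   "logged_out",
    }

    def next_state(state, trigger):
        # Undefined (state, trigger) pairs leave the state unchanged.
        return TRANSITIONS.get((state, trigger), state)

    assert next_state("logged_out", "login_ok") == "logged_in"
    assert next_state("logged_in", "timeout") == "logged_out"
    assert next_state("logged_out", "logout") == "logged_out"   # invalid trigger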

Static testing. Analysis of source code, without executing it, to expose potential defects.

Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. [BCS]

Stealth bug. A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]

Storage test. Study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p. 55]

Stress / Load / Volume test. Tests that exercise the system with a high degree of activity, for example using boundary conditions as inputs or running multiple copies of a program in parallel.

Structural Testing. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.

System testing. Black-box testing that is based on overall requirements specifications and covers all combined parts of a system.

Table testing. Test access, security, and data integrity of table entries. [William E. Lewis, 2000]

Test Bed. An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test [IEEE 610].

Test Case. A set of test inputs, executions, and expected results developed for a particular objective.

Test conditions. The set of circumstances that a test invokes. [Daniel J. Mosley, 2002]

Test Coverage. The degree to which a given test or set of tests addresses all specified test cases for a given system or component.

Test Criteria. Decision rules used to determine whether a software item or software feature passes or fails a test.

Test data. The actual (set of) values used in the test or that are necessary to execute the test. [Daniel J. Mosley, 2002]

Test Documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.

Test Driver. A software module or application used to invoke a test item and, often, provide test inputs (data), control, and monitor execution. A test driver automates the execution of test procedures.

Test Harness. A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.

Test Item. A software item, which is the object of testing. [IEEE]

Test Log. A chronological record of all relevant details about the execution of a test. [IEEE]

Test Plan. A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organizes the elements of the test life cycle, including resource requirements, project schedule, and test requirements.

Test Procedure. A document providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called a manual test script.

Test Rig. A flexible combination of hardware, software, data, and interconnectivity that can be configured by the Test Team to simulate a variety of different Live Environments on which an AUT can be delivered. [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]

Test strategy. Describes the general approach and objectives of the test activities. [Daniel J. Mosley, 2002]

Test Status. The assessment of the result of running tests on software.

Test Stub. A dummy software component or object used (during development and testing) to simulate the behavior of a real component. The stub typically provides test output.
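
A minimal Python sketch tying the last few entries together (all names are invented): a stub stands in for a real payment gateway while a driver invokes the item under test.

    # Test stub: simulates the behavior of a real component.
    class PaymentGatewayStub:
        def charge(self, amount):
            return {"status": "ok", "amount": amount}

    # Item under test, which normally talks to the real gateway.
    def checkout(cart_total, gateway):
        return gateway.charge(cart_total)["status"] == "ok"

    # Test driver: invokes the test item and supplies inputs.
    def run_tests():
        assert checkout(42.0, PaymentGatewayStub())
        print("checkout tests passed")

    run_tests()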

Test Suites. A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.

Test Tree. A physical implementation of a Test Suite. [Dorothy Graham, 1999]

Testability. Attributes of software that bear on the effort needed for validating the modified software [ISO 8402]

Testing. The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.

TPI (Test Process Improvement). A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.

Thread Testing. A testing technique used to test the business functionality or business logic of the AUT in an end-to-end manner, in much the same way a user or an operator might interact with the system during its normal use. [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]

Unit Testing. Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White and black box testing methods are combined during unit testing.

Usability testing. Testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer.

Validation. The comparison between the actual characteristics of something (e.g., a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.

Verification. The comparison between the actual characteristics of something (e.g., a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.

Volume testing. Testing where the system is subjected to large volumes of data.[BS7925-1]

Walkthrough. In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White Box Testing (glass-box). Testing done under a structural testing strategy, requiring complete access to the object’s structure, that is, the source code.

Tips for Success in Interviews

* First impression is the best impression. You will be judged by the way you dress, your educational qualifications, work experience, body language, manners, and your ability to absorb information and interpret it intelligently and clearly. So take care to be at your best.

* Carry your relevant documents in order – like certificates, a copy of the application sent, bio-data, etc. – in a folder so that they can be easily shown when asked. Take a pen also.

* Present the documents only if the interviewer asks for them.

* Never be late for an interview.

* Greet the interviewers as soon as you enter.

* Sit down only when you are asked to. It is better not to pull the chair; either lift it or move it, and always enter from the right side of the chair.

* Say ‘please and thank you’ whenever required.

* Listen carefully and pay attention to the question. If the question is not clear to you ask politely for a repeat.

* Reply confidently and immediately to the point, keeping your answers short unless asked for a longer description.

* While answering, look directly at the person asking the questions and try to be pleasant.

* Replies connected to any details regarding your bio-data should be authentic.

* It is better to admit if you don’t know something.

* Remember to say ‘sorry’ if your opinions or answers are rejected.

* Avoid indulging in certain mannerisms in your speech or behavior.

* You can ask when you can expect to hear from them before you leave.

* Don’t forget to say “Thank you” at the end of an interview to every interviewer before leaving.

* Shake hands only if the interviewer initiates the gesture.

* Walk out confidently without looking back.

* Gently shut the door behind you as you leave.

General Interview Questions

Questions start the minute the interview does, and to show that you are an exceptional candidate, you need to be prepared to answer not only the typical questions, but also the unexpected. You can expect questions regarding your qualifications, your academic preparation, career interests, experience, and ones that assess your personality.

1. Tell me about yourself
The most often asked question in interviews. You need to have a short statement prepared in your mind. Be careful that it does not sound rehearsed. Limit it to work-related items unless instructed otherwise. Talk about things you have done and jobs you have held that relate to the position you are interviewing for. Start with the item farthest back and work up to the present.

2. Why did you leave your last job?
Stay positive regardless of the circumstances. Never refer to a major problem with management and never speak ill of supervisors, co-workers or the organization. If you do, you will be the one looking bad. Keep smiling and talk about leaving for a positive reason such as an opportunity, a chance to do something special or other forward-looking reasons.

3. What experience do you have in this field?
Speak about specifics that relate to the position you are applying for. If you do not have specific experience, get as close as you can.

4. Do you consider yourself successful?
You should always answer yes and briefly explain why. A good explanation is that you have set goals, and you have met some and are on track to achieve the others.

5. What do co-workers say about you?
Be prepared with a quote or two from co-workers. Either a specific statement or a paraphrase will work.

6. What do you know about this organization?
This question is one reason to do some research on the organization before the interview. Find out where they have been and where they are going. What are the current issues and who are the major players?

7. What have you done to improve your knowledge in the last year?
Try to include improvement activities that relate to the job. A wide variety of activities can be mentioned as positive self-improvement. Have some good ones handy to mention.

8. Are you applying for other jobs?
Be honest but do not spend a lot of time in this area. Keep the focus on this job and what you can do for this organization. Anything else is a distraction.

9. Why do you want to work for this organization?
This may take some thought and certainly, should be based on the research you have done on the organization. Sincerity is extremely important here and will easily be sensed. Relate it to your long-term career goals.

10. Do you know anyone who works for us?
Be aware of the policy on relatives working for the organization. This can affect your answer even though they asked about friends not relatives. Be careful to mention a friend only if they are well thought of.

11. What kind of salary do you need?
A loaded question. A nasty little game that you will probably lose if you answer first. So, do not answer it. Instead, say something like, That’s a tough question. Can you tell me the range for this position? In most cases, the interviewer, taken off guard, will tell you. If not, say that it can depend on the details of the job. Then give a wide range.

12. Are you a team player?
You are, of course, a team player. Be sure to have examples ready. Specifics that show you often perform for the good of the team rather than for yourself are good evidence of your team attitude. Do not brag, just say it in a matter-of-fact tone. This is a key point.

13. How long would you expect to work for us if hired?
Specifics here are not good. Something like this should work: I’d like it to be a long time. Or As long as we both feel I’m doing a good job.

14. Have you ever had to fire anyone? How did you feel about that?
This is serious. Do not make light of it or in any way seem like you like to fire people. At the same time, you will do it when it is the right thing to do. When it comes to the organization versus the individual who has created a harmful situation, you will protect the organization. Remember firing is not the same as layoff or reduction in force.

15. What is your philosophy towards work?
The interviewer is not looking for a long or flowery dissertation here. Do you have strong feelings that the job gets done? Yes. That’s the type of answer that works best here. Short and positive, showing a benefit to the organization.

16. If you had enough money to retire right now, would you?
Answer yes if you would. But since you need to work, this is the type of work you prefer. Do not say yes if you do not mean it.

17. Have you ever been asked to leave a position?
If you have not, say no. If you have, be honest, brief and avoid saying negative things about the people or organization involved.

18. Explain how you would be an asset to this organization
You should be anxious for this question. It gives you a chance to highlight your best points as they relate to the position being discussed. Give a little advance thought to this relationship.

19. Why should we hire you?
Point out how your assets meet what the organization needs. Do not mention any other candidates to make a comparison.

20. Tell me about a suggestion you have made
Have a good one ready. Be sure and use a suggestion that was accepted and was then considered successful. One related to the type of work applied for is a real plus.

21. What irritates you about co-workers?
This is a trap question. Think real hard but fail to come up with anything that irritates you. A short statement that you seem to get along with folks is great.

22. What is your greatest strength?
Numerous answers are good, just stay positive. A few good examples: Your ability to prioritize, Your problem-solving skills, Your ability to work under pressure, Your ability to focus on projects, Your professional expertise, Your leadership skills, Your positive attitude.

23. Tell me about your dream job.
Stay away from a specific job. You cannot win. If you say the job you are contending for is it, you strain credibility. If you say another job is it, you plant the suspicion that you will be dissatisfied with this position if hired. The best is to stay generic and say something like: A job where I love the work, like the people, can contribute and can’t wait to get to work.

24. Why do you think you would do well at this job?
Give several reasons and include skills, experience and interest.

25. What are you looking for in a job?
See answer #23.

26. What kind of person would you refuse to work with?
Do not be trivial. It would take disloyalty to the organization, violence or lawbreaking to get you to object. Minor objections will label you as a whiner.

27. What is more important to you: the money or the work?
Money is always important, but the work is the most important. There is no better answer.

28. What would your previous supervisor say your strongest point is?
There are numerous good possibilities: Loyalty, Energy, Positive attitude, Leadership, Team player, Expertise, Initiative, Patience, Hard work, Creativity, Problem solver

29. Tell me about a problem you had with a supervisor
Biggest trap of all. This is a test to see if you will speak ill of your boss. If you fall for it and tell about a problem with a former boss, you may well blow the interview right there. Stay positive and develop a poor memory about any trouble with a supervisor.

30. What has disappointed you about a job?
Don’t get trivial or negative. Safe areas are few but can include: not enough of a challenge; you were laid off in a reduction; the company did not win a contract which would have given you more responsibility.

31. Tell me about your ability to work under pressure.
You may say that you thrive under certain types of pressure. Give an example that relates to the type of position applied for.

32. Do your skills match this job or another job more closely?
Probably this one. Do not give fuel to the suspicion that you may want another job more than this one.

33. What motivates you to do your best on the job?
This is a personal trait that only you can say, but good examples are: Challenge, Achievement, Recognition

34. Are you willing to work overtime? Nights? Weekends?
This is up to you. Be totally honest.

35. How would you know you were successful on this job?
Several ways are good measures: You set high standards for yourself and meet them. Your outcomes are a success. Your boss tells you that you are successful.

36. Would you be willing to relocate if required?
You should be clear on this with your family prior to the interview if you think there is a chance it may come up. Do not say yes just to get the job if the real answer is no. This can create a lot of problems later on in your career. Be honest at this point and save yourself future grief.

37. Are you willing to put the interests of the organization ahead of your own?
This is a straight loyalty and dedication question. Do not worry about the deep ethical and philosophical implications. Just say yes.

38. Describe your management style.
Try to avoid labels. Some of the more common labels, like progressive, salesman or consensus, can have several meanings or descriptions depending on which management expert you listen to. The situational style is safe, because it says you will manage according to the situation, instead of one size fits all.

39. What have you learned from mistakes on the job?
Here you have to come up with something or you strain credibility. Make it a small, well-intentioned mistake with a positive lesson learned. An example would be working too far ahead of colleagues on a project and thus throwing coordination off.

40. Do you have any blind spots?
Trick question. If you know about blind spots, they are no longer blind spots. Do not reveal any personal areas of concern here. Let them do their own discovery on your bad points. Do not hand it to them.

41. If you were hiring a person for this job, what would you look for?
Be careful to mention traits that are needed and that you have.

42. Do you think you are overqualified for this position?
Regardless of your qualifications, state that you are very well qualified for the position.

43. How do you propose to compensate for your lack of experience?
First, if you have experience that the interviewer does not know about, bring that up. Then point out (if true) that you are a hard-working quick learner.

44. What qualities do you look for in a boss?
Be generic and positive. Safe qualities are knowledgeable, a sense of humor, fair, loyal to subordinates and holder of high standards. All bosses think they have these traits.

45. Tell me about a time when you helped resolve a dispute between others.
Pick a specific incident. Concentrate on your problem solving technique and not the dispute you settled.

46. What position do you prefer on a team working on a project?
Be honest. If you are comfortable in different roles, point that out.

47. Describe your work ethic.
Emphasize benefits to the organization. Things like, determination to get the job done and work hard but enjoy your work are good.

48. What has been your biggest professional disappointment?
Be sure that you refer to something that was beyond your control. Show acceptance and no negative feelings.

49. Tell me about the most fun you have had on the job.
Talk about having fun by accomplishing something for the organization.

50. Do you have any questions for me?
Always have some questions prepared. Questions that show how you will be an asset to the organization are good, for example: How soon will I be able to be productive? What type of projects will I be able to assist on?

Sanity and Smoke Testing

Smoke Testing:
1. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested, without getting in too deep.

2. A smoke test is scripted, either using a written set of tests or an automated test.

3. A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.

4. Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification).

5. Smoke testing is a normal health check on a build of an application before taking it into in-depth testing.

Sanity Testing:
1. A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.

2. A sanity test is usually unscripted.

3. A sanity test is used to determine that a small section of the application is still working after a minor change.

4. Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.

5. Sanity testing verifies whether the requirements are met or not, checking all features breadth-first.

Testing Process

Test Process Management (TPM) is concerned with controlling all the testing activities and the automated support tools used in a project, within a dedicated management environment. TPM is based upon a professionally recognised industry standard, the ‘V Model’, which supports each stage of the system development life cycle. The V-Model demonstrates the complexity of relationships between each stage of the development life cycle and acknowledges that for every stage within the development life cycle there is an associated stage of testing.

It shows that testing does not have to wait until the ‘code’ has been delivered, which is all you need to have to begin test execution. Testing can start early with analysing the requirements and creating test criteria for ‘what’ you need to test.

Quality control points exist for every stage within the life cycle. Once the test preparation has been completed, the quality control point would normally take the form of a review providing sign-off for that stage. The system should only be ready to go live once ‘all high-level requirements have been met’, or, put another way, ‘when we have successfully tested the exit criteria’.

Faults found earliest in this process are the least costly to correct, generally under 20% of the cost of correcting the same error post-implementation. There is therefore a significant financial benefit in monitoring and managing testing so that corrections are identified and performed at the least costly opportunity.

The objective is to ensure every element of the system is validated at the earliest possible stage, to the quality criteria set out by the business managers, providing a comprehensible and manageable audit trail of the system’s actual capabilities.

The benefits of this level of control and management over quality assurance are compounded when changes are introduced. With a manual system, checking the impact of even minor changes is complex, time consuming and prone to human error. With T-Plan this is a simple ‘What if?’ function of reporting impacts of the proposed changes at both technical and business levels. This allows the risk of potential error/fault correction to be identified, together with potential for ‘knock-on’ or efficiency costs to the business.

Skill Set for Test Engineer

1. Know Programming. Might as well start out with the most controversial one. There’s a popular myth that testing can be staffed with people who have little or no programming knowledge. It doesn’t work, even though it is an unfortunately common approach. There are two main reasons why it doesn’t work.

(A) They’re testing software. Without knowing programming, they can’t have any real insights into the kinds of bugs that come into software and the likeliest place to find them. There’s never enough time to test “completely”, so all software testing is a compromise between available resources and thoroughness. The tester must optimize scarce resources and that means focusing on where the bugs are likely to be. If you don’t know programming, you’re unlikely to have useful intuition about where to look.

(B) All but the simplest (and therefore, ineffectual) testing methods are tool- and technology-intensive. The tools, both as testing products and as mental disciplines, all presume programming knowledge. Without programmer training, most test techniques (and the tools based on those techniques) are unavailable. The tester who doesn’t know programming will always be restricted to the use of ad-hoc techniques and the most simplistic tools.

Does this mean that testers must have formal programmer training, or have worked as programmers? Formal training and experience are usually the easiest way to meet the “know programming” requirement, but they are not absolutely essential. I met a superb tester whose only training was as a telephone operator. She was testing a telephony application and doing a great job. But, despite the lack of formal training, she had a deep, valid intuition about programming and had even tried a little of it herself. Sure she’s good: good, hell! She was great. How much better would she have been, and how much earlier would she have achieved her expertise, if she had had the benefits of formal training and working experience? She would have been a lot better a lot earlier.

I like to see formal training in programming such as a university degree in Computer Science or Software Engineering, followed by two to three years of working as a programmer in an industrial setting. A stint on the customer-service hot line is also good training.

I don’t like the idea of taking entry-level programmers and putting them into a test organization because:

(A) Loser Image.
Few universities offer undergraduate training in testing beyond “Be sure to test thoroughly.” Entry-level people expect to get a job as a programmer, and if they’re offered a job in a test group, they’ll often look upon it as a failure on their part: they believe that they didn’t have what it takes to be a programmer in that organization. This unfortunate perception exists even in organizations that value testers highly.

(B) Credibility With Programmers.
Independent testers often have to deal with programmers far more senior than themselves. Unless they’ve been through a co-op program as an undergraduate, all their programming experience is with academic toys: the novice often has no real idea of what programming in a professional, cooperative programming environment is all about. As such, they have no credibility with their programming counterparts, who can slough off their concerns with “Look, kid. You just don’t understand how programming is done here, or anywhere else, for that matter.” This sets the novice tester up for failure.

(C) Just Plain Know-How.
The programmer’s right. The kid doesn’t know how programming is really done. If the novice is a “real” programmer (as contrasted with a “mere tester”), then the senior programmer will often take the time to mentor the junior and set her straight; but for a non-productive “leech” from the test group? Never! It’s easier for the novice tester to learn all that nitty-gritty stuff (such as doing a build, configuration control, procedures, process, etc.) while working as a programmer than to have to learn it, without actually doing it, as an entry-level tester.

2. Know the Application.
That’s the other side of the knowledge coin. The ideal tester has deep insights into how the users will exploit the program’s features and the kinds of cockpit errors that users are likely to make. In some cases, it is virtually impossible, or at least impractical, for a tester to know both the application and programming. For example, to test an income tax package properly, you must know tax laws and accounting practices. Testing a blood analyzer requires knowledge of blood chemistry; testing an aircraft’s flight control system requires control theory and systems engineering, and being a pilot doesn’t hurt; testing a geological application demands geology. If the application has a depth of knowledge in it, then it is easier to train the application specialist into programming than to train the programmer into the application. Here again, paralleling the programmer’s qualification, I’d like to see a university degree in the relevant discipline followed by a few years of working practice before coming into the test group.

3. Intelligence.
Back in the ’60s, there were many studies done to try to predict the ideal qualities for programmers. There was a shortage, and we were dipping into other fields for trainees. The most infamous of these was IBM’s Programmer Aptitude Test (PAT). Strangely enough, despite the fact that IBM later repudiated this test, it continues to be (ab)used as a benchmark for predicting programmer aptitude. What IBM learned with follow-on research is that the single most important quality for programmers is raw intelligence: good programmers are really smart people, and so are good testers.

4. Hyper-Sensitivity to Little Things.
Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not bugs. We know that a given bug can have many different symptoms, ranging from innocuous to catastrophic. We know that the symptoms of a bug are arbitrarily related in severity to the cause. Consequently, there is no such thing as a minor symptom, because a symptom isn’t a bug. It is only after the symptom is fully explained (i.e., fully debugged) that you have the right to say if the bug that caused that symptom is minor or major. Therefore, anything at all out of the ordinary is worth pursuing. The screen flickered this time, but not last time: a bug. The keyboard is a little sticky: another bug. The account balance is off by 0.01 cents: great bug. Good testers notice such little things and use them as an entree to finding a closely related set of inputs that will cause a catastrophic failure and therefore get the programmers’ attention. Luckily, this attribute can be learned through training.

5. Tolerance for Chaos.
People react to chaos and uncertainty in different ways. Some cave in and give up while others try to create order out of chaos. If the tester waits for all issues to be fully resolved before starting test design or test execution, the testing will never get started.

Author:

Saravanan