
Wednesday, September 26, 2007

Manual Testing


CMM (Capability Maturity Model):

It is an industry-standard model for defining and measuring the "maturity" of a software company's development process and for providing direction on what the company can do to improve its software quality. CMM is a software process model proposed by Carnegie Mellon University. In short, it is a collection of practices for achieving a benchmark in product development.

CMM software Maturity Levels:

Level 1: Initial: The software development processes at this level are ad hoc and often chaotic. The project's success depends on heroes and luck. There are no general practices for planning, monitoring, or controlling the process. It is impossible to predict the time and cost to develop the software. The test process is just as ad hoc as the rest of the process.

Level 2: Repeatable: This maturity level is best described as project-level thinking. Basic project management processes are in place to track the cost, schedule, functionality, and quality of the product. Lessons learned from previous similar projects are applied. There is a sense of discipline. Basic software testing practices, such as test plans and test cases, are used.

Level 3: Defined: Organizational, not just project-specific, thinking comes into play at this level. Common management and engineering activities are standardized and documented. These standards are adapted and approved for use on different projects. The rules are not thrown out when things get stressful. Test documents and plans are reviewed and approved before testing begins. The test group is independent from the developers. The test results are used to determine when the software is ready.

Level 4: Managed: At this maturity level, the organization's process is under statistical control. Product quality is specified quantitatively beforehand (for example, the product won't be released until it has fewer than 0.5 defects per 1,000 lines of code) and the software isn't released until that goal is met (see the short defect-density sketch after the level descriptions). Details of the development process and the software quality are collected over the project's development, and adjustments are made to correct deviations and to keep the project on plan.

Level 5: Optimizing: This level is called "optimizing" (not "optimized") because it is continually improving from Level 4. New technologies and processes are attempted, the results are measured, and both incremental and revolutionary changes are instituted to achieve even better quality levels. Just when everyone thinks the best has been obtained, the crank is turned one more time and the next level of improvement is reached.
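The Level 4 description above quotes a quantitative goal (fewer than 0.5 defects per 1,000 lines of code). Here is a minimal sketch of how such a release gate could be computed; the defect count and lines-of-code figures are invented purely for illustration:

```python
# Minimal sketch of a quantitative release gate; figures are made up.
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

open_defects = 12
loc = 40_000
threshold = 0.5  # defects per KLOC, as in the Level 4 example

density = defect_density(open_defects, loc)
print(f"Defect density: {density:.2f} defects/KLOC")
print("Release gate met" if density < threshold else "Release gate NOT met")
```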

A quick glance at CMM

1) Initial: concentrate on the team; the team should be very strong.
2) Repeatable: concentrate on repeatability and use well-defined guidelines.
3) Defined: concentrate on documentation and standardized processes.
4) Managed: concentrate on measures and metrics.
5) Optimizing: concentrate on research and development (continuous improvement) activities.



CMMI (Capability Maturity Model Integration): used when companies produce both IT and non-IT products.

People CMM (P-CMM, People Capability Maturity Model): these companies give more benefits to their people, so people work with greater satisfaction.

Pair testing is an ad-hoc testing technique used when there is a lack of time, documentation, skills, or resources. The test lead/PM will use this technique mainly because of lack of time: a tester is paired with a developer, and the two carry on coding and testing in parallel. (Pair-wise, or all-pairs, testing is a related combinatorial technique for cutting down the number of test combinations, illustrated in the example below.)

Take an example: you have to test software in English, French, Japanese, and German, with the operating systems Vista, Mac PPC, iMac, and Windows XP. Testing everything would take 16 language-OS combinations. But as French, English, and German are Roman-script languages and Japanese is a double-byte language, you can omit some of the combinations. Similarly, iMac and Mac PPC can be grouped in one pair, and Vista and Windows XP in another. Then, depending on the time available, you can select the number of combinations you want to test, e.g.:

iMac - German
Mac PPC - French
iMac - Japanese
Vista - Japanese
Windows XP - German
Vista - English
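A minimal Python sketch of the counting in this example (the factor names mirror the text above, and the reduced set is the hand-picked list just shown):

```python
from itertools import product

# Hypothetical factors from the example above.
languages = ["English", "French", "German", "Japanese"]
operating_systems = ["Vista", "WinXP", "Mac PPC", "iMac"]

# Exhaustive testing: every language on every OS.
all_combinations = list(product(languages, operating_systems))
print(f"Exhaustive combinations: {len(all_combinations)}")  # 16

# Grouping similar values (Roman-script vs double-byte languages,
# Windows vs Mac OS families) lets us drop near-duplicate runs.
reduced_set = [
    ("German", "iMac"),
    ("French", "Mac PPC"),
    ("Japanese", "iMac"),
    ("Japanese", "Vista"),
    ("German", "WinXP"),
    ("English", "Vista"),
]
print(f"Reduced combinations: {len(reduced_set)}")  # 6
```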

What is the difference between use case and test case?

· A use case describes an entire flow of interaction that the user has with the system/application, e.g. a user logging into the system, searching for a flight, booking it, and then logging out is a use case.

· A use case is a document which is written from the BRS (Business Requirements Specification).

· It is prepared by the business analyst and project manager.

· It describes how the system should function from starting to the end.

· It describes how the system should behave when the customer uses it.


· Test cases are written on the basis of use cases.

· The test cases check if the various functionalities that the user uses to interact with the system are working fine or not.

· Test cases are prepared from the SRS (Software Requirements Specification).

· Test cases are written by the tester.

· A test case specifies the functionality or behavior of an application.
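As a rough illustration of the relationship, here is a minimal sketch of a test case derived from the flight-booking use case above; the field names are illustrative, not a formal template:

```python
# One hypothetical test case derived from the "log in, search for a flight,
# book it, log out" use case. Field names are assumptions for illustration.
test_case = {
    "id": "TC_FLIGHT_001",
    "derived_from_use_case": "User logs in, searches for a flight, books it, logs out",
    "precondition": "A registered user account exists",
    "steps": [
        "Log in with valid credentials",
        "Search for a flight between two valid cities",
        "Book the first flight in the result list",
        "Log out",
    ],
    "expected_result": "Booking confirmation is displayed to the user",
}

for i, step in enumerate(test_case["steps"], start=1):
    print(f"Step {i}: {step}")
```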



What sort of testing will be exactly covered under system testing?

The techniques covered during system testing are:

1: Usability testing

2: Functionality testing

3: Performance testing

4: Security testing

What are the criteria/inputs we take for system testing?

Generally system testing is done after completion of unit testing and integration testing.
This testing is done by test engineers in the company, in a user-specified environment, in order to check the performance and functionality of the application.
As it comes at this stage of testing, the Business Design Document (BDD) and the Use Case Document (UCD) are obvious inputs to this type of testing.

What is the difference between a test matrix and test metrics?

Test matrix: the tester writes the test matrix in the test specification document; it keeps track of the testing flow, testing types, test case activities, etc.

Test metrics: these measure, on a scale up to 100%, what level of testing has been achieved by performing a particular kind of testing on the application.
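For instance, a couple of common test metrics can be computed as simple percentages; a minimal sketch with made-up numbers:

```python
# Simple test metrics; all figures are invented for illustration.
total_test_cases = 200   # planned
executed = 180           # actually run
passed = 150

execution_pct = executed / total_test_cases * 100
pass_pct = passed / executed * 100

print(f"Test execution: {execution_pct:.1f}% of planned test cases")
print(f"Pass rate: {pass_pct:.1f}% of executed test cases")
```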

What is QA Life Cycle?

QA Life Cycle:

1. Test initiation

2. Test planning

3. Test design

4. Test execution

5. Defect reporting

6. Documentation

7. Sign-off

Goals of testing

· Finding undiscovered errors with a minimum amount of time and effort.

· Testing ensures that the software works as specified in the customer's requirements.

· Bug Prevention

· Bug Discovery

· Identifying the defects

· Preventing the defects

· To check whether the customer requirements criterion is met.

· To measure the quality of the Product.

What is Negative Testing?

Negative testing is simply testing the application beyond and below its limits. For example, to test a Name field:

1) Enter numbers instead of letters.

2) Enter ASCII/special characters and check the behavior.

3) Enter a mix of numbers and letters and check.

4) The name should have some minimum length; enter a value below that and check.
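A minimal sketch of these negative checks, written against a hypothetical validate_name() function (defined here only so the example runs):

```python
# Hypothetical validator: accepts alphabetic names of a minimum length.
def validate_name(name: str, min_length: int = 3) -> bool:
    return name.isalpha() and len(name) >= min_length

# Negative inputs from the list above: each one should be rejected.
negative_inputs = [
    "12345",   # numbers instead of letters
    "J@ck!",   # ASCII/special characters
    "John99",  # mix of letters and numbers
    "Jo",      # below the minimum length
]

for value in negative_inputs:
    assert not validate_name(value), f"Expected rejection for {value!r}"
print("All negative cases rejected as expected")
```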

What is the difference between the web testing and GUI testing?

GUI testing is a part of both web testing and desktop testing.

In GUI testing, we check the graphical user interface: font size, font colour, links, labels, etc.

Web testing deals with a 3-tier architecture; here we check the performance of the application (volume, load, stress), and we also do compatibility testing, user interface testing, etc.

What are the points to check in web testing?

Specific to web based application:

1. GUI (Font, Control Alignment, Control Size, Spelling, resolution etc.)

2. Performance / response time (a page should be displayed or refreshed within a certain period of time).

3. HTML tags entered in text boxes / text areas.

4. Security testing (e.g. save the URL, clear cookies, paste the URL into a new browser window: it should not be allowed to open and should redirect to the login/default page; also cookie verification) and anonymous access (hacking).

5. Link redirections or dead link verification.
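For point 5, dead-link verification can be scripted; a minimal sketch assuming the third-party requests library is available and the URL list is supplied by the tester:

```python
import requests

# Hypothetical URLs to verify; replace with the pages under test.
urls_to_check = [
    "https://example.com/",
    "https://example.com/contact",
]

for url in urls_to_check:
    try:
        status = requests.get(url, timeout=10).status_code
    except requests.RequestException as exc:
        status = f"error ({exc})"
    print(f"{url} -> {status}")  # 200 = OK, 404 = dead link, etc.
```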

What is the difference between adhoc testing, monkey testing and exploratory testing?

Adhoc testing: informal testing done without documented test cases or a plan; the tester uses whatever idea they have about the application and simply exercises the system to find defects.

Monkey Testing: Monkey Testing refers broadly to any form of automated testing done randomly and without any "typical user" bias. Calling such tools monkeys derives from variations of this popular aphorism:

Six monkeys pounding on six typewriters at random for a million years will recreate all the works of Isaac Asimov.

Monkey testing is used to simulate how your customers will use your software in the real world.
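A minimal sketch of the idea: feed random, unbiased input to the code under test and watch for unexpected crashes (parse_age here is a stand-in defined only for illustration):

```python
import random
import string

def parse_age(text: str) -> int:
    return int(text)  # deliberately naive: no input validation

random.seed(42)  # reproducible "monkey"
for _ in range(1000):
    length = random.randint(0, 10)
    junk = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_age(junk)
    except ValueError:
        pass  # rejection of malformed input is acceptable behaviour
    except Exception as exc:
        print(f"Unexpected crash on input {junk!r}: {exc}")
```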

Exploratory Testing:

How do you test software with no specifications and not much time? The answer is exploratory testing!

Exploratory software testing is a powerful and fun approach to testing. Exploratory testing is especially useful in complex testing situations, when little is known about the product, or as part of preparing a set of scripted tests. The basic rule is this: exploratory testing is called for any time the next test you should perform is not obvious, or when you want to go beyond the obvious.

In exploratory testing, tests are designed and executed at the same time, and they are often not recorded. Exploratory testing emphasizes adaptability and learning.

Explain V model in detail?

The V-Model is a Testing Life Cycle (TLC) model, which is distinct from a Software Development Life Cycle (SDLC) model.
The V-Model is oriented toward testers, whereas SDLC models involve both developers and testers.

SDLC models are

1. Waterfall Model
2. Prototype Model
3. Spiral Model

TLC Models are

1. Fish Model and
2. V-Model

 

The V-Model pairs each SDLC phase (left arm of the V) with an STLC phase (right arm of the V):

SRS -> User Acceptance Testing

Design -> System Testing

HLD -> Integration Testing

LLD -> Unit Testing

Coding (at the bottom of the V)

------------------------------------------------------------------------------------

QTP

· The latest functional testing tool from Mercury Interactive

· It supports web applications

· It uses VBScript (instead of WinRunner's TSL) and supports the .NET platform

· Supports XML pages and HTML pages

· A better tool than WinRunner

· Supports dynamic pages

· It supports C++-like features such as polymorphism and exception handling

How to develop a test plan?

A test plan is prepared depending upon the project.

Phase 1: The project manager and the clients together create a plan for the whole project.
Phase 2: When the project is broken into features/modules, an individual plan is created for each.

However, both consist of the same contents.

A test plan includes the number of resources working, the time taken, dependencies, risk areas, compatibilities (H/W and S/W), types of testing, and how many test cases can be written (in some organizations test cases are also included in the plan, while in others scenarios are added to the plan with a breakdown into usability, user interface, functionality, integration points, etc.), as well as entry criteria and exit criteria.

What is the difference between testing and debugging?

Debugging is a process of line-by-line execution of the code/script (a white-box activity) with the intent of finding errors and fixing the defects. Testing is a process of finding defects from a user perspective (black-box testing).

What is the difference between structural and functional testing?

Structural Testing:

1. This is done by developers.

2. It is also called white-box testing, glass-box testing, etc.

3. It focuses on the structural part, i.e. the coding/programming part.

Functional Testing:

1. This is done by testers.

2. It is also called black-box testing, behavioral testing, etc.

3. It focuses on the functional part of the application: the application is tested against its functionality, without concentrating on the structural part.
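The distinction can be seen on even a toy function; a minimal sketch (the absolute() function is invented for illustration):

```python
def absolute(x: int) -> int:
    if x < 0:
        return -x
    return x

# Structural (white-box) view: choose tests so that both branches
# of the `if` statement are executed.
assert absolute(-5) == 5   # exercises the x < 0 branch
assert absolute(5) == 5    # exercises the fall-through branch

# Functional (black-box) view: check behaviour against the specification
# ("return the magnitude of the input") without looking at the code.
for given, expected in [(-3, 3), (0, 0), (7, 7)]:
    assert absolute(given) == expected
print("All checks passed")
```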

What is a bug?

A bug is an error found during the testing phase of a program/application. Bugs can be classified as:

Logical bugs: the functionality asked for does not complete the task.

GUI-related bugs: relate to the design of the interface, either the application's forms or its reports.

Database-related: data does not get refreshed/updated/deleted/edited correctly.

System-related: the bug appears when the program is not compatible with the operating system.

Software service pack: the latest updates (service packs) may be available and the program may not be compatible with them.

Browser-related: the program may not be compatible with a particular browser or browser version.

 
