PPP ST

Test title: PPP ST
Description: an introductory quiz for the software testing course.
**"Exhaustive testing" is a...**
- Testing type.
- Testing level consisting of full requirements analysis and some checklist creation.
- Test approach in which the test suite comprises all combinations of input values and preconditions.
- Test approach in which the application is tested in all supported environments.

**Which concept in software testing was the earliest one?**
- Agile testing.
- Positive and negative testing.
- Test automation.
- DevTestOps.

**Was software testing closely integrated with software development from the very beginning?**
- Yes, it was a natural state of things.
- No, there is no way to integrate these activities.
- No, because software testing and software development are rarely performed at the same time.
- No, the integration became visible only with the agile approach.

**How can software testing make a software product more attractive to end users?**
- It improves usability and user experience.
- It helps to provide users with the results they want from the product.
- It provides more alternatives (competitive products).
- It forces a user to make more effort to learn the product.

**Why is quality important for enterprise users?**
- It calms.
- It decreases risk.
- It helps to do business.
- It is a form of charity (helps to decrease taxes).

**Are there some life-critical areas where software testing may help to avoid a disaster?**
- No, every life-critical area of human activity works without software.
- Yes, there are a lot (planes, trains, power plants, etc.).
- It is hard to find out.
- There were a lot in the past, but there are none now.

**What soft skills are most needed for a tester?**
- Good sense of humor, positive thinking.
- Responsibility, communication skills.
- Observation and attentiveness to details.
- Aggression, tendency to violent behavior.

**Are presentation skills important for a tester?**
- No.
- Yes, these skills help with a lot of everyday activities.
- Yes, these skills help to "be invisible" (to work without interference).
- No, because no tester has such a skill.
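The "exhaustive testing" definition above (a suite comprising all combinations of input values and preconditions) can be sketched in a few lines, and the sketch also shows why the approach rarely scales. The concrete inputs (browsers, locales, font sizes) are made-up examples, not part of the quiz:

```python
# Sketch of an exhaustive test suite: every combination of every input value.
# The input lists below are hypothetical illustration data.
from itertools import product

browsers = ["Chrome", "Firefox", "Safari"]
locales = ["en", "es", "de", "fr"]
font_sizes = [8, 12, 16, 24, 36]

exhaustive_suite = list(product(browsers, locales, font_sizes))

# Even this toy configuration already needs 3 * 4 * 5 = 60 test runs;
# every additional parameter multiplies the suite size.
print(len(exhaustive_suite))
```

Each new parameter multiplies the number of combinations, which is why real test design relies on techniques such as equivalence partitioning instead of brute-force enumeration.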
**What skills are absolutely required to start a tester's career?**
- Good English, general IT knowledge.
- Programming basics (including web and mobile development), OS administration basics.
- Good abstract and analytical thinking.
- Virtualization and cloud computing basics.

**Quality management is...**
- Coordinated activities to direct and control an organization with regard to quality.
- A set of actions to verify the product against the requirements.
- The operational techniques and activities that are focused on fulfilling quality requirements.
- The process concerned with planning, preparation and evaluation of software products.

**Testing is...**
- The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
- The process concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements.
- Coordinated activities to direct and control an organization with regard to quality.
- The behavior produced/observed when a component or system is tested.

**Quality is...**
- The degree to which a component, system or process meets the specified requirements and/or user/customer needs and expectations.
- Coordinated activities to direct and control an organization with regard to quality.
- The operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements.
- The predicted observable behavior of a component or system executing under specified conditions, based on its specification or another source.

**Quality assurance is...**
- The degree to which a component, system or process meets the specified requirements and/or user/customer needs and expectations.
- Coordinated activities to direct and control an organization with regard to quality.
- The operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements.
- Part of quality management focused on providing confidence that quality requirements will be fulfilled.

**Quality control is...**
- The degree to which a component, system or process meets the specified requirements and/or user/customer needs and expectations.
- Coordinated activities to direct and control an organization with regard to quality.
- The operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements.
- Part of quality management focused on providing confidence that quality requirements will be fulfilled.

**Defect is...**
- An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
- Coordinated activities to direct and control an organization with regard to quality.
- The operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements.
- The behavior produced/observed when a component or system is tested.

**Expected result is...**
- An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
- The predicted observable behavior of a component or system executing under specified conditions, based on its specification or another source.
- The operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements.
- The behavior produced/observed when a component or system is tested.

**Actual result is...**
- Part of quality management focused on providing confidence that quality requirements will be fulfilled.
- The predicted observable behavior of a component or system executing under specified conditions, based on its specification or another source.
- A set of ideas.
- The behavior produced/observed when a component or system is tested.

**Checklist is...**
- A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
- An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
- A set of ideas.
- The behavior produced/observed when a component or system is tested.

**Test case is...**
- A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
- Part of quality management focused on providing confidence that quality requirements will be fulfilled.
- A set of ideas.
- The behavior produced/observed when a component or system is tested.

**Test suite is...**
- A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
- A set of test cases or test procedures to be executed in a specific test cycle.
- A set of ideas.
- The behavior produced/observed when a component or system is tested.

**In the waterfall model...**
- Each phase must be completed fully before the next phase can begin.
- Each separate phase repeats multiple times.
- There is a fixed set of iterations repeating each phase of the entire model.
- There is some product increment at the end of each iteration.

**The V-model...**
- Requires that there is no product increment at the end of each iteration.
- Is a framework to describe the software development lifecycle activities from requirements specification to maintenance.
- Requires restarting the project from the beginning if a quality check fails at any stage.
- Provides a variety of tools to integrate unit testing into each activity.

**In the iterative/incremental model...**
- For each testing attempt a complete system is compiled and linked, so that a consistent system is available including all the latest changes.
- A project is broken into a usually large number of iterations.
- There is a fixed set of iterations repeating each phase of the entire model.
- Any stage may be omitted without any effect on the final product quality.

**The agile model...**
- Provides a variety of tools to integrate unit testing into each activity.
- Requires restarting the project from the beginning if a quality check fails at any stage.
- Is a synonym for the spiral model.
- Is a group of software development methodologies based on iterative incremental development.

**Advantages of the waterfall model:**
- This model avoids the downward flow of defects.
- This model is very rigid and inflexible.
- This model is simple and easy to understand and use.
- The phases of this model are processed and completed one at a time; phases do not overlap.

**The advantages of the V-model are:**
- High amounts of risk and uncertainty.
- Proactive defect tracking, that is, defects are found at an early stage.
- If any changes happen midway, then the test documents along with the requirement documents have to be updated.
- Testing activities like planning and test design happen well before coding.

**Advantages of agile:**
- High amounts of risk and uncertainty.
- Customer satisfaction by rapid, continuous delivery of useful software.
- Close, daily cooperation between business people and developers.
- There is a lack of emphasis on necessary design and documentation.

**The disadvantages of the waterfall model are:**
- This model is simple and easy to understand and use.
- No working software is produced until late in the life cycle.
- This is a poor model for long and ongoing projects.
- This model works well for smaller projects where requirements are clearly defined and very well understood.

**The disadvantages of the V-model are:**
- This model is simple and easy to understand and use.
- Software is developed during the implementation phase, so no early prototypes of the software are produced.
- This model works well for small projects where requirements are easily understood.
- Software is developed during the implementation phase, so no early prototypes of the software are produced.

**The disadvantages of the agile model are:**
- There is a fixed set of iterations repeating each phase of the entire model.
- This model is very rigid and inflexible.
- The project can easily get taken off track if the customer representative is not clear about what final outcome they want.
- There is a lack of emphasis on necessary design and documentation.

**The STLC begins with...**
- Test strategy establishment.
- General planning and requirements analysis.
- Test case creation.
- Defect reporting.

**At the acceptance criteria establishment stage we...**
- Return to planning to find out how we shall achieve all those goals and criteria from the previous step.
- Analyze whether we managed to achieve the goals set during the planning stage.
- Analyze results and write reports.
- Have to establish or adjust metrics and criteria for the test process to start, pause, resume, complete or abort.

**At the test strategy establishment stage we...**
- Define approaches, tools, schedule, roles, responsibilities and so on.
- Analyze whether we managed to achieve the goals set during the planning stage.
- Create, review, adjust, rework (and so on) checklists.
- Look for defects and write defect reports.

**At the test case creation phase we...**
- Define approaches, tools, schedule, roles, responsibilities and so on.
- Analyze whether we managed to achieve the goals set during the planning stage.
- Create, review, adjust, rework (and so on) checklists, test cases, test scenarios and other similar artifacts.
- Look for defects and write defect reports.

**During the test execution and defect reporting stages we...**
- Have to find out what to test, how much work is ahead, and what difficulties we may face.
- Analyze whether we managed to achieve the goals set during the planning stage.
- Establish or adjust metrics and criteria for the test process to start, pause, resume, complete or abort.
- Report defects during test case execution and defect detection.

**During the test results analysis and test results reporting stages we...**
- Analyze whether we managed to achieve the goals set during the planning stage.
- Create, review, adjust, rework (and so on) checklists, test cases, test scenarios and other similar artifacts.
- Look for defects and write defect reports.
- Return to planning to find out how we shall achieve all those goals and criteria from the previous step.

**Functional testing is...**
- Testing conducted to evaluate the compliance of a component or system with functional requirements.
- Testing conducted to evaluate the compliance of a component or system with non-functional requirements.
- Testing that involves the execution of the software of a component or system.
- Testing based on an analysis of the internal structure of the component or system.

**Non-functional testing is...**
- Testing conducted to evaluate the compliance of a component or system with functional requirements.
- Testing a work product without code being executed.
- A subset of all defined test cases that cover the main functionality of a component or system, the most crucial functions.
- Testing process where the system is validated against valid input data and a valid set of actions.

**Black-box testing is...**
- Testing conducted to evaluate the compliance of a component or system with functional requirements.
- Testing, either functional or non-functional, without reference to the internal structure of the component or system.
- Testing of individual hardware or software components.
- Testing a work product without code being executed.

**Gray-box testing is...**
- Testing which is a combination of black-box and white-box testing.
- Tests aimed at showing that a component or system does not work.
- Testing of individual hardware or software components.
- Testing, either functional or non-functional, without reference to the internal structure of the component or system.

**Unit testing is...**
- Testing conducted to evaluate the compliance of a component or system with functional requirements.
- Testing a work product without code being executed.
- Testing of individual hardware or software components.
- Testing, either functional or non-functional, without reference to the internal structure of the component or system.

**Integration testing is...**
- Testing performed by the tester who carries out all the actions on the tested application manually.
- Testing a work product without code being executed.
- Testing performed to expose defects in the interfaces and interactions between integrated components or systems.
- Testing, either functional or non-functional, without reference to the internal structure of the component or system.

**System testing is...**
- Testing performed by the tester who carries out all the actions on the tested application manually.
- Testing a work product without code being executed.
- Testing an integrated system to verify it meets specified requirements.
- Testing, either functional or non-functional, without reference to the internal structure of the component or system.

**Manual testing is...**
- Testing performed by the tester who carries out all the actions on the tested application manually.
- Testing a work product without code being executed.
- Testing performed to expose defects in the interfaces and interactions between integrated components or systems.
- Testing process where the system is validated against valid input data and a valid set of actions.

**Automated testing is...**
- The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
- Testing a work product without code being executed.
- Testing an integrated system to verify that it meets specified requirements.
- Testing process where the system is validated against valid input data and a valid set of actions.

**Smoke test is...**
- A subset of all defined test cases that cover the main functionality of a component or system, the most crucial functions.
- Testing approach based on an analysis of the internal structure of the component or system.
- Testing approach to analyze a work product without code being executed.
- Testing strategy, either functional or non-functional, without reference to the internal structure of the component or system.

**Critical path test is...**
- Test cases that cover the functionality used most of the time by the majority of users.
- Testing approach based on an analysis of the internal structure of the component or system.
- Testing approach to analyze a work product without code being executed.
- Testing strategy, either functional or non-functional, without reference to the internal structure of the component or system.

**Extended test is...**
- Test cases that cover the "nice-to-have" functionality.
- Testing approach based on an analysis of the internal structure of the component or system.
- Testing approach to analyze a work product without code being executed.
- Testing strategy, either functional or non-functional, without reference to the internal structure of the component or system.

**Positive testing is...**
- Tests aimed at showing that a component or system does not work.
- Testing approach based on an analysis of the internal structure of the component or system.
- Testing approach to analyze a work product without code being executed.
- Testing process where the system is validated against valid input data and a valid set of actions.

**Negative testing is...**
- Tests aimed at showing that a component or system does not work.
- Testing approach based on an analysis of the internal structure of the component or system.
- Testing approach to analyze a work product without code being executed.
- Testing process where the system is validated against valid input data and a valid set of actions.

**"Test planning" is...**
- Testing type.
- Activity of establishing or updating a test plan.
- Test approach in which the test suite comprises all combinations of input values and preconditions.
- Test approach in which the application is tested in all supported environments.

**Planning tasks and goals include...**
- Estimation of work scope and complexity.
- Distribution of duties and responsibilities.
- A set of actions to verify the product against the requirements.
- DevTestOps tasks.

**Planning tasks and goals include...**
- Some imperfection or deficiency in a work product where it does not meet its requirements or specifications.
- Risk mitigation and countermeasures definition.
- Designation of resources and ways to acquire them.
- The behavior produced/observed when a component or system is tested.

**Is it true that a test plan is a map to the defined destination, a starting point for reporting, and so on and so forth?**
- Yes, but in the agile approach only.
- Yes, but in the waterfall approach only.
- Yes.
- No.

**A test plan helps to...**
- Get better results with less effort.
- Get better understanding between people.
- Find a better architectural approach for the system under development.
- Find the person to blame for some mistake.

**A test plan includes the following sections:**
- Testing types.
- Testing levels.
- Test strategy and approach.
- Risk evaluation.

**A test plan includes the following sections:**
- Defect reports summary.
- Requirements to be tested.
- Schedule.
- Steps to reproduce some issue.

**The list of miscellaneous testing criteria, such as acceptance criteria, entry criteria, suspension criteria, etc., is included in the following test plan section:**
- Criteria.
- Resources.
- Roles and responsibilities.
- Requirements not to be tested.

**The 'Project scope and main goals' test plan section...**
- Emphasizes the most important tasks.
- Contains the list of roles and responsibilities.
- Represents the main milestones from the 'Schedule' section.
- Allows reproducing critical defects with less effort.

**The 'Risk evaluation' test plan section contains...**
- Specific financial data to mitigate possible force majeure events.
- Risk probability evaluation and some options for resolving the situation.
- The list of test documentation with details on who should prepare it, when, how, etc.
- The list of numerical characteristics of quality indicators, methods for their evaluation, formulas, etc.

**A "requirement" is...**
- Something needed by a user to solve a problem or achieve an objective, which must be met or possessed by a system.
- Testing type.
- Test approach in which the test suite comprises all combinations of input values and preconditions.
- Activity of establishing or updating a test plan.

**How many project problems originate from bad documentation?**
- None or just a few.
- 3-5%.
- 60-80%.
- 100%.

**Why does detecting a problem in requirements help to eliminate the issue with minimal cost?**
- Because working with requirements is one of the earliest project activities, so there is almost nothing to re-work.
- Because the customer will never know that the problem existed.
- Because developers can easily re-write some code at any moment they get new requirements.
- Because this is a commonly accepted practice; there is no other particular reason.

**One of the most common ways of identifying requirements (with two main roles involved: the source of information and the receiver of information) is...**
- Prototyping.
- Work with focus groups.
- Interview.
- Documentation analysis.

**The way of identifying requirements that involves several people to share information, discuss issues and be on the same page immediately is...**
- Prototyping.
- Interview.
- Work with focus groups.
- Meetings and brainstorming.

**The way of identifying requirements that allows gathering and aggregating information from thousands of respondents is...**
- Prototyping.
- Interview.
- Questioning.
- Meetings and brainstorming.

**The way of identifying requirements that allows one to get hidden information and see things one would never hear about is...**
- Prototyping.
- Interview.
- Questioning.
- Observation.

**The way of identifying requirements that uses "something real" (not imaginary) both as the source of new information and as a real material thing to discuss is...**
- Prototyping.
- Interview.
- Questioning.
- Observation.

**What is the difference between modelling and prototyping in the context of identifying requirements?**
- Modelling works with "real things", while prototyping works with mathematical abstractions.
- Prototyping works with "real things", while modelling works with mathematical abstractions.
- Prototyping is always performed by developers, while modelling may be performed by testers as well.
- Prototyping is the second stage of investigation; it comes right after modelling.

**Which requirements level expresses the purpose for which the product is developed?**
- User requirements.
- Business requirements.
- Detailed specification.
- Non-functional requirements.

**Which requirements level describes the system's response to user actions?**
- User requirements.
- Business requirements.
- Detailed specification.
- Non-functional requirements.

**Which requirements type describes the properties of the system that it must possess when implementing its behavior?**
- User requirements.
- Business requirements.
- Detailed specification.
- Non-functional requirements.

**A requirement is considered atomic if it...**
- Cannot be broken down into separate requirements without loss of completeness.
- Provides all the necessary information, with nothing missed or left out.
- Does not contain internal contradictions and/or contradictions with other requirements and documents.
- Is described without the use of jargon, non-obvious abbreviations and vague wording.

**A requirement is considered complete if it...**
- Cannot be broken down into separate requirements without loss of completeness.
- Provides all the necessary information, with nothing missed or left out.
- Does not contain internal contradictions and/or contradictions with other requirements and documents.
- Is described without the use of jargon, non-obvious abbreviations and vague wording.

**A requirement is considered consistent if it...**
- Cannot be broken down into separate requirements without loss of completeness.
- Provides all the necessary information, with nothing missed or left out.
- Does not contain internal contradictions and/or contradictions with other requirements and documents.
- Is described without the use of jargon, non-obvious abbreviations and vague wording.

**A requirement is considered unambiguous if it...**
- Cannot be broken down into separate requirements without loss of completeness.
- Provides all the necessary information, with nothing missed or left out.
- Does not contain internal contradictions and/or contradictions with other requirements and documents.
- Is described without the use of jargon, non-obvious abbreviations and vague wording.

**A requirement is considered feasible if it...**
- Cannot be broken down into separate requirements without loss of completeness.
- Is technologically achievable and may be implemented within the project budget and schedule.
- Does not contain internal contradictions and/or contradictions with other requirements and documents.
- Is described without the use of jargon, non-obvious abbreviations and vague wording.

**What is the difference between vertical traceability and horizontal traceability?**
- Vertical traceability shows the connection between levels of requirements; horizontal traceability shows the connection between a requirement and a test plan paragraph, test cases, architectural solutions, etc.
- Horizontal traceability shows the connection between levels of requirements; vertical traceability shows the connection between a requirement and a test plan paragraph, test cases, architectural solutions, etc.
- Vertical traceability is applicable to business requirements, while horizontal traceability is applicable to user requirements.
- Vertical traceability is useful at the beginning of the project, while horizontal traceability works well at any project stage.

**Does traceability somehow help to increase modifiability?**
- No.
- Yes, these are synonyms.
- Traceability helps to maintain interconnections between artifacts.
- Traceability allows us to neglect requirements prioritization.

**If a requirement is correct and verifiable, it means that...**
- It is non-atomic and/or untraceable, therefore changes lead to inconsistency.
- It is possible to create an objective test case (or test cases) that clearly shows that the requirement is implemented correctly.
- No tools and/or techniques of requirements management were used during the development of the requirements.
- The requirement was added for a "why not" reason, although there is no real need for it.

**Which peer review level involves the most difficult and sophisticated tasks?**
- Walkthrough.
- Technical review.
- Formal inspection.
- Expert evaluation.

**Which peer review level may be considered a simple everyday activity?**
- Walkthrough.
- Technical review.
- Formal inspection.
- Expert evaluation.

**Why is asking questions one of the most unified yet effective techniques?**
- It allows switching from primary tasks.
- It is simple and provides a lot of information.
- It is rare and therefore the most valuable.
- It allows communicating with millions of end users at once.

**How does visualization help with requirements testing?**
- It allows seeing the overall big picture.
- It helps to drill into details.
- It is rare and therefore the most valuable.
- It is always performed by a group of experts.

**Which requirements testing technique usually requires special tools?**
- Asking questions.
- Technical review.
- Modelling.
- Discussion.

**Checklist is...**
- A set of preconditions, inputs, actions, expected results and postconditions, developed based on test conditions.
- A set of ideas.
- An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
- The behavior produced/observed when a component or system is dead.

**What statements about checklists are true?**
- Writing and re-writing checklists is relatively fast and simple.
- A checklist may become a source for one or several test cases.
- Checklists have to follow a specific complex template.
- Checklists are never created before test cases.

**Is it recommended to write checklists down or to keep them in your memory?**
- To keep them in memory, as it is much faster.
- To keep them in memory for security reasons.
- To write them down in order not to forget something.
- To write them down to share with colleagues.

**When creating a checklist, is it OK to start with complex non-trivial checks?**
- It is OK to keep such checks once they come to your mind, but we still need some simple checks to start with.
- No, we should always start with simple and trivial checks, so the complex ones should be abandoned.
- It depends on the subject matter.
- There are no specific recommendations on this subject.

**An equivalence class consists of...**
- A set of data that is treated the same by a module or that should produce the same result.
- A set of equal input and output parameters.
- A subset of a more generalized unified set of data.
- Exactly one input and the corresponding output value.

**The boundaries of an equivalence class are...**
- All valid inputs and corresponding outputs for the equivalence class.
- Values that separate one equivalence class from another.
- All invalid inputs and corresponding error messages for the equivalence class.
- Both valid and invalid inputs and corresponding outputs or error messages for the equivalence class.

**The functional approach to generating test ideas is based on...**
- Exploratory testing and the critical path test.
- Examining all valid inputs and corresponding outputs for the equivalence class.
- Strict standards.
- Analysis of software functions: what the software should do in some situation.

**A test case is...**
- A set of data that is treated the same by a module or that should produce the same result.
- A set of preconditions, inputs, actions, expected results and postconditions, developed based on test conditions.
- An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
- The behavior produced/observed when a component or system is tested.

**Should a test case always serve some purpose?**
- Yes, always.
- No, never.
- Yes, if there are some preconditions and postconditions.
- It depends on the testing type and testing level.

**The "priority" test case property...**
- Does not exist.
- Shows how fast we should execute the test case.
- Shows how important this test case is.
- Is only applicable to test cases inside test scenarios.

**The "related requirement" test case property...**
- Does not exist.
- Facilitates traceability between requirements and test cases.
- Helps with coverage metrics calculation.
- Is only applicable to test cases inside test scenarios.

**The "severity" test case property...**
- Does not exist.
- Shows how fast we should execute the test case.
- Shows how important this test case is.
- Is only applicable to test cases inside test scenarios.

**The "module and submodule" test case property...**
- Shows the application (sub)parts covered by the test case.
- Simplifies understanding of the test case purpose.
- Shows how important this test case is.
- Is only applicable to test cases at the smoke test level.

**The "title" test case property...**
- Shows the application (sub)parts covered by the test case.
- Shows the main idea of the test case.
- Is one of the main data sources when searching for a particular test case.
- Is only applicable to test cases inside test scenarios.

**The "preparations" test case property...**
- Describes actions and conditions to be done or met before the test case itself begins.
- Shows the main idea of the test case.
- Is one of the main data sources when searching for a particular test case.
- Is only applicable to test cases inside test scenarios.

**A good test case should be...**
- Neither too specific nor too general.
- As specific as possible.
- As general as possible.
- Both specific and general at the same time.

**A good test case should be...**
- Neither too specific nor too complex.
- As specific as possible.
- As general as possible.
- Both specific and complex at the same time.

**A good test case should be...**
- Either independent or reasonably linked with other test cases.
- Always independent.
- Always linked with other test cases.
- Both independent and linked with other test cases at the same time.

**A simple test case is one that...**
- Operates with a single object.
- Contains a small number of trivial actions.
- Is always linked with other test cases.
- Is as simple as possible.

**A complex test case is one that...**
- Is as complex as possible.
- Contains a small number of trivial actions.
- Operates with several equal objects and/or contains many non-trivial actions.
- Is both independent and linked with other test cases at the same time.

**The main disadvantage of "extreme simplicity" of a test case is that...**
- Such test cases operate with several equal objects and/or contain many non-trivial actions.
- Such test cases make found defects obvious.
- It is hard to understand such test cases.
- Too simple test cases are nothing more than a step of more complex test cases.

**The main disadvantages of "extreme complexity" of a test case are that such test cases...**
- Take a lot of time to write and maintain.
- Often need huge maintenance with each application change.
- Operate with several equal objects and/or contain many non-trivial actions.
- Make found defects obvious.

**The main advantage of independent test cases is that...**
- It is easy to execute such test cases in any order or set.
- It takes time to write and maintain such test cases.
- Such test cases often need huge maintenance with each application change.
- Such test cases operate with several equal objects and/or contain many non-trivial actions.

**The main disadvantage of linked-together test cases is that...**
- If a previous test case fails, all next-in-chain test cases fail too.
- It takes time to write and maintain such test cases.
- Such test cases often need huge maintenance with each application change.
- Such test cases operate with several equal objects and/or contain many non-trivial actions.

**A test suite is...**
- A set of data that is treated the same by a module or that should produce the same result.
- A set of test cases or test procedures to be executed in a specific test cycle.
- Examining all valid inputs and corresponding outputs for the equivalence class.
- All valid inputs and corresponding outputs for the equivalence class.

**The advantages of free test suites are that...**
it is easy to compose such suites. if previous test case fails, all next-in-chain test cases fail too. it is easy to change execution order of test cases. test cases in free suites need less preparations and steps. the advantages of linked test suites are that. it is easy to compose such suites. if previous test case fails, all next-in-chain test cases fail too. the next test case continues work from the point where previous test case ends. test cases in linked suites need less preparations and steps. Defect is. an imperfection or deficiency in a work product where it does not meet its requirements or specifications. coordinated activities to direct and control an organization with regard to quality. the behavior produced/observed when a component or system is tested. the operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements. expected result is. an imperfection or deficiency in a work product where it does not meet its requirements or specifications. the predicted observable behavior of a component or system executing under specificed conditions, based on its specification or another source. the behavior produced/observed when a component or system is tested. the operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements. expected result is. the behavior produced when a component or system is developed. the predicted observable behavior of a component or system executing under specificed conditions, based on its specification or another source. the behavior produced/observed when a component or system is tested. part of quality management focused on providing confidence that quality requirements will be fulfilled. Defect report is. the behavior produced when a component or system is developed. the predicted observable behavior of a component or system executing under specificed conditions, based on its specification or another source. 
the behavior produced/observed when a component or system is tested. documentation of the occurrence, nature, and status of a defect. What statements are true about good defect report?. it lack significant details. it provides significant details. it takes time to understand. it facilitates quick and easy solutions. What statements are true about baddefect report?. it lack significant details. it provides significant details. it takes time to understand. it forces the developer to re-do testers work. The main goal of a defect report is to. get the defect fixed. influence system requirements. stop developer from making mistakes. enforce more strict quality rules on the project. the following points are the stages of defect lifecycle. assigned. evaluated. declined. accepted. the following points are NOT the stages of defect lifecycle. assigned. approved. declined. accepted. The "summary" field of a defect report should. be as verbose as possible. be as short as possible while providing as much information as possible. never describe the essence of the defect. be easily distinguishable from the other summaries. The "description" field of a defect report should. should be as short as possible while providing as much information as possible. unlike summary, may be long and verbose enough. never describe the essence of the defect. contains detailed defect description. The "steps to reproduce" field of a defect report. contains detailed description of actions to be done to reproduce the defect. contains questions to the customer. is usually left empty. should be as short as possible while providing as much information as possible. Which field of a defect report shows if the defect appear each time we follow steps to reproduce. summary. description. reproducibility. steps to reproduce. typical values of "severity" defect report field are. critical. major. normal. low. typical values of "priority" defect report field are. critical. major. normal. low. 
which field of a defect report shows if there is a way to achieve the desired result without being interrupted by the defect?. description. reproducibility. steps to reproduce. workaround. test result report is usually used as. basis for the planning of the next project iteration. the list of coordinated activities to direct and control an organization with regard to quality. reliable source of project information for stakeholders. the description of operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements. What statements about test result report are true. if needed, the test result report is discussed on a metting. the test result report is kept in secret from the most of project members. there is no schedule for TRR creation it is created per request only. the TRR is created based on some template. What statements about test result report are true. project manager uses test result report to make managerial decisions. stakeholders and customer use test result report as a part of financial reporting. development team leader uses test result report as evidence for internal investigation. testing team leader uses test result report to summarize knowledge. The summary of a TRR is. the basis for the planning of the next project iteration. a very brief and meaningful description of the main results, achievements, problems, etc. the list of coordinated activities to direct and control an organization with regard to quality. the description of operational techniques and activities, part of quality management, that are focused on fulfilling quality requirements. The test team section of a TRR contains. the list of testers involved in the project during the reporting time. a very brief and meaningful description of the main results, achievements, problems, etc. useful ideas for the planning of the next project iteration. 
the list of coordinated activities to direct and control an organization with regard to quality. The testing process description section of a TRR contains. the list of testers involved in the project during the reporting time. a very brief and meaningful description of the main results, achievements, problems, etc. useful ideas for the planning of the next project iteration. the description of what and how the test team has been doing. The new defects statistics section of a TRR contains. the list of testers involved in the project during the reporting time. a very brief and meaningful description of the main results, achievements, problems, etc. the number of defects found/fixed/etc during the reporting time. the description of what and how the test team has been doing. The attachments section of a TRR contains. the list of testers involved in the project during the reporting time. a very brief and meaningful description of the main results, achievements, problems, etc. the number of defects found/fixed/etc during the reporting time. any useful data referenced from the report text or self-meaningful. |





