Outcomes of Requirements Elicitation

The tangible result of requirements elicitation is a set of requirements that can be used by the software development team. However, there are many other intangible outcomes of the process that can affect the overall success of the project. Those outcomes differ depending on whether the elicitation process was conducted well or poorly.

Outcomes of a Good Process

Users of a software system often come to the requirements elicitation process with only a vague idea of what they really need and with little idea of what software technology might offer. A good elicitation process helps them explore and fully understand their requirements, especially in separating what they want from what they need. Their interactions with the software engineer help them understand the constraints that might be imposed on the system by technology, organizational practices, or government regulations. They understand alternatives, both technological and procedural, that might be considered in the proposed system. They come to understand the tradeoffs that might need to be made when two requirements cannot both be satisfied fully. Overall, the users have a good understanding of the implications of the decisions they have made in developing the requirements, which results in fewer surprises when the system is built and delivered. The customer and users share with the software engineer a vision of the problems they are trying to solve and the kinds of solutions that are feasible. They feel a sense of ownership of the products of the elicitation process. They are satisfied with the process, feel informed and educated, believe their risk is minimized, and are committed to the success of the project. Similarly, the software engineers and developers who have participated in the elicitation process gain confidence that they are solving the right problem for the users.
This documentation is maintained by the developers and the programmers who actually write the code. These documents, as a whole, represent information about the code. While writing the code, the programmers also note the objective of the code, who wrote it, where it will be required, what it does and how it does it, what other resources the code uses, and so on. Technical documentation increases the understanding between the various programmers working on the same code. It enhances the reusability of the code, and it makes debugging easy and traceable. There are various automated tools available, and some come with the programming language itself. For example, Java comes with the Javadoc tool to generate technical documentation from the code.
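As an illustration, a Python docstring serves the same purpose as a Javadoc comment: it records the objective of the code, who wrote it, and what it does, and documentation tools can extract it automatically. The function, its author line, and its metadata below are invented for the example.

```python
def compound_interest(principal, rate, years):
    """Compute the compound interest earned on a deposit.

    Objective : return the interest (not the final balance) after
                `years` years at annual rate `rate`.
    Author    : (placeholder -- illustrative only)
    Used by   : hypothetical billing and reporting modules
    Resources : standard arithmetic only, no external libraries
    """
    return principal * (1 + rate) ** years - principal

# Documentation tools (pydoc, Sphinx) read this metadata from __doc__.
print(compound_interest(1000, 0.05, 2))  # 102.5 after two years at 5%
```
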
There are some challenges faced by the development team while implementing the software. Some of them are mentioned below:
Code-reuse - Programming interfaces of present-day languages are very sophisticated and are equipped with huge function libraries. Still, to bring down the cost of the end product, organization management prefers to re-use code that was created earlier for some other software. Programmers face major issues in checking compatibility and in deciding how much code to re-use.
Version Management - Every time new software is issued to the customer, developers have to maintain version- and configuration-related documentation. This documentation needs to be highly accurate and available on time.
Target-Host - The software program being developed in the organization needs to be designed for host machines at the customer's end. But at times, it is impossible to design software that works on the target machines.
- Validation ensures that the product under development meets the user requirements.
- Validation answers the question: "Are we developing the product that does everything the user needs from this software?"
- Validation emphasizes the user requirements.
- Verification ensures that the product being developed conforms to the design specifications.
- Verification answers the question: "Are we developing this product by strictly following all design specifications?"
- Verification concentrates on the design and system specifications.
The targets of testing are:
Errors - These are actual coding mistakes made by developers. A difference between the software's output and the desired output is also considered an error.
Fault - A fault occurs when an error exists. A fault, also known as a bug, is the result of an error and can cause the system to fail.
Failure - A failure is the inability of the system to perform the desired task. A failure occurs when a fault exists in the system.
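The chain from error to fault to failure can be seen in a small invented example: a coding mistake (the error) leaves a bug in the code (the fault), and running the faulty code on an input that exposes it produces wrong output (the failure).

```python
def average(numbers):
    # Error: the developer typed a hard-coded 2 instead of len(numbers).
    # That mistake leaves a fault (bug) in the code.
    return sum(numbers) / 2

# The fault becomes a failure when the code runs on input that exposes it:
# the actual output differs from the desired output.
actual = average([2, 4, 6])
desired = 4.0
print(actual)            # 6.0 -- the system fails to perform the desired task
print(actual == desired) # False
```

Note that the fault is always present in the code, but a failure is only observed when a test input actually executes the faulty statement.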
Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests, and reports the results to the manager.
Manual testing is time- and resource-consuming. The tester needs to confirm whether or not the right test cases are used. A major portion of testing involves manual testing.
Automated - This testing is a testing procedure done with the aid of automated testing tools. The limitations of manual testing can be overcome using automated test tools.
For example, a test may need to check whether a webpage can be opened in Internet Explorer. This can easily be done with manual testing. But checking whether the web server can take the load of 1 million users is quite impossible to test manually. There are software and hardware tools that help the tester in conducting load testing, stress testing, and regression testing.
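The idea of an automated load test can be sketched without a real web server. The handler below is a hypothetical stand-in for a server endpoint; the harness fires thousands of concurrent "requests" and checks that all of them succeed, something no human tester could do by hand.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical request handler standing in for a web-server endpoint.
def handle_request(user_id):
    return f"page for user {user_id}"

def load_test(n_users):
    """Simulated load test: issue n_users concurrent requests and
    verify every response looks correct."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(handle_request, range(n_users)))
    return all(r.startswith("page for user") for r in results)

print(load_test(10_000))  # True if all simulated requests succeeded
```

Real load-testing tools work on the same principle, but generate traffic against an actual deployed system and also measure response times under load.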
In requirements engineering, requirements elicitation is the practice of collecting the requirements of
a system from users, customers, and other stakeholders. The practice is also sometimes referred to as
requirements gathering. The term elicitation is used in books and research to highlight the fact that good requirements cannot just be
collected from the customer, as would be indicated by the name requirements gathering. Requirements
elicitation is non-trivial because you can never be sure you get all requirements from the user and customer by just asking them what the system should do. Requirements elicitation practices include:
1. Interviews
2. User observation
Before requirements can be analyzed, modeled, or specified they must be gathered through an elicitation
process. Requirements elicitation is a part of the requirements engineering process, usually followed by
analysis and specification of the requirements. Commonly used elicitation processes are stakeholder meetings or interviews. For example, an important first meeting could be among software engineers and customers where they discuss their perspectives on the requirements.
3 Code-Based Techniques
a. Control Flow-Based Criteria
Control flow-based coverage criteria are aimed at covering all the statements, blocks of statements, or specified combinations of statements in a program. The strongest of the control flow-based criteria is path testing, which aims to execute all entry-to-exit control flow paths in a program's control flow graph. Since exhaustive path testing is generally not feasible because of loops, other less stringent criteria focus on coverage of paths that limit loop iterations, such as statement coverage, branch coverage, and condition/decision testing. The adequacy of such tests is measured in percentages; for example, when all branches have been executed at least once by the tests, 100% branch coverage has been achieved.
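A minimal sketch of branch coverage, using an invented function with a single decision: two test cases are enough to execute both branches at least once, which is 100% branch coverage for this function.

```python
def classify(n):
    # One decision, two branches: the true branch and the false branch.
    if n >= 0:
        return "non-negative"
    else:
        return "negative"

# One test per branch -> both branches executed -> 100% branch coverage.
branch_tests = [(5, "non-negative"), (-3, "negative")]
for arg, expected in branch_tests:
    assert classify(arg) == expected
print("100% branch coverage reached with", len(branch_tests), "tests")
```

A single test (say, `classify(5)`) would give 100% statement coverage of the true branch only, i.e. 50% branch coverage; coverage tools report exactly this kind of percentage.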
b. Data Flow-Based Criteria
In data flow-based testing, the control flow graph is annotated with information about how the program variables are defined, used, and killed (undefined). The strongest criterion, all definition-use paths, requires that, for each variable, every control flow path segment from a definition of that variable to a use of that definition is executed. In order to reduce the number of paths required, weaker strategies such as all-definitions and all-uses are employed.
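The definition-use idea can be made concrete with an invented function: the variable `result` has two definitions and one use, giving two definition-use paths, and two tests are needed to execute both.

```python
def absolute(x):
    if x < 0:
        result = -x      # definition d1 of `result`
    else:
        result = x       # definition d2 of `result`
    return result        # use u1 of `result`

# Definition-use paths for `result`: d1 -> u1 (taken when x < 0)
# and d2 -> u1 (taken when x >= 0). One test per du-path:
du_tests = [(-7, 7), (4, 4)]
for arg, expected in du_tests:
    assert absolute(arg) == expected
print("all definition-use paths for `result` executed")
```

Note that this coincides with branch coverage here only because the function is tiny; in general, all definition-use paths is a strictly stronger criterion.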
c. Reference Models for Code-Based Testing
Although not a technique in itself, the control structure of a program can be graphically represented using a flow graph to visualize code-based testing techniques. A flow graph is a directed graph, the nodes and arcs of which correspond to program elements (see Graphs and Trees in the Mathematical Foundations KA). For instance, nodes may represent statements or uninterrupted sequences of statements, and arcs may represent the transfer of control between nodes.
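A flow graph can be sketched directly as an adjacency list. The graph below is invented for a four-statement program with one branch; enumerating its entry-to-exit paths is exactly what path testing asks for (feasible here because there are no loops).

```python
# Flow graph for the invented program:
#   node 1: if x > 0    node 2: y = 1    node 3: y = -1    node 4: return y
flow_graph = {
    1: [2, 3],   # branch node: true edge to node 2, false edge to node 3
    2: [4],
    3: [4],
    4: [],       # exit node
}

def entry_to_exit_paths(graph, node, exit_node, path=()):
    """Enumerate all entry-to-exit control flow paths (loop-free graph)."""
    path = path + (node,)
    if node == exit_node:
        return [path]
    paths = []
    for succ in graph[node]:
        paths.extend(entry_to_exit_paths(graph, succ, exit_node, path))
    return paths

print(entry_to_exit_paths(flow_graph, 1, 4))
# [(1, 2, 4), (1, 3, 4)]
```

With loops in the graph the number of such paths becomes unbounded, which is why exhaustive path testing is generally infeasible.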
Resource : http://swebokwiki.org/Chapter_4:_Software_Testing
2. Input Domain-Based Techniques
a. Equivalence Partitioning
Equivalence partitioning involves partitioning the input domain into a collection of subsets (or equivalence classes) based on a specified criterion or relation. This criterion or relation may be different computational results, a relation based on control flow or data flow, or a distinction made between valid inputs that are accepted and processed by the system and invalid inputs, such as out-of-range values, that are not accepted and should generate an error message or initiate error processing. A representative set of tests (sometimes only one) is usually taken from each equivalence class.
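As a sketch, assume a hypothetical specification that accepts ages from 0 to 120. The input domain splits into three equivalence classes, and one representative test per class suffices.

```python
# Hypothetical spec: valid ages are 0..120; anything else is invalid.
def accept_age(age):
    return 0 <= age <= 120

# Three equivalence classes, one representative input from each:
#   below the range (invalid), inside the range (valid), above the range (invalid)
partition_tests = [(-5, False), (35, True), (200, False)]
for arg, expected in partition_tests:
    assert accept_age(arg) == expected
print("one representative tested per equivalence class")
```

The assumption behind the technique is that all inputs in a class are processed the same way, so any one member stands in for the whole class.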
b. Pairwise Testing
Test cases are derived by combining interesting values for every pair of a set of input variables instead of considering all possible combinations. Pairwise testing belongs to combinatorial testing, which in general also includes higher-level combinations than pairs: these techniques are referred to as t-wise, whereby every possible combination of t input variables is considered.
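The saving pairwise testing buys can be shown with three invented input variables of two values each: 4 tests instead of all 2×2×2 = 8 combinations, while every value pair of every variable pair still appears in some test. The checker below verifies that property.

```python
from itertools import combinations, product

# Three input variables with two interesting values each (invented).
params = {"browser": ["Firefox", "Chrome"],
          "os": ["Linux", "Windows"],
          "lang": ["en", "de"]}

# A candidate suite of 4 tests instead of all 8 combinations.
suite = [
    {"browser": "Firefox", "os": "Linux",   "lang": "en"},
    {"browser": "Firefox", "os": "Windows", "lang": "de"},
    {"browser": "Chrome",  "os": "Linux",   "lang": "de"},
    {"browser": "Chrome",  "os": "Windows", "lang": "en"},
]

def covers_all_pairs(params, suite):
    """Check that every value pair of every pair of variables
    appears together in at least one test of the suite."""
    for (v1, vals1), (v2, vals2) in combinations(params.items(), 2):
        for pair in product(vals1, vals2):
            if not any((t[v1], t[v2]) == pair for t in suite):
                return False
    return True

print(covers_all_pairs(params, suite))  # True: 4 tests cover all pairs
```

Dedicated tools construct such minimal suites automatically; this sketch only verifies pair coverage, which is the defining property of the technique.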
c. Boundary-Value Analysis
Test cases are chosen on or near the boundaries of the input domain of variables, with the underlying rationale that many faults tend to concentrate near the extreme values of inputs. An extension of this technique is robustness testing, wherein test cases are also chosen outside the input domain of variables to test program robustness in processing unexpected or erroneous inputs.
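For the same hypothetical age range 0..120, boundary-value analysis picks inputs on and just around each boundary; the two out-of-range values are the robustness-testing extension mentioned above.

```python
# Hypothetical input domain: integer ages in [LOW, HIGH].
LOW, HIGH = 0, 120

def accept_age(age):
    return LOW <= age <= HIGH

# Boundary values: just below, on, and just above each boundary.
boundary_cases = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
results = [accept_age(a) for a in boundary_cases]
print(results)  # [False, True, True, True, True, False]
```

A typical off-by-one fault, such as writing `LOW < age` instead of `LOW <= age`, would be caught by the `LOW` test case while every interior value still passes.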
d. Random Testing
Tests are generated purely at random (not to be confused with statistical testing from the operational profile, as described in Operational Profile in section 3.5). This form of testing falls under the heading of input domain testing, since the input domain must be known in order to pick random points within it. Random testing provides a relatively simple approach to test automation; recently, enhanced forms of random testing have been proposed in which the random input sampling is directed by other input selection criteria. Fuzz testing, or fuzzing, is a special form of random testing aimed at breaking the software; it is most often used for security testing.
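A minimal random-testing sketch: sample points from a known input domain and check a property that must hold for every input (the test oracle). The function under test is invented; the fixed seed only makes the run reproducible.

```python
import random

def clamp(x, low, high):
    """Clamp x into the interval [low, high]."""
    return max(low, min(x, high))

# Random testing: draw inputs from the known input domain and check
# a property that must hold for every input (a simple test oracle).
random.seed(42)  # fixed seed so the run is reproducible
for _ in range(1000):
    x = random.uniform(-1e6, 1e6)
    result = clamp(x, -100.0, 100.0)
    assert -100.0 <= result <= 100.0
print("1000 random tests passed")
```

Fuzzing pushes the same idea further by deliberately sampling malformed or hostile inputs to try to crash the software rather than to check ordinary functional properties.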