Site blog

by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:54 PM

White-box testing is a verification technique software engineers can use to examine if their code works as expected. White-box testing is testing that takes into account the internal mechanism of a system or component (IEEE, 1990). White-box testing is also known as structural testing, clear box testing, and glass box testing (Beizer, 1995). The connotations of “clear box” and “glass box” appropriately indicate that you have full visibility of the internal workings of the software product, specifically, the logic and the structure of the code.

Using the white-box testing techniques outlined in this chapter, a software engineer can design test cases that (1) exercise independent paths within a module or unit; (2) exercise logical decisions on both their true and false sides; (3) execute loops at their boundaries and within their operational bounds; and (4) exercise internal data structures to ensure their validity (Pressman, 2001).

There are six basic types of testing: unit, integration, function/system, acceptance, regression, and beta. White-box testing is used for three of these six types:

  • Unit testing, which is testing of individual hardware or software units or groups of related units (IEEE, 1990). A unit is a software component that cannot be subdivided into other components (IEEE, 1990). Software engineers write white-box test cases to examine whether the unit is coded correctly. Unit testing is important for ensuring the code is solid before it is integrated with other code. Once the code is integrated into the code base, the cause of an observed failure is more difficult to find. Also, since the software engineer writes and runs unit tests him or herself, companies often do not track the unit test failures that are observed, making these types of defects the most “private” to the software engineer. We all prefer to find our own mistakes and to have the opportunity to fix them without others knowing. Approximately 65% of all bugs can be caught in unit testing (Beizer, 1990). A minimal unit-test sketch appears after this list.
  • Integration testing, which is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them (IEEE, 1990). Test cases are written which explicitly examine the interfaces between the various units. These test cases can be black box test cases, whereby the tester understands that a test case requires multiple program units to interact. Alternatively, white-box test cases are written which explicitly exercise the interfaces that are known to the tester.
  • Regression testing, which is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (IEEE, 1990). As with integration testing, regression testing can be done via black-box test cases, white-box test cases, or a combination of the two. White-box unit and integration test cases can be saved and rerun as part of regression testing.
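
As a concrete illustration, here is a minimal sketch of a white-box unit test written with Python's unittest module. The function under test, classify_triangle, is an invented example rather than something from the sources above; the test cases are chosen with knowledge of its internal branches.

```python
import unittest

def classify_triangle(a, b, c):
    """Hypothetical unit under test: classify a triangle by side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    # White-box tests: each case targets a specific branch of the code.
    def test_equilateral_branch(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_isosceles_branch(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")

    def test_scalene_branch(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_invalid_input_branch(self):
        with self.assertRaises(ValueError):
            classify_triangle(0, 1, 1)

if __name__ == "__main__":
    unittest.main()
```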

White Box Testing Techniques:

  • Statement Coverage - This technique aims to exercise all programming statements with a minimal number of tests.
  • Branch Coverage - This technique runs a series of tests to ensure that all branches are tested at least once.
  • Path Coverage - This technique tests all possible paths, which means that every statement and branch is covered. A short sketch contrasting statement and branch coverage follows this list.
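
To make the difference between these coverage criteria concrete, here is a small Python sketch; apply_discount is an invented example, not taken from the references.

```python
def apply_discount(price, is_member):
    # One decision point => two branches (condition true / condition false).
    if is_member:
        price = price - 10
    return price

# Statement coverage: a single test with is_member=True executes every
# statement, yet the False branch of the if is never exercised.
assert apply_discount(100, True) == 90

# Branch coverage additionally requires a test where the condition is False.
assert apply_discount(100, False) == 100

# Path coverage: with a single decision there are only two paths, so the two
# tests above already cover them; with k independent decisions there can be
# up to 2**k paths, which is why full path coverage rarely scales.
```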

Advantages of White Box Testing:

  • Forces test developer to reason carefully about implementation.
  • Reveals errors in "hidden" code.
  • Spots the Dead Code or other issues with respect to best programming practices.

Disadvantages of White Box Testing:

  • Expensive as one has to spend both time and money to perform white box testing.
  • There is always a possibility that a few lines of code are missed accidentally.
  • In-depth knowledge about the programming language is necessary to perform white box testing.

 

Reference

Laurie Williams. “White-Box Testing”. 2006.

https://www.tutorialspoint.com/software_testing_dictionary/white_box_testing.htm

Associated Course: KI142303B
 
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:54 PM

Often software engineering projects and products are not precise about the targets that should be achieved. Software requirements are stated, but the marginal value of adding a bit more functionality cannot be measured. The result can be late delivery or excessive cost. The “good enough” principle relates marginal value to marginal cost and provides guidance for determining the criteria by which a deliverable is “good enough” to be delivered.

These criteria depend on business objectives and on prioritization of different alternatives, such as ranking software requirements, measurable quality attributes, or relating schedule to product content and cost. The RACE principle (reduce accidents and control essence) is a popular rule towards good enough software. Accidents imply unnecessary overheads such as gold-plating and rework due to late defect removal or too many requirements changes. Essence is what customers pay for. Software engineering economics provides the mechanisms to define criteria that determine when a deliverable is “good enough” to be delivered. It also highlights that both words are relevant: “good” and “enough.” Insufficient quality or insufficient quantity is not good enough.

Agile methods are examples of “good enough” that try to optimize value by reducing the overhead of delayed rework and the gold plating that results from adding features that have low marginal value for the users (see Agile Methods in the Software Engineering Models and Methods KA and Software Life Cycle Models in the Software Engineering Process KA). In agile methods, detailed planning and lengthy development phases are replaced by incremental planning and frequent delivery of small increments of a deliverable product that is tested and evaluated by user representatives.

The five key process ideas (KPIs) of good enough software:

1. Utilitarian Strategy

The utilitarian strategy applies to problems, projects, and products. The term is one that I've coined out of necessity (or possibly ignorance, as I just haven't found a suitable alternative). It refers to the art of qualitatively analyzing and maximizing net positive consequences in an ambiguous situation. It encompasses ideas from systems thinking, risk management, economics, decision theory, game theory, control theory, and fuzzy logic.

2. Evolutionary strategy

An evolutionary strategy, applied either to problems, projects, or products, alternates observation with action to effect ongoing improvement. On the project level, this means ongoing process education, experimentation and adjustment, rather than clinging to a notion of the One Right Way to develop software.

On the problem level, it means keeping track of history, and learning about failure and success over time. Here are some of the elements of using the evolutionary approach:

  • Don't even try to plan everything up front.
  • Converge on good enough in successive, self-contained stages.
  • Integrate early and often.
  • Encourage disciplined evolution of feature set and schedule over the course of the project.
  • Salvage, reuse, or purchase components where feasible.
  • Record and review your experience.

3. Heroic Teams

For some reason, the most fundamental key to good enough development also seems to be the most controversial. There is a strong disdain, among many methodologists, for the very word "hero". I'm not sure why that is, since evidence supporting the role of heroes in computing is just a shade less compelling than evidence supporting the role of electricity. I think it's because there are several definitions of hero.

4. Dynamic Infrastructure

Dynamic infrastructure means that the company rapidly responds to the needs of the project. It backs up responsibility with authority and resources. Dynamic infrastructure provides life support for the other four key process ideas. Some of its elements are:

  • Upper management pays attention to projects.
  • Upper management pays attention to the market.
  • The organization identifies and resolves conflicts between projects.
  • In conflicts between projects and organizational bureaucracy, projects win.
  • Project experience is incorporated into the organizational memory.

5. Dynamic Processes

Three important attributes of dynamic processes are portability, scalability, and durability. Portability is how well the process lends itself to being carried into meetings, shared with others, and applied to new problems. Scalability is how readily the process may be expanded or contracted in scope. A highly scalable process is one that can be operated by one person, manually, or by a hundred people, with tool support, without dramatic redesign. Durability is how well the process tolerates neglect and misuse.

Associated Course: KI142303B
 
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:52 PM

SOFTWARE CONSTRUCTION

The term software construction refers to the detailed creation of working software through a combination of coding, verification, unit testing, integration testing, and debugging. Software construction knowledge is connected with other areas of knowledge:

  • Software Design
  • Software Testing
  • Software Engineering
  • Software Project
  • Software Quality

The software construction process involves significant software design and software testing activity. Software construction also uses the output of design and produces one of the inputs to testing, so it is intertwined with both design and testing activities. The boundaries between design, construction, and testing will vary depending on the software life cycle process used in the project.


Software Construction Fundamentals

1. Minimizing Complexity

A major constraining factor is the limited human capacity to deal with complexity. This gives rise to one of the strongest drivers in software construction: minimizing complexity. In software construction, complexity is reduced in order to support verification and testing, by creating code that is simple and easy to read.

2. Anticipating Change

Software cannot avoid change in its external environment, and changes in the external environment will affect the software in many ways.

3. Constructing for Verification

Constructing for verification means building software in such a way that faults (errors) can be found readily by the software engineers who write the code. Specific techniques that support constructing for verification include the following (a brief code sketch follows the list):

  • coding standards to support code reviews and unit testing,
  • organizing code to support automated testing,
  • avoiding complex language structures that are difficult for others to understand.
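
As an illustration of the last point, here is a hedged Python sketch (the grading rule and both functions are invented for this example): both versions compute the same result, but the second is much easier to verify by review and by automated tests.

```python
# Hard to verify: the whole rule is buried in one nested expression.
def grade_v1(s):
    return "A" if s >= 85 else ("B" if s >= 70 else ("C" if s >= 55 else "D"))

# Easier to verify: one decision per line, boundaries visible and testable.
GRADE_BOUNDARIES = [(85, "A"), (70, "B"), (55, "C")]

def grade_v2(score):
    for boundary, grade in GRADE_BOUNDARIES:
        if score >= boundary:
            return grade
    return "D"

# A table-driven check exercises every boundary in one loop.
for score, expected in [(85, "A"), (84, "B"), (70, "B"), (55, "C"), (54, "D")]:
    assert grade_v1(score) == grade_v2(score) == expected
```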

4. Standards in Construction

Standards that affect construction include:

  • programming languages,
  • communication methods,
  • platforms,
  • tools.
Associated Course: KI142303B
 
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:51 PM

In requirements engineering, requirements elicitation is the practice of collecting the requirements of a system from users, customers, and other stakeholders. The practice is also sometimes referred to as "requirement gathering". The term elicitation is used in books and research to highlight the fact that good requirements cannot simply be collected from the customer, as the name requirements gathering would suggest. Requirements elicitation is non-trivial because you can never be sure you get all requirements from the user and customer just by asking them what the system should do or not do (for safety and reliability). Requirements elicitation practices include interviews, questionnaires, user observation, workshops, brainstorming, use cases, role playing, and prototyping.

Before requirements can be analyzed, modeled, or specified, they must be gathered through an elicitation process. Requirements elicitation is a part of the requirements engineering process, usually followed by analysis and specification of the requirements. Commonly used elicitation processes include stakeholder meetings and interviews. For example, an important first meeting could be between software engineers and customers, where they discuss their perspectives on the requirements.

Prepare for Elicitation

  1. The first step in requirements elicitation is gleaning a comprehensive and accurate understanding of the project’s business need. During the elicitation process, an analyst’s strong understanding of the business need will help her guard against scope creep and gold plating, as well as select the proper stakeholders and elicitation techniques.
  2. An analyst’s next step in eliciting requirements is ensuring that an adequate amount and mix of stakeholders are secured for the project’s duration. As BABOK 2.0 (Business Analysis Body of Knowledge, the definitive guide to all things related to business analysis) notes, a good analyst must “actively engage stakeholders in defining requirements.” According to BABOK, a project’s stakeholders may include customers/end users, suppliers, the project manager, quality analysts, regulators, project sponsors, operational support, domain subject matter experts, and implementation subject matter experts. An analyst must recruit the participation of appropriate stakeholders based on the unique business needs of her project. After an analyst has identified and recruited her stakeholders, and chosen the method(s) by which she will elicit requirements (outlined below), it is advisable for her to schedule the time for conducting those methods with stakeholders as far in advance as possible to ensure adequate participation on their parts.

 

Elicitation Techniques

After securing the proper stakeholders, an analyst must determine the best techniques for eliciting requirements. Commonly used requirements elicitation methods (as identified by BABOK) include:

  • Brainstorming – The purpose of gathering your stakeholders for brainstorming is “to produce numerous new ideas, and to derive from them themes for further analysis [from BABOK].” An analyst should try to secure a representative from each participating stakeholder group in the brainstorming session. If an analyst serves as facilitator of a brainstorming session, she must ensure that while participants feel free to propose new ideas and solutions, they remain focused on the business need at hand and do not engage in scope creep, gold plating, or become distracted with other business issues. All ideas must be recorded so that they are not lost. According to BABOK, the brainstorming method is particularly useful if your project has no clear winning choice for a solution, or if existing proposed solutions are deemed inadequate. The brainstorming process and the resulting follow-up analysis will help ensure that the best possible solution is reached for any business objective.
  • Document analysis – Document analysis involves gathering and reviewing all existing documentation that is pertinent to your business objective or that may hold data related to a relevant solution. According to BABOK, such documentation may include, “business plans, market studies, contracts, requests for proposal, statements of work, memos, existing guidelines, procedures, training guides, competing product literature, published comparative product reviews, problem reports, customer suggestion logs, and existing system specifications, among others.” In other words, virtually anything that is written related to the project may be useful. This type of elicitation is especially useful when the goal is to update an existing system or when the understanding of an existing system will enhance a new system. However, document analysis alone is rarely enough to thoroughly extract all of the requirements for any given project.
  • Focus Group – Focus groups consist of a mix of pre-qualified stakeholders who gather to offer input on the business need at hand and its potential solutions. Focus groups are particularly helpful when key stakeholders are not particularly imaginative or forthcoming; a few more vocal stakeholders may help them think through and articulate solutions. Focus groups are also a good way for time-pressed analysts to get a lot of information at once. They may be conducted in person or virtually. (Key project sponsors or business owners should still be interviewed individually for thorough discovery.)
  • Interface Analysis – An interface analysis carefully analyzes and deconstructs the way that a user interacts with an application, or the way one application interacts with another. According to BABOK, a thorough interface analysis will describe the purpose of each interface involved and elicit high-level details about it, including outlining its content. This type of elicitation is essential for software solutions, which almost always involve applications interacting with one another and/or users interacting with applications. But, according to BABOK, interface analysis can also be useful for non-software solutions (such as defining deliverables by third parties).
  • Interviews – One-on-one interviews are among the most popular types of requirements elicitation, and for good reason: they give an analyst the opportunity to discuss in-depth a stakeholder’s thoughts and get his or her perspective on the business need and the feasibility of potential solutions. “Research has found that interviews . . . are the most effective way of eliciting requirements.” Whether an analyst chooses to have a structured (with predefined questions) or unstructured interview (with free-flowing, back-and-forth conversation), she must fully understand the business need in order to conduct a successful interview. It is a good practice for an analyst to share her interview notes with the interviewee afterward to ensure there were no misunderstandings and to jog the interviewee’s thoughts for any further insights.
  • Observation (job shadowing) – Observation is quite helpful when considering a project that will change or enhance current processes. According to BABOK, two basic types of observation are available to an analyst: (1) passive observation, where the analyst merely watches someone working but does not interrupt or engage the worker in any way, and (2) active observation, where an analyst asks questions throughout the process to be sure she understands and even attempts portions of the work. The nature of an analyst’s project will dictate the level of detail an observation should encompass. As with interviews, it is a good practice for an analyst to provide notes from her observations and/or a verbal description of her understanding of the work for the worker to review in order to be sure that there were no misunderstandings of the process.
  • Prototyping (storyboarding, navigation flow, paper prototyping, screen flows) – Prototyping is especially valuable for stakeholders such as business owners and end users who may not understand all of the technical aspects of requirements, but will better relate to a visual representation of the end product. To quote BABOK, “Stakeholders often find prototyping to be a concrete means of identifying, describing and validating their interface needs.” The prototyping process is normally iterative, improving as more input and evaluation are gleaned from stakeholders. Prototyping may be an interactive screen (normally consisting of hypertext only with no real data behind it), a mock-up (such as a PowerPoint), a navigation flow (such as a Visio diagram), or a storyboard. Simple, throwaway prototypes (such as pencil sketches) may be done in the initial stages of discovery, and more detailed, interactive prototypes may be done once business requirements have been identified. At the latter, more detailed prototype stage, prototype features must fulfill previously identified business needs as outlined in the requirements.
  • Requirements workshops – A requirements workshop involves gathering previously identified stakeholders in a structured setting for a defined amount of time in order to elicit, refine, and/or edit requirements. To be successful, requirements workshops must include a recorder (or scribe) to record participants’ input, and a facilitator to direct the discussion. Because participants may brainstorm together and listen to each other’s input, they can provide immediate feedback and refinements to identified business needs, which can ensure a fast, effective elicitation of requirements.
  • Survey/questionnaire – While they preclude the opportunity for in-person, ad hoc conversations, surveys are useful for quickly gathering data from a large group of participants. Because free online survey software is readily available, surveys are an inexpensive way to gather objective input from customers or potential end users. As with selecting stakeholders, a successful survey or questionnaire must have well-chosen participants. As one researcher notes, questionnaires “can be useful when the population is large enough, and the issues addressed are clear enough to all concerned.” Surveys can be structured to offer a series of finite choices for feedback, or they can offer open-ended input, depending on the needs of the project at hand. Open-ended surveys are useful for a broader discovery of business needs; however, the larger the number of participants in open-ended surveys, the more prohibitive they are to analyze. Survey wording must be unambiguous and precise. It is good practice for an analyst to politely request that survey participants respond by a reasonable deadline and that they keep any proprietary business information contained within the survey confidential.

 

Reference

https://en.wikipedia.org/wiki/Requirements_elicitation

http://www.modernanalyst.com/Resources/Articles/tabid/115/ID/1427/An-Overview-of-Requirements-Elicitation.aspx

Associated Course: KI142303B
 
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:50 PM

Multidimensional scaling (MDS) is a classical approach to the problem of finding underlying attributes or dimensions which influence how subjects evaluate a given set of objects or stimuli. MDS has become increasingly popular as a technique for both multivariate and exploratory data analysis. It is a set of data analysis methods which allow one to infer the dimensions of the perceptual space of subjects. The raw data entering into an MDS analysis are typically a measure of the global similarity or dissimilarity of the stimuli or objects under investigation. The primary outcome of an MDS analysis is a spatial configuration, in which the objects are represented as points.

The goal of an MDS analysis is to find a spatial configuration of objects when all that is known is some measure of their general (dis)similarity. The spatial configuration should provide some insight into how the subject(s) evaluate the stimuli in terms of a (small) number of potentially unknown dimensions. Once the proximities are derived, the data collection is concluded, and the MDS solution has to be determined using a computer program.

Many MDS programs make a distinction between classical and nonmetric MDS. Classical MDS assumes that the data in the proximity matrix display metric properties, like distances as measured from a map. Thus, the distances in a classical MDS space preserve the intervals and ratios between the proximities as well as possible. For a data matrix consisting of human dissimilarity ratings, such a metric assumption will often be too strong. Nonmetric MDS therefore only assumes that the order of the proximities is meaningful. The order of the distances in a nonmetric MDS configuration reflects the order of the proximities as well as possible, while interval and ratio information is of no relevance. In order to gain a better understanding of the MDS outcome, a brief introduction to the basic mechanisms of the two MDS procedures, classical and nonmetric MDS, might be helpful.

Classical MDS

The classical MDS algorithm rests on the fact that the coordinate matrix X can be derived by eigenvalue decomposition from the scalar product matrix B = XX'. The problem of constructing B from the proximity matrix P is solved by multiplying the squared proximities with the matrix J = I − (1/n)11'. This procedure is called double centering.

 

The following steps summarize the algorithm of classical MDS (a code sketch follows the list):

  1. Set up the matrix of squared proximities P(2) = [p_ij^2].
  2. Apply the double centering: B = −(1/2) J P(2) J, using the matrix J = I − (1/n)11', where n is the number of objects.
  3. Extract the m largest positive eigenvalues λ1 . . . λm of B and the corresponding m eigenvectors e1 . . . em.
  4. An m-dimensional spatial configuration of the n objects is derived from the coordinate matrix X = E_m Λ_m^(1/2), where E_m is the matrix of the m eigenvectors and Λ_m is the diagonal matrix of the m eigenvalues of B, respectively.
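
The four steps translate almost directly into code. Below is a minimal NumPy sketch; the function name classical_mds and the example points are invented for illustration.

```python
import numpy as np

def classical_mds(P, m=2):
    """Classical MDS following steps 1-4 above.

    P : (n, n) symmetric matrix of proximities (here: Euclidean distances).
    m : number of dimensions of the output configuration.
    """
    n = P.shape[0]
    P2 = P ** 2                              # step 1: squared proximities
    J = np.eye(n) - np.ones((n, n)) / n      # J = I - (1/n)11'
    B = -0.5 * J @ P2 @ J                    # step 2: double centering
    eigvals, eigvecs = np.linalg.eigh(B)     # step 3: eigendecomposition
    top = np.argsort(eigvals)[::-1][:m]      # indices of the m largest eigenvalues
    E_m = eigvecs[:, top]
    Lam_half = np.diag(np.sqrt(eigvals[top]))
    return E_m @ Lam_half                    # step 4: X = E_m Lambda_m^(1/2)

# Invented example: recover a 2-D configuration from pairwise distances.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, m=2)
# The configuration reproduces the distances up to rotation/reflection.
assert np.allclose(np.linalg.norm(X[:, None] - X[None, :], axis=-1), D)
```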

Nonmetric MDS

The core of a nonmetric MDS algorithm is a twofold optimization process. First, the optimal monotonic transformation of the proximities has to be found. Second, the points of a configuration have to be optimally arranged so that their distances match the scaled proximities as closely as possible. The basic steps in a nonmetric MDS algorithm are as follows (a usage example follows the list):

  1. Find a random configuration of points, e. g. by sampling from a normal distribution.
  2. Calculate the distances d between the points.
  3. Find the optimal monotonic transformation of the proximities, in order to obtain optimally scaled data f(p).
  4. Minimize the stress between the optimally scaled data and the distances by finding a new configuration of points.
  5. Compare the stress to some criterion. If the stress is small enough, exit the algorithm; otherwise return to step 2.
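
Rather than hand-coding this loop, one practical option is scikit-learn's MDS estimator with metric=False, which implements essentially these steps. The dissimilarity matrix below is made up for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity matrix (e.g., averaged human ratings);
# the values are invented for this example.
P = np.array([[0.0, 5.0, 9.0, 8.0],
              [5.0, 0.0, 7.0, 6.0],
              [9.0, 7.0, 0.0, 2.0],
              [8.0, 6.0, 2.0, 0.0]])

# metric=False selects nonmetric MDS: only the rank order of the proximities
# is used. Internally, the fit alternates a monotonic transformation of the
# proximities with stress-minimizing updates of the point configuration,
# mirroring steps 1-5 above.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
X = mds.fit_transform(P)
print("configuration:\n", X)
print("final stress:", mds.stress_)
```
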
Associated Course: KI142303B
 

Construction Models

Various models have been created to develop software, some of which emphasize construction more than others.

Construction Planning

The choice of construction method is a key aspect of the construction planning activity.

The construction approach affects the project's ability to reduce complexity, anticipate change, and construct for verification.

Construction planning also defines the order in which components are created and integrated, the software quality management process, the allocation of task assignments to specific software engineers, and other tasks, according to the chosen method.

Construction Measurement

Many construction activities and artifacts can be measured, including code developed, code modified, code reused, code destroyed, code complexity, code inspection statistics, fault-fix and fault-find rates, effort, and scheduling.

Measurement can be very useful for managing construction, ensuring quality during construction, and improving the construction process.

Practical Considerations

1. Construction Design

Some projects allocate more design activity to construction than others. Just as construction workers building a physical structure must adapt to the builder's plans by making small-scale modifications, software construction workers have to make modifications, on a smaller or larger scale, to flesh out details of the software design during construction.

2. Construction Languages

Linguistic notations are distinguished by the use of words or strings of text to express complex software constructions, and by the combination of such words or strings into patterns that have a sentence-like syntax. Visual notations rely much less on the text-oriented notations of both linguistic and formal construction; instead, they rely on direct visual interpretation and the placement of visual entities that represent the underlying software.

Visual construction tends to be somewhat limited, because only the movement of visual entities on a display can be used.

3. Coding

Several considerations apply to the construction activity of coding software (a brief illustrative sketch follows the list):

  • Techniques for creating understandable source code, including naming conventions and source-code layout.
  • Use of classes, enumerated types, variables, named constants, and other similar entities.
  • Use of control structures.
  • Handling of error conditions.
  • Prevention of code-level security breaches (for example, buffer overruns or exceeding array index bounds).
  • Use of resources via exclusion mechanisms and discipline in accessing serially reusable resources (including thread locks or database locks).
  • Source code organization (into statements, functions, classes, packages, or other structures).
  • Code documentation.
  • Code tuning.
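
The hypothetical Python sketch below touches several of these considerations at once: a named constant instead of a magic number, error-condition handling, and disciplined resource usage.

```python
MAX_LINE_LENGTH = 1024  # named constant instead of a magic number

def read_config_lines(path):
    """Return the non-empty lines of a configuration file."""
    lines = []
    # 'with' guarantees the file handle is released even if an error occurs,
    # addressing the resource-usage consideration above.
    with open(path, encoding="utf-8") as handle:
        for raw_line in handle:
            line = raw_line.strip()
            if len(line) > MAX_LINE_LENGTH:
                # Error-condition handling: fail loudly on malformed input
                # rather than silently truncating it.
                raise ValueError(f"line exceeds {MAX_LINE_LENGTH} characters")
            if line:
                lines.append(line)
    return lines
```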

4. Construction Testing

Construction involves two forms of testing, which are often performed by the software engineer who wrote the code:

  • Unit testing,
  • Integration testing.

The purpose of construction testing is to reduce the gap between the time a fault is introduced into the code and the time it is detected. In many cases, construction testing is performed after the code has been written; in other cases, test cases may be created before the code is written.

5. Reuse

The reuse-related tasks that arise during construction coding and testing are:

  • selection of the reusable units, databases, or test data,
  • evaluation of code or test reusability,
  • reporting of reuse information in new code, test procedures, or test data.

 

Associated Course: KI142303B
 
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:48 PM

Data-flow testing is a white box testing technique that can be used to detect improper use of data values due to coding errors. Errors are inadvertently introduced in a program by programmers. For instance, a software programmer might use a variable without defining it. Additionally, he/she may define a variable, but not initialize it and then use that variable in a predicate.

In data-flow testing, the first step is to model the program as a control flow graph, which helps to identify the control flow information in the program. In the second step, the associations between the definitions and uses of the variables that need to be covered under a given coverage criterion are established. In the third step, the test suite is created from a finite number of paths chosen in step 2.

Data-flow testing monitors the lifecycle of a piece of data and looks out for inappropriate usage of data during definition, use in predicates, computations, and termination (killing). It identifies potential bugs by examining the patterns in which that piece of data is used. For example, a pattern which indicates usage of data in a calculation after it has been killed is certainly a bug which needs to be addressed.
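
For instance, the small Python function below (an invented example) contains two such anomalies, annotated in the comments:

```python
def total_price(prices):
    total = 0             # d: 'total' is defined
    total = 0             # dd anomaly: redefined with no intervening use
    for price in prices:
        total += price    # u, d: used and redefined on each iteration
    result = total        # u: legitimate use
    del total             # k: 'total' is killed
    # return total        # ku anomaly: would raise UnboundLocalError here
    return result

assert total_price([1, 2, 3]) == 6
```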

To examine the patterns, we need to construct a control flow graph of the code. A control flow graph is a directed graph where the nodes represent the processing statements like definition, computation and predicates while the edges represent the flow of control between processing statements. Since data-flow testing closely examines the state of the data in the control flow graph, it results in a richer test suite than the one obtained from traditional control flow graph testing strategies like all branch coverage, all statement coverage, etc.

[Figure: data-flow anomalies]

 

Data flow anomalies are represented using two characters based on the sequence of actions. The actions are defined (d), killed (k), and used (u). There are nine possible two-character combinations of these actions: dd, dk, du, kd, kk, ku, ud, uk, and uu.

[Figure: the nine two-character data-flow anomaly combinations]

In addition to the above two-letter situations, there are six single-letter situations with a preceding or succeeding dash. A preceding dash with the letter d, k, or u indicates that nothing special occurs prior to the action along the entry-exit path considered. A succeeding dash with the letter d, k, or u indicates that nothing special occurs after the action along the entry-exit path considered. The meanings of these six single-letter situations are explained below.

[Figure: the six single-letter data-flow situations]

 

Reference

Janvi Badlaney, Rohit Ghatol, Romit Jadhwani. “An Introduction to Data-Flow Testing”. 2006. NCSU CSC TR-2006-22

http://khannur.com/stb6.5.htm

Associated Course: KI142303B
 
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:44 PM

Black Box Testing, also known as Behavioral Testing, is a software testing method in which the internal structure/ design/ implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.

This method is so named because, in the eyes of the tester, the software program is like a black box: one cannot see inside it. This method attempts to find errors in the following categories:

  • Incorrect or missing functions
  • Interface errors
  • Errors in data structures or external database access
  • Behavior or performance errors
  • Initialization and termination errors

[Figure: black-box testing]

 

BLACK BOX TESTING TECHNIQUES

Following are some techniques that can be used for designing black box tests (a short sketch illustrating the first two follows the list).

  • Equivalence partitioning: It is a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
  • Boundary Value Analysis: It is a software test design technique that involves determination of boundaries for input values and selecting values that are at the boundaries and just inside/ outside of the boundaries as test data.
  • Cause Effect Graphing: It is a software test design technique that involves identifying the cases (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly.
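
To make the first two techniques concrete, here is a hedged Python sketch; the eligibility rule and its boundaries are invented for this example.

```python
def is_eligible(age):
    """Hypothetical rule under test: age must be between 18 and 65 inclusive."""
    return 18 <= age <= 65

# Boundary value analysis: values at each boundary and just inside/outside it.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}
for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected, f"failed at age={age}"

# Equivalence partitioning would add one representative per partition,
# e.g. 10 (invalid low), 40 (valid), 90 (invalid high).
assert not is_eligible(10) and is_eligible(40) and not is_eligible(90)
```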

BLACK BOX TESTING ADVANTAGES

  • Tests are done from a user’s point of view and will help in exposing discrepancies in the specifications.
  • Tester need not know programming languages or how the software has been implemented.
  • Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer-bias.
  • Test cases can be designed as soon as the specifications are complete.

 

BLACK BOX TESTING DISADVANTAGES

  • Only a small number of possible inputs can be tested and many program paths will be left untested.
  • Without clear specifications, which is the situation in many projects, test cases will be difficult to design.
  • Tests can be redundant if the software designer/ developer has already run a test case.
  • Ever wondered why a soothsayer closes his eyes when foretelling events? Much the same is true in black-box testing.

 

Reference

Wikipedia

http://softwaretestingfundamentals.com/black-box-testing/

Associated Course: KI142303B
 
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 5:43 PM

Back-to-Back testing ensures that two different instances of an implementation exhibit the same behavior. A typical use case in a model-based development process is comparing the model (MIL, model-in-the-loop), which is considered to be an “executable specification”, against the production code (SIL, software-in-the-loop).

[Figure: back-to-back testing]

 

Scenario of Back-to-Back Testing

  1. The aim of the testing is defined, and test cases are designed.
  2. Testing is performed using those test cases: specialists run the applications or systems and record the results of their work.
  3. The obtained results are automatically compared.
  4. Testers produce a difference report that contains the results of the comparison (a minimal sketch of these steps follows this list).
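
Below is a minimal Python sketch of these four steps; both saturation functions are invented stand-ins for a model (“MIL”) and its production code (“SIL”).

```python
def saturate_model(x):
    """Reference implementation, standing in for the model (MIL)."""
    return max(-100, min(100, x))

def saturate_production(x):
    """Implementation under test, standing in for production code (SIL)."""
    if x > 100:
        return 100
    if x < -100:
        return -100
    return x

test_vector = [-150, -100, -1, 0, 1, 99, 100, 101, 150]    # step 1
results = [(x, saturate_model(x), saturate_production(x))  # step 2
           for x in test_vector]
deviations = [(x, a, b) for x, a, b in results if a != b]  # step 3
print("difference report:", deviations or "no deviations") # step 4
```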

 

There can be some misinterpretation of the notion of ‘the difference report’. Actually, it is very simple: the difference report contains the data necessary to demonstrate the problems that may occur between the various system versions.

There is also one more, electrical, meaning of back-to-back testing: this type of check is executed with two identical transformers, one of which remains open while the other one is loaded.

It is cheaper to perform back-to-back testing when the system or application has been modified: there is no need to execute usability testing or performance testing all over again, as testers may simply compare the behavior of the system versions.

This makes back-to-back testing especially effective during mobile application testing and website testing.

 

Reference

http://blog.qatestlab.com/2015/11/10/back-back-testing/

Associated Course: KI142303B
 
by NAHYA NUR 5116201035 - Thursday, 22 December 2016, 2:41 PM

A requirements traceability matrix (RTM) is a tool used to track the requirements of a software development effort through to the testing phase. The RTM is useful for verifying whether those requirements have been fulfilled. It takes the form of a list of requirements, which later makes testing easier to carry out. The matrix links the highest-level requirements, design specifications, testing requirements, and code, since it provides the links needed to determine the required information. An RTM is also a tool for assuring software quality, because it confirms that the requirements the customers want have been met. The requirements traceability matrix is used in quality assurance to make sure that the client's needs are fulfilled and that the software matches what was requested.

 

Project success will not be achieved without a project manager with good organizational skills. Good documentation of client requirements greatly helps project work. A good manager must be able to identify which requirements are succeeding and which are at risk of failing. The requirements traceability matrix is one of the tools useful for tracing requirements information.

 

The goals of the RTM are:

  1. To ensure that every existing test case corresponds to a requirement.
  2. To ensure that the agreed requirements are covered in every development phase, from requirements specification through software development and testing to the finished software.

 

 

What makes a good RTM?

  1. Create an RTM template that is easy for all project members to understand.
  2. Arrange the columns of the RTM to follow the order of the project phases. For example, testing is usually done at the end, so the test-case column is placed at the end of the table.
  3. Give every requirement an ID.
  4. For subsequent artifacts, keep the ID naming related; this will make tracing through the RTM easier. (A small illustrative example follows this list.)
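
Below is a minimal illustrative RTM; all IDs, module names, and statuses are invented. It shows the phase-ordered columns and related ID naming described above:

```
Req ID  | Design Spec | Code Module | Test Case | Status
--------|-------------|-------------|-----------|-------
REQ-001 | DS-001      | login       | TC-001    | Passed
REQ-002 | DS-002      | checkout    | TC-002    | Failed
REQ-003 | DS-003      | reporting   | TC-003    | Passed
```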

 

References

http://celz-adrian.blogspot.co.id/2012/06/requirements-traceability-matrix.html

http://riskhadwianggraeni.blogspot.co.id/2012/05/mengenal-requirement-traceability.html

Associated Course: KI142303B
 