Site blog


The notions of separation and independence are fundamental to architectural design because they allow changes to be localized. The MVC pattern, shown in Picture 1, separates elements of a system, allowing them to change independently. For example, adding a new view or changing an existing view can be done without any changes to the underlying data in the model. The layered architecture pattern is another way of achieving separation and independence. This pattern is shown in Picture 4. Here, the system functionality is organized into separate layers, and each layer only relies on the facilities and services offered by the layer immediately beneath it.
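To make this separation concrete, here is a minimal, hypothetical Python sketch (the class and method names are illustrative, not from Sommerville's text) in which a new view can be attached without any change to the model:

```python
# Minimal sketch of the MVC separation: the model knows nothing about its views,
# so new views can be added without changing the model.

class StockModel:
    """Holds the data; notifies registered views when it changes."""
    def __init__(self):
        self._prices = {}
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_price(self, symbol, price):
        self._prices[symbol] = price
        for view in self._views:
            view.update(self._prices)


class TableView:
    def update(self, prices):
        for symbol, price in prices.items():
            print(f"{symbol:6} {price:8.2f}")


class SummaryView:
    def update(self, prices):
        print(f"{len(prices)} symbols, max price {max(prices.values()):.2f}")


class Controller:
    """Translates user input into model updates."""
    def __init__(self, model):
        self._model = model

    def handle_input(self, symbol, price):
        self._model.set_price(symbol, price)


if __name__ == "__main__":
    model = StockModel()
    model.attach(TableView())
    model.attach(SummaryView())   # adding this view required no model changes
    Controller(model).handle_input("ACME", 101.5)
```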


Picture 1. The Model-View-Controller (MVC) Pattern


Picture 2. The Organization of The MVC


Picture 3. Web Application Architecture using the MVC Pattern

This layered approach supports the incremental development of systems. As a layer is developed, some of the services provided by that layer may be made available to users. The architecture is also changeable and portable. So long as its interface is unchanged, a layer can be replaced by another, equivalent layer. Furthermore, when layer interfaces change or new facilities are added to a layer, only the adjacent layer is affected. As layered systems localize machine dependencies in inner layers, this makes it easier to provide multi-platform implementations of an application system. Only the inner, machine-dependent layers need be re-implemented to take account of the facilities of a different operating system or database.
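As a small illustration of replacing a layer behind a stable interface (a hypothetical sketch, not an example from the book), the application layer below depends only on the interface of the layer beneath it, so an equivalent layer can be swapped in without change:

```python
# Illustrative sketch of layer replacement: the application layer depends only on
# the interface of the layer beneath it, so an equivalent layer can be swapped in.

from abc import ABC, abstractmethod


class StorageLayer(ABC):
    """Interface offered by the lowest (machine-dependent) layer."""
    @abstractmethod
    def read(self, key: str) -> str: ...
    @abstractmethod
    def write(self, key: str, value: str) -> None: ...


class FileStorage(StorageLayer):
    def __init__(self):
        self._data = {}          # stand-in for real file/OS calls
    def read(self, key):
        return self._data[key]
    def write(self, key, value):
        self._data[key] = value


class CloudStorage(StorageLayer):
    def __init__(self):
        self._remote = {}        # stand-in for calls to a cloud service
    def read(self, key):
        return self._remote[key]
    def write(self, key, value):
        self._remote[key] = value


class ApplicationLayer:
    """Only talks to the StorageLayer interface, never to a concrete class."""
    def __init__(self, storage: StorageLayer):
        self._storage = storage
    def save_record(self, record_id: str, payload: str):
        self._storage.write(record_id, payload)


# Swapping FileStorage for CloudStorage requires no change to ApplicationLayer.
app = ApplicationLayer(CloudStorage())
app.save_record("r1", "hello")
```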

Picture 4. The Layered Architecture Pattern

Picture 5 is an example of a layered architecture with four layers. The lowest layer includes system support software, typically database and operating system support. The next layer is the application layer, which includes the components concerned with the application functionality and utility components used by other application components. The third layer is concerned with user interface management and with providing user authentication and authorization, while the top layer provides user interface facilities. Of course, the number of layers is arbitrary; any of the layers in Picture 5 could be split into two or more layers. Picture 6 is an example of how this layered architecture pattern can be applied to a library system called LIBSYS, which allows controlled electronic access to copyright material from a group of university libraries. This has a five-layer architecture, with the bottom layer being the individual databases in each library.



Picture 5. A Generic Layered Architecture


Picture 6. The Architecture of The LIBSYS System


Source: Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.


A Fresh Look at User Acceptance Testing

User Acceptance Testing (UAT) is universally recognised as an important element in the delivery of stable software that meets the business requirements. However, there are significant challenges in executing the UAT phase in accordance with industry best practices. Some of these challenges are based on a fundamental misunderstanding of the importance of UAT.

Perhaps it is because User Acceptance Testing is the last significant activity in the project. Perhaps it is because the subject matter experts (SMEs) necessary to its success are rarely dedicated to the activity. Or perhaps there is too much reliance on the quality of testing that has occurred prior to the user acceptance testing phase.

It is surely time to reconsider its status as the ugly duckling of software quality and give it the prominence it deserves. Let me explain why UAT is so fundamental to a successful project.


The UAT Quality Gate

It doesn’t matter whether you have adopted a version of an agile testing methodology where UAT is seen as part of a hardening sprint or as the final activity in a waterfall project. What is common to all methodologies is that you need to ensure that the delivered software meets the business objectives. It is the vital Go/No Go quality gate.

Search the internet and you will find a vast number of articles about how UAT should be tackled, typically linked to the development methodology employed. But this totally ignores the majority of applications, which are not developed in house. Instead, they are purchased or consumed in the cloud and often support key business processes. For these applications, user acceptance testing is the only activity that ensures the business needs will be met.


The UAT Challenges

Once the importance of UAT is understood, you will hopefully agree that its inherent challenges must be solved rather than accepted as part of an imperfect world. So let’s look at some of those challenges, starting with the much-touted prerequisite that must be met before UAT can commence.

In the world which many internet authors appear to inhabit, all the mechanics of an application have been tested and operate perfectly before UAT starts. The UAT best practice is then simply to ensure that the delivered capabilities match the business requirements. In the messy real world which most of us inhabit, this is absolute rubbish.


UAT for ERP applications

Take ERP applications, for example, a market dominated by SAP and Oracle. One of our customers implemented a new version from one of these major software companies and found over 400 functional errors. Admittedly they were one of the early adopters of the release, but if major software companies struggle to deliver code to UAT that is functionally perfect, it is wise to assume the same for in-house developments and plan accordingly.
That means there will be multiple iterations of the UAT cycle (or conference room pilot, in the vernacular of some vendors). So we must manage user fatigue as well as the general availability of SMEs – business users will perhaps tolerate two UAT cycles, but beyond that it is not only their availability but their effectiveness that must be considered as repetition jaundice sets in. Some companies have the UAT test plan initially executed by business-aware members of the QA test team and thus protect the scarce availability of business users.
In other companies we find dedicated UAT teams, but that may be deemed a luxury by others. Even a dedicated team is not perfect: if UAT is their full-time job, how do you ensure that their knowledge of an evolving business remains current?


The importance of UAT

So we have established that UAT is a vital activity, but that the delivered software will not function perfectly and that there will be challenges getting sufficient qualified resources to undertake the work. Balancing these factors requires two things. You need buy-in from senior management, and you need a user acceptance test plan that is realistic in its time-scales and has sufficient contingency to deal with functional defects and the response speed of internal or external development teams. That may mean that the planned implementation date needs to move, but it is far better to get that agreed in advance – no one will lose their job if the application is delivered early!


UAT testing tools

Lastly, let’s consider whether there are tools that can help the user acceptance testing process. The challenges are planning, activity tracking, status and communication. You can try to use scraps of paper, Microsoft Excel/Word and email, but in reality, for anything beyond a very small project, these are not fit for purpose. There are many products available for the planning and management of the user acceptance test process, and in Qualify we offer one that can be configured exactly to meet your needs. While valuable, these products don’t tackle the fundamental problem: how do you know what testing was actually performed, and how do you create the reproduction steps so that defects can be rapidly resolved? Our TestDrive-UAT represents a new class of products specifically designed to tackle this challenge.



Software maintenance is a major issue for most CIOs because over 50% of IT time is spent on it, a daunting figure that hasn't changed much through the years. It's also one of the least favorite topics that CEOs and other C-level executives want to hear about.

It's easy to avoid the topic of software maintenance, given the focus on today's problems and the pressures brought on by a constantly changing business environment. Under that relentless pressure, the idea of going back to "fix things", or even to see why they haven't been implemented, seems pointless.

Nevertheless, CIOs have to care about this, especially when IT budgets continue to remain flat and when half of IT staff is deployed on maintenance every day. These CIOs see their project loads burgeoning, knowing that only half of their staff is free to work on them.

The vicious circle of continuous software maintenance is fueled by various factors, which include the following:

  • In many cases, age-old legacy systems characterized by difficult-to-maintain (and difficult-to-diagnose) "spaghetti code", written in the days when code was free-flowing and unstructured, continue to run mission-critical systems. It takes time to untangle this code and to fix or embellish it. The task is rendered more difficult because the code is usually undocumented and the original writers have long since retired.
  • New code is not as technically solid as it should be; the reason is enterprise pressure to deploy the code even if it is imperfect. Consequently, the organization lives with the imperfections until they become so overwhelming that the software maintenance team has to step in and fix the code so it can get back into production.

CIOs cannot buck these circumstances, but some are beginning to take steps to reduce the amount of IT time spent on maintaining imperfect and broken systems. Here are five best practices to consider.


1: Use the cloud version of software to sidestep a legacy system

Some enterprises have actively deployed cloud versions of internal systems (like their enterprise resource planning systems) when they bring on new companies through acquisition. The reason is simple: by moving a new organization to the cloud, personnel in that business at least get used to using the same software that the acquiring enterprise uses. Over time, a decision can be made to transfer the acquired organization into the in-house enterprise system.

However, as more enterprises use this strategy, more are rethinking their approach. The result has been a change in thinking to where the ultimate goal becomes moving everyone (including the enterprise) to the cloud-based system. The idea is to push software maintenance to the cloud provider, thereby eliminating most of the time that internal IT has to spend on it.



2: Replace a custom system with a generic package


It sometimes makes sense to replace a custom system in favor of a third-party generic package that has more contemporary capabilities. In a situation like this, IT can also eliminate most of the software maintenance it incurs with the old software. The key is getting users — and the business — on board. Many times the customization that has been built into a system over decades can't be replaced with a more generic solution because of the competitive advantage the custom solution provides.



3: Invest in more quality assurance (QA) test bench automation

QA is one of the functions that many organizations shortcut in the interest of getting software into production quickly. This isn't likely to change, but new automated testing tools that run scripted checks for software deficiencies can change how well software runs and reduce the time spent fixing it.
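As a minimal illustration of the kind of scripted check such tooling runs on every build, here is a hypothetical pytest-style sketch (the invoice function and its tests are made up, not from the article):

```python
# Hypothetical automated regression test (pytest style): once scripted, checks like
# this run on every build, catching defects before they reach production and
# reducing the time spent on maintenance fixes later.

def calculate_invoice_total(items, tax_rate):
    """Example function under test (stand-in for real application code)."""
    subtotal = sum(quantity * price for quantity, price in items)
    return round(subtotal * (1 + tax_rate), 2)


def test_invoice_total_applies_tax():
    assert calculate_invoice_total([(2, 10.0), (1, 5.0)], 0.10) == 27.50


def test_invoice_total_empty_order():
    assert calculate_invoice_total([], 0.10) == 0.0
```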



4: Retrain and redeploy software maintenance personnel

As much as CIOs don't like to admit it, there is a pecking order in IT. The employees who often get placed on software maintenance teams are older IT programmers, new employees, or programmers who do not demonstrate proficiency in new app development. If software maintenance is to be reduced, these workers will need to be retrained and redeployed. Despite budget limitations and work commitments, CIOs must demonstrate a commitment to adopt these measures.



5: Set a metric for percent of personnel engaged in new projects

CEOs and other C-level executives might not want to hear about software maintenance, but if the CIO presents (and starts measuring against) a metric that shows the percent of IT staff dedicated to new projects and explains how software maintenance can negatively impact this, others are bound to take notice and view the effort more strategically.






Architectural design is a creative process where you design a system organization that will satisfy the functional and non-functional requirements of a system. Because it is a creative process, the activities within the process depend on the type of system being developed, the background and experience of the system architect, and the specific requirements for the system. It is therefore useful to think of architectural design as a series of decisions to be made rather than a sequence of activities. During the architectural design process, system architects have to make a number of structural decisions that profoundly affect the system and its development process. Based on their knowledge and experience, they have to consider the following fundamental questions about the system:

  1. Is there a generic application architecture that can act as a template for the system that is being designed?
  2. How will the system be distributed across a number of cores or processors?
  3. What architectural patterns or styles might be used?
  4. What will be the fundamental approach used to structure the system?
  5. How will the structural components in the system be decomposed into sub-components?
  6. What strategy will be used to control the operation of the components in the system?
  7. What architectural organization is best for delivering the non-functional requirements of the system?
  8. How will the architectural design be evaluated?
  9. How should the architecture of the system be documented?


Although each software system is unique, systems in the same application domain often have similar architectures that reflect the fundamental concepts of the domain. For example, application product lines are applications that are built around a core architecture with variants that satisfy specific customer requirements. When designing a system architecture, you have to decide what your system and broader application classes have in common, and decide how much knowledge from these application architectures you can reuse.

For embedded systems and systems designed for personal computers, there is usually only a single processor and you will not have to design a distributed architecture for the system. However, most large systems are now distributed systems in which the system software is distributed across many different computers. The choice of distribution architecture is a key decision that affects the performance and reliability of the system.

The architecture of a software system may be based on a particular architectural pattern or style. An architectural pattern is a description of a system organization (Garlan and Shaw, 1993), such as a client–server organization or a layered architecture. Architectural patterns capture the essence of an architecture that has been used in different software systems. You should be aware of common patterns, where they can be used, and their strengths and weaknesses when making decisions about the architecture of a system.

Garlan and Shaw’s notion of an architectural style (style and pattern have come to mean the same thing) covers questions 4 to 6 in the previous list. You have to choose the most appropriate structure, such as client–server or layered structuring, that will enable you to meet the system requirements. To decompose structural system units, you decide on the strategy for decomposing components into sub-components. The approaches that you can use allow different types of architecture to be implemented. Finally, in the control modeling process, you make decisions about how the execution of components is controlled. You develop a general model of the control relationships between the various parts of the system.

Because of the close relationship between non-functional requirements and software architecture, the particular architectural style and structure that you choose for a system should depend on the non-functional system requirements:

  1. Performance. If performance is a critical requirement, the architecture should be designed to localize critical operations within a small number of components, with these components all deployed on the same computer rather than distributed across the network. This may mean using a few relatively large components rather than small, fine-grain components, which reduces the number of component communications. You may also consider run-time system organizations that allow the system to be replicated and executed on different processors.
  2. Security. If security is a critical requirement, a layered structure for the architecture should be used, with the most critical assets protected in the innermost layers, with a high level of security validation applied to these layers. 
  3. Safety. If safety is a critical requirement, the architecture should be designed so that safety-related operations are all located in either a single component or in a small number of components. This reduces the costs and problems of safety validation and makes it possible to provide related protection systems that can safely shut down the system in the event of failure.
  4. Availability. If availability is a critical requirement, the architecture should be designed to include redundant components so that it is possible to replace and update components without stopping the system.
  5. Maintainability. If maintainability is a critical requirement, the system architecture should be designed using fine-grain, self-contained components that may readily be changed. Producers of data should be separated from consumers and shared data structures should be avoided.


Obviously there is potential conflict between some of these architectures. For example, using large components improves performance and using small, fine-grain components improves maintainability. If both performance and maintainability are important system requirements, then some compromise must be found. This can sometimes be achieved by using different architectural patterns or styles for different parts of the system.

Evaluating an architectural design is difficult because the true test of an architecture is how well the system meets its functional and non-functional requirements when it is in use. However, you can do some evaluation by comparing your design against reference architectures or generic architectural patterns. Bosch’s (2000) description of the non-functional characteristics of architectural patterns can also be used to help with architectural evaluation.


Reference: Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.


You can improve the quality of your software architecture design by using the following 10 tips. Describing your software architecture design is useful for any type of project; it shares the design of the system among your stakeholders.


  1. Based on non-functional requirements
  2. Rationale, rationale, rationale
  3. Don’t Repeat Yourself
  4. Slice the cake
  5. Prototype
  6. Quantify
  7. Get it working, Get it right, Get it optimized
  8. Focus on the boundaries and interfaces
  9. The Perfect is the enemy of the Good
  10. Align with your stakeholders


1. Based on requirements

You should base your software architecture design on the requirements of your stakeholders. An architecture focuses on the non-functional requirements. I see many software architecture designs based on purely technical motives. Each part of your design should be based on business requirements. You as an architect should translate these requirements into the right architectural design decisions. If the stakeholders value maintainability, you could use the layer pattern to separate several parts of the application. If performance is important, maybe layering is not a good solution. An exhaustive list of non-functional requirements can be found in ISO 9126 and in QUINT. If you do not use non-functional requirements in your organization but want to introduce them, take a look at this post.


From the non-functional requirements or quality attributes you have to create the right design. While you could create this from scratch, there are many examples in the form of design patterns or architectural patterns. A design or architectural pattern expresses a relation between a problem and a solution. Although we often think that our problem is unique, this is often not the case. If you take a step back you will see that many of our problems have already been solved using existing patterns. Two books that I can recommend are “Pattern-Oriented Software Architecture” and “Design Patterns”. Both books contain a catalog of patterns. Each pattern describes the problem it solves and the context in which it can be used. There are also many online pattern sources on the web, such as this one on Wikipedia and this one from The OpenGROUP.


2. Rationale, rationale, rationale

The most important aspect of your architecture description is the recording of the rationale behind your design decisions. It is important for a reader of the architecture description to understand why you made a specific decision. Make your assumptions explicit and add them to the description. Assumptions may become invalid now or later, but at least it will be clear how you came to that decision. Sharing your rationale also makes communicating with your team (you do communicate, don't you?) that much easier.


Note that recording your rationale becomes much easier if your non-functional requirements are explicit. It will be much clearer if you describe that you created several components to increase testability because testability is the most important requirement. Do describe the why and how in your software architecture design!


3. Don’t Repeat Yourself (DRY)

Don’t Repeat Yourself (DRY), or Duplication Is Evil (DIE), comes from software engineering in general. The DRY principle is stated as “Every piece of knowledge must have a single, unambiguous, authoritative representation within a system”. You can apply this principle on many levels: architecture, design, testing, source code and data. For me this is one of the most difficult things to uphold. You have to fight the repetition because it will slow you and your project down. The difficult part of this repetition creep, as I call it, is that it is introduced very slowly. The first repetition won’t hurt you directly; it will even gain you some time. You are able to release the first version of the application somewhat quicker, but as I have found, it always shows up later and makes something else more difficult. At that moment you regret the decision to introduce repetition.


If you absolutely must add another copy of information, make sure that you automatically generate that copy. It will make your life so much easier in the future. One thing that helps to fight repetition is to store the data where it belongs. This seems logical and is the basis of object-oriented design, but I often see it violated with regard to system architectures. Take, for example, packaging an application for deployment: the process in which you filter the build of your software to include the components that are necessary in a package. Where would you store the information about which components should be included in the package? You could create a list that includes the names of the components that should be packaged. That means you have introduced your first repetition! You now have two places where component names are mentioned. A better solution would be to add that information to the component itself, as sketched below.
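Here is a small, hypothetical Python sketch of that idea (the component names and the packaging flag are illustrative): the packaging knowledge lives on the component itself, and the package contents are derived rather than maintained as a second list.

```python
# Sketch of keeping packaging knowledge with the component itself, instead of in a
# separate (duplicated) list of component names.

class Component:
    def __init__(self, name: str, include_in_package: bool):
        self.name = name
        self.include_in_package = include_in_package   # single authoritative place


components = [
    Component("core", include_in_package=True),
    Component("reporting", include_in_package=True),
    Component("dev-tools", include_in_package=False),
]

# The package contents are derived, not maintained as a second list.
package = [c.name for c in components if c.include_in_package]
print(package)   # ['core', 'reporting']
```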

When the first list in any format shows up in or around an application, alarm bells should sound and you should be on the lookout for repetition!


4. Slice the cake

I struggled with naming this tip, but found “slicing the cake”, as it is called in agile development, the best description. By slicing the cake I mean that you design your architecture iteratively in vertical slices. An architect implements or prototypes each vertical slice to confirm that it actually works. You should do this because architectures cannot be created on paper. It does not mean that you cannot use horizontal layering or any other pattern in your architecture; in the case of layering, the horizontal layers are simply smaller. The picture below shows the principle.


Say you use layering in your architecture design because your stakeholders expect that the components that you develop for this system will be used in other systems as well. During the first iteration you design a small part of the User Interface (UI), a small part of the Business Layer (BL) and a small part of the Data Layer (DL). You make sure that this works as expected by proving it with a prototype or by actually implementing it. In the second iteration you add new functionality and expand each layer horizontally with the needed functionality. The existing UI, BL and DL are combined with the new UI, BL and DL to form the new layers.

The difficulty with slicing is how to slice the cake so that the next slice will properly align with the previous one.


5. Prototype

When creating a software architecture design, make sure that you prototype your design. Validate your assumptions, do that performance test, and make sure that the security architecture is valid. A prototype will give you the opportunity to fail fast, which is a good thing.


6. Quantify

This principle extends the first principle, “Based on Requirements”. To be able to create a proper software architecture design you need to quantify your non-functional requirements. “It should be fast” cannot be a requirement; neither can “maintainable” or “testable”. How will you know whether you have met these requirements? You won’t.


ISO 9126 and QUINT both describe ways to quantify non-functional requirements. For example, for testability they specify an indicator such as “number of test cases per unit volume”. QUINT also specifies how you can actually measure an indicator, for example the indicator “Ratio Reused Parts” from the quality attribute Reusability, which you can measure using the following protocol (a small sketch of the calculation follows the list):

  1. Measure the size of each reused part;
  2. Measure the size of the entire software product;
  3. Calculate the ratio of reused parts: the sum of the reused part sizes divided by the size of the entire product (step 2).
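A minimal sketch of that protocol in Python, assuming lines of code as the size measure (the numbers are made up for illustration):

```python
# Sketch of the "Ratio Reused Parts" protocol described above, using lines of code
# as the (assumed) size measure.

reused_part_sizes = [1200, 800, 450]     # step 1: size of each reused part
total_product_size = 10000               # step 2: size of the entire product

# step 3: ratio = sum of reused part sizes / total product size
ratio_reused_parts = sum(reused_part_sizes) / total_product_size
print(f"Ratio Reused Parts: {ratio_reused_parts:.2%}")   # 24.50%
```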


7. Get it working, Get it right, Get it optimized

In many projects I have seen architects and developers design software architectures that focus on creating general-purpose libraries, services or infrastructure. These are created without a direct reference to a concrete application; instead, they are designing for tomorrow. To me this is like walking backwards: generality cannot be designed up front. If you think it can, well… stop! You actually can’t. Today’s businesses change far too fast to design for generality up front.

You should always start with a concrete implementation for a specific problem. When you start working on the next application and find similarities, that is the time to think about generalizing. This makes the first solution simpler, which should be your design goal.


8. Focus on the Boundaries and Interfaces

When creating your software architecture design you should focus on the boundaries of your system and components. When starting from a blank slate, you should think about separation of concerns: which component or system has which responsibility? Design explicit interfaces between the components or systems. Don’t separate a system or component when a lot of communication is necessary between the resulting components or systems.


9. The Perfect is the enemy of the Good

The phrase “The perfect is the enemy of the good”, from Voltaire, is also valid for software architecture design. How many times have you started a new project and thought, “I want this project to be perfect”? And how many times have you then found out that the project wasn’t perfect? Well, guess what – a project will never be perfect. There will always be problems or forgotten requirements.


Perfection is never possible. However, you are able to create a good software architecture design. Do not try to analyze everything during the start of the project; it will slow you down. Watch out for analysis paralysis.


10. Align with your stakeholders

Before you can create any type of system you need to identify your stakeholders. Each stakeholder has different needs of your software architecture and may require a different view. Software developers may need descriptions using the Unified Modeling Language (UML), while business sponsors need a description in natural language. Operations and support staff, for example, may need other views such as context diagrams.


There is a tension between creating all these views for stakeholders and principle 3, Don’t Repeat Yourself. Each view essentially describes the same system and adds repetition. Therefore you should only add those descriptions that add value for a specific stakeholder.




The software industry has made significant progress in recent years. The life of software includes two phases: production and maintenance. Software maintenance cost is growing steadily, and estimates show that about 90% of software life-cycle cost is related to the maintenance phase. Extracting and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling those factors.


In this study, the factors affecting software maintenance cost were determined and ranked based on their priority, and effective ways to reduce maintenance costs were then presented. This paper is a research study. Fifteen software systems related to the healthcare-center information systems of Isfahan University of Medical Sciences and its hospitals were studied over the years 2010 to 2011.

Results and discussion

Among the medical software maintenance team members, 40 were selected as the sample. After interviews with experts in this field, the factors affecting maintenance cost were determined. In order to prioritize the factors derived by AHP, the measurement criteria (the factors found) were first appointed by members of the maintenance team and eventually prioritized with the help of the EC (Expert Choice) software. Based on the results of this study, 32 factors were obtained, classified into six groups. “Project” was ranked as the most influential characteristic on maintenance cost, with the highest priority. By taking into account some major elements, such as careful feasibility analysis of IT projects, full documentation, and involving the designers in the maintenance phase, good results can be achieved in reducing maintenance costs and increasing the longevity of the software.

Key words: Health information systems, Cost, Effective factors, Software maintenance, AHP model


Software production and maintenance issues, cost estimation, project scheduling and knowledge of the process have always been complicated matters in software engineering (1-8). Cost depends on both the creation and the maintenance of the software, so continuous monitoring and control of maintenance costs, and software optimization, are really important. Taking this statistic into account also encourages careful software maintenance to reduce costs. Software maintenance costs are rising, and based on the estimations, about 90% of the cost related to the software life is in the maintenance phase. The estimations show a 50 percent increase over the past two decades (5, 6). This increase is shown in Figure 1.

Figure 1.
Development of Software maintenance costs as percentage of total cost [Floris and Harald, 2010]

In another study, the relative costs of maintenance and software development management were estimated at more than 90% of the total cost of the software life (9, 10).

Floris and Harald, in their study, introduced incomplete documentation and poor maintenance as factors that increase the cost. This deficiency makes it difficult for the maintenance team to expand or rebuild the product, because the production team members may have left the company, retired, or been replaced by someone unaware of the production process (2).

Since quality improvement and reduced software life-cycle time are among the goals of rapid application development techniques, the use of a common-sense approach in production shows that using individual techniques is not a threat to high availability, acceptable performance or the quality of projects (4).

In one study, researchers introduced support and maintenance software to estimate the maintenance effort. From these researchers’ point of view, support and maintenance software comprise a set of activities to support IT. Magne Jorgensen came to the conclusion that 43 to 44% of estimations are done mentally by experts, and that using such models adds complexity to the estimations (10).

Therefore, in this research a software tool is introduced that, due to its simplicity and ease of use, is a replacement for the estimation models and the experts’ mental estimations.

Because the design and implementation of medical software is growing in Iran, and today most medical and health centers like to set up such systems, this seems to be a growing and effective trend in the automation of hospitals and medical and healthcare centers.

Mr. Boehm studied the various cost factors in simple and complex public systems (1). The results of his research are published in detail in the book Software Architectures: Critical Success Factors and Cost Drivers (14). Many researchers have focused on models and different methods of cost estimation, but what is important is to update and review the factors of each model. These models include analog models such as the Delphi method or estimations based on professional experience, models such as analysis of performance indicators, and machine-learning models including neural networks, genetic programming, fuzzy logic, and many others (11, 12, 13).

Henry Raymond (2007), in a study, used estimation techniques along with the knowledge of the project team, the project manager and the president to design a predictive model for estimating software. This model suggests that maintenance plays an important role in the success of IT projects. Though the effective use of technology for estimating time and cost is necessary, it is not sufficient. To predict the exact time and cost, management needs knowledge, knowledge integration and knowledge sharing.



Other related work includes:

  • A thesis from the University of California, aimed at improving volume and effort estimation models for software maintenance (12).
  • A study by Magne Jørgensen, drawing on results from the Simula Research Laboratory, with an overview of studies on estimating software development effort (3).
  • An evaluation of studies by Jussi Koskinen et al. at the Jyvaskyla Research Institute and University to estimate the costs of software support, modernization, repair and maintenance (5).
  • Václav Macinka’s thesis from the University of Brno, aimed at providing methods for determining the cost of software projects (8).

Taking into account the importance of software maintenance costs, Isfahan University of Medical Sciences is pursuing the following objectives in this paper:

  • Identify the factors affecting the cost of software maintenance.
  • Prioritize each of the factors affecting the cost of software maintenance.
  • Provide solutions to reduce the maintenance costs of medical software.


The scope of this study is all the software produced in the years 2010 to 2011. The maintenance teams of these 15 software systems formed the study population; after sampling, 40 members were selected randomly. In this study, a checklist (designed based on software engineering standards, the researchers’ experience and experts’ confirmation) was used for data collection. SPSS and Expert Choice software were used for data analysis.



Fourteen of the systems ran on Windows and one on DOS. The systems' operational start dates ranged from 2000 to 2011, and their operational periods varied from 20 to 102 months. All of the systems were active except one, which had been disabled.

The results of the first research objectives are as follow:

* Based on studies of reputable books and literature in the field of software engineering, well-known sites, and interviews with informatics experts, 32 effective factors were obtained and examined for the software maintenance cost estimation.

Cost factors were classified into 6 groups, as follows:

In line with the second goal (to prioritize each of the factors affecting the cost of medical software maintenance), the following results were obtained:

* For prioritization of the factors, before modeling, the measurement criteria first need to be identified. Then the six identified characteristics and their measurement criteria are estimated and finally entered into the EC application.

To achieve this goal, the measurement criteria (the 32 factors) were first determined based on their importance in software maintenance.

The questionnaire was prepared using a five-point Likert scale and distributed among specialists in this field. After naming and ordering the information, it was entered into the software. The list of measurement criteria and the results after the interviews are presented, and the ranking of the influencing factors, produced with the help of the EC software, is displayed in Table 4.
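For readers unfamiliar with how AHP turns pairwise judgments into a priority ranking, here is a simplified, hypothetical Python sketch. It uses the common column-normalization approximation; it is not the Expert Choice computation, and the three criteria and judgment values are made up for illustration, not the study's real data.

```python
# Simplified sketch of AHP prioritization (column-normalization approximation),
# not the actual Expert Choice computation or the study's real judgments.

criteria = ["project", "computer network", "personnel"]

# pairwise[i][j] = how much more important criteria[i] is than criteria[j]
pairwise = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 3.0],
    [0.25, 1 / 3, 1.0],
]

n = len(criteria)
col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]

# Normalize each column, then average across each row to get priority weights.
weights = [
    sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
    for i in range(n)
]

for name, weight in sorted(zip(criteria, weights), key=lambda x: -x[1]):
    print(f"{name:18} {weight:.3f}")
```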

The following results were obtained regarding the third objective of the study:

* Based on the results of the current study and the deficits observed in the production and maintenance process, it seems that by following the guidelines mentioned below, one can reduce the cost of software maintenance and achieve the desired results: increased productivity and better use of the limited financial resources and manpower available in the country.


1) Providing an effective tool for Software Maintenance:

* Use appropriate language for system maintenance (especially in developing application systems) and develop tools to use these languages.

* Optimal use of system implementation such as CASE tools.

* The use of programming standards and protocols.

* The use of the principles, methods and modern programming techniques.


2) Using proper techniques in software development:

* Designing on the basis of independent modules.

* Designing and programming using methods consistent with the effective software engineering principles in software development.

* Prototyping before making the full system.


3) Having the right people for the software maintenance:

* Select professionals familiar with the project language and programming language.

* Enough familiarity of project group with the host machine and the target machine.

* Having experienced group to offset the effect of the product increasing complexity on development and maintenance costs.

* Selecting individuals with the ability to adequately analyze the project and coordinate teamwork.

* Having individuals with experience in the similar work like this project and the host machine.

* Having individuals aware of the application and familiar with the expectations of the system.


4) Considering the future

* Consider the program structure and acceptability of changes.

* Careful analysis of the needs based on the present situation and future trends for software maintenance.

* Making changes in the environment with regard to software conditions, the rate of efficiency increase, and maintenance costs.

When the COCOMO model was first described, structured programming was not used as it is today and software tools were not widely available. Nowadays the use of tools has increased and structured techniques are common. Therefore, some factors that were originally defined are no longer important. Some of the factors identified by Mr. Boehm (1) (such as the computer memory limitation factor) are outdated, but the overall coefficients of the product, computer, personnel and project categories are still a good fit. Given that all HIS systems are linked in a network, a computer network factor has been added. Boehm took these factors into consideration in his time, but today, with current technology, no scholar had examined and updated these factors. In this study we updated the factors extracted by Boehm. According to the results, the validity of all these factors was confirmed, and the importance of the factors related to “project” and “computer network” was higher than that of the other attributes; this means that project managers must estimate software maintenance cost taking these two characteristics into account.
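For context, the intermediate COCOMO equation that such cost drivers feed into is Effort = a * KLOC^b * EAF, where EAF is the product of the cost-driver multipliers from the product, computer, personnel and project groups. The small Python sketch below illustrates it; a and b are the commonly cited organic-mode coefficients, and the driver values are illustrative examples, not the study's data.

```python
# Illustrative sketch of the intermediate COCOMO effort equation:
# Effort (person-months) = a * KLOC^b * EAF, with EAF the product of the
# cost-driver multipliers (product, computer, personnel, project groups).

def cocomo_effort(kloc, drivers, a=3.2, b=1.05):
    eaf = 1.0
    for multiplier in drivers.values():
        eaf *= multiplier
    return a * (kloc ** b) * eaf

drivers = {
    "product_complexity": 1.15,   # product attribute
    "execution_time":     1.00,   # computer attribute
    "analyst_capability": 0.86,   # personnel attribute
    "use_of_tools":       0.91,   # project attribute
}

print(f"{cocomo_effort(20, drivers):.1f} person-months")
```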




Based on the interviews, 32 factors were identified in the cost estimation of medical software maintenance and were approved by informatics specialists. Using the AHP model, the 6 groups of parameters were ranked. Since in each piece of research a problem is stated and examined and solutions are proposed at the end, in this study we also provide solutions to reduce maintenance costs. What the informatics experts agree on for reducing maintenance costs is that “with respect to some important factors such as accuracy in HIS project feasibility, along with complete documentation and involving the design and implementation teams in the maintenance phase, favorable results can be achieved in reducing the cost.”

Generally, we can conclude that for an accurate assessment and reduction of software maintenance cost, determining the software maintenance cost factors is essential. This will lead to a longer life for the software. Evaluation of these factors and their influence on maintenance costs helps the project manager in making decisions and planning, and is essential for the success of software maintenance. Project managers must consider these factors in their projects and decisions:

* HIS software generally runs on a network, and to give a better service to applicants, data collection is done on a central server. As a result, the software should be developed for a network and maintainers should provide their service over a network. In other words, a stand-alone system costs less, but for networked applications, computer network costs are added to the total. So in designing this software these costs should also be taken into account.

* To reduce maintenance costs and increase the longevity of HIS software, determining the cost estimation factors is necessary; this can help to increase productivity and provide a native model to estimate the system maintenance cost. It will enable the project manager to estimate the real cost of the system at any time.




Entering and tracking defects is one of the main tasks for testers. It is important that any time a defect is found (whether it was actively being looked for or not), it is logged. Even if you are not sure whether it really is a defect, or if you think it maybe isn’t that big of a deal, I believe it is better to log it, analyze it and then, if necessary, close it, than to just ignore it. This article explains the process of software defect tracking and gives recommendations on how to implement a defect tracking system.


So what exactly is a defect? As a general definition, a defect is any aspect of the application that does not act or behave as designed or expected. This can range from the obvious cases where the application crashes, data is lost, or calculations give the wrong results, to more subtle cases of usability problems and lack of features. From a strictly test-process point of view, defects are linked to the software’s requirements. A tester examines the requirements and then writes test cases that test against these requirements. If a test fails, or if in any other way it is found that a requirement is not met correctly, then it is a defect. From a product, or company, point of view, however, a defect is anything that a customer finds or would find to be a problem, whether there is a requirement or not. This latter definition is often more important, since it is most organizations’ goal to satisfy their customers.

Entering a Defect

As mentioned, any time a defect or even a potential defect is found, it should be entered. The exact format for entering a defect will vary between organizations, and some organizations may need more or less information. Here I will provide the basic and most common information that should be included when creating a defect report.

  • ID: A unique identifier for the defect. This ID is typically what different people use when referring to the defect. When using a defect tracking system, this ID is usually generated automatically.
  • Product: The product that the defect is for. If your organization only has one product, this may not be necessary.
  • Component: If your software application contains several components, e.g. database, web server, UI, etc., this field is used to indicate which component the defect is in. Sometimes a tester may have to guess at this if they aren’t familiar with the architecture.
  • Summary: A short, one-line description of the defect. This is critical because the summary is often what is shown in summary reports and search results, so it must be descriptive enough that any team member (i.e. tester, developer, manager, etc.) can get a basic understanding of the problem.
  • Description: This is where more details of the defect are entered. It is important to be as clear and complete as possible so that there is no ambiguity when reading. It should also be concise, because if it is too verbose, the reader might get lost in it and miss the point. When appropriate, include the steps needed to reproduce the problem, including expected and actual results. This will make it easier for the developer to locate the problem, and easier later on for the tester to verify that the problem has been fixed.
  • Test Case Reference: If applicable, reference the test case that was being executed when the defect was found.
  • Environment: This will vary greatly depending on the type of application being tested, but it should at least include the version of the software being tested, and may include things like operating system, firmware version, browser, database version, etc.
  • Severity: As an organization, you should decide on a standard set of severity levels, for example: Critical, Major, Normal, Minor. Some organizations may choose to have more or fewer, but 3 to 5 levels seem to work best. Assigning a severity is obviously a judgment call, but it helps to prioritize defects later on.
  • Priority: Again, an organization should have pre-defined priority levels, e.g. P1 to P5. Typically it should not be up to the tester to assign the priority; this should be done by the management team. Often the question arises of the difference between severity and priority. They are quite correlated, but it is possible to have a high-severity defect with a low priority. For example, if a certain action causes the application to crash, then that is a very severe defect, but if the conditions that cause it would almost never occur, it may be given a low priority since there may be other, more easily seen defects to be fixed. An opposite example may be a typo on a screen. This has low severity, but if the typo may confuse the user and/or make the application look unprofessional and, therefore, affect sales, the defect could be set to a high priority. Typically priority is used by developers to determine which defects to address first.
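As a hypothetical sketch of how these fields might be grouped into a record (real trackers such as Bugzilla and Jira define their own schemas; the field names and sample values below are illustrative only):

```python
# Hypothetical defect record carrying the fields described above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    defect_id: int            # usually generated by the tracking system
    product: str
    component: str
    summary: str              # one-line description shown in reports and searches
    description: str          # details, incl. expected and actual results
    environment: str          # software version, OS, browser, database, ...
    severity: str             # e.g. Critical / Major / Normal / Minor
    priority: str             # e.g. P1..P5, usually set at triage, not by the tester
    test_case_ref: str = ""   # optional link to the failed test case
    steps_to_reproduce: List[str] = field(default_factory=list)

defect = DefectReport(
    defect_id=101,
    product="WebShop",
    component="Checkout",
    summary="Order total ignores discount code",
    description="Applying code SAVE10 does not reduce the total.",
    environment="v2.3.1, Chrome 55, Windows 10",
    severity="Major",
    priority="P2",
    steps_to_reproduce=["Add item to cart", "Apply code SAVE10", "View total"],
)
print(defect.summary)
```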

Defect Life Cycle

Although a defect is typically entered by a tester, it cannot remain with the tester, but instead needs to be fixed and then verified. This movement of a defect between different people and different states is often called the defect life cycle. Each organization will probably have a slightly different set of rules for how they want defects handled, and will, therefore, have a slightly different life cycle. Most however, will have a format similar to the following:

  1. When a defect is first created, it starts off in a New state
  2. A triage meeting (see below) is held and the defect is prioritized and then set to an Open state and assigned to a developer.
  3. The developer fixes the defect and checks in the code. They then set the defect to Fixed and assign it to a tester to verify it. Alternatively, if the developer either doesn’t think it is really a defect or can’t reproduce it, they may set it to a different state (e.g. As Designed or Can’t Reproduce)
  4. The tester verifies that the bug is fixed by testing it in the next build. The tester then marks it as Verified. Alternatively, if the defect isn’t fixed, they set the state back to Open and reassign it to the developer.

As mentioned, this is a simplified life cycle, and life cycles can often get more complex, with more states. It is good to try to define this ahead of time to handle all situations you may encounter in your organization. Below is an example of a defect life cycle, used in Bugzilla:

Bugzilla Lifecycle
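A minimal sketch of how such a life cycle could be enforced in code; the states and allowed transitions below mirror the simplified flow described above, not Bugzilla's actual workflow.

```python
# Minimal sketch of a defect life cycle as a state machine (illustrative states).

ALLOWED_TRANSITIONS = {
    "New":      {"Open"},
    "Open":     {"Fixed", "As Designed", "Can't Reproduce"},
    "Fixed":    {"Verified", "Open"},       # reopened if the fix fails verification
    "Verified": set(),
}

def transition(current_state: str, new_state: str) -> str:
    if new_state not in ALLOWED_TRANSITIONS.get(current_state, set()):
        raise ValueError(f"Illegal transition: {current_state} -> {new_state}")
    return new_state

state = "New"
state = transition(state, "Open")      # triage meeting prioritizes and assigns it
state = transition(state, "Fixed")     # developer fixes and reassigns to the tester
state = transition(state, "Verified")  # tester confirms the fix in the next build
print(state)
```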

Defect Triage Meeting

A triage meeting is a meeting that brings all stakeholders (e.g. testers, managers, developers, etc.) together to discuss defects. Typically, a triage meeting will look at all new defects that have been logged since the last triage meeting, any hold-over defects from a previous meeting, or any defects that have been re-opened. The purpose of the meeting is to discuss the defects and clarify any details about them, and then decide which defects should be fixed and set the priority for fixing them. Each stakeholder may have their own agenda as to which defects get fixed and the priority of them, but it is in this meeting that a consensus should be reached. Once the defects have been analyzed and prioritized, they should be assigned to the development team to fix them.


One reason for tracking defects is that the data can provide metrics that can be used when evaluating the quality of the application and in deciding when an application is ready to be released. A common defect-related metric is the number of new defects logged over the past set period of time (e.g. day, week, etc.), tracked over time. Typically, there will be a spike in new defects every time a new build is given to test, but over time, the number of new defects found in each period should decline. An example would be a bar graph showing the number of new defects that were entered on each day of the month (figure: Defects by day).

Another common metric to track is the total number of defects in new/open states vs. those in a verified/closed state over the life of the project. At the beginning of the project the new/open defect count will be rising, but over time it should decrease to near zero (figure: New and Open defects).

Another useful metric is a snapshot chart showing the total number of defects, grouped by the various states, at a specific point in time. Near the beginning of a project, most defects will be New, but as the project comes to an end most should be closed (figure: Defect Status, part way through a project).

With all these metrics you can add further information by separating the data by priority and/or severity, which can put a better perspective on things. You can also separate the data by component to help get a better idea of which parts of an application are more stable than others. Like all metrics, however, it is important not to read too much into them, and to treat them more as guidelines when making quality decisions.
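An illustrative sketch of computing the "new defects per period" and "open vs. closed" figures from raw defect records; the records below are made-up data, not from a real project.

```python
# Sketch of the defect metrics described above, over made-up defect records.

from collections import Counter
from datetime import date

defects = [
    {"created": date(2016, 12, 1), "state": "Verified"},
    {"created": date(2016, 12, 1), "state": "Open"},
    {"created": date(2016, 12, 2), "state": "New"},
    {"created": date(2016, 12, 5), "state": "Open"},
    {"created": date(2016, 12, 5), "state": "Verified"},
]

# New defects logged per day (the bar-chart metric).
per_day = Counter(d["created"] for d in defects)
for day, count in sorted(per_day.items()):
    print(day, count)

# Snapshot of new/open vs. verified/closed defects (the trend and snapshot metrics).
open_states = {"New", "Open"}
open_count = sum(1 for d in defects if d["state"] in open_states)
print("open:", open_count, "closed:", len(defects) - open_count)
```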

Defect Tracking Software

While it is possible to manage defects using a paper-based system and/or a word processor and/or a spreadsheet, I would suggest using a 3rd-party application for tracking defects. When looking for a good defect tracking system, you should consider the following criteria:

  • Web-based: This isn’t a must, especially for a small group, but it is definitely easier to administer the application if it is web-based. The software only needs to be installed and configured in one location, and then all users can access it.
  • Multi-user: This should be a given. Even if you are a one-man show to start, it leaves room to expand.
  • Search: This is one of the most important things to look at. All systems should have some kind of searching, but check how easy it is to use and how comprehensive it is. Does it allow searching on any custom fields? Also, does the system allow for saved searches or saved filters? This is critical, because over time you will set up and save many often-used searches.
  • Reports: Check what kind of reports, if any, come with the system. Are the reports useful? Also, check that you can generate custom reports that fit the needs of your organization.
  • History: The defect tracking system should track all changes made to a defect, including comments added and state transitions. This information allows the user to easily see the history of changes made to the defect.
  • Configurable fields: Does the system support configuration or customization? You may want to add fields that are unique to the type of project you are working on. You would probably also want control of the values that certain fields can take.
  • Configurable work flow: As mentioned earlier, each organization may have a slightly different work flow. Does the application allow you to add new states or customize which state transitions are allowed?
  • Email notifications: This is not a must, but it is nice to have email notifications for users when defects get assigned to them or when defects that are assigned to them get modified.

When I began working as an independent software developer, I used Bugzilla. It is free, mature, full featured, and covered all of my needs. I am currently using Jira, which has a few more features and a way nicer user interface. Although it isn’t free, at $10 a year (for 1 to 10 users), I think it’s worth it. For a list of some of the more common defect tracking systems, see the references section below.

Source: Software Defect Tracking


The Right Attitude toward Defects

The president sets a goal of reducing unemployment, but not of eliminating it. Why is that? Well, because having nobody in the country unemployed is simply impossible outside of a planned economy – people will quit and take time off between jobs or get laid off and have to spend time searching for new ones. Some unemployment is inevitable.

Management, particularly in traditional 'waterfall' shops, tends to view defects in the same light. We clearly can't avoid defects, but if we worked really hard, we could reduce them by half. This attitude is a core part of the problem.

It’s often met with initial skepticism, but what I tell clients is that they should shoot for having no escaped defects (defects that make it to production, as opposed to ones that are caught by the team during testing). In other words, don’t shoot for a 20% or 50% reduction – shoot for not having defects.

It’s not that shooting for 100% will stretch teams further than shooting for 20% or 50%. There’s no psychological gimmickry to it. Instead, it’s about ceasing to view defects as “just part of writing software.” Defects are not inevitable, and coming to view them as preventable mistakes rather than facts of life is important because it leads to a reaction of “oh, wow, a defect – that’s bad, let’s figure out how that happened and fix it” instead of a reaction of “yeah, defects, what are you going to do?”

When teams realize and accept this, they turn an important corner on the road to defect reduction.

What Won’t Help

Once the mission is properly set to one of defect elimination, it’s important to understand what either won’t help at all or what will help only superficially. And this set includes a lot of the familiar levers that dev managers like to pull.

First, and probably most critical to understand, is that the core cause of defects is NOT that developers aren't trying hard enough or being careful enough. In other words, it's not as though a developer is sitting at his desk and thinking, "I could make this code I'm writing defect free, but, meh, I don't feel like it because I want to go home."

It is precisely for this reason that exhortations for developers to work harder or to be more careful won’t work. They already are, assuming they aren’t overworked or unhappy with their jobs, and if those things are true, asking for more won’t work anyway.

And, speaking of overworked, increasing workload in a push to get defect free will backfire. When people are forced to work long hours, the work becomes grueling and boring. "Grueling and boring" is a breeding ground for mistakes – not a fix for them. Resist the urge to make large, effort-intensive quality pushes. That solution should seem too easy, and, in fact, it is.

Finally, resist any impulse to forgo the carrot in favor of the stick and threaten developers or teams with consequences for defects. This is a desperate gambit, and, simply put, it never works. If developers’ jobs depend on not introducing defects, they will find a way to succeed in not introducing defects, even if it means not shipping software, cutting scope, or transferring to other teams/projects. The road to quality isn’t lined by fear.

Understand Superficial Solutions

Once managers understand that eliminating defects is possible and that draconian measures will be counterproductive, the next danger is a tendency to seize on the superficial. Unlike the ideas in the last section, these won’t be actively detrimental, but the realized gains will be limited.

The first thing that everyone seems to seize on is mandating unit test coverage, since this forces the developers to write automated tests, which catch issues. The trouble here is that high coverage doesn’t actually mean that the tests are effective, nor does it cover all possible defect scenarios. Hiring or logging additional QA hours will be of limited efficacy for similar reasons.

Another thing folks seem to love is the “bug bash” concept, wherein the team takes a break from delivering features and does their best to break the software and then repair the breaks. While this certainly helps in the short term, it doesn’t actually change anything about the development or testing process, so gains will be limited.

And finally, coding standards enforced at code review certainly don't hurt anything, but they are also not a game changer. To the chagrin of managers everywhere, a complete list of "here are all of the mistakes one could make, so don't make them" does not simply emerge from the past experience of the tenured developers on the team.

Change the Game

So what does it take to put a serious dent into defect counts and to fundamentally alter the organization’s views about defects? The answers here are more philosophical.

The first consideration is to get integration to be continuous and to make deployments to test and production environments trivial. Defects hide and fester in the speculative world between written code and the environment in which it will eventually be run. If, on the other hand, developers see the effects their code will have on production immediately, the defect count will plummet.

Part and parcel with this tight feedback loop strategy is to have an automated regression and problem detection suite. Notice that I'm not talking about test coverage or even unit tests, but about a broader concept. Your suite will include these things, but it might also include smoke/performance tests or tests to see if resources are starved. The idea is to have automated detection for things that could go wrong: regressions, integration mistakes, performance issues, etc. These will allow you, rather than your customers, to discover defects.
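As one illustration, a single entry in such a suite might be a smoke test that simply checks that a freshly deployed service responds at all. This is a minimal sketch; the health-check URL is an assumption, not part of any particular system.

    import urllib.request

    def test_service_smoke():
        # Fail fast if the deployed service is not even responding.
        with urllib.request.urlopen("http://localhost:8000/health", timeout=5) as resp:
            assert resp.status == 200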

And, finally, on the code side, you need to reduce or eliminate error prone practices and parts of the code. Is there a file that’s constantly being merged and could lead to errors? Do your developers copy, paste, and tweak? Are there config files that require a lot of careful, confusing attention to detail? Does your team have an established code review process, or is it something that is still happening ad-hoc? Recognize these mistake-inviters for what they are and eliminate them.

Source : How to Actually Reduce Software Defects


Much like we gain knowledge about the behavior of the physical universe via the scientific method, we gain knowledge about the behavior of our software via a system of assertion, observation, and experimentation called “testing.”

There are many things one could desire to know about a software system. It seems that most often we want to know if it actually behaves like we intended it to behave. That is, we wrote some code with a particular intention in mind: does it actually do that when we run it?

In a sense, testing software is the reverse of the traditional scientific method, where you test the universe and then use the results of that experiment to refine your hypothesis. Instead, with software, if our “experiments” (tests) don’t prove out our hypothesis (the assertions the test is making), we change the system we are testing. That is, if a test fails, it hopefully means that our software needs to be changed, not that our test needs to be changed. Sometimes we do also need to change our tests in order to properly reflect the current state of our software, though. It can seem like a frustrating and useless waste of time to do such test adjustment, but in reality it’s a natural part of this two-way scientific method–sometimes we’re learning that our tests are wrong, and sometimes our tests are telling us that our system is out of whack and needs to be repaired.

This tells us a few things about testing:

  1. The purpose of a test is to deliver us knowledge about the system, and knowledge has different levels of value. For example, testing that 1 + 1 still equals two no matter what time of day it is doesn’t give us valuable knowledge. However, knowing that my code still works despite possible breaking changes in APIs I depend on could be very useful, depending on the context. In general, one must know what knowledge one desires before one can create an effective and useful test, and then must judge the value of that information appropriately to understand where to put time and effort into testing.
  2. Given that we want to know something, in order for a test to be a test, it must be asserting something and then informing us about that assertion. Human testers can make qualitative assertions, such as whether or not a color is attractive. But automated tests must make assertions that computers can reliably make, which usually means asserting that some specific quantitative statement is true or false. We are trying to learn something about the system by running the test–whether the assertion is true or false is the knowledge we are gaining. A test without an assertion is not a test.
  3. Every test has certain boundaries as an inherent part of its definition. Much like you couldn’t design a single experiment to prove all the theories and laws of physics, it would be prohibitively difficult to design a single test that actually validated all the behaviors of any complex software system at once. If it seems that you have made such a test, most likely you’ve combined many tests into one and those tests should be split apart. When designing a test, you should know what it is actually testing and what it is not testing.
  4. Every test has a set of assumptions built into it, which it relies on in order to be effective within its boundaries. For example, if you are testing something that relies on access to a database, your test might make the assumption that the database is up and running (because some other test has already checked that that part of the code works). If the database is not up and running, then the test neither passes nor fails–it instead provides you no knowledge at all. This tells us that all tests have at least three results–pass, fail, and unknown. Tests with an "unknown" result must not say that they failed–otherwise they are claiming to give us knowledge when in fact they are not. (A minimal sketch of points 2 and 4 follows this list.)
  5. Because of these boundaries and assumptions, we need to design our suite of tests in such a way that the full set, when combined, actually gives us all of the knowledge we want to gain. That is, each individual test only gives us knowledge within its boundaries and assumptions, so how do we overlap those boundaries so that they reliably inform us about the real behavior of the entire system? The answer to this question may also affect the design of the software system being tested, as some designs are harder to completely test than others.
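Here is that minimal sketch, illustrating points 2 and 4 above with pytest as one possible test framework. The database check is a hypothetical helper standing in for a real environment probe.

    import pytest

    def add(a, b):
        return a + b

    def database_is_running():
        return False   # hypothetical check; imagine it probes a real test database

    def test_add_makes_an_assertion():
        assert add(1, 1) == 2   # the assertion is the knowledge the test delivers

    def test_accounts_table_requires_database():
        if not database_is_running():
            # The assumption does not hold: report "unknown" (a skip), not a failure.
            pytest.skip("database unavailable; result is unknown")
        # ...the real assertion about the accounts table would go here...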

The last point in the list above leads us into the many methods of testing being practiced today, in particular end to end testing, integration testing, and unit testing.

End to End Testing

“End to end” testing is where you make an assertion that involves one complete “path” through the logic of the system. That is, you start up the whole system, perform some action at the entry point of user input, and check the result that the system produces. You don’t care how things work internally to accomplish this goal; you just care about the input and result. That is generally true for all tests, but here we’re testing at the outermost point of input into the system and checking only the outermost result that it produces.

An example end to end test for creating a user account in a typical web application would be to start up a web server, a database, and a web browser, and use the web browser to actually load the account creation web page, fill it in, and submit it. Then you would assert that the resulting page somehow tells us the account was created successfully.
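A minimal sketch of that end to end test, using Selenium as one possible browser-automation tool, might look like the following. The URL, form field names, and success message are assumptions about a hypothetical application.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_account_creation_end_to_end():
        driver = webdriver.Firefox()   # assumes a local Firefox and geckodriver
        try:
            driver.get("http://localhost:8000/signup")
            driver.find_element(By.NAME, "username").send_keys("newuser")
            driver.find_element(By.NAME, "email").send_keys("newuser@example.com")
            driver.find_element(By.NAME, "password").send_keys("s3cret!")
            driver.find_element(By.CSS_SELECTOR, "form button[type=submit]").click()
            # We only check the outermost result that the system produces.
            assert "Account created" in driver.page_source
        finally:
            driver.quit()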

The idea behind end to end testing is that we gain fully accurate knowledge about our assertions because we are testing a system that is as close to “real” and “complete” as possible. All of its interactions and all of its complexity along the path we are testing are covered by the test.

The problem of using only end to end testing is that it makes it very difficult to actually get all of the knowledge about the system that we might desire. In any complex software system, the number of interacting components and the combinatorial explosion of paths through the code make it difficult or impossible to actually cover all the paths and make all the assertions we want to make.

It can also be difficult to maintain end to end tests, as small changes in the system’s internals lead to many changes in the tests.

End to end tests are valuable, particularly as an initial stopgap for a system that entirely lacks tests. They are also good as sanity checks that your whole system behaves properly when put together. They have an important place in a test suite, but they are not, by themselves, a good long-term solution for gaining full knowledge of a complex system.

If a system is designed in such a way that it can only be tested via end-to-end tests, that is a symptom of broad architectural problems in the code. These issues should be addressed through refactoring until one of the other testing methods can be used.

Integration Testing

This is where you take two or more full “components” of a system and specifically test how they behave when “put together.” A component could be a code module, a library that your system depends on, a remote service that provides you data–essentially any part of the system that can be conceptually isolated from the rest of the system.

For example, in a web application where creating an account sends the new user an email, one might have a test that runs the account creation code (without going through a web page, just exercising the code directly) and checks that an email was sent. Or one might have a test that checks that account creation succeeds when one is using a real database–that “integrates” account creation and the database. Basically this is any test that is explicitly checking that two or more components behave properly when used together.
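The following is a minimal, self-contained sketch of such an integration test. The account store and email sender are small stand-ins for real application components; the point is that two components are exercised together and the assertions check their combined behavior.

    class AccountStore:
        def __init__(self):
            self.users = {}

        def create(self, username, email):
            if username in self.users:
                raise ValueError("username already taken")
            self.users[username] = email
            return email

    class EmailSender:
        def __init__(self):
            self.sent = []   # a real implementation would talk to an SMTP server

        def send_welcome(self, address):
            self.sent.append(("welcome", address))

    def create_account(store, mailer, username, email):
        address = store.create(username, email)
        mailer.send_welcome(address)

    def test_creating_account_sends_welcome_email():
        store, mailer = AccountStore(), EmailSender()
        create_account(store, mailer, "newuser", "newuser@example.com")
        assert store.users["newuser"] == "newuser@example.com"        # store + creation integrate
        assert mailer.sent == [("welcome", "newuser@example.com")]    # creation + email integrate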

Compared to end to end testing, integration testing involves a bit more isolation of components as opposed to just running a test on the whole system as a “black box.”

Integration testing doesn’t suffer as badly from the combinatorial explosion of test paths that end to end testing faces, particularly when the components being tested are simple and thus their interactions are simple. If two components are hard to integration test due to the complexity of their interactions, this indicates that perhaps one or both of them should be refactored for simplicity.

Integration testing is also usually not a sufficient testing methodology on its own, as doing an analysis of an entire system purely through the interactions of components means that one must test a very large number of interactions in order to have a full picture of the system’s behavior. There is also a maintenance burden with integration testing similar to end to end testing, though not as bad–when one makes a small change in one component’s behavior, one might have to then update the tests for all the other components that interact with it.

Unit Testing

This is where you take one component alone and test that it behaves properly. In our account creation example, we could have a series of unit tests for the account creation code, a separate series of unit tests for the email sending code, a separate series of unit tests for the web page where users fill in their account information, and so on.

Unit testing is most valuable when you have a component that presents strong guarantees to the world outside of itself and you want to validate those guarantees. For example, a function’s documentation says that it will return the number “1” if passed the parameter “0.” A unit test would pass this function the parameter “0” and assert that it returned the number “1.” It would not check how the code inside of the component behaved–it would only check that the function’s guarantees were met.
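As a minimal sketch, such a unit test might look like this; the function and its documented guarantee are hypothetical.

    def normalize(n):
        """Return 1 when passed 0; otherwise return n unchanged."""
        return 1 if n == 0 else n

    def test_normalize_returns_one_for_zero():
        assert normalize(0) == 1   # verify the documented guarantee, not the implementation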

Usually, a unit test is testing one behavior of one function in one class/module. One creates a set of unit tests for a class/module that, when run together, covers all the behavior that you want to verify in that module. This almost always means testing only the public API of the system, though–unit tests should be testing the behavior of the component, not its implementation.

Theoretically, if all components of the system fully define their behavior in documentation, then by testing that each component is living up to its documented behavior, you are in fact testing all possible behaviors of the entire system. When you change the behavior of one component, you only have to update a minimal set of tests around that component.

Obviously, unit testing works best when the system’s components are reasonably separate and are simple enough that it’s possible to fully define their behavior.

It is often true that if you cannot fully unit test a system, but instead have to do integration testing or end to end testing to verify behavior, some design change to the system is needed. (For example, components of the system may be too entangled and may need more isolation from each other.) Theoretically, if a system were well-isolated and had guarantees for all of the behavior of every function in the system, then no integration testing or end to end testing would be necessary. Reality is often a little different, though.


In reality, there is a scale of testing with infinitely many stages between unit testing and end to end testing. Sometimes a test falls somewhere between a unit test and an integration test; sometimes it falls somewhere between an integration test and an end to end test. Real systems usually require all sorts of tests along this scale in order to understand their behavior reliably.

For example, sometimes you're testing only one part of the system but its internals depend on other parts of the system, so you're implicitly testing those too. This doesn't make your test an integration test; it just makes it a unit test that is also testing other internal components implicitly–slightly larger than a unit test, and slightly smaller than an integration test. In fact, this is the sort of testing that is often the most effective.


Some people believe that in order to do true “unit testing” you must write code in your tests that isolates the component you are testing from every other component in the system–even that component’s internal dependencies. Some even believe that this “true unit testing” is the holy grail that all testing should aspire to. This approach is often misguided, for the following reasons:

  • One advantage of having tests for individual components is that when the system changes, you have to update fewer unit tests than you have to update with integration tests or end to end tests. If you make your tests more complex in order to isolate the component under test, that complexity could defeat this advantage, because you’re adding more test code that has to be kept up to date anyway.

    For example, imagine you want to test an email sending module that takes an object representing a user of the system and sends an email to that user. You could invent a “fake” user object–a completely separate class–just for your test, out of the belief that you should be “just testing the email sending code and not the user code.” But then when the real User class changes its behavior, you have to update the behavior of the fake User class–and a developer might even forget to do this, making your email sending test now invalid because its assumptions (the behavior of the User object) are invalid. (A minimal sketch of this trade-off follows the list.)

  • The relationships between a component and its internal dependencies are often complex, and if you're not testing its real dependencies, you might not be testing its real behavior. This sometimes happens when developers fail to keep “fake” objects in sync with real objects, but it can also happen by failing to make a “fake” object as genuinely complex and full-featured as the “real” object.

    For example, in our email sending example above, what if real users could have seven different formats of username but the fake object only had one format, and this affected the way email sending worked? (Or worse, what if this didn’t affect email sending behavior when the test was originally written, but it did affect email sending behavior a year later and nobody noticed that they had to update the test?) Sure, you could update the fake object to have equal complexity, but then you’re adding even more of a maintenance burden for the fake object.

  • Having to add too many “fake” objects to a test indicates that there is a design problem with the system that should be addressed in the code of the system instead of being “worked around” in the tests. For example, it could be that components are too entangled–the rules of “what is allowed to depend on what” or “what are the layers of the system” might not be well-defined enough.
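Here is a minimal, self-contained sketch of the email-sending example from the list above. The User class and send_welcome_email function are hypothetical. The first test uses the real User class (accepting a little overlap with User's own tests, but keeping its assumptions valid); the second uses a hand-rolled fake that must now be kept in sync by hand.

    class User:
        def __init__(self, username):
            self.username = username

        def email_address(self):
            return f"{self.username}@example.com"

    def send_welcome_email(user, outbox):
        outbox.append(("Welcome!", user.email_address()))

    def test_send_welcome_email_with_real_user():
        outbox = []
        send_welcome_email(User("alice"), outbox)
        assert outbox == [("Welcome!", "alice@example.com")]

    class FakeUser:
        # Duplicated behavior: if User ever changes its address format,
        # this fake (and the test below) silently drifts out of date.
        def email_address(self):
            return "alice@example.com"

    def test_send_welcome_email_with_fake_user():
        outbox = []
        send_welcome_email(FakeUser(), outbox)
        assert outbox == [("Welcome!", "alice@example.com")]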

In general, it is not bad to have “overlap” between tests. That is, you have a test for the public APIs of the User code, and you have a test for the public APIs of the email sending code. The email sending code uses real User objects and thus also does a small bit of implicit “testing” on the User objects, but that overlap is okay. It’s better to have overlap than to miss areas that you want to test.

Isolation via “fakes” is sometimes useful, though. One has to make a judgment call and be aware of the trade-offs above, attempting to mitigate them as much as possible via the design of the “fake” instances. In particular, fakes are worthwhile when they add two properties to a test: determinism and speed.


If nothing about the system or its environment changes, then the result of a test should not change. If a test is passing on my system today but failing tomorrow even though I haven’t changed the system, then that test is unreliable. In fact, it is invalid as a test because its “failures” are not really failures–they’re an “unknown” result disguised as knowledge. We say that such tests are “flaky” or “non-deterministic.”

Some aspects of a system are genuinely non-deterministic. For example, you might generate a random string based on the time of day, and then show that string on a web page. In order to test this reliably, you would need two tests:

  1. A test that uses the random-string generation code over and over to make sure that it properly generates random strings.
  2. A test for the web page that uses a fake random-string generator that always returns the same string, so that the web page test is deterministic.

Of course, you would only need the fake in that second test if verifying the exact string in the web page was an important assertion. It’s not that everything about a test needs to be deterministic–it’s that the assertions it is making need to always be true or always be false if the system itself hasn’t changed. If you weren’t asserting anything about the string, the size of the web page, etc. then you would not need to make the string generation deterministic.
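A minimal sketch of those two tests, assuming a hypothetical page-rendering function that takes its string generator as a parameter so a deterministic fake can be swapped in:

    import random
    import string

    def generate_random_string(length=8):
        return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

    def render_page(generate=generate_random_string):
        return f"<p>Your code is {generate()}</p>"

    def test_random_string_generator():
        # Test 1: the genuinely non-deterministic component, on its own.
        samples = [generate_random_string() for _ in range(100)]
        assert all(len(s) == 8 for s in samples)
        assert len(set(samples)) > 1   # it really does vary

    def test_page_shows_the_generated_code():
        # Test 2: the page, with a fake generator so the assertion is deterministic.
        html = render_page(generate=lambda: "abcdefgh")
        assert html == "<p>Your code is abcdefgh</p>"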


One of the most important uses of tests is that developers run them while they are editing code, to see if the new code they’ve written is actually working. As tests become slower, they become less and less useful for this purpose. Or developers continue to use them but start writing code more and more slowly because they keep having to wait for the tests to finish.

In general, a test suite should not take so long that a developer becomes distracted from their work and loses focus while they wait for it to complete. Existing research indicates this takes somewhere between 2 and 30 seconds for most developers. Thus, a test suite used by developers during code editing should take roughly that length of time to run. It might be okay for it to take a few minutes, but that wouldn’t be ideal. It would definitely not be okay for it to take ten minutes, under most circumstances.

There are other reasons to have fast tests beyond just the developer’s code editing cycle. At the extreme, slow tests can become completely useless if they only deliver their result after it is needed. For example, imagine a test that took so long, you only got the result after you had already released the product to users. Slow tests affect lots of processes in a software engineering organization–it’s simplest for them just to be fast.

Sometimes there is some behavior that is inherently slow in a test. For example, reading a large file off of a disk. It can be okay to make a test “fake” out this slow behavior–for example, by having the large file in memory instead of on the disk. Like with all fakes, it is important to understand how this affects the validity of your test and how you will maintain this fake behavior properly over time.
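A minimal sketch of that kind of fake: a parser that accepts any file-like object, so the test can hand it an in-memory buffer instead of a large file on disk. The function and log format are illustrative.

    import io

    def count_error_lines(stream):
        return sum(1 for line in stream if line.startswith("ERROR"))

    def test_count_error_lines_with_in_memory_file():
        fake_file = io.StringIO("INFO ok\nERROR boom\nERROR again\n")
        assert count_error_lines(fake_file) == 2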

It is sometimes also useful to have an extra suite of “slow” tests that aren’t run by developers while they edit code, but are run by an automated system after code has been checked in to the version control system, or run by a developer right before they check in their code. That way you get the advantage of a fast test suite that developers can use while editing, but also the more-complete testing of real system behavior even if testing that behavior is slow.


There are tools that run a test suite and then tell you which lines of system code actually got run by the tests. They say that this tells you the "test coverage" of the system. These can be useful tools, but it is important to remember that they don't tell you if those lines were actually tested; they only tell you that those lines of code were run. If there is no assertion about the behavior of that code, then it was never actually tested.
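As a minimal sketch of the difference, the following test executes every line of the function (so a coverage tool reports 100%) yet asserts nothing, and therefore tests nothing. The function is hypothetical.

    def price_with_tax(price, rate):
        tax = price * rate
        return price + tax

    def test_price_with_tax_runs():
        price_with_tax(100, 0.2)   # every line is "covered", but nothing is asserted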


There are many ways to gain knowledge about a system, and testing is just one of them. We could also read its code, look at its documentation, talk to its developers, etc., and each of these would give us a belief about how the system behaves. However, testing validates our beliefs, and thus is particularly important among all of these methods.

The overall goal of testing is to gain valid knowledge about the system. This goal overrides all other principles of testing–any testing method is valid as long as it produces that result. However, some testing methods are more efficient–they make it easier to create and maintain tests which produce all the information we desire. These methods should be understood and used appropriately, as your judgment dictates and as they apply to the specific system you’re testing.

Source : The Philosophy of Testing


Model–view–controller (MVC) is a software design pattern for implementing user interfaces on computers. It divides a given software application into three interconnected parts, so as to separate internal representations of information from the ways that information is presented to or accepted from the user.[1][2]

Traditionally used for desktop graphical user interfaces (GUIs), this architecture has become popular for designing web applications.

As with other software architectures, MVC expresses the "core of the solution" to a problem while allowing it to be adapted for each system.[3] Particular MVC architectures can vary significantly from the traditional description here.[4]


A typical collaboration of the MVC components.

The central component of MVC, the model, captures the behavior of the application in terms of its problem domain, independent of the user interface.[5]

  • The model directly manages the data, logic, and rules of the application.
  • A view can be any output representation of information, such as a chart or a diagram. Multiple views of the same information are possible, such as a bar chart for management and a tabular view for accountants.
  • The third part, the controller, accepts input and converts it to commands for the model or view.[6]



In addition to dividing the application into three kinds of components, the model–view–controller design defines the interactions between them.[7]

  • A model stores data that is retrieved according to commands from the controller and displayed in the view.
  • A view generates new output to the user based on changes in the model.
  • A controller can send commands to the model to update the model's state (e.g., editing a document). It can also send commands to its associated view to change the view's presentation of the model (e.g., by scrolling through a document). (A minimal sketch of these interactions follows this list.)
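Here is a minimal, console-based sketch of those interactions in code. The class and method names are illustrative only and are not taken from any particular framework.

    class Model:
        def __init__(self):
            self.items = []
            self.observers = []

        def add_item(self, text):        # the model manages data, logic, and rules
            self.items.append(text)
            for observer in self.observers:
                observer.refresh(self)   # views are told about changes in the model

    class View:
        def refresh(self, model):        # any output representation will do
            print("Items:", ", ".join(model.items))

    class Controller:
        def __init__(self, model):
            self.model = model

        def handle_input(self, command):     # convert user input into model commands
            if command.startswith("add "):
                self.model.add_item(command[4:])

    model, view = Model(), View()
    model.observers.append(view)
    controller = Controller(model)
    controller.handle_input("add milk")      # prints: Items: milk
    controller.handle_input("add eggs")      # prints: Items: milk, eggs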


One of the seminal insights in the early development of graphical user interfaces, MVC became one of the first approaches to describe and implement software constructs in terms of their responsibilities.[8]

Trygve Reenskaug introduced MVC into Smalltalk-76 while visiting the Xerox Palo Alto Research Center (PARC)[9][10] in the 1970s. In the 1980s, Jim Althoff and others implemented a version of MVC for the Smalltalk-80 class library. Only later did a 1988 article in The Journal of Object-Oriented Programming (JOOP) express MVC as a general concept.[11]

The MVC pattern has subsequently evolved,[12] giving rise to variants such as hierarchical model–view–controller (HMVC), model–view–adapter (MVA), model–view–presenter (MVP), model–view–viewmodel (MVVM), and others that adapted MVC to different contexts.

The use of the MVC pattern in web applications exploded in popularity after the introduction of Apple's WebObjects in 1996, which was originally written in Objective-C (that borrowed heavily from Smalltalk) and helped enforce MVC principles. Later, the MVC pattern became popular with Java developers when WebObjects was ported to Java. Later frameworks for Java, such as Spring (released in 2002), continued the strong bond between Java and MVC. The introduction of the frameworks Rails (December 2005, for Ruby) and Django (July 2005, for Python), both of which had a strong emphasis on rapid deployment, increased MVC's popularity outside the traditional enterprise environment in which it has long been popular. MVC web frameworks now hold large market-shares relative to non-MVC web toolkits.[13]

Use in web applications

Although originally developed for desktop computing, model–view–controller has been widely adopted as an architecture for World Wide Web applications in major programming languages. Several commercial and noncommercial web frameworks have been created that enforce the pattern. These software frameworks vary in their interpretations, mainly in the way that the MVC responsibilities are divided between the client and server.[14]

Early web MVC frameworks took a thin client approach that placed almost the entire model, view and controller logic on the server. This is still reflected in popular frameworks such as Ruby on Rails, Django, and ASP.NET MVC. In this approach, the client sends either hyperlink requests or form input to the controller and then receives a complete and updated web page (or other document) from the view; the model exists entirely on the server.[14] As client technologies have matured, frameworks such as AngularJS, EmberJS, JavaScriptMVC, and Backbone have been created that allow the MVC components to execute partly on the client (also see Ajax).

Source : Wikipedia MVC

