Site blog


The repository pattern is concerned with the static structure of a system and does not show its run-time organization. My next example illustrates a very commonly used run-time organization for distributed systems: Picture 1 describes the Client-server pattern.


Picture 1. The Client-server Pattern

A system that follows the client–server pattern is organized as a set of services and associated servers, and clients that access and use the services. The major components of this model are:

1. A set of servers that offer services to other components. Examples of servers include print servers that offer printing services, file servers that offer file management services, and a compile server, which offers programming language compilation services.

2. A set of clients that call on the services offered by servers. There will normally be several instances of a client program executing concurrently on different computers.

3. A network that allows the clients to access these services. Most client–server systems are implemented as distributed systems, connected using Internet protocols.

Client–server architectures are usually thought of as distributed systems architectures but the logical model of independent services running on separate servers can be implemented on a single computer. Again, an important benefit is separation and independence. Services and servers can be changed without affecting other parts of the system.

Clients may have to know the names of the available servers and the services that they provide. However, servers do not need to know the identity of clients or how many clients are accessing their services. Clients access the services provided by a server through remote procedure calls using a request-reply protocol such as the http protocol used in the WWW. Essentially, a client makes a request to a server and waits until it receives a reply.
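As a minimal illustration of this request-reply interaction, the sketch below assumes a hypothetical film-library server exposing a `/films/{id}` endpoint over HTTP; the client sends a request and waits for the reply (Node 18+ is assumed for the global `fetch`):

```typescript
// Minimal request-reply sketch: a client calls a hypothetical film-library
// server over HTTP and waits for the reply. Requires Node 18+ for global fetch.

interface FilmRecord {
  id: string;
  title: string;
}

async function fetchFilm(serverUrl: string, filmId: string): Promise<FilmRecord> {
  // The client only needs to know the server's name/URL and the service it offers;
  // the server does not need to know anything about this client in advance.
  const response = await fetch(`${serverUrl}/films/${filmId}`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as FilmRecord;
}

// Usage (hypothetical server address):
// fetchFilm("http://catalogue-server.example", "f-042").then(f => console.log(f.title));
```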


Picture 2. A Client-server Architecture for a Film Library

Picture 2 is an example of a system that is based on the client–server model. This is a multi-user, web-based system for providing a film and photograph library. In this system, several servers manage and display the different types of media. Video frames need to be transmitted quickly and in synchronization but at relatively low resolution. They may be compressed in a store, so the video server can handle video compression and decompression in different formats. Still pictures, however, must be maintained at a high resolution, so it is appropriate to maintain them on a separate server.

The catalog must be able to deal with a variety of queries and provide links into the web information system that includes data about the film and video clips, and an e-commerce system that supports the sale of photographs, film, and video clips. The client program is simply an integrated user interface, constructed using a web browser, to access these services.

The most important advantage of the client–server model is that it is a distributed architecture. Effective use can be made of networked systems with many distributed processors. It is easy to add a new server and integrate it with the rest of the system or to upgrade servers transparently without affecting other parts of the system.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated Course: KI142303B
[ Modified: Friday, 23 December 2016, 11:25 ]
 

The layered architecture and MVC patterns are examples of patterns where the view presented is the conceptual organization of a system. The Repository pattern (Picture 1) describes how a set of interacting components can share data.

The majority of systems that use large amounts of data are organized around a shared database or repository. This model is therefore suited to applications in which data is generated by one component and used by another. Examples of this type of system include command and control systems, management information systems, CAD systems, and interactive development environments for software.


Picture 1. The Repository Pattern

Picture 2 is an illustration of a situation in which a repository might be used. This diagram shows an IDE that includes different tools to support model-driven development. The repository in this case might be a version-controlled environment that keeps track of changes to software and allows rollback to earlier versions.

Organizing tools around a repository is an efficient way to share large amounts of data. There is no need to transmit data explicitly from one component to another. However, components must operate around an agreed repository data model. Inevitably, this is a compromise between the specific needs of each tool and it may be difficult or impossible to integrate new components if their data models do not fit the agreed schema. In practice, it may be difficult to distribute the repository over a number of machines. Although it is possible to distribute a logically centralized repository, there may be problems with data redundancy and inconsistency.
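A minimal sketch of this idea follows, with illustrative names: the tools never talk to each other directly, they only store and retrieve artefacts that conform to the agreed data model of the repository.

```typescript
// Sketch of a shared repository with an agreed data model. Tool components
// read and write design artefacts only through this interface; they never
// exchange data with each other directly. All names are illustrative.

interface DesignArtefact {
  id: string;
  kind: "uml-model" | "code" | "test-report";
  content: string;
}

interface Repository {
  store(artefact: DesignArtefact): void;
  retrieve(id: string): DesignArtefact | undefined;
}

class InMemoryRepository implements Repository {
  private artefacts = new Map<string, DesignArtefact>();

  store(artefact: DesignArtefact): void {
    this.artefacts.set(artefact.id, artefact);
  }

  retrieve(id: string): DesignArtefact | undefined {
    return this.artefacts.get(id);
  }
}

// A model editor and a code generator share data only via the repository:
const repo = new InMemoryRepository();
repo.store({ id: "m1", kind: "uml-model", content: "<model/>" });
const model = repo.retrieve("m1"); // another tool picks the model up later
```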


Picture 2. A Repository Architecture for an IDE

In the example shown in Picture 2, the repository is passive and control is the responsibility of the components using the repository. An alternative approach, which has been derived for AI systems, uses a ‘blackboard’ model that triggers components when particular data become available. This is appropriate when the form of the repository data is less well structured: decisions about which tool to activate can only be made when the data has been analyzed. This model was introduced by Nii (1986). Bosch (2000) includes a good discussion of how this style relates to system quality attributes.
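A toy sketch of the blackboard idea, with hypothetical names: components register interest in a kind of data and are triggered when such data is posted, instead of pulling from a passive store.

```typescript
// Toy blackboard: components subscribe to a kind of data and are triggered
// ("activated") when such data is posted, instead of polling a passive store.

type Listener = (data: unknown) => void;

class Blackboard {
  private listeners = new Map<string, Listener[]>();

  // A knowledge source registers interest in a particular kind of data.
  onData(kind: string, listener: Listener): void {
    const existing = this.listeners.get(kind) ?? [];
    existing.push(listener);
    this.listeners.set(kind, existing);
  }

  // Posting data triggers every component that registered for this kind.
  post(kind: string, data: unknown): void {
    for (const listener of this.listeners.get(kind) ?? []) {
      listener(data);
    }
  }
}

const board = new Blackboard();
board.onData("parsed-source", (ast) => console.log("analyser activated", ast));
board.post("parsed-source", { file: "main.c" }); // analyser runs now
```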

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated Course: KI142303B
[ Modified: Friday, 23 December 2016, 09:16 ]
 

1) Optimize for Flow

Flow has a big influence on your productivity. Flow allows you to give all your focus to the specific problem you are solving. Flow is a multiplier of your performance. It is fair to say that when you are programming and not in flow, you are wasting time.


Usually you need some time to get into flow. Use music as a catalyst to speed this process up. Once you are in flow, your new problem is to stay in flow: make sure you are not getting interrupted. This includes putting the internet (including your email client) and your smartphone on standby.

 

2) Use big chunks of time for creating software

Three one-hour sessions of programming are not as effective as a single three-hour session. Remember that you always have a constant overhead to get into flow, so try to allocate big chunks of time for crafting software to keep this overhead low. Schedule your meetings around your programming sessions and not in between. Know the difference between the maker's schedule and the manager's schedule and use it to your advantage.

 

3) Fast Feedback

If your application always needs 30 seconds to build, you will get distracted during these 30 seconds. Distraction is a flow killer, so make sure you get feedback as fast as possible. Strive for fast build times (or, even better, live reloading), fast test runs, and fast deployments.

 

4) Automate your processes

Your projects usually have some non-automated processes, e.g. running bundler after changes to your Gemfile.lock. Automate these processes. Otherwise they are very prone to human error: if a developer forgets one step, he might get stuck. This means a loss of flow, and it will cost you 15 minutes to get back into it.
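As one possible sketch of such automation (illustrative only; a git hook or the watch mode of your own build tool would serve the same purpose), a small Node/TypeScript watcher could re-run bundler whenever Gemfile.lock changes:

```typescript
// Sketch of automating the "run bundler after Gemfile.lock changes" step,
// so nobody has to remember it. Illustrative only; in practice a file watcher
// or a git hook in your own toolchain would play the same role.

import { watchFile } from "node:fs";
import { execSync } from "node:child_process";

const LOCK_FILE = "Gemfile.lock"; // hypothetical path in the project root

watchFile(LOCK_FILE, { interval: 1000 }, () => {
  console.log(`${LOCK_FILE} changed, running bundle install...`);
  try {
    execSync("bundle install", { stdio: "inherit" });
  } catch (err) {
    // A clear error message helps the developer recover instead of getting stuck.
    console.error("bundle install failed, please fix the Gemfile and retry.", err);
  }
});
```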

Your automated processes need to work well. If your CI server reports failing tests at random times, even when the tests should pass, everybody will ignore the CI server because its results cannot be trusted. The whole process then becomes useless.

Make sure your processes have a great user experience. This includes crafting great error messages: Error messages should always help the user to recover by providing a possible solution to the problem.

Use code generators to automate the process of software development as much as possible. Computers don't make mistakes, but software developers do.

 

5) Do not debug, make bugs impossible by design

As an engineer your goal is to build a great application. So when you are debugging you are not moving closer to the goal of getting it done. A good way to increase productivity is to make bugs impossible by design. It’s not possible all the time, and sometimes debugging is necessary, but many bugs can be avoided.

Report failure conditions early; fail fast. Use the type system of your programming language to make sure only valid data is passed to your functions at compile time. Use exceptions instead of `null` checks: `null` checks will be forgotten, while exceptions are thrown even if you don't handle them.
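A small TypeScript sketch of this idea, assuming strictNullChecks and purely illustrative names: invalid data is rejected at the boundary with an exception, and the rest of the code relies on the type instead of repeating null checks.

```typescript
// Sketch of "make bugs impossible by design" with a type system (TypeScript,
// strictNullChecks assumed) and exceptions instead of null checks.

type EmailAddress = { readonly value: string }; // only constructed via parseEmail

function parseEmail(raw: string): EmailAddress {
  // Fail fast: report the failure condition at the boundary, as an exception.
  if (!raw.includes("@")) {
    throw new Error(`"${raw}" is not a valid e-mail address`);
  }
  return { value: raw };
}

function sendWelcomeMail(to: EmailAddress): void {
  // No null check needed here: the type guarantees `to` is a parsed address,
  // and an invalid input would already have thrown in parseEmail.
  console.log(`Sending welcome mail to ${to.value}`);
}

sendWelcomeMail(parseEmail("alice@example.org")); // ok
// sendWelcomeMail("not-an-email");               // rejected at compile time
```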




Source

Associated Course: KI142303B
 

Software maintenance costs result from modifying your application to either support new use cases or update existing ones, along with the continual bug fixing after deployment. As much as 70-80% of the Total Cost of Ownership (TCO) of the software can be attributed to maintenance costs alone!

Software maintenance activities can be classified as [1]:

  • Corrective maintenance – costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs)
  • Adaptive maintenance – costs due to modifying a software solution to allow it to remain effective in a changing business environment (25% of software maintenance costs)
  • Perfective maintenance – costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs)
  • Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs)

Since maintenance costs eclipse other software engineering activities by a large amount, it is imperative to answer the following question:

How maintainable is my application/source-code, really?

The answer to this question is non-trivial and requires a further understanding of what it means for an application to be maintainable. Measuring software maintainability is non-trivial as there is no single metric to state whether one application is more maintainable than another, and there is no single tool that can analyze your code repository and provide you with an accurate answer either. There is no substitute for a human reviewer, but even humans can't analyze entire code repositories to give a definitive answer. Some amount of automation is necessary.

So, how can you measure the maintainability of your application? To answer this question let’s dissect the definition of maintainability further. Imagine you have access to the source code of two applications – A and B. Let’s say you also have the super human ability to compare both of them in a small span of time. Can you tell, albeit subjectively, whether you think one is more maintainable than the other? What does the adjective maintainable imply for you when making this comparison – think about this for a second before we move on.

Done? So, how did you define maintainability? Most software engineers would think of some combination of testability, understandability and modifiability of code as measures of maintainability. Another aspect that is equally critical is the ability to understand the requirement (the "what") that is implemented by the code (the "how"). That is, is there a mapping from code to requirements and vice versa that could be discerned from the code base itself? This information may exist externally as a traceability document, but even having some information in the source code itself (the way it's laid out into packages/modules, the naming conventions, or READMEs in every package explaining the role of the classes) can be immensely valuable.

These core facets can be broken down further, to gain further insight into the maintainability of the application:

  1. Testability – the presence of an effective test harness; how much of the application is being tested, the types of tests (unit, integration, scenario etc.,) and the quality of the test cases themselves?
  2. Understandability – the readability of the code; are naming conventions followed? Is it self-descriptive and/or well commented? Are things (e.g., classes) doing only one thing or many things at once? Are the methods really long or short and can their intent be understood in a single pass of reading or does it take a good deal of screen staring and whiteboard analysis?
  3. Modifiability – structural and design simplicity; how easy is it to change things? Are things tightly or loosely coupled (i.e., separation of concerns)? Are all elements in a package/module cohesive and their responsibilities clear and closely related? Does it have overly deep inheritance hierarchies or does it favor composition over inheritance? How many independent paths of execution are there in the method definitions (i.e., cyclomatic complexity)? How much code duplication exists?
  4. Requirement to implementation mapping and vice versa – how easy is it to say “what” the application is supposed to do and correlate it with “how” it is being done, in code? How well is it done? Does it need to be refactored and/or optimized? This information is paramount for maintenance efforts and it may or may not exist for the application under consideration, forcing you to reverse engineer the code and figure out the ‘what’ yourself.

Those are the four major dimensions on which one can measure maintainability. Each of the facets can be (and is) broken down further for a more granular comparison. These may or may not be the exact same ones that you thought of, but there will be a great deal of overlap. Also, not every criterion is equally important. For some teams, testability may trump structural/design simplicity. That is, they may care a lot more about the presence of test cases (depth and breadth) than about deep inheritance trees or a slightly more tightly coupled design. It is thus vital to know which dimension of maintainability is more important for your maintenance team when measuring the quality of your application, and to carry out the reviews and refactoring with those in mind.

The table below, towards the end of the article, shows a detailed breakdown of the above dimensions of maintainability and elaborates on their relevance to measuring the quality of the source code [2]:

  1. Correlation with quality: How much does the metric relate with our notion of software quality? It implies that nearly all programs with a similar value of the metric will possess a similar level of quality. This is a subjective correlational measure, based on our experience.
  2. Importance: How important is the metric and are low or high values preferable when measuring them? The scales, in descending order of priority are: Extremely Important, Important and Good to have
  3. Feasibility of automated evaluation: Are things fully or partially automatable and what kinds of metrics are obtainable?
  4. Ease of automated evaluation: In case of automation how easy is it to compute the metric? Does it involve mammoth effort to set up or can it be plug-and-play or does it need to be developed from scratch? Any OTS tools readily available?
  5. Completeness of automated evaluation: Does the automation completely capture the metric value or is it inconclusive, requiring manual intervention? Do we need to verify things manually or can we directly rely on the metric reported by the tool?
  6. Units: What units/measures are we using to quantify the metric?

[Table: detailed breakdown of the maintainability dimensions and their relevance to measuring source-code quality]

 

There is no single metric that can accurately capture the notion of maintainability of an application. There exist compound metrics like maintainability index (MI) that help predict the maintainability of the application using the Halstead Volume, Cyclomatic Complexity, Total SLOC (source lines of code) and Comments Ratio [3]:

Equation for computing the Maintainability Index (MI):

MI = 171 − 5.2 × ln(V) − 0.23 × G − 16.2 × ln(L) + 50 × sin(√(2.4 × C))

Where:

  • V is the average Halstead Volume per module
  • G is the average Cyclomatic Complexity per module
  • L is the average number of Source Lines of Code (SLOC) per module
  • C is the average number of comment lines per module

(Note: some variants of the formula suggest using ‘sum total values’ instead of averages)
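As a rough illustration, here is a small sketch that plugs the four averages defined above into the commonly cited coefficient set for MI (some published variants use the percentage of comment lines, or sum totals, for the last terms), so a team could track the value per build:

```typescript
// Sketch of computing the Maintainability Index from the four averages defined
// above, using the commonly cited coefficient set (some variants differ slightly).

function maintainabilityIndex(
  avgHalsteadVolume: number,       // V
  avgCyclomaticComplexity: number, // G
  avgSloc: number,                 // L
  avgCommentLines: number          // C
): number {
  return (
    171 -
    5.2 * Math.log(avgHalsteadVolume) -
    0.23 * avgCyclomaticComplexity -
    16.2 * Math.log(avgSloc) +
    50 * Math.sin(Math.sqrt(2.4 * avgCommentLines))
  );
}

// Example with hypothetical values: V=250, G=6, L=120, C=15 per module.
console.log(maintainabilityIndex(250, 6, 120, 15).toFixed(1));
```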

The use of this metric is debatable, but it could be used in conjunction with the above metrics, or your team could create a compound metric based on the above dimensions. As long as the metric makes sense to your team and your organization, you are free to create your own, provided they are meaningful.

It is wise to keep tracking the relevant metrics at various anchor-point milestones and throughout the development life-cycle, as well as having periodic code reviews to ensure that code quality stays high. As you can see, one can't (and shouldn't) rely solely on the metrics output by automated tools. Care must be taken to interpret the value of the metrics and to use them to guide the refactoring of the code base.




Source

Associated Course: KI142303B
 

The notions of separation and independence are fundamental to architectural design because they allow changes to be localized. The MVC pattern, shown in Picture 1, separates elements of a system, allowing them to change independently. For example, adding a new view or changing an existing view can be done without any changes to the underlying data in the model. The layered architecture pattern is another way of achieving separation and independence. This pattern is shown in Picture 4. Here, the system functionality is organized into separate layers, and each layer only relies on the facilities and services offered by the layer immediately beneath it.



Picture 1. The Model-View-Controller (MVC) Pattern

 

Picture 2. The Organization of The MVC

 

Picture 3. Web Application Architecture using the MVC Pattern
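As a rough sketch of the separation the MVC pattern is after, the fragment below uses hypothetical names (PatientModel, TextView, Controller): the view and controller depend only on the model's interface, so a new view can be added without touching the model.

```typescript
// Minimal MVC sketch (illustrative names). A new view can be added without
// touching the model, which is the separation the pattern aims for.

class PatientModel {
  private observers: Array<() => void> = [];
  private name = "";

  setName(name: string): void {
    this.name = name;
    this.observers.forEach((notify) => notify()); // the model notifies its views
  }
  getName(): string {
    return this.name;
  }
  subscribe(onChange: () => void): void {
    this.observers.push(onChange);
  }
}

class TextView {
  constructor(private model: PatientModel) {
    model.subscribe(() => console.log(`Patient: ${model.getName()}`));
  }
}

class Controller {
  constructor(private model: PatientModel) {}
  handleRename(input: string): void {
    this.model.setName(input); // user input is mapped onto model updates
  }
}

const model = new PatientModel();
new TextView(model);           // adding another view needs no model changes
new Controller(model).handleRename("A. Example");
```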


This layered approach supports the incremental development of systems. As a layer is developed, some of the services provided by that layer may be made available to users. The architecture is also changeable and portable. So long as its interface is unchanged, a layer can be replaced by another, equivalent layer. Furthermore, when layer interfaces change or new facilities are added to a layer, only the adjacent layer is affected. As layered systems localize machine dependencies in inner layers, this makes it easier to provide multi-platform implementations of an application system. Only the inner, machine-dependent layers need be re-implemented to take account of the facilities of a different operating system or database.

Picture 4. The Layered Architecture Pattern
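The replaceable-layer idea can be sketched as follows; the names (StorageLayer, FileStorage, CloudStorage, RecordService) are illustrative and each implementation is a simplified in-memory stand-in. As long as the StorageLayer interface is unchanged, the layer beneath the application code can be swapped.

```typescript
// Sketch of layer replacement behind a stable interface: as long as the
// interface is unchanged, the layer beneath can be swapped (illustrative names).

interface StorageLayer {
  save(key: string, value: string): void;
  load(key: string): string | undefined;
}

class FileStorage implements StorageLayer {
  private data = new Map<string, string>(); // stand-in for file-based persistence
  save(key: string, value: string): void { this.data.set(key, value); }
  load(key: string): string | undefined { return this.data.get(key); }
}

class CloudStorage implements StorageLayer {
  private remote = new Map<string, string>(); // stand-in for remote calls
  save(key: string, value: string): void { this.remote.set(key, value); }
  load(key: string): string | undefined { return this.remote.get(key); }
}

// The layer above only depends on StorageLayer, so swapping the
// implementation does not affect it.
class RecordService {
  constructor(private storage: StorageLayer) {}
  archive(id: string, contents: string): void { this.storage.save(id, contents); }
}

new RecordService(new FileStorage());
new RecordService(new CloudStorage()); // replaced layer, unchanged interface
```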


Picture 5 is an example of a layered architecture with four layers. The lowest layer includes system support software, typically database and operating system support. The next layer is the application layer, which includes the components concerned with the application functionality and the utility components that are used by other application components. The third layer is concerned with user interface management and providing user authentication and authorization, with the top layer providing user interface facilities. Of course, the number of layers is arbitrary; any of the layers in Picture 5 could be split into two or more layers. Picture 6 is an example of how this layered architecture pattern can be applied to a library system called LIBSYS, which allows controlled electronic access to copyright material from a group of university libraries. This has a five-layer architecture, with the bottom layer being the individual databases in each library. (A similar layered organization is used for the mental healthcare system, MHC-PMS, discussed in earlier chapters.)

 

 

Picture 5. A Generic Layered Architecture

 

Picture 6. The Architecture of The LIBSYS System

 

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated Course: KI142303B
[ Modified: Friday, 23 December 2016, 09:03 ]
 
by VYNSKA AMALIA PERMADI 5116201002 - Friday, 23 December 2016, 08:44

A Fresh Look at User Acceptance Testing

User Acceptance Testing is universally recognised as an important element in the delivery of stable software that meets the business requirements. However, there are significant challenges in executing the UAT phase in accordance with best industry practices. Some of these challenges are based on a fundamental misunderstanding of the importance of UAT.

Perhaps it is because the User Acceptance Testing is the last significant activity in the project. Perhaps it is because the subject matter experts (SMEs) necessary to its success are rarely dedicated to the activity. Or perhaps there is too much reliance on the quality of testing that has occurred prior to the user acceptance testing phase.

It is surely time to reconsider its status as the ugly duckling of software quality and give it the prominence it deserves. Let me explain why UAT is so fundamental to a successful project.

 

The UAT Quality Gate

It doesn't matter whether you have adopted a version of an agile testing methodology where UAT is seen as part of a hardening sprint, or whether UAT is the final activity in a waterfall project. What is common to all methodologies is that you need to ensure that the delivered software meets the business objectives. It is the vital Go/No Go quality gate.

Search the internet and you will find a vast number of articles about how UAT should be tackled, typically linked to the development methodology employed. But this totally ignores the majority of applications, which are not developed in-house. Instead they are purchased or consumed in the cloud and often support the key business processes. For these applications, user acceptance testing is the only activity that ensures the business needs will be met.

 

The UAT Challenges

Once the importance of UAT is understood, you will hopefully agree that its inherent challenges must be solved rather than accepted as part of an imperfect world. So let's look at some of those challenges, starting with the much-touted prerequisite before UAT can commence.

In the world which many internet authors appear to inhabit, all the mechanics of an application have been tested and operate perfectly before UAT starts. The UAT best practice is simply to ensure that the delivered capabilities match the business requirements. In the messy real world which most of us inhabit, this is absolute rubbish.

 

UAT for ERP applications

Take ERP applications, for example, a market dominated by SAP and Oracle. One of our customers implemented a new version from one of these major software companies and found over 400 functional errors. Admittedly they were one of the early adopters of the release, but if major software companies struggle to deliver code to UAT that is functionally perfect, it is wise to assume the same for in-house developments and plan accordingly.

That means there will be multiple iterations of the UAT cycle (or conference room pilot, in the vernacular of some vendors). So we must manage user fatigue as well as the general availability of SMEs: business users will perhaps run with two UAT cycles, but beyond that it is not only their availability but also their effectiveness that must be considered as repetition jaundice sets in. Some companies get the UAT test plan initially executed by business-aware members of the QA test team and thus protect the scarce availability of business users.

In other companies we find dedicated UAT teams, but that may be deemed a luxury by others. Even a dedicated team is not perfect: if UAT is their full-time job, how do you ensure that their knowledge of an evolving business remains current?

 

The importance of UAT

So we have established that UAT is a vital activity, but that the delivered software will not function perfectly and that there will be challenges in getting sufficient qualified resources to undertake the work. Balancing these factors requires two things. You need buy-in from senior management, and you need a user acceptance test plan that is realistic in its time-scales and has sufficient contingency to deal with functional defects and the response speed of internal or external development teams. That may mean that the planned implementation date needs to move, but it is far better to get that agreed in advance: no one will lose their job if the application is delivered early!

 

UAT testing tools

Lastly, let's consider whether there are tools that can help the user acceptance testing process. The challenges are planning, activity tracking, status and communication. You can try to use scraps of paper, Microsoft Excel/Word and email, but in reality, for anything beyond a very small project, these are not fit for purpose. There are many products available for the planning and management of the user acceptance test process, and in Qualify we offer one that can be configured exactly to meet your needs. While valuable, these products don't tackle the fundamental problem: how do you know what testing was actually performed, and how do you create the reproduction steps so that defects can be rapidly resolved? Our TestDrive-UAT represents a new class of products specifically designed to tackle this challenge.




Source

Associated Course: KI142303B
 

Software maintenance is a major issue for most CIOs because over 50% of IT time is spent on it, a daunting figure that hasn't changed much through the years. It's also one of the least favorite topics that CEOs and other C-level executives want to hear about.

It's easy to avoid the topic of software maintenance, given the focus on today's problems and pressures that are brought on by a constantly changing business environment. In this high-pressure situation, which relentlessly continues its advance, the idea of going back to "fix things" or to even see why they haven't been implemented, seems pointless.

Nevertheless, CIOs have to care about this, especially when IT budgets continue to remain flat and when half of IT staff is deployed on maintenance every day. These CIOs see their project loads burgeoning, knowing that only half of their staff is free to work on them.

The vicious circle of continuous software maintenance is fueled by various factors, which include the following:

  • In many cases, age-old legacy systems are characterized by difficult-to-maintain (and difficult-to-diagnose) "spaghetti code" that was written in the days when code was free-flowing and unstructured, yet these systems continue to run mission-critical applications. It takes time to untangle this code and to fix or embellish it. The task is rendered more difficult because the code is usually undocumented, and the original writers have long since retired.
  • New code is not as technically solid as it should be; the reason is enterprise pressure to deploy the code, even if it is imperfect. Consequently, the organization lives with the imperfections until they become so overwhelming that the software maintenance team has to go in and fix them so the system can get back into production.
 

CIOs cannot buck these circumstances, but some are beginning to take steps to reduce the amount of time IT spends on maintaining imperfect and broken systems. Here are five best practices to consider.

 

1: Use the cloud version of software to sidestep a legacy system

Some enterprises have actively deployed cloud versions of internal systems (like their enterprise resource planning systems) when they bring on new companies through acquisition. The reason is simple: by moving a new organization to the cloud, personnel in this business at least get used to using the same software that the acquiring enterprise uses. Over time, a decision can be made to transfer the acquired organization into the in-house enterprise system.

However, as more enterprises use this strategy, more are rethinking their approach. The result has been a change in thinking to where the ultimate goal becomes moving everyone (including the enterprise) to the cloud-based system. The idea is to push software maintenance to the cloud provider, thereby eliminating most of the time that internal IT has to spend on it.

 

 

2: Replace a custom system with a generic package

 

It sometimes makes sense to replace a custom system in favor of a third-party generic package that has more contemporary capabilities. In a situation like this, IT can also eliminate most of the software maintenance it incurs with the old software. The key is getting users — and the business — on board. Many times the customization that has been built into a system over decades can't be replaced with a more generic solution because of the competitive advantage the custom solution provides.

 

 

3: Invest in more quality assurance (QA) test bench automation

QA is one of the functions that many organizations shortcut in the interests of getting software into production quickly. This isn't likely to change, but new automated testing tools in QA that run automated scripts and check for software deficiencies can change how well software runs and reduce software fix time.

 

 

4: Retrain and redeploy software maintenance personnel

As much as CIOs don't like to admit it, there is a pecking order in IT. The employees who often get placed on software maintenance teams are older IT programmers, new employees, or programmers who do not demonstrate proficiency in new app development. If software maintenance is to be reduced, these workers will need to be retrained and redeployed. Despite budget limitations and work commitments, CIOs must demonstrate a commitment to adopt these measures.

 

 

5: Set a metric for percent of personnel engaged in new projects

CEOs and other C-level executives might not want to hear about software maintenance, but if the CIO presents (and starts measuring against) a metric that shows the percent of IT staff dedicated to new projects and explains how software maintenance can negatively impact this, others are bound to take notice and view the effort more strategically.

 

 

 

Source

Associated Course: KI142303B
 

Architectural design is a creative process where you design a system organization that will satisfy the functional and non-functional requirements of a system. Because it is a creative process, the activities within the process depend on the type of system being developed, the background and experience of the system architect, and the specific requirements for the system. It is therefore useful to think of architectural design as a series of decisions to be made rather than a sequence of activities. During the architectural design process, system architects have to make a number of structural decisions that profoundly affect the system and its development process. Based on their knowledge and experience, they have to consider the following fundamental questions about the system:

  1. Is there a generic application architecture that can act as a template for the system that is being designed?
  2. How will the system be distributed across a number of cores or processors?
  3. What architectural patterns or styles might be used?
  4. What will be the fundamental approach used to structure the system?
  5. How will the structural components in the system be decomposed into sub-components?
  6. What strategy will be used to control the operation of the components in the system?
  7. What architectural organization is best for delivering the non-functional requirements of the system?
  8. How will the architectural design be evaluated?
  9. How should the architecture of the system be documented?

 

Although each software system is unique, systems in the same application domain often have similar architectures that reflect the fundamental concepts of the domain. For example, application product lines are applications that are built around a core architecture with variants that satisfy specific customer requirements. When designing a system architecture, you have to decide what your system and broader application classes have in common, and decide how much knowledge from these application architectures you can reuse.

For embedded systems and systems designed for personal computers, there is usually only a single processor and you will not have to design a distributed architecture for the system. However, most large systems are now distributed systems in which the system software is distributed across many different computers. The choice of distribution architecture is a key decision that affects the performance and reliability of the system.

The architecture of a software system may be based on a particular architectural pattern or style. An architectural pattern is a description of a system organization (Garlan and Shaw, 1993), such as a client–server organization or a layered architecture. Architectural patterns capture the essence of an architecture that has been used in different software systems. You should be aware of common patterns, where they can be used, and their strengths and weaknesses when making decisions about the architecture of a system.

Garlan and Shaw’s notion of an architectural style (style and pattern have come to mean the same thing) covers questions 4 to 6 in the previous list. You have to choose the most appropriate structure, such as client–server or layered structuring, that will enable you to meet the system requirements. To decompose structural system units, you decide on the strategy for decomposing components into sub-components. The approaches that you can use allow different types of architecture to be implemented. Finally, in the control modeling process, you make decisions about how the execution of components is controlled. You develop a general model of the control relationships between the various parts of the system.

Because of the close relationship between non-functional requirements and software
architecture, the particular architectural style and structure that you choose for a system should depend on the non-functional system requirements:

  1. Performance. If performance is a critical requirement, the architecture should be designed to localize critical operations within a small number of components, with these components all deployed on the same computer rather than distributed across the network. This may mean using a few relatively large components rather than small, fine-grain components, which reduces the number of component communications. You may also consider run-time system organizations that allow the system to be replicated and executed on different processors.
  2. Security. If security is a critical requirement, a layered structure for the architecture should be used, with the most critical assets protected in the innermost layers, with a high level of security validation applied to these layers. 
  3. Safety. If safety is a critical requirement, the architecture should be designed so that safety-related operations are all located in either a single component or in a small number of components. This reduces the costs and problems of safety validation and makes it possible to provide related protection systems that can safely shut down the system in the event of failure.
  4. Availability. If availability is a critical requirement, the architecture should be designed to include redundant components so that it is possible to replace and update components without stopping the system.
  5. Maintainability. If maintainability is a critical requirement, the system architecture should be designed using fine-grain, self-contained components that may readily be changed. Producers of data should be separated from consumers and shared data structures should be avoided.

 

Obviously there is potential conflict between some of these architectures. For example, using large components improves performance and using small, fine-grain components improves maintainability. If both performance and maintainability are important system requirements, then some compromise must be found. This can sometimes be achieved by using different architectural patterns or styles for different parts of the system.

Evaluating an architectural design is difficult because the true test of an architecture is how well the system meets its functional and non-functional requirements when it is in use. However, you can do some evaluation by comparing your design against reference architectures or generic architectural patterns. Bosch’s (2000) description of the non-functional characteristics of architectural patterns can also be used to help with architectural evaluation.

 

Reference : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated Course: KI142303B
 

You can improve the quality of your software architecture design by using the following 10 tips. Describing your software architecture design is useful for any type of project; it will share the design of the system among your stakeholders.

 

  1. Based on non-functional requirements
  2. Rationale, rationale, rationale
  3. Don’t Repeat Yourself
  4. Slice the cake
  5. Prototype
  6. Quantify
  7. Get it working, Get it right, Get it optimized
  8. Focus on the boundaries and interfaces
  9. The Perfect is the enemy of the Good
  10. Align with your stakeholders

 

1. Based on requirements

You should base your software architecture design on the requirements of your stakeholders. An architecture focuses on the non-functional requirements. I see many software architecture designs based on purely technical motives. Each part of your design should be based on business requirements. You as an architect should translate these requirements into the right architectural design decisions. If a stakeholder values maintainability, you could use the layer pattern to separate several parts of the application. If performance is important, maybe layering is not a good solution. An exhaustive list of non-functional requirements can be found in ISO 9126 and in QUINT. If you do not use non-functional requirements in your organization but want to introduce them, take a look at this post.


From the non-functional requirements or quality attributes you have to create the right design. While you could create this from scratch, there are many examples in the form of design patterns or architectural patterns. A design or architectural pattern expresses a relation between a problem and a solution. Although we often think that our problem is unique, this is often not the case. If you take a step back you will see that many of our problems have already been solved using existing patterns. Two books that I can recommend are "Pattern-Oriented Software Architecture" and "Design Patterns". Both books contain a catalog of patterns. Each pattern describes the problem it solves and in which context it can be used. There are also many online pattern sources on the web, such as this one on Wikipedia and this one from The OpenGROUP.

 

2. Rationale, rationale, rationale

The most important aspect of your architecture description is the recording of the rationale behind your design decisions. It is important for a reader of the architecture description to understand the reason why you made a specific decision. Make your assumptions explicit and add them to the description. Assumptions may be invalid now or later, but at least it will be clear how you came to that decision. It makes communicating with your team (you do communicate, don't you?) that much easier if you share your rationale.


Note that recording your rationale becomes much easier if your non-functional requirements are explicit. It will be much clearer if you describe that you created several components to increase testability because testability is the most important requirement. Do describe the why and the how in your software architecture design!

 

3. Don’t Repeat Yourself (DRY)

Don't Repeat Yourself (DRY) or Duplication Is Evil (DIE) comes from software engineering in general. The DRY principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system". You can apply this principle on many levels: architecture, design, testing, source code and data. For me this is one of the most difficult things to uphold. You have to fight the repetition because it will slow you and your project down. The difficult part of this Repetition Creep, as I call it, is that it is introduced very slowly. The first repetition won't hurt you directly; it will even save some time. You are able to release the first version of the application somewhat quicker, but as I have found, it always shows up later and makes something else more difficult. At that moment you regret the decision to introduce repetition.


If you absolutely must add another copy of information, make sure that you automatically generate that copy of the information. It will make your life so much easier in the future. One thing that helps to fight repetition is to store the data where it belongs. This seems logical and is the basis of object-oriented design, but I often see it violated with regard to system architectures. For example, take packaging an application for deployment: the process in which you filter the build of your software to include only the components that are necessary in a package. Where would you store the information about which components should be included in the package? You could create a list that includes the names of the components that should be packaged. That means you introduce your first repetition! You now have two places where component names are mentioned. A better solution would be to add that information to the component itself.

When the first list in any format shows up in or around an application, alarm bells should sound and you should be on the lookout for repetition!
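For the packaging example above, a minimal sketch (illustrative names) of keeping the knowledge on the component itself and deriving the package list from it, so component names never appear in a second, hand-maintained list:

```typescript
// DRY sketch: the "ship this or not" knowledge lives on each component itself,
// and the package list is generated from it, so component names are never
// repeated in a second place. Names are illustrative.

interface Component {
  name: string;
  includeInPackage: boolean; // single authoritative place for this knowledge
}

const components: Component[] = [
  { name: "core", includeInPackage: true },
  { name: "reporting", includeInPackage: true },
  { name: "dev-tools", includeInPackage: false },
];

// The deployment package contents are derived, never maintained by hand.
function packageContents(all: Component[]): string[] {
  return all.filter((c) => c.includeInPackage).map((c) => c.name);
}

console.log(packageContents(components)); // ["core", "reporting"]
```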

 

4. Slice the cake

I struggled with naming this, but found "slicing the cake", as it is called in Agile development, the best description. By slicing the cake I mean that you design your architecture iteratively in vertical slices. An architect implements or prototypes each vertical slice to confirm that it actually works. You should do this because architectures cannot be created on paper. It does not mean that you cannot use horizontal layering or any other pattern in your architecture; in the case of layering, the horizontal layers are simply smaller. The picture below shows the principle.


Say you use layering in your architecture design because your stakeholders expect that the components that you develop for this system will be used in other systems as well. During the first iteration you design a small part of the User Interface (UI), a small part of the Business Layer (BL) and a small part of the Data Layer (DL). You make sure that this works as expected by proving it with a prototype or by actually implementing it. In the second iteration you add new functionality and expand each layer horizontally with the needed functionality. The existing UI, BL and DL are combined with the new UI, BL and DL to form the new layers.

The difficulty with slicing is how to slice the cake so that the next slice will properly align with the previous one.

 

5. Prototype

When creating a software architecture design, make sure that you prototype your design. Validate your assumptions, do that performance test and make sure that the security architecture is valid. A prototype will give you the opportunity to fail fast, which is a good thing.

 

6. Quantify

This principle extends the first principle, "Based on Requirements". To be able to create a proper software architecture design you need to quantify your non-functional requirements. "It should be fast" cannot be a requirement; neither can "maintainable" or "testable". How will you know if you have met these requirements? You won't.


ISO 9126 and QUINT both describe ways to quantify the non-functional requirements. For example, testability specifies an indicator "number of test cases per unit volume". QUINT also specifies how you can actually measure an indicator, for example the indicator "Ratio Reused Parts" from the quality attribute Reusability, which you can measure using the following protocol (a small sketch follows the list):

  1. Measure the size of each reused part;
  2. Measure the size of the entire software product;
  3. Calculate the ratio of reused parts, which is the sum of the reused part sizes divided by the size of the entire software product (step 2).
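A tiny sketch of this protocol, with hypothetical numbers, could look as follows:

```typescript
// Tiny sketch of the reuse-ratio protocol above: sum the sizes of the reused
// parts and divide by the size of the whole product (sizes e.g. in SLOC).

function ratioReusedParts(reusedPartSizes: number[], totalProductSize: number): number {
  const reusedTotal = reusedPartSizes.reduce((sum, size) => sum + size, 0);
  return reusedTotal / totalProductSize;
}

// Example: three reused parts of 400, 250 and 150 SLOC in a 4000 SLOC product.
console.log(ratioReusedParts([400, 250, 150], 4000)); // 0.2
```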

 

7. Get it working, Get it right, Get it optimized

In many projects I have seen architects and developers design software architectures that focus on creating general-purpose libraries, services or infrastructure. These are created without a direct reference to a concrete application. Instead they are designing for tomorrow. This, for me, is like walking backwards: generality cannot be designed up-front. If you think it can, well… stop! You actually can't. Today's businesses change way too fast to design for generality up-front.

You should always start with a concrete implementation for a specific problem. At the time you start working on the next application and find similarities, that’s the time to think about generalizing. This makes the first solution simpler, which should be your design goal.

 

8. Focus on the Boundaries and Interfaces

When creating your software architecture design you should focus on the boundaries of your system and components. When starting blank you should think about separation of concerns. Which component or system has which responsibility? Between the components or systems, design explicit interfaces. Don't separate a system or component when a lot of communication is necessary between these components or systems.

 

9. The Perfect is the enemy of the Good

The phrase "The perfect is the enemy of the good" from Voltaire is also valid for software architecture design. How many times have you started a new project and thought "I want this project to be perfect"? And how many times have you actually found out that the project wasn't perfect? Well, guess what: a project will never be perfect. There will always be problems or forgotten requirements.


Perfection is never possible. However, you are able to create a good software architecture design. Do not try to analyze everything during the start of the project; it will slow you down. Watch out for Analysis Paralysis.

 

10. Align with your stakeholders

Before you can create any type of system you need to identify your stakeholders. Each stakeholder has different needs of your software architecture and may require a different view. Software developers may need descriptions using the Unified Modeling Language (UML), while business sponsors need a description in natural language. Operations and support staff, for example, may need other views such as context diagrams.


There is a tension between creating all these views for stakeholders and principle 3, Don't Repeat Yourself. Each view essentially describes the same system and adds repetition. Therefore you should only add those descriptions that add value for a specific stakeholder.


Source

Associated Course: KI142303B
 

Introduction

The software industry has made significant progress in recent years. The entire life of software includes two phases: production and maintenance. Software maintenance cost is growing steadily, and estimates show that about 90% of software life-cycle cost is related to the maintenance phase. Extracting and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling those factors.

Methods

In this study, the factors affecting software maintenance cost were determined and ranked based on their priority, and effective ways to reduce maintenance costs were then presented. This paper is a research study. Fifteen software systems related to health care center information systems at Isfahan University of Medical Sciences and its hospitals were studied in the years 2010 to 2011.

Results and discussion

Among medical software maintenance team members, 40 were selected as the sample. After interviews with experts in this field, the factors affecting maintenance cost were determined. In order to prioritize the factors using AHP, the measurement criteria (the factors found) were first appointed by members of the maintenance team and eventually prioritized with the help of the EC software. Based on the results of this study, 32 factors were obtained, which were classified into six groups. "Project" was ranked as the most influential feature on maintenance cost, with the highest priority. By taking into account some major elements, such as a careful feasibility study of IT projects, full documentation, and involving the designers in the maintenance phase, good results can be achieved to reduce maintenance costs and increase the longevity of the software.

Key words: Health information systems, Cost, Effective factors, Software maintenance, AHP model
 
 

1. INTRODUCTION

Software production and maintenance issues, cost estimation, project scheduling and knowledge of the process have always been complicated matters in software engineering (1-8). Cost depends on the creation and maintenance of the software. Thus, continuous monitoring and control of maintenance costs, and software optimization, are really important. Taking this statistic into account also leads to careful software maintenance to reduce costs. Software maintenance costs are rising and, based on the estimations, about 90% of the cost related to the software life is in the maintenance phase. The estimations show a 50 percent increase over the past two decades (5, 6). This increase is shown in Figure 1.

Figure 1.
Development of Software maintenance costs as percentage of total cost [Floris and Harald, 2010]

In another study, the relative costs of maintenance and software development management were estimated at more than 90% of the total cost of the software life (9, 10).

Floris and Harald, in their study, identified incomplete documentation and poor maintainability as factors that increase the cost. Such defects make it difficult for the maintenance team to expand or rebuild the product, because the production team members may have left the company, retired, or been replaced by other people who are unaware of the production process (2).

Since quality improvement and reduced software life-cycle time are among the rapid application development techniques, the use of a common-sense approach in production shows that using individual techniques is not a threat to high availability, acceptable performance and the quality of projects (4).

In one study, researchers introduced support and maintenance software to estimate the maintenance effort. From these researchers' point of view, support and maintenance software was a set of activities to support IT. Magne Jorgensen came to the conclusion that 43 to 44% of estimations are done mentally by experts, and that using such models adds complexity to the estimations (10).

Therefore, in this research a software tool is introduced that, due to its simplicity and ease of use, is a replacement for the estimation models and the experts' mental estimations.

Because the design and implementation of medical software is growing in Iran, and today most medical centers and health centers want to set up such systems, this seems to be a growing and effective trend in the automation of hospitals, medical centers and healthcare centers.

Mr. Boehm studied the various cost factors in simple and complex public systems (1). The results of his research are published in detail in the book Software Architectures: Critical Success Factors and Cost Drivers (14). Many researchers have focused on models and different methods of cost estimation, but what is important is to update and review the factors of each model. These models include analog models such as the Delphi method or estimations based on professional experience, models such as the analysis of performance indicators, and machine learning models including neural networks, genetic programming, fuzzy logic, and many others (11, 12, 13).

Henry Raymond (2007), in a study, used estimation techniques along with the knowledge of the project team, the project manager and the president to design a predictive model for estimating software. This model suggests that maintenance plays an important role in the success of IT projects. Though the effective use of technology for estimating time and cost is necessary, it is not sufficient. To predict the exact time and cost, management needs knowledge, knowledge integration and knowledge sharing.

 
 

2. HISTORICAL PERSPECTIVE ON THE STUDIES WITH THE AIM OF ESTIMATING THE EFFORT

  • A thesis from the University of California, with the aim of improving the volume and effort estimation models for software maintenance (12).
  • A study by Magne Jørgensen, considering results of the Simula Research Laboratory, with an overview of studies in estimating software development effort (3).
  • A studies evaluation of the Jyvaskyla Research Institute and University by Jussi Koskinen et al. to estimate the costs of software support, modernization, repair and maintenance (5).
  • A thesis by Václav Macinka from the University of Brno, with the aim of providing methods for determining the cost of software projects (8).

Taking into account the importance of software maintenance costs, Isfahan University of Medical Sciences is pursuing the following objectives in this paper:

  • Identify the factors affecting the cost of software maintenance.
  • Prioritize each of the factors affecting the cost of software maintenance.
  • Provide solutions to reduce the maintenance costs of medical software
 
 

3. METHODOLOGY

The scope of this study is all the software produced in the years 2010 to 2011. The maintenance teams of these 15 software systems were taken as the study population. After sampling, 40 members were selected randomly. In this study, a checklist (designed based on software engineering standards, the researchers' experience and experts' confirmation) was used for data collection. SPSS software and Expert Choice software were used for data analysis.

 
 

4. RESULTS

Fourteen of the software systems ran on Windows and one ran on DOS. System operational start dates ranged from 2000 to 2011, and the operational period varied from 20 months to 102 months. All of the systems were active except one, which had been disabled.

The results of the first research objectives are as follow:

* Based on studies of reputable books and literature in the field of software engineering, well-known sites, and interviews with informatics experts, 32 effective factors were obtained and examined for software maintenance cost estimation.

The cost factors were classified into 6 groups, which are as follows:

In line with the second goal (to prioritize each of the factors affecting the cost of medical software maintenance) the following results were obtained:

* For prioritization of the factors, the measurement criteria first need to be identified, before modeling. Then the six identified characteristics and their measurement criteria should be estimated and finally entered into the EC application.

To achieve this goal, the measurement criteria (32 factors) were first determined based on their importance in software maintenance.

This questionnaire was prepared on a five-degree Likert scale and distributed among specialists in this field. After naming and ordering the information, it was entered into the software. The list of measurement criteria and the results after the interviews are presented. The ranking of the influencing factors, obtained with the help of the EC software, is displayed in Table 4.

The following results were obtained regarding the third objective of the study:

* Based on the results of the current study and the deficits in the production and maintenance process, it seems that by following the guidelines mentioned below, one can reduce the cost of software maintenance and achieve the desired results of increased productivity, as well as make better use of the limited financial resources and manpower available in the country.

 

1) Providing an effective tool for Software Maintenance:

* Use appropriate language for system maintenance (especially in developing application systems) and develop tools to use these languages.

* Optimal use of system implementation such as CASE tools.

* The use of programming standards and protocols.

* The use of the principles, methods and modern programming techniques.

 

2) Using proper techniques in software development:

* Designing on the basis of independent modules.

* Designing and programming using methods consistent with the effective software engineering principles in software development.

* Prototyping before making the full system.

 

3) Having the right people for the software maintenance:

* Select professionals familiar with the project language and programming language.

* Enough familiarity of project group with the host machine and the target machine.

* Having experienced group to offset the effect of the product increasing complexity on development and maintenance costs.

* Selecting individuals with the ability to adequately analyze the project and coordinate teamwork.

* Having individuals with experience in the similar work like this project and the host machine.

* Having individuals aware of the application and familiar with the expectations of the system.

 

4) Considering the future

* Consider the program structure and acceptability of changes.

* Careful analysis of the needs based on the present situation and future trends for software maintenance.

* Making changes in the environment with regard to software conditions, the rate of efficiency increase, and maintenance costs.

When the COCOMO model was first described, the use of structured programming was not what it is today and software tools were not widely available. Nowadays the use of tools has increased and structured techniques are common. Therefore, some factors that were initially defined are no longer important. So some of the factors identified by Mr. Boehm (1) (such as the computer memory limitations factor) are outdated, but the overall coefficients of the product, computer, personnel and project categories still fit. Given that all HIS systems are linked in a network, a computer network factor has been added. Boehm took these factors into consideration in his time, but today, with such technology, no scholar has examined and updated these factors. In this study we updated the factors extracted by Boehm. According to the results, the validity of all these factors was confirmed, and the importance of the factors related to "project" and "computer network" was higher than that of the other attributes; this means that project managers must estimate the cost of software maintenance taking into account these two characteristics.

 

 

5. CONCLUSION

Based on interviews, 32 factors were identified for the cost estimation of medical software maintenance and were approved by informatics specialists. Using the AHP model, the parameters in 6 groups were ranked. Since in each piece of research a problem is stated and examined and at the end solutions are proposed, in this study we also provide solutions to reduce maintenance costs. What the informatics experts agree on for reducing maintenance costs is that "with respect to some important factors, such as accuracy in HIS project feasibility, along with complete documentation and supporting the design and implementation mechanisms in the maintenance phase, favorable results can be achieved in reducing the cost."

Generally, we can conclude that to accurately assess and reduce the cost of software maintenance, determining the software maintenance factors is essential. This will lead to a longer life for the software. Evaluation of these factors and their influence on each of the maintenance costs helps the project manager in making decisions and planning, and is essential to the success of software maintenance. Project managers must consider these factors for success in their projects and decisions:

* HIS software generally runs in a network and, to give a better service to applicants, data collection is done on a central server. As a result, the software should be developed for a network and maintainers should provide their service over the network. In other words, single-user software costs less, but for networked applications, computer network costs are added to the total. So in designing this software these costs should also be noted.

* To reduce maintenance costs and increase the longevity of HIS software, determining the cost estimation factors is necessary; this can help to increase productivity and provide a native model for estimating the system maintenance cost. It will enable the project manager to estimate the real cost of the system at any time.



Source

Associated Course: KI142303B
 