Site blog


All systems that involve interaction with a shared database can be considered to be transaction-based information systems. An information system allows controlled access to a large base of information, such as a library catalog, a flight timetable, or the records of patients in a hospital. Increasingly, information systems are web-based systems that are accessed through a web browser.

Picture 1 shows a very general model of an information system. The system is modeled using a layered approach in which the top layer supports the user interface and the bottom layer is the system database. The user communications layer handles all input and output from the user interface, and the information retrieval layer includes application-specific logic for accessing and updating the database. As we shall see later, the layers in this model can map directly onto servers in an Internet-based system.


Picture 1. Layered Information System Architecture

As an example of an instantiation of this layered model, Picture 2. shows the architecture of the MHC-PMS. Recall that this system maintains and manages details of patients who are consulting specialist doctors about mental health problems. Detail is added to each layer in the model by identifying the components that support user communications and information retrieval and access:


1. The top layer is responsible for implementing the user interface. In this case, the UI has been implemented using a web browser.
2. The second layer provides the user interface functionality that is delivered through the web browser. It includes components to allow users to log in to the system and checking components that ensure that the operations they use are allowed by their role. This layer includes form and menu management components that present information to users, and data validation components that check information consistency.
3. The third layer implements the functionality of the system and provides components that implement system security, patient information creation and updating, import and export of patient data from other databases, and report generators that create management reports.
4. Finally, the lowest layer, which is built using a commercial database management system, provides transaction management and persistent data storage.


Picture 2. The Architecture of The MHC-PMS
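To make the layering concrete, here is a minimal Python sketch of how the four layers of the MHC-PMS might be wired together. All class and method names are invented for illustration; the essential point is that each layer calls only the layer directly beneath it.

```python
class Database:
    """Lowest layer: transaction management and persistent storage."""
    def __init__(self):
        self._records = {}

    def read(self, patient_id):
        return self._records.get(patient_id)

    def write(self, patient_id, record):
        self._records[patient_id] = record


class InformationRetrieval:
    """Third layer: application logic for accessing and updating patient data."""
    def __init__(self, db):
        self._db = db

    def get_patient(self, patient_id):
        return self._db.read(patient_id)

    def update_patient(self, patient_id, record):
        self._db.write(patient_id, record)


class UserCommunications:
    """Second layer: login, role checking, forms, and data validation."""
    def __init__(self, retrieval):
        self._retrieval = retrieval

    def edit_patient(self, user_role, patient_id, record):
        if user_role != "doctor":
            raise PermissionError("this role may not edit patient records")
        self._retrieval.update_patient(patient_id, record)

    def show_patient(self, user_role, patient_id):
        if user_role not in ("doctor", "nurse"):
            raise PermissionError("this role may not view patient records")
        return self._retrieval.get_patient(patient_id)


# The top layer (a web browser) would call UserCommunications over HTTP;
# here we simply call it directly.
system = UserCommunications(InformationRetrieval(Database()))
system.edit_patient("doctor", 17, {"name": "A. Patient"})
print(system.show_patient("nurse", 17))
```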

Information and resource management systems are now usually web-based systems where the user interfaces are implemented using a web browser. For example, e-commerce systems are Internet-based resource management systems that accept electronic orders for goods or services and then arrange delivery of these goods or services to the customer. In an e-commerce system, the application-specific layer includes additional functionality supporting a ‘shopping cart’ in which users can place a number of items in separate transactions, then pay for them all together in a single transaction.

The organization of servers in these systems usually reflects the four-layer generic model presented in Picture 1. These systems are often implemented as multi-tier client–server architectures:

1. The web server is responsible for all user communications, with the user interface implemented using a web browser.
2. The application server is responsible for implementing application-specific logic as well as information storage and retrieval requests.
3. The database server moves information to and from the database and handles transaction management.

Using multiple servers allows high throughput and makes it possible to handle hundreds of transactions per minute. As demand increases, servers can be added at each level to cope with the extra processing involved.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

[ Modified: Friday, 23 December 2016, 15:30 ]
 

Transaction processing (TP) systems are designed to process user requests for information from a database, or requests to update a database (Lewis et al., 2003). Technically, a database transaction is a sequence of operations that is treated as a single unit (an atomic unit). All of the operations in a transaction have to be completed before the database changes are made permanent. This ensures that failure of operations within the transaction does not lead to inconsistencies in the database.

From a user perspective, a transaction is any coherent sequence of operations that satisfies a goal, such as ‘find the times of flights from London to Paris’. If the user transaction does not require the database to be changed then it may not be necessary to package this as a technical database transaction.

An example of a transaction is a customer request to withdraw money from a bank account using an ATM. This involves getting details of the customer’s account, checking the balance, modifying the balance by the amount withdrawn, and sending commands to the ATM to deliver the cash. Until all of these steps have been completed, the transaction is incomplete and the customer accounts database is not changed.
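Below is a minimal sketch of this withdrawal transaction using Python's built-in sqlite3 module; the schema and balances are invented for illustration. Either every step succeeds and the commit makes the changes permanent, or the rollback leaves the accounts database exactly as it was:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

def withdraw(account_id, amount):
    try:
        row = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                           (account_id,)).fetchone()      # get account details
        if row is None or row[0] < amount:                # check the balance
            raise ValueError("unknown account or insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, account_id))                # modify the balance
        # ... here the real system would command the ATM to dispense cash ...
        conn.commit()          # only now are the database changes permanent
    except Exception:
        conn.rollback()        # any failure: the database is left unchanged
        raise

withdraw(1, 30)
print(conn.execute("SELECT balance FROM accounts").fetchone())  # (70,)
```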


Picture 1. The Structure of Transaction Processing Applications

Transaction processing systems are usually interactive systems in which users make asynchronous requests for service. Picture 1. illustrates the conceptual architectural structure of TP applications. First, a user makes a request to the system through an I/O processing component. The request is processed by some application-specific logic. A transaction is created and passed to a transaction manager, which is usually embedded in the database management system. After the transaction manager has ensured that the transaction is properly completed, it signals to the application that processing has finished.


Picture 2. The Software Architecture of an ATM System

Transaction processing systems may be organized as a ‘pipe and filter’ architecture with system components responsible for input, processing, and output. For example, consider a banking system that allows customers to query their accounts and withdraw cash from an ATM. The system is composed of two cooperating software components—the ATM software and the account processing software in the bank’s database server. The input and output components are implemented as software in the ATM and the processing component is part of the bank’s database server. Picture 2. shows the architecture of this system, illustrating the functions of the input, process, and output components.
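As a toy illustration of this pipe-and-filter organization, the sketch below chains three filters so that each stage consumes the output of the previous one. The stage names and the request format are invented to mirror the input/process/output split in Picture 2.:

```python
def read_requests(raw_lines):
    """Input filter (ATM side): turn raw text into request tuples."""
    for line in raw_lines:
        yield line.strip().split(",")

def process(requests):
    """Processing filter (bank server side): act on each request."""
    for kind, account, amount in requests:
        if kind == "withdraw":
            yield f"dispense {amount} from account {account}"

def write_responses(responses):
    """Output filter: send each response back to the ATM."""
    for response in responses:
        print(response)

write_responses(process(read_requests(["withdraw,1,30"])))
```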

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

[ Modified: Friday, 23 December 2016, 11:32 ]
 
by HENDRO EKO PRABOWO 5116201006 - Friday, 23 December 2016, 11:27

Application systems are intended to meet a business or organizational need. All businesses have much in common—they need to hire people, issue invoices, keep accounts, and so on. Businesses operating in the same sector use common sector-specific applications. For example, as well as general business functions, all phone companies need systems to connect calls, manage their network, issue bills to customers, etc. Consequently, the application systems used by these businesses also have much in common.

These commonalities have led to the development of software architectures that describe the structure and organization of particular types of software systems. Application architectures encapsulate the principal characteristics of a class of systems. For example, in real-time systems, there might be generic architectural models of different system types, such as data collection systems or monitoring systems. Although instances of these systems differ in detail, the common architectural structure can be reused when developing new systems of the same type.

The application architecture may be re-implemented when developing new systems but, for many business systems, application reuse is possible without re-implementation. We see this in the growth of Enterprise Resource Planning (ERP) systems from companies such as SAP and Oracle, and vertical software packages (COTS) for specialized applications in different areas of business. In these systems, a generic system is configured and adapted to create a specific business application. For example, a system for supply chain management can be adapted for different types of suppliers, goods, and contractual arrangements.

As a software designer, you can use models of application architectures in a number of ways:

1. As a starting point for the architectural design process If you are unfamiliar with the type of application that you are developing, you can base your initial design on a generic application architecture. Of course, this will have to be specialized for the specific system being developed, but it is a good starting point for design.
2. As a design checklist If you have developed an architectural design for an application system, you can compare this with the generic application architecture. You can check that your design is consistent with the generic architecture.
3. As a way of organizing the work of the development team The application architectures identify stable structural features of the system architectures and, in many cases, it is possible to develop these in parallel. You can assign work to group members to implement different components within the architecture.
4. As a means of assessing components for reuse If you have components you might be able to reuse, you can compare these with the generic structures to see whether there are comparable components in the application architecture.
5. As a vocabulary for talking about types of applications If you are discussing a specific application or trying to compare applications of the same types, then you can use the concepts identified in the generic architecture to talk about the applications.

There are many types of application system and, in some cases, they may seem to be very different. However, many of these superficially dissimilar applications actually have much in common, and thus can be represented by a single abstract application architecture. I illustrate this here by describing the architectures of two types of application:

1. Transaction processing applications Transaction processing applications are database-centered applications that process user requests for information and update the information in a database. These are the most common type of interactive business systems. They are organized in such a way that user actions can’t interfere with each other and the integrity of the database is maintained. This class of system includes interactive banking systems, e-commerce systems, information systems, and booking systems.
2. Language processing systems Language processing systems are systems in which the user’s intentions are expressed in a formal language (such as Java). The language processing system processes this language into an internal format and then interprets this internal representation. The best-known language processing systems are compilers, which translate high-level language programs into machine code.

These particular types of system have been chosen because a large number of web-based business systems are transaction-processing systems, and all software development relies on language processing systems.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

 

The repository pattern is concerned with the static structure of a system and does not show its run-time organization. My next example illustrates a very commonly used run-time organization for distributed systems. Picture 1. describes the client–server pattern.


Picture 1. The Client-server Pattern

A system that follows the client–server pattern is organized as a set of services and associated servers, and clients that access and use the services. The major components of this model are:

1. A set of servers that offer services to other components. Examples of servers include print servers that offer printing services, file servers that offer file management services, and a compile server, which offers programming language compilation services.

2. A set of clients that call on the services offered by servers. There will normally be several instances of a client program executing concurrently on different computers.

3. A network that allows the clients to access these services. Most client–server systems are implemented as distributed systems, connected using Internet protocols.

Client–server architectures are usually thought of as distributed systems architectures but the logical model of independent services running on separate servers can be implemented on a single computer. Again, an important benefit is separation and independence. Services and servers can be changed without affecting other parts of the system.

Clients may have to know the names of the available servers and the services that they provide. However, servers do not need to know the identity of clients or how many clients are accessing their services. Clients access the services provided by a server through remote procedure calls using a request–reply protocol such as the HTTP protocol used in the WWW. Essentially, a client makes a request to a server and waits until it receives a reply.
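In code, the request–reply interaction is just a blocking call. Here is a hedged sketch using Python's standard library; the server name and URL path are placeholders, not a real service:

```python
from urllib.request import urlopen

def fetch_catalogue_entry(server, item_id):
    # The client must know the server's name; the server learns nothing
    # about this client beyond the request itself.
    with urlopen(f"http://{server}/catalogue/{item_id}") as reply:
        return reply.read()          # block until the reply arrives

# entry = fetch_catalogue_entry("library.example.org", 42)
```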


Picture 2. A Client-server Architecture for a Film Library

Picture 2. is an example of a system that is based on the client–server model. This is a multi-user, web-based system for providing a film and photograph library. In this system, several servers manage and display the different types of media. Video frames need to be transmitted quickly and in synchrony but at relatively low resolution. They may be compressed in a store, so the video server can handle video compression and decompression in different formats. Still pictures, however, must be maintained at a high resolution, so it is appropriate to maintain them on a separate server.

The catalog must be able to deal with a variety of queries and provide links into the web information system that includes data about the film and video clips, and an e-commerce system that supports the sale of photographs, film, and video clips. The client program is simply an integrated user interface, constructed using a web browser, to access these services.

The most important advantage of the client–server model is that it is a distributed architecture. Effective use can be made of networked systems with many distributed processors. It is easy to add a new server and integrate it with the rest of the system or to upgrade servers transparently without affecting other parts of the system.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

[ Modified: Friday, 23 December 2016, 11:25 ]
 

The layered architecture and MVC patterns are examples of patterns where the view presented is the conceptual organization of a system. The Repository pattern (Picture 1) describes how a set of interacting components can share data.

The majority of systems that use large amounts of data are organized around a shared database or repository. This model is therefore suited to applications in which data is generated by one component and used by another. Examples of this type of system include command and control systems, management information systems, CAD systems, and interactive development environments for software.


Picture 1. The Repository Pattern

Picture 2. is an illustration of a situation in which a repository might be used. This diagram shows an IDE that includes different tools to support model-driven development. The repository in this case might be a version-controlled environment that keeps track of changes to software and allows rollback to earlier versions.

Organizing tools around a repository is an efficient way to share large amounts of data. There is no need to transmit data explicitly from one component to another. However, components must operate around an agreed repository data model. Inevitably, this is a compromise between the specific needs of each tool and it may be difficult or impossible to integrate new components if their data models do not fit the agreed schema. In practice, it may be difficult to distribute the repository over a number of machines. Although it is possible to distribute a logically centralized repository, there may be problems with data redundancy and inconsistency.
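A minimal sketch of the repository idea in Python: tools never pass data directly to one another, they only read and write the shared store, and all of them must agree on its data model (here simply a dict keyed by artifact name; the artifacts themselves are invented):

```python
class Repository:
    """A passive shared store; control stays with the tools that use it."""
    def __init__(self):
        self._store = {}

    def put(self, name, artifact):
        self._store[name] = artifact

    def get(self, name):
        return self._store[name]

repo = Repository()

# A design editor deposits a model; a code generator later picks it up.
repo.put("uml-model", {"classes": ["Patient", "Consultation"]})
model = repo.get("uml-model")
print("generating code for", model["classes"])
```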


Picture 2. A Repository Architecture for an IDE

In the example shown in Picture 2., the repository is passive and control is the responsibility of the components using the repository. An alternative approach, which has been derived for AI systems, uses a ‘blackboard’ model that triggers components when particular data become available. This is appropriate when the form of the repository data is less well structured. Decisions about which tool to activate can only be made when the data has been analyzed. This model is introduced by Nii (1986). Bosch (2000) includes a good discussion of how this style relates to system quality attributes.
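The blackboard variant inverts control: instead of components polling a passive store, the repository calls back any component that registered interest when matching data arrives. A small sketch of that trigger mechanism (the key names and callback are illustrative):

```python
class Blackboard:
    def __init__(self):
        self._data = {}
        self._subscribers = {}   # key -> list of callbacks to trigger

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)

    def put(self, key, value):
        self._data[key] = value
        for callback in self._subscribers.get(key, []):
            callback(value)      # wake components as data becomes available

bb = Blackboard()
bb.subscribe("partial-solution", lambda v: print("analyzer triggered:", v))
bb.put("partial-solution", {"confidence": 0.7})
```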

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

[ Modified: Friday, 23 December 2016, 09:16 ]
 

1) Optimize for Flow

Flow has a big influence on your productivity. Flow allows you to give all your focus to the specific problem you are solving. Flow is a multiplier of your performance. It is fair to say that when you are programming and not in flow, you are wasting time.


Usually you need some time to get into flow. Use music as a catalyst to speed this process up. Once you are in flow, your new problem is to stay in flow: make sure you are not getting interrupted. This includes putting the internet (including your email client) and your smartphone on stand-by.

 

2) Use big chunks of time for creating software

Three one-hour sessions of programming are not as effective as a single three-hour session. Remember that you always have a constant overhead to get into flow, so try to allocate big chunks of time for crafting software to keep this overhead low. Schedule your meetings around your programming sessions, not in between them. Know the difference between the maker's schedule and the manager's schedule and use it to your advantage.

 

3) Fast Feedback

If your application always needs 30 seconds to build, you will get distracted during those 30 seconds. Distraction is a flow killer, so make sure you get feedback as fast as possible. Strive for fast build times (or, even better, live reloading), fast test runs, and fast deployments.

 

4) Automate your processes

Your projects usually have some non-automated processes, e.g. running bundler after changes to your Gemfile.lock. Automate these processes. Otherwise they are very prone to human error: if a developer forgets one step, he might get stuck. This means loss of flow, and it will cost another 15 minutes to get back into it.

Your automated processes need to work well. If your CI server reports failing tests at random times, even when the tests should pass, everybody will ignore the CI server because its results cannot be trusted. Then the whole process is useless.

Make sure your processes have a great user experience. This includes crafting great error messages: Error messages should always help the user to recover by providing a possible solution to the problem.

Use code generators to automate the process of software development as much as possible. Computers don't make mistakes, but software developers do.

 

5) Do not debug, make bugs impossible by design

As an engineer your goal is to build a great application. So when you are debugging you are not moving closer to the goal of getting it done. A good way to increase productivity is to make bugs impossible by design. It’s not possible all the time, and sometimes debugging is necessary, but many bugs can be avoided.

Report failure conditions early; fail fast. Use the type system of your programming language to make sure only valid data is passed to your functions at compile time. Use exceptions instead of `null`-checks: `null`-checks will be forgotten, whereas exceptions will be thrown even if you don't handle them.
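A small Python sketch of this fail-fast style (the function and its bounds are invented): the type hint lets a checker such as mypy reject invalid call sites before the program runs, and raising instead of returning `null`/None means a forgotten check cannot silently corrupt later computation.

```python
def parse_age(raw: str) -> int:
    """Fail fast: reject bad input here, not three modules later."""
    age = int(raw)                # raises ValueError on non-numeric input
    if not 0 <= age < 150:
        raise ValueError(f"implausible age: {age}")   # report failure early
    return age

print(parse_age("42"))
# parse_age("abc") stops the program at the fault with a clear error,
# whereas a None-returning variant relies on every caller remembering a check.
```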




Source

 

Software maintenance costs result from modifying your application either to support new use cases or to update existing ones, along with continual bug fixing after deployment. As much as 70-80% of the Total Cost of Ownership (TCO) of software can be attributed to maintenance costs alone!

Software maintenance activities can be classified as [1]:

  • Corrective maintenance – costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs)
  • Adaptive maintenance – costs due to modifying a software solution to allow it to remain effective in a changing business environment (25% of software maintenance costs)
  • Perfective maintenance – costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs)
  • Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs)

Since maintenance costs eclipse those of other software engineering activities by a large amount, it is imperative to answer the following question:

How maintainable is my application/source-code, really?

The answer to this question is non-trivial and requires further understanding of what it means for an application to be maintainable. Measuring software maintainability is hard: there is no single metric to state whether one application is more maintainable than another, and no single tool can analyze your code repository and provide you with an accurate answer either. There is no substitute for a human reviewer, but even humans can't analyze entire code repositories to give a definitive answer. Some amount of automation is necessary.

So, how can you measure the maintainability of your application? To answer this question let’s dissect the definition of maintainability further. Imagine you have access to the source code of two applications – A and B. Let’s say you also have the super human ability to compare both of them in a small span of time. Can you tell, albeit subjectively, whether you think one is more maintainable than the other? What does the adjective maintainable imply for you when making this comparison – think about this for a second before we move on.

Done? So, how did you define maintainability? Most software engineers would think of some combination of testability, understandability, and modifiability of code as measures of maintainability. Another equally critical aspect is the ability to understand the requirement (the “what”) that is implemented by the code (the “how”). That is, is there a mapping from code to requirements and vice versa that can be discerned from the code base itself? This information may exist externally as a traceability document, but even having some information in the source code (by the way it’s laid out into packages/modules, by naming conventions, or by having READMEs in every package explaining the role of the classes) can be immensely valuable.

These core facets can be broken down further to gain more insight into the maintainability of the application:

  1. Testability – the presence of an effective test harness; how much of the application is being tested, what types of tests exist (unit, integration, scenario, etc.), and what is the quality of the test cases themselves?
  2. Understandability – the readability of the code; are naming conventions followed? Is it self-descriptive and/or well commented? Are things (e.g., classes) doing only one thing or many things at once? Are the methods long or short, and can their intent be understood in a single pass of reading, or does it take a good deal of screen staring and whiteboard analysis?
  3. Modifiability – structural and design simplicity; how easy is it to change things? Are things tightly or loosely coupled (i.e., separation of concerns)? Are all elements in a package/module cohesive and their responsibilities clear and closely related? Does it have overly deep inheritance hierarchies or does it favor composition over inheritance? How many independent paths of execution are there in the method definitions (i.e., cyclomatic complexity)? How much code duplication exists?
  4. Requirement to implementation mapping and vice versa – how easy is it to say “what” the application is supposed to do and correlate it with “how” it is being done, in code? How well is it done? Does it need to be refactored and/or optimized? This information is paramount for maintenance efforts and it may or may not exist for the application under consideration, forcing you to reverse engineer the code and figure out the ‘what’ yourself.

Those are the four major dimensions on which one can measure maintainability. Each of the facets can be (and is) broken down further for a more granular comparison. These may or may not be the exact same ones that you thought of, but there will be a great deal of overlap. Also, not every criterion is equally important. For some teams, testability may trump structural/design simplicity. That is, they may care a lot more about the presence of test cases (depth and breadth) than about deep inheritance trees or a slightly more tightly coupled design. It is thus vital to know which dimensions of maintainability are most important for your maintenance team when measuring the quality of your application, and to carry out reviews and refactoring with those in mind.

The table below, towards the end of the article, shows a detailed breakdown of the above dimensions of maintainability and elaborates on their relevance to measuring the quality of the source code [2]:

  1. Correlation with quality: How strongly does the metric correlate with our notion of software quality? The assumption is that nearly all programs with a similar value of the metric will possess a similar level of quality. This is a subjective correlational measure, based on our experience.
  2. Importance: How important is the metric, and are low or high values preferable when measuring it? The scales, in descending order of priority, are: Extremely Important, Important, and Good to Have.
  3. Feasibility of automated evaluation: Are things fully or partially automatable, and what kinds of metrics are obtainable?
  4. Ease of automated evaluation: In the case of automation, how easy is it to compute the metric? Does it involve a mammoth effort to set up, can it be plug-and-play, or does it need to be developed from scratch? Are any OTS tools readily available?
  5. Completeness of automated evaluation: Does the automation completely capture the metric value or is it inconclusive, requiring manual intervention? Do we need to verify things manually or can we directly rely on the metric reported by the tool?
  6. Units: What units/measures are we using to quantify the metric?

(Table: a detailed breakdown of the maintainability dimensions against these six criteria; not reproduced here.)

 

There is no single metric that can accurately capture the notion of maintainability of an application. There exist compound metrics like maintainability index (MI) that help predict the maintainability of the application using the Halstead Volume, Cyclomatic Complexity, Total SLOC (source lines of code) and Comments Ratio [3]:

$MI = 171 - 5.2\ln(V) - 0.23G - 16.2\ln(L) + 50\sin(\sqrt{2.4C})$

Where:

  • V is the average Halstead Volume per module
  • G is the average Cyclomatic Complexity per module
  • L is the average number of Source Lines of Code (SLOC) per module
  • C is the average number of comment lines per module

(Note: some variants of the formula suggest using ‘sum total values’ instead of averages)
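Under the averages interpretation given above, the formula transcribes directly into Python; the sample values below are made up purely to show the call.

```python
import math

def maintainability_index(V, G, L, C):
    """V: avg Halstead volume, G: avg cyclomatic complexity,
    L: avg SLOC, C: avg comment lines -- all per module."""
    return (171
            - 5.2 * math.log(V)
            - 0.23 * G
            - 16.2 * math.log(L)
            + 50 * math.sin(math.sqrt(2.4 * C)))

print(round(maintainability_index(V=1000, G=10, L=200, C=20), 1))
```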

The use of this metric is debatable, but it could be applied in conjunction with the dimensions above, or your team could create a compound metric based on them! As long as the metric makes sense to your team and your organization, you’re free to create your own, albeit meaningful, metrics.

It is wise to keep tracking the relevant metrics at various anchor-point milestones and throughout the development life-cycle, as well as to hold periodic code reviews to ensure that code quality stays high. As you can see, one can’t (and shouldn’t) rely solely on the metrics output by automated tools. Care must be taken to interpret the values of the metrics and to use them to guide the refactoring of the code base.




Source

 

The notions of separation and independence are fundamental to architectural design because they allow changes to be localized. The MVC pattern, shown in Picture 1., separates elements of a system, allowing them to change independently. For example, adding a new view or changing an existing view can be done without any changes to the underlying data in the model. The layered architecture pattern is another way of achieving separation and independence. This pattern is shown in Picture 4. Here, the system functionality is organized into separate layers, and each layer only relies on the facilities and services offered by the layer immediately beneath it.



Picture 1. The Model-View-Controller (MVC) Pattern

 

Picture 2. The Organization of The MVC

 

Picture 3. Web Application Architecture using the MVC Pattern
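A bare-bones sketch of the MVC separation in Python (the appointment example is invented): the model knows nothing about its concrete views, which is why a new view can be added without touching the model or the controller.

```python
class Model:
    def __init__(self):
        self._items, self._views = [], []

    def attach(self, view):
        self._views.append(view)

    def add_item(self, item):
        self._items.append(item)
        for view in self._views:        # notify every registered view
            view.render(self._items)

class ListView:
    def render(self, items):
        print("list:", ", ".join(items))

class Controller:
    """Maps user actions onto model updates."""
    def __init__(self, model):
        self._model = model

    def handle_user_input(self, text):
        self._model.add_item(text)

model = Model()
model.attach(ListView())                 # swap in or add other views freely
Controller(model).handle_user_input("new appointment")
```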


This layered approach supports the incremental development of systems. As a layer is developed, some of the services provided by that layer may be made available to users. The architecture is also changeable and portable. So long as its interface is unchanged, a layer can be replaced by another, equivalent layer. Furthermore, when layer interfaces change or new facilities are added to a layer, only the adjacent layer is affected. As layered systems localize machine dependencies in inner layers, this makes it easier to provide multi-platform implementations of an application system. Only the inner, machine-dependent layers need be re-implemented to take account of the facilities of a different operating system or database.

Picture 4. The Layered Architecture Pattern


Picture 5. is an example of a layered architecture with four layers. The lowest layer includes system support software—typically database and operating system support. The next layer is the application layer, which includes the components concerned with the application functionality and utility components that are used by other application components. The third layer is concerned with user interface management and providing user authentication and authorization, with the top layer providing user interface facilities. Of course, the number of layers is arbitrary; any of the layers in Picture 5. could be split into two or more layers.

Picture 6. is an example of how this layered architecture pattern can be applied to a library system called LIBSYS, which allows controlled electronic access to copyright material from a group of university libraries. This has a five-layer architecture, with the bottom layer being the individual databases in each library.

 

 

Picture 5. A Generic Layered Architecture

 

Picture 6. The Architecture of The LIBSYS System

 

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

[ Modified: Friday, 23 December 2016, 09:03 ]
 
by VYNSKA AMALIA PERMADI 5116201002 - Friday, 23 December 2016, 08:44

A Fresh Look at User Acceptance Testing

User Acceptance Testing is universally recognised as an important element in the delivery of stable software that meets the business requirements. However, there are significant challenges in executing the UAT phase in accordance with industry best practice. Some of these challenges stem from a fundamental misunderstanding of the importance of UAT.

Perhaps it is because user acceptance testing is the last significant activity in the project. Perhaps it is because the subject matter experts (SMEs) necessary to its success are rarely dedicated to the activity. Or perhaps there is too much reliance on the quality of testing that has occurred prior to the user acceptance testing phase.

It is surely time to reconsider its status as the ugly duckling of software quality and give it the prominence it deserves. Let me explain why UAT is so fundamental to a successful project.

 

The UAT Quality Gate

It doesn’t matter whether you have adopted a version of an agile testing methodology where UAT is seen as part of a hardening sprint, or treat it as the final activity in a waterfall project. What is common to all methodologies is that you need to ensure that the delivered software meets the business objectives. It is the vital Go/No Go quality gate.

Search the internet and you will find a vast number of articles about how UAT should be tackled, typically linked to the development methodology employed. But this totally ignores the majority of applications, which are not developed in-house. Instead they are purchased or consumed in the cloud, and they often support key business processes. For these applications, user acceptance testing is the only activity that ensures the business needs will be met.

 

The UAT Challenges

Once the importance of UAT is understood, you will hopefully agree that its inherent challenges must be solved rather than accepted as part of an imperfect world. So let’s look at some of those challenges, starting with the much-touted prerequisite before UAT can commence.

In the world which many internet authors appear to inhabit, all the mechanics of an application have been tested and operate perfectly before UAT starts, so UAT best practice is simply to ensure that the delivered capabilities match the business requirements. In the messy real world which most of us inhabit, this is absolute rubbish.

 

UAT for ERP applications

Take ERP applications, for example, a market dominated by SAP and Oracle. One of our customers implemented a new version from one of these major software companies and found over 400 functional errors. Admittedly they were one of the early adopters of the release, but if major software companies struggle to deliver code to UAT that is functionally perfect, it is wise to assume the same for in-house developments and plan accordingly.

That means there will be multiple iterations of the UAT cycle (or conference room pilot, in the vernacular of some vendors). So we must manage user fatigue as well as the general availability of SMEs. Business users will run with two UAT cycles, perhaps, but beyond that it is not only their availability but their effectiveness that must be considered as repetition jaundice sets in. Some companies get the UAT test plan initially executed by business-aware members of the QA test team and thus protect the scarce availability of business users.

In other companies we find dedicated UAT teams, but that may be deemed a luxury by others. Even a dedicated team is not perfect: if UAT is their full-time job, how do you ensure that their knowledge of an evolving business remains current?

 

The importance of UAT

So we have established that UAT is a vital activity, but that the delivered software will not function perfectly, and that there will be challenges getting sufficient qualified resources to undertake the work. Balancing these factors requires two things: you need buy-in from senior management, and you need a user acceptance test plan that is realistic in its time-scales and has sufficient contingency to deal with functional defects and the response speed of internal or external development teams. That may mean that the planned implementation date needs to move, but it is far better to get that agreed in advance – no-one will lose their job if the application is delivered early!

 

UAT testing tools

Lastly, let’s consider whether there are tools that can help the user acceptance testing process. The challenges are planning, activity tracking, status, and communication. You can try to use scraps of paper, Microsoft Excel/Word, and email, but in reality, for anything beyond a very small project these are not fit for purpose. There are many products available for the planning and management of the user acceptance test process, and in Qualify we offer one that can be configured to meet your needs exactly. But while valuable, these products don’t tackle the fundamental problem: how do you know what testing was actually performed, and how do you create the reproduction steps so that defects can be rapidly resolved? Our TestDrive-UAT represents a new class of products specifically designed to tackle this challenge.




Source

 

Software maintenance is a major issue for most CIOs because over 50% of IT time is spent on it — a daunting figure that hasn't changed much through the years. It's also one of the topics that CEOs and other C-level executives least want to hear about.

It's easy to avoid the topic of software maintenance, given the focus on today's problems and the pressures brought on by a constantly changing business environment. Under this relentless pressure, the idea of going back to "fix things", or even to see why they haven't been implemented, seems pointless.

Nevertheless, CIOs have to care about this, especially when IT budgets remain flat and half of the IT staff is deployed on maintenance every day. These CIOs see their project loads burgeoning, knowing that only half of their staff is free to work on them.

The vicious circle of continuous software maintenance is fueled by various factors, which include the following:

  • In many cases, age-old legacy systems are characterized by difficult-to-maintain (and difficult-to-diagnose) "spaghetti code" that was written in the days when code was free-flowing and unstructured, and yet they continue to run mission-critical systems. It takes time to untangle this code and to fix or embellish it. The task is made more difficult because the code is usually undocumented and the original writers have long since retired.
  • New code is not as technically solid as it should be — the reason is enterprise pressure to deploy the code, even if it is imperfect. Consequently, the organization lives with the imperfections until they become so overwhelming that the software maintenance team has to go in and fix the code so it can get back into production.
 

CIOs cannot buck these circumstances, but some are beginning to take steps to reduce the amount of time IT spends on maintaining imperfect and broken systems. Here are five best practices to consider.

 

1: Use the cloud version of software to sidestep a legacy system

Some enterprises have actively deployed cloud versions of internal systems (like their enterprise resource planning systems) when they bring on new companies through acquisition. The reason is simple: by moving a new organization to the cloud, personnel in this business at least get used to using the same software that the acquiring enterprise uses. Over time, a decision can be made to transfer the acquired organization onto the in-house enterprise system.

However, as more enterprises use this strategy, more are rethinking their approach. The result has been a change in thinking to where the ultimate goal becomes moving everyone (including the enterprise) to the cloud-based system. The idea is to push software maintenance to the cloud provider, thereby eliminating most of the time that internal IT has to spend on it.

 

 

2: Replace a custom system with a generic package

 

It sometimes makes sense to replace a custom system with a third-party generic package that has more contemporary capabilities. In a situation like this, IT can also eliminate most of the software maintenance it incurs with the old software. The key is getting users — and the business — on board. Many times the customization that has been built into a system over decades can't be replaced with a more generic solution because of the competitive advantage the custom solution provides.

 

 

3: Invest in more quality assurance (QA) test bench automation

QA is one of the functions that many organizations shortcut in the interest of getting software into production quickly. This isn't likely to change, but new automated QA testing tools that run scripted checks for software deficiencies can change how well software runs and reduce software fix time.

 

 

4: Retrain and redeploy software maintenance personnel

As much as CIOs don't like to admit it, there is a pecking order in IT. The employees who often get placed on software maintenance teams are older IT programmers, new employees, or programmers who do not demonstrate proficiency in new app development. If software maintenance is to be reduced, these workers will need to be retrained and redeployed. Despite budget limitations and work commitments, CIOs must demonstrate a commitment to adopt these measures.

 

 

5: Set a metric for percent of personnel engaged in new projects

CEOs and other C-level executives might not want to hear about software maintenance, but if the CIO presents (and starts measuring against) a metric that shows the percent of IT staff dedicated to new projects and explains how software maintenance can negatively impact this, others are bound to take notice and view the effort more strategically.

 

 

 

Source

 