## Site blog


Software evolution is the term used in software engineering (specifically software maintenance) to refer to the process of developing software initially and then repeatedly updating it for various reasons. The aim of software evolution is to implement (and revalidate) possible major changes to the system without being able to predict a priori how user requirements will evolve. A large existing system is never complete and continues to evolve, and as it evolves its complexity grows unless a better solution is found to contain it. The main objectives of software evolution are to ensure the reliability and flexibility of the system. In the past, the lifespan of a system averaged 6–10 years; more recently, however, it has been argued that a system should evolve every few months to stay adapted to its real-world environment. This is driven by the rapid growth of the World Wide Web and Internet resources, which make it easier for users to find related information. The idea of software evolution also underpins open source development, since anyone can download and modify the source code. The positive impact in this case is that a large number of new ideas are discovered and generated, giving the system a wider variety of ways to improve.

Over time, software systems, programs, and applications continue to develop. These changes require new laws and theories to be created and justified, and some models will require additional aspects when developing future programs. Innovations and improvements can take unexpected forms in software development, and maintenance issues will likewise change to adapt to evolving software. Software process and development form an ongoing, never-ending cycle: even after rounds of learning and refinement, the efficiency and effectiveness of programs remain open to debate.

Associated course: KI142303B


Software testing consists of activities that aim to evaluate the attributes or capabilities of a program or system and to determine whether it produces the expected results.

Testing techniques

These techniques try to break a program by exercising test scenarios and data flows. They are classified as white-box, if the tests are based on information about how the software was designed or focus on the code, or black-box, if the tests depend only on the software's inputs and outputs.

1. Based on the Software Engineer's Intuition and Experience
Tests based on experience with similar code.
2. Exploratory Testing
Tests that are not defined in advance when the program is built, but are designed dynamically.
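As a minimal sketch of the black-box idea above, the test below exercises a function purely through its inputs and outputs, with no knowledge of its internals. The `discount` function is a hypothetical stand-in, not part of any real system:

```python
# Black-box testing sketch: we check a hypothetical discount() function
# only via inputs and outputs, never by inspecting its code.

def discount(price, percent):
    """Function under test (illustrative stand-in implementation)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

def test_discount():
    # Compare actual output with expected output for known inputs.
    assert discount(200, 50) == 100
    assert discount(80, 0) == 80
    # Invalid input should be rejected, not silently accepted.
    try:
        discount(100, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_discount()
print("all black-box tests passed")
```

A white-box test of the same function would instead be designed from its code, e.g. choosing inputs that exercise each branch of the range check.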

Resource: http://swebokwiki.org/Chapter_4:_Software_Testing

[ Modified: Friday, 23 December 2016, 23:46 ]


Language processing systems translate a natural or artificial language into another representation of that language and, for programming languages, may also execute the resulting code. In software engineering, compilers translate an artificial programming language into machine code. Other language-processing systems may translate an XML data description into commands to query a database or into an alternative XML representation. Natural language processing systems may translate one natural language into another, e.g., French to Norwegian.

A possible architecture for a language processing system for a programming language is illustrated in Picture 1. The source language instructions define the program to be executed and a translator converts these into instructions for an abstract machine. These instructions are then interpreted by another component that fetches the instructions for execution and executes them using (if necessary) data from the environment. The output of the process is the result of interpreting the instructions on the input data.

Picture 1. The Architecture of a Language Processing System

Of course, for many compilers, the interpreter is a hardware unit that processes machine instructions and the abstract machine is a real processor. However, for dynamically typed languages, such as Python, the interpreter may be a software component.

Programming language compilers that are part of a more general programming environment have a generic architecture (Picture 2.) that includes the following components:

1. A lexical analyzer, which takes input language tokens and converts them to an internal form.
2. A symbol table, which holds information about the names of entities (variables, class names, object names, etc.) used in the text that is being translated.
3. A syntax analyzer, which checks the syntax of the language being translated. It uses a defined grammar of the language and builds a syntax tree.
4. A syntax tree, which is an internal structure representing the program being compiled.
5. A semantic analyzer that uses information from the syntax tree and the symbol table to check the semantic correctness of the input language text.
6. A code generator that ‘walks’ the syntax tree and generates abstract machine code.

Other components might also be included which analyze and transform the syntax tree to improve efficiency and remove redundancy from the generated machine code. In other types of language processing system, such as a natural language translator, there will be additional components such as a dictionary, and the generated code is actually the input text translated into another language.
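The pipeline of components listed above can be sketched for a toy language of integer additions. This is an illustrative miniature, not a real compiler: the lexical analyzer turns characters into tokens, the syntax analyzer builds a syntax tree, and the code generator is replaced here by a tree-walking evaluator:

```python
# Toy language pipeline: lexical analysis -> syntax analysis -> tree walking.
import re

TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def lex(text):
    """Lexical analysis: convert source text into a stream of tokens."""
    tokens = []
    for number, op in TOKEN_RE.findall(text):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    return tokens

def parse(tokens):
    """Syntax analysis for the grammar 'NUM (+ NUM)*': builds a syntax tree."""
    pos = 0
    kind, value = tokens[pos]
    assert kind == "NUM"
    tree = value
    pos += 1
    while pos < len(tokens):
        assert tokens[pos] == ("OP", "+")
        kind, value = tokens[pos + 1]
        assert kind == "NUM"
        tree = ("+", tree, value)   # left-associative addition node
        pos += 2
    return tree

def evaluate(tree):
    """'Walks' the syntax tree, in place of the code generator."""
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return evaluate(left) + evaluate(right)

print(evaluate(parse(lex("1 + 2 + 39"))))  # → 42
```

A real compiler would add the symbol table and semantic analyzer between parsing and code generation; they are omitted here because this toy language has no names to record.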

Picture 2. A Pipe and Filter Compiler Architecture

There are alternative architectural patterns that may be used in a language processing system (Garlan and Shaw, 1993). Compilers can be implemented using a composite of a repository and a pipe and filter model. In a compiler architecture, the symbol table is a repository for shared data. The phases of lexical, syntactic, and semantic analysis are organized sequentially, as shown in Picture 2, and communicate through the shared symbol table.

This pipe and filter model of language compilation is effective in batch environments where programs are compiled and executed without user interaction; for example, in the translation of one XML document to another. It is less effective when a compiler is integrated with other language processing tools such as a structured editing system, an interactive debugger or a program pretty printer. In this situation, changes from one component need to be reflected immediately in other components. It is better, therefore, to organize the system around a repository, as shown in Picture 3.

Picture 3. A Repository Architecture for A Language Processing System

This figure illustrates how a language processing system can be part of an integrated set of programming support tools. In this example, the symbol table and syntax tree act as a central information repository, and tools or tool fragments communicate through it. Other information that is sometimes embedded in tools, such as the grammar definition and the definition of the output format for the program, has been taken out of the tools and put into the repository. Therefore, a syntax-directed editor can check that the syntax of a program is correct as it is being typed, and a pretty printer can create listings of the program in a format that is easy to read.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated course: KI142303B
[ Modified: Friday, 23 December 2016, 16:21 ]


All systems that involve interaction with a shared database can be considered to be transaction-based information systems. An information system allows controlled access to a large base of information, such as a library catalog, a flight timetable, or the records of patients in a hospital. Increasingly, information systems are web-based systems that are accessed through a web browser.

Picture 1 shows a very general model of an information system. The system is modeled using a layered approach where the top layer supports the user interface and the bottom layer is the system database. The user communications layer handles all input and output from the user interface, and the information retrieval layer includes application-specific logic for accessing and updating the database. As we shall see later, the layers in this model can map directly onto servers in an Internet-based system.

Picture 1. Layered Information System Architecture

As an example of an instantiation of this layered model, Picture 2 shows the architecture of the MHC-PMS. Recall that this system maintains and manages details of patients who are consulting specialist doctors about mental health problems. Detail is added to each layer in the model by identifying the components that support user communications and information retrieval and access:

1. The top layer is responsible for implementing the user interface. In this case, the UI has been implemented using a web browser.
2. The second layer provides the user interface functionality that is delivered through the web browser. It includes components to allow users to log in to the system and checking components that ensure that the operations they use are allowed by their role. This layer includes form and menu management components that present information to users, and data validation components that check information consistency.
3. The third layer implements the functionality of the system and provides components that implement system security, patient information creation and updating, import and export of patient data from other databases, and report generators that create management reports.
4. Finally, the lowest layer, which is built using a commercial database management system, provides transaction management and persistent data storage.

Picture 2. The Architecture of The MHC-PMS
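The four layers described above can be sketched with each layer calling only the layer directly below it. The class and method names below are illustrative inventions, not taken from the MHC-PMS, and the "database" is just a dictionary:

```python
# Minimal sketch of the layered organization: UI-facing layer -> application
# layer -> storage layer, with calls flowing strictly downward.

class DatabaseLayer:
    """Bottom layer: persistent data storage (a dict stands in for a DBMS)."""
    def __init__(self):
        self._records = {}
    def put(self, key, value):
        self._records[key] = value
    def get(self, key):
        return self._records[key]

class ApplicationLayer:
    """Third layer: application-specific logic, e.g. patient record creation."""
    def __init__(self, db):
        self._db = db
    def create_patient(self, patient_id, name):
        self._db.put(patient_id, {"name": name})
    def patient_name(self, patient_id):
        return self._db.get(patient_id)["name"]

class UserCommunicationsLayer:
    """Second layer: validation and presentation on behalf of the browser UI."""
    def __init__(self, app):
        self._app = app
    def register(self, patient_id, name):
        if not name.strip():                     # data validation component
            raise ValueError("name must not be empty")
        self._app.create_patient(patient_id, name)
        return f"registered {name}"              # output destined for the UI

ui = UserCommunicationsLayer(ApplicationLayer(DatabaseLayer()))
print(ui.register("p1", "Ada"))  # → registered Ada
```

The key property is that the UI layer never touches the storage layer directly, so the database implementation can be swapped without changing the upper layers.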

Information and resource management systems are now usually web-based systems where the user interfaces are implemented using a web browser. For example, e-commerce systems are Internet-based resource management systems that accept electronic orders for goods or services and then arrange delivery of these goods or services to the customer. In an e-commerce system, the application specific layer includes additional functionality supporting a ‘shopping cart’ in which users can place a number of items in separate transactions, then pay for them all together in a single transaction.

The organization of servers in these systems usually reflects the four-layer generic model presented in Picture 1. These systems are often implemented as multi-tier client–server architectures:

1. The web server is responsible for all user communications, with the user interface implemented using a web browser.
2. The application server is responsible for implementing application-specific logic as well as information storage and retrieval requests.
3. The database server moves information to and from the database and handles transaction management.

Using multiple servers allows high throughput and makes it possible to handle hundreds of transactions per minute. As demand increases, servers can be added at each level to cope with the extra processing involved.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated course: KI142303B
[ Modified: Friday, 23 December 2016, 15:30 ]


Transaction processing (TP) systems are designed to process user requests for information from a database, or requests to update a database (Lewis et al., 2003). Technically, a database transaction is a sequence of operations that is treated as a single unit (an atomic unit). All of the operations in a transaction have to be completed before the database changes are made permanent. This ensures that failure of operations within the transaction does not lead to inconsistencies in the database.

From a user perspective, a transaction is any coherent sequence of operations that satisfies a goal, such as ‘find the times of flights from London to Paris’. If the user transaction does not require the database to be changed then it may not be necessary to package this as a technical database transaction.

An example of a transaction is a customer request to withdraw money from a bank account using an ATM. This involves getting details of the customer’s account, checking the balance, modifying the balance by the amount withdrawn, and sending commands to the ATM to deliver the cash. Until all of these steps have been completed, the transaction is incomplete and the customer accounts database is not changed.
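The withdrawal scenario above can be sketched as an atomic database transaction using Python's built-in `sqlite3` module: either every step commits, or none does. The account data is invented for illustration:

```python
# ATM-style withdrawal as one atomic transaction: the balance check and the
# update either both take effect or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()

def withdraw(conn, account, amount):
    """Runs the whole withdrawal as a single transaction."""
    try:
        with conn:  # context manager: commits on success, rolls back on error
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE id = ?", (account,)
            ).fetchone()
            if balance < amount:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?",
                (amount, account),
            )
        return True   # committed; the ATM may now deliver the cash
    except ValueError:
        return False  # rolled back; the accounts database is unchanged

print(withdraw(conn, "alice", 30))   # → True
print(withdraw(conn, "alice", 500))  # → False (balance stays at 70)
```

Until `withdraw` returns `True`, the customer accounts database is not changed, which mirrors the incomplete-transaction rule described above.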

Picture 1. The Structure of Transaction Processing Applications

Transaction processing systems are usually interactive systems in which users make asynchronous requests for service. Picture 1. illustrates the conceptual architectural structure of TP applications. First a user makes a request to the system through an I/O processing component. The request is processed by some application specific logic. A transaction is created and passed to a transaction manager, which is usually embedded in the database management system. After the transaction manager has ensured that the transaction is properly completed, it signals to the application that processing has finished.

Picture 2. The Software Architecture of an ATM System

Transaction processing systems may be organized as a 'pipe and filter' architecture with system components responsible for input, processing, and output. For example, consider a banking system that allows customers to query their accounts and withdraw cash from an ATM. The system is composed of two cooperating software components: the ATM software and the account processing software in the bank's database server. The input and output components are implemented as software in the ATM, and the processing component is part of the bank's database server. Picture 2 shows the architecture of this system, illustrating the functions of the input, process, and output components.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated course: KI142303B
[ Modified: Friday, 23 December 2016, 11:32 ]


Application systems are intended to meet a business or organizational need. All businesses have much in common—they need to hire people, issue invoices, keep accounts, and so on. Businesses operating in the same sector use common sector-specific applications. For example, as well as general business functions, all phone companies need systems to connect calls, manage their network, and issue bills to customers. Consequently, the application systems used by these businesses also have much in common.

These commonalities have led to the development of software architectures that describe the structure and organization of particular types of software systems. Application architectures encapsulate the principal characteristics of a class of systems. For example, in real-time systems, there might be generic architectural models of different system types, such as data collection systems or monitoring systems. Although instances of these systems differ in detail, the common architectural structure can be reused when developing new systems of the same type.

The application architecture may be re-implemented when developing new systems but, for many business systems, application reuse is possible without re-implementation. We see this in the growth of Enterprise Resource Planning (ERP) systems from companies such as SAP and Oracle, and vertical software packages (COTS) for specialized applications in different areas of business. In these systems, a generic system is configured and adapted to create a specific business application. For example, a system for supply chain management can be adapted for different types of suppliers, goods, and contractual arrangements.

As a software designer, you can use models of application architectures in a number of ways:

1. As a starting point for the architectural design process If you are unfamiliar with the type of application that you are developing, you can base your initial design on a generic application architecture. Of course, this will have to be specialized for the specific system being developed, but it is a good starting point for design.
2. As a design checklist If you have developed an architectural design for an application system, you can compare this with the generic application architecture. You can check that your design is consistent with the generic architecture.
3. As a way of organizing the work of the development team The application architectures identify stable structural features of the system architectures and in many cases, it is possible to develop these in parallel. You can assign work to group members to implement different components within the architecture.
4. As a means of assessing components for reuse If you have components you might be able to reuse, you can compare these with the generic structures to see whether there are comparable components in the application architecture.
5. As a vocabulary for talking about types of applications If you are discussing a specific application or trying to compare applications of the same types, then you can use the concepts identified in the generic architecture to talk about the applications.

There are many types of application system and, in some cases, they may seem to be very different. However, many of these superficially dissimilar applications actually have much in common, and thus can be represented by a single abstract application architecture. I illustrate this here by describing the architectures of two types of application:

1. Transaction processing applications Transaction processing applications are database-centered applications that process user requests for information and update the information in a database. These are the most common type of interactive business systems. They are organized in such a way that user actions can't interfere with each other and the integrity of the database is maintained. This class of system includes interactive banking systems, e-commerce systems, information systems, and booking systems.
2. Language processing systems Language processing systems are systems in which the user’s intentions are expressed in a formal language (such as Java). The language processing system processes this language into an internal format and then interprets this internal representation. The best-known language processing systems are compilers, which translate high-level language programs into machine code.

These particular types of system have been chosen because a large number of web-based business systems are transaction-processing systems, and all software development relies on language processing systems.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated course: KI142303B


The repository pattern is concerned with the static structure of a system and does not show its run-time organization. My next example illustrates a very commonly used run-time organization for distributed systems: Picture 1 describes the client–server pattern.

Picture 1. The Client-server Pattern

A system that follows the client–server pattern is organized as a set of services and associated servers, and clients that access and use the services. The major components of this model are:

1. A set of servers that offer services to other components. Examples of servers include print servers that offer printing services, file servers that offer file management services, and a compile server, which offers programming language compilation services.

2. A set of clients that call on the services offered by servers. There will normally be several instances of a client program executing concurrently on different computers.

3. A network that allows the clients to access these services. Most client–server systems are implemented as distributed systems, connected using Internet protocols.

Client–server architectures are usually thought of as distributed systems architectures but the logical model of independent services running on separate servers can be implemented on a single computer. Again, an important benefit is separation and independence. Services and servers can be changed without affecting other parts of the system.

Clients may have to know the names of the available servers and the services that they provide. However, servers do not need to know the identity of clients or how many clients are accessing their services. Clients access the services provided by a server through remote procedure calls using a request-reply protocol such as the http protocol used in the WWW. Essentially, a client makes a request to a server and waits until it receives a reply.
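The request-reply exchange described above can be sketched with Python's standard-library HTTP modules. The server here offers a single invented "service" and the client blocks until the reply arrives; names and the reply text are illustrative:

```python
# Minimal request-reply exchange over HTTP: one server process offers a
# service, one client sends a request and waits for the reply.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GreetingService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"service reply: hello client"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), GreetingService)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client needs only the server's address; the server knows nothing
# about which clients will connect.
reply = urlopen(f"http://127.0.0.1:{server.server_port}/").read().decode()
print(reply)  # → service reply: hello client
server.shutdown()
```

Note the asymmetry the text describes: the client must know where the server is, while the server accepts requests from any client.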

Picture 2. A Client-server Architecture for a Film Library

Picture 2 is an example of a system that is based on the client–server model. This is a multi-user, web-based system for providing a film and photograph library. In this system, several servers manage and display the different types of media. Video frames need to be transmitted quickly and in synchrony but at relatively low resolution. They may be compressed in a store, so the video server can handle video compression and decompression in different formats. Still pictures, however, must be maintained at a high resolution, so it is appropriate to maintain them on a separate server.

The catalog must be able to deal with a variety of queries and provide links into the web information system that includes data about the film and video clips, and an e-commerce system that supports the sale of photographs, film, and video clips. The client program is simply an integrated user interface, constructed using a web browser, to access these services.

The most important advantage of the client–server model is that it is a distributed architecture. Effective use can be made of networked systems with many distributed processors. It is easy to add a new server and integrate it with the rest of the system or to upgrade servers transparently without affecting other parts of the system.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated course: KI142303B
[ Modified: Friday, 23 December 2016, 11:25 ]


The layered architecture and MVC patterns are examples of patterns where the view presented is the conceptual organization of a system. The Repository pattern (Picture 1) describes how a set of interacting components can share data.

The majority of systems that use large amounts of data are organized around a shared database or repository. This model is therefore suited to applications in which data is generated by one component and used by another. Examples of this type of system include command and control systems, management information systems, CAD systems, and interactive development environments for software.

Picture 1. The Repository Pattern

Picture 2. is an illustration of a situation in which a repository might be used. This diagram shows an IDE that includes different tools to support model-driven development. The repository in this case might be a version-controlled environment that keeps track of changes to software and allows rollback to earlier versions.

Organizing tools around a repository is an efficient way to share large amounts of data. There is no need to transmit data explicitly from one component to another. However, components must operate around an agreed repository data model. Inevitably, this is a compromise between the specific needs of each tool and it may be difficult or impossible to integrate new components if their data models do not fit the agreed schema. In practice, it may be difficult to distribute the repository over a number of machines. Although it is possible to distribute a logically centralized repository, there may be problems with data redundancy and inconsistency.
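The pattern above can be sketched as two tools that never pass data to each other directly but communicate only through a shared repository under an agreed data model. The tool names and keys are illustrative:

```python
# Repository pattern sketch: components share data via a passive central
# store rather than transmitting it to each other explicitly.

class Repository:
    """Passive shared store; control stays with the components using it."""
    def __init__(self):
        self._store = {}
    def write(self, key, value):
        self._store[key] = value
    def read(self, key):
        return self._store.get(key)

def editor(repo):
    """One tool generates data and deposits it in the repository..."""
    repo.write("source", "x = 1")

def analyzer(repo):
    """...and another tool consumes it, with no direct coupling to the editor."""
    source = repo.read("source")
    repo.write("report", f"analyzed {len(source)} characters")

repo = Repository()
editor(repo)
analyzer(repo)
print(repo.read("report"))  # → analyzed 5 characters
```

The agreed data model here is just the key names; as the text notes, adding a new tool whose data model does not fit this schema would be the hard part.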

Picture 2. A Repository Architecture for an IDE

In the example shown in Picture 2., the repository is passive and control is the responsibility of the components using the repository. An alternative approach, which has been derived for AI systems, uses a ‘blackboard’ model that triggers components when particular data become available. This is appropriate when the form of the repository data is less well structured. Decisions about which tool to activate can only be made when the data has been analyzed. This model is introduced by Nii (1986). Bosch (2000) includes a good discussion of how this style relates to system quality attributes.

Source : Sommerville, Ian. 2011. Software Engineering. 9th Ed. Boston: Pearson Education, Inc.

Associated course: KI142303B
[ Modified: Friday, 23 December 2016, 09:16 ]


1) Optimize for Flow

Flow has a big influence on your productivity. Flow allows you to give all your focus to the specific problem you are solving. Flow is a multiplier of your performance. It is fair to say that when you are programming and not in flow, you are wasting time.

Usually you need some time to get into flow. Use music as a catalyst to speed this process up. Once you are in flow, your new problem is to stay in flow: make sure you are not getting interrupted. This includes putting the internet, including your email client, and your smartphone on standby.

2) Use big chunks of time for creating software

Three one-hour sessions of programming are not as effective as a single three-hour session. Remember that you always have a constant overhead to get into flow, so try to allocate big chunks of time for crafting software to keep this overhead low. Schedule your meetings around your programming sessions, not in between them. Know the difference between the maker's schedule and the manager's schedule and use it to your advantage.

3) Fast Feedback

If your application always needs 30 seconds to build, you will get distracted during those 30 seconds. Distraction is a flow killer, so make sure you get feedback as fast as possible. Strive for fast build times (or, even better, live reloading), fast test runs, and fast deployments.

4) Automate your processes

Your projects usually have some non-automated processes, e.g. running bundler after changes to your Gemfile.lock. Automate these processes. Otherwise they are prone to human error: if a developer forgets one step, he might get stuck. This means loss of flow and will cost you 15 minutes to get back into it.

Your automated processes need to work well. If your CI server reports failing tests at random times, even when the tests should pass, everybody will ignore the CI server because you cannot trust its results. Then the whole process is useless.

Make sure your processes have a great user experience. This includes crafting great error messages: Error messages should always help the user to recover by providing a possible solution to the problem.

Use code generators to automate the process of software development as much as possible. Computers don't make mistakes, but software developers do.

5) Do not debug, make bugs impossible by design

As an engineer your goal is to build a great application. So when you are debugging you are not moving closer to the goal of getting it done. A good way to increase productivity is to make bugs impossible by design. It’s not possible all the time, and sometimes debugging is necessary, but many bugs can be avoided.

Report failure conditions early, fail fast. Use the type system of your programming language to make sure only valid data is passed to your functions at compile-time. Use exceptions instead of null-checks, null-checks will be forgotten, exceptions will be thrown even if you don't handle them.
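As a small sketch of the fail-fast advice above (the `parse_age` function is a hypothetical example): instead of returning `None`, which a caller may forget to check, invalid data is rejected immediately with an exception that cannot be silently ignored:

```python
# Fail-fast sketch: invalid input raises at the boundary of the function,
# instead of letting a None value propagate deep into the program.

def parse_age(text):
    """Returns a valid age or raises immediately on bad input."""
    age = int(text)  # raises ValueError for non-numeric input
    if age < 0:
        raise ValueError(f"age must be non-negative, got {age}")
    return age

print(parse_age("42"))  # → 42
try:
    parse_age("-3")
except ValueError as err:  # the failure surfaces here, not three calls later
    print(f"rejected: {err}")
```

The same idea applies at compile time in statically typed languages, where the type system can rule out whole classes of invalid inputs before the program ever runs.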

Source

Associated course: KI142303B


Software maintenance costs result from modifying your application either to support new use cases or to update existing ones, along with continual bug fixing after deployment. As much as 70–80% of the total cost of ownership (TCO) of software can be attributed to maintenance costs alone!

Software maintenance activities can be classified as [1]:

• Corrective maintenance – costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs)
• Adaptive maintenance – costs due to modifying a software solution to allow it to remain effective in a changing business environment (25% of software maintenance costs)
• Perfective maintenance – costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs)
• Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs)

Since maintenance costs eclipse other software engineering activities by a large amount, it is imperative to answer the following question:

How maintainable is my application/source-code, really?

The answer to this question is non-trivial and requires further understanding of what it means for an application to be maintainable. Measuring software maintainability is hard: there is no single metric that states whether one application is more maintainable than another, and no single tool can analyze your code repository and give you an accurate answer either. There is no substitute for a human reviewer, but even humans can't analyze entire code repositories to give a definitive answer. Some amount of automation is necessary.

So, how can you measure the maintainability of your application? To answer this question let’s dissect the definition of maintainability further. Imagine you have access to the source code of two applications – A and B. Let’s say you also have the super human ability to compare both of them in a small span of time. Can you tell, albeit subjectively, whether you think one is more maintainable than the other? What does the adjective maintainable imply for you when making this comparison – think about this for a second before we move on.

Done? So, how did you define maintainability? Most software engineers would think of some combination of the testability, understandability, and modifiability of code as measures of maintainability. Another equally critical aspect is the ability to understand the requirement (the "what") that is implemented by the code (the "how"). That is, is there a mapping from code to requirements, and vice versa, that can be discerned from the code base itself? This information may exist externally as a traceability document, but even having some information in the source code, whether in the way it is laid out into packages/modules, in naming conventions, or in READMEs in every package explaining the role of the classes, can be immensely valuable.

These core facets can be broken down further, to gain further insight into the maintainability of the application:

1. Testability – the presence of an effective test harness; how much of the application is being tested, what types of tests exist (unit, integration, scenario, etc.), and what is the quality of the test cases themselves?
2. Understandability – the readability of the code; are naming conventions followed? Is it self-descriptive and/or well commented? Are things (e.g., classes) doing only one thing or many things at once? Are the methods really long or short and can their intent be understood in a single pass of reading or does it take a good deal of screen staring and whiteboard analysis?
3. Modifiability – structural and design simplicity; how easy is it to change things? Are things tightly or loosely coupled (i.e., separation of concerns)? Are all elements in a package/module cohesive and their responsibilities clear and closely related? Does it have overly deep inheritance hierarchies or does it favor composition over inheritance? How many independent paths of execution are there in the method definitions (i.e., cyclomatic complexity)? How much code duplication exists?
4. Requirement to implementation mapping and vice versa – how easy is it to say “what” the application is supposed to do and correlate it with “how” it is being done, in code? How well is it done? Does it need to be refactored and/or optimized? This information is paramount for maintenance efforts and it may or may not exist for the application under consideration, forcing you to reverse engineer the code and figure out the ‘what’ yourself.
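To make one of these granular measures concrete, here is a rough sketch of how cyclomatic complexity (mentioned under modifiability) can be approximated for Python code using the standard library’s `ast` module. The set of branch nodes counted is a simplification of the full McCabe definition, not a production-grade implementation:

```python
import ast

# Node types treated as decision points. This is a simplified selection;
# real tools also handle match statements, comprehension conditions, etc.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # 3: two if-branches + 1
```

Tools such as Radon compute this (and more) out of the box; the point of the sketch is only to show that the metric is cheap to obtain automatically.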

Those are the four major dimensions on which one can measure maintainability. Each of the facets can be (and is) broken down further for a more granular comparison. These may or may not be the exact same ones that you thought of, but there will be a great deal of overlap. Also, not every criterion is equally important. For some teams, testability may trump structural/design simplicity. That is, they may care a lot more about the presence of test cases (depth and breadth) than about deep inheritance trees or a slightly more tightly coupled design. It is thus vital to know which dimensions of maintainability matter most to your maintenance team when measuring the quality of your application, and to carry out reviews and refactoring with those in mind.

The table below, towards the end of the article, shows a detailed breakdown of the above dimensions of maintainability and elaborates on their relevance to measuring the quality of the source code [2]:

1. Correlation with quality: How strongly does the metric correlate with our notion of software quality? A strong correlation implies that nearly all programs with a similar value of the metric will possess a similar level of quality. This is a subjective correlational measure, based on our experience.
2. Importance: How important is the metric, and are low or high values preferable when measuring it? The scales, in descending order of priority, are: Extremely Important, Important and Good to Have.
3. Feasibility of automated evaluation: Can the metric be fully or partially automated, and what kinds of measurements are obtainable?
4. Ease of automated evaluation: If automated, how easy is it to compute the metric? Does it involve mammoth setup effort, can it be plug-and-play, or does it need to be developed from scratch? Are any off-the-shelf (OTS) tools readily available?
5. Completeness of automated evaluation: Does the automation completely capture the metric value, or is it inconclusive, requiring manual intervention? Do we need to verify things manually, or can we directly rely on the metric reported by the tool?
6. Units: What units/measures are we using to quantify the metric?

There is no single metric that can accurately capture the notion of maintainability of an application. There are, however, compound metrics like the maintainability index (MI) that help predict the maintainability of an application using the Halstead Volume, Cyclomatic Complexity, total SLOC (source lines of code) and comments ratio [3]:

MI = 171 − 5.2 ln(V) − 0.23 G − 16.2 ln(L) + 50 sin(√(2.4 C))

Where:

• V is the average Halstead Volume per module
• G is the average Cyclomatic Complexity per module
• L is the average number of Source Lines of Code (SLOC) per module
• C is the average number of comment lines per module

(Note: some variants of the formula suggest using ‘sum total values’ instead of averages)
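As a sketch, the classic four-metric MI formula can be computed directly from the averages defined above. Note that coefficients differ slightly between variants (e.g., 2.4 versus 2.46 inside the sine term, and C expressed as a percentage versus a ratio), so treat this as one common formulation rather than the definitive one:

```python
import math

def maintainability_index(V: float, G: float, L: float, C: float) -> float:
    """Classic four-metric MI.

    V: average Halstead Volume per module
    G: average cyclomatic complexity per module
    L: average SLOC per module
    C: average percentage of comment lines per module
    """
    return (171
            - 5.2 * math.log(V)    # penalize Halstead Volume (log scale)
            - 0.23 * G             # penalize cyclomatic complexity
            - 16.2 * math.log(L)   # penalize module size (log scale)
            + 50 * math.sin(math.sqrt(2.4 * C)))  # reward commenting

print(round(maintainability_index(100, 5, 50, 10), 1))  # ≈ 33.4
```

Higher values indicate better maintainability; many tools rescale the result to a 0–100 range for reporting.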

The value of the MI metric is debatable, but it could be used in conjunction with the dimensions above, or your team could create its own compound metric based on them! As long as the metric is meaningful to your team and your organization, you are free to define your own.
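A team-defined compound metric could be as simple as a weighted sum over the four dimensions. The weights and dimension scores below are entirely hypothetical – each team would calibrate them to its own priorities (e.g., a team that prizes its test harness would weight testability heavily, as discussed earlier):

```python
# Hypothetical weights reflecting one team's priorities; dimension
# scores are assumed to be normalized to the 0–1 range beforehand.
WEIGHTS = {
    "testability": 0.4,        # this team values its test harness most
    "understandability": 0.25,
    "modifiability": 0.25,
    "traceability": 0.1,       # requirement <-> implementation mapping
}

def compound_score(scores: dict[str, float]) -> float:
    """Weighted sum of normalized per-dimension scores."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

app_a = {"testability": 0.9, "understandability": 0.7,
         "modifiability": 0.6, "traceability": 0.5}
print(round(compound_score(app_a), 3))  # 0.735
```

The absolute number matters less than tracking it consistently over time and comparing releases of the same application under the same weighting.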

It is wise to keep tracking the relevant metrics at anchor-point milestones throughout the development life-cycle, as well as to hold periodic code reviews to ensure that code quality stays high. As you can see, one cannot (and should not) rely solely on the metrics output by automated tools. Care must be taken to interpret the values of the metrics and to use them to guide the refactoring of the code base.

Source

Associated course: KI142303B
