Site blog


Introduction

Entering and tracking defects is one of the main tasks for testers. It is important that any time a defect is found (whether it was actively being looked for or not), it is logged. Even if you are not sure it really is a defect, or you think it isn't that big of a deal, I believe it is better to log it, analyze it, and, if necessary, close it than to simply ignore it. This article explains the process of software defect tracking and gives recommendations on how to implement a defect tracking system.

 

So what exactly is a defect? As a general definition, a defect is any aspect of the application that does not act or behave as designed or expected. This can range from the obvious cases where the application crashes, data is lost, or calculations give the wrong results, to more subtle cases of poor usability and missing features. From a strictly test-process point of view, defects are linked to the software's requirements. A tester examines the requirements and then writes test cases that test against them. If a test fails, or if it is found in any other way that a requirement is not met, that is a defect. From a product or company point of view, however, a defect is anything that a customer finds, or would find, to be a problem, whether there is a requirement for it or not. This latter definition is often the more important one, since most organizations' goal is to satisfy their customers.

Entering a Defect

As mentioned, any time a defect or even a potential defect is found, it should be entered. The exact format for entering a defect will vary between organizations, and some organizations may need more or less information. Here I will describe the basic and most common information that should be included when creating a defect report:

  • ID: A unique identifier for the defect. This ID is typically what different people use when referring to the defect. When using a defect tracking system, it is usually generated automatically.
  • Product: The product that the defect is for. If your organization only has one product, this may not be necessary.
  • Component: If your application contains several components, e.g. database, web server, UI, etc., this field indicates which component the defect is in. Sometimes a tester may have to guess at this if they aren't familiar with the architecture.
  • Summary: A short one-line description of the defect. This is critical because the summary is often what is shown in summary reports and search results, so it must be descriptive enough that any team member (tester, developer, manager, etc.) can get a basic understanding of the problem.
  • Description: Where the details of the defect are entered. It is important to be as clear and complete as possible so that there is no ambiguity when reading. It should also be concise; if it is too verbose, the reader might get lost in it and miss the point. When appropriate, include the steps needed to reproduce the problem, along with expected and actual results. This makes it easier for the developer to locate the problem, and easier later on for the tester to verify that it has been fixed.
  • Test Case Reference: If applicable, reference the test case that was being executed when the defect was found.
  • Environment: This will vary greatly depending on the type of application being tested, but should at least include the version of the software under test, and may include things like operating system, firmware version, browser, database version, etc.
  • Severity: As an organization, you should decide on a standard set of severity levels, for example: Critical, Major, Normal, Minor. Some organizations may choose to have more or fewer, but three to five levels seem to work best. Assigning a severity is obviously a judgment call, but it helps to prioritize defects later on.
  • Priority: An organization should also have pre-defined priority levels, e.g. P1 to P5. Typically it should not be up to the tester to assign the priority; this should be done by the management team. The question often arises of the difference between severity and priority. They are strongly correlated, but it is possible to have a high-severity defect with a low priority. For example, if a certain action causes the application to crash, that is a very severe defect; but if the conditions that cause it would almost never occur, it may be given a low priority, since there may be other more visible defects to fix first. An opposite example might be a typo on a screen. This has low severity, but if the typo could confuse the user and/or make the application look unprofessional and therefore affect sales, the defect could be set to a high priority. Priority is typically what developers use to determine which defects to address first.
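
As a rough illustration of how these fields fit together, here is a sketch in Python; the field names come from the list above, while the concrete severity and priority values are assumptions:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DefectReport:
        """One defect record, using the fields described above."""
        id: int                         # unique identifier, normally auto-generated
        product: str
        component: str                  # e.g. database, web server, UI
        summary: str                    # short one-line description
        description: str                # details, steps to reproduce, expected vs. actual
        environment: str                # e.g. "build 2.1.3, Windows 10, Chrome"
        severity: str = "Normal"        # e.g. Critical / Major / Normal / Minor
        priority: Optional[str] = None  # e.g. "P3"; usually assigned at triage
        test_case_ref: Optional[str] = None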

Defect Life Cycle

Although a defect is typically entered by a tester, it cannot remain with the tester; it needs to be fixed and then verified. This movement of a defect between different people and different states is often called the defect life cycle. Each organization will probably have a slightly different set of rules for how it wants defects handled, and will therefore have a slightly different life cycle. Most, however, will follow a format similar to the following:

  1. When a defect is first created, it starts off in a New state.
  2. A triage meeting (see below) is held and the defect is prioritized and then set to an Open state and assigned to a developer.
  3. The developer fixes the defect and checks in the code. They then set the defect to Fixed and assign it to a tester to verify. Alternatively, if the developer doesn't think it is really a defect or can't reproduce it, they may set it to a different state (e.g. As Designed or Can't Reproduce).
  4. The tester verifies that the bug is fixed by testing it in the next build and then marks it as Verified. Alternatively, if the defect isn't fixed, they set the state back to Open and reassign it to the developer.

As mentioned, this is a simplified life cycle; real life cycles often get more complex, with more states. It is good to define the life cycle ahead of time so that it handles all the situations you may encounter in your organization. Below is an example of the defect life cycle used in Bugzilla:

[Figure: the Bugzilla defect life cycle]
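
To make the transition rules concrete, here is a minimal sketch in Python of the simple four-state life cycle described above; real tools such as Bugzilla or Jira define their own, richer sets of states and transitions:

    # Allowed defect state transitions for the simple life cycle above.
    TRANSITIONS = {
        "New":      {"Open"},                              # set at triage
        "Open":     {"Fixed", "As Designed", "Can't Reproduce"},
        "Fixed":    {"Verified", "Open"},                  # verified, or reopened
        "Verified": set(),                                 # terminal in this model
    }

    def move(state: str, new_state: str) -> str:
        """Move a defect to a new state, enforcing the allowed transitions."""
        if new_state not in TRANSITIONS.get(state, set()):
            raise ValueError(f"illegal transition: {state} -> {new_state}")
        return new_state

Writing the transitions down like this (or configuring them in your tracking tool) forces the team to decide ahead of time how every situation should be handled.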

Defect Triage Meeting

A triage meeting is a meeting that brings all stakeholders (e.g. testers, managers, developers) together to discuss defects. Typically, a triage meeting will look at all new defects logged since the last triage meeting, any held-over defects from a previous meeting, and any defects that have been re-opened. The purpose of the meeting is to discuss the defects and clarify any details about them, and then to decide which defects should be fixed and set the priority for fixing them. Each stakeholder may have their own agenda as to which defects get fixed and at what priority, but it is in this meeting that a consensus should be reached. Once the defects have been analyzed and prioritized, they should be assigned to the development team to be fixed.

Metrics

One reason for tracking defects is that the data can provide metrics that can be used when evaluating the quality of the application and when deciding whether it is ready to be released. A common defect-related metric is the number of new defects logged over a set period of time (e.g. a day or a week), tracked over the life of the project. Typically, there will be a spike in new defects every time a new build is given to test, but the number of new defects found in each period should decline over time. The following bar graph is an example, showing the number of new defects entered on each day of the month:

[Figure: new defects entered per day]

Another common metric is the total number of defects in new/open states versus those in verified/closed states over the life of the project. At the beginning of the project the new/open count will be rising, but over time it should decrease to near zero, as shown in the example below:

[Figure: new/open versus verified/closed defects over time]

Another useful metric is a snapshot chart showing the total number of defects, grouped by state, at a specific point in time. Near the beginning of a project most defects will be New, but as the project comes to an end most should be Closed. Below is an example of such a chart partway through a project:

[Figure: defect counts grouped by status]

With all these metrics you can add further insight by separating the data by priority and/or severity, which can put things in better perspective. You can also separate the data by component to get a better idea of which parts of the application are more stable than others. Like all metrics, however, it is important not to read too much into them; treat them as guidelines when making quality decisions.
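
All of these metrics are simple counts and groupings over the defect records. A sketch in Python, assuming an iterable of defect objects with a `created` date attribute; the same grouping idea works for the state snapshot, or for splitting counts by severity, priority, or component:

    from collections import Counter

    def new_defects_per_day(defects):
        """Count how many defects were entered on each day."""
        return Counter(d.created for d in defects)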

Defect Tracking Software

While it is possible to manage defects using a paper-based system, a word processor, or a spreadsheet, I would suggest using a third-party application for tracking defects. When looking for a good defect tracking system, consider the following criteria:

  • Web-based: This isn't a must, especially for a small group, but it is definitely easier to administer the application if it is web-based. The software only needs to be installed and configured in one location, and then all users can access it.
  • Multi-user: This should be a given. Even if you are a one-man show to start, it leaves room to expand.
  • Search: This is one of the most important things to look at. All systems should have some kind of searching, but check how easy it is to use and how comprehensive it is. Does it allow searching on custom fields? Does the system allow saved searches or saved filters? This is critical, because over time you will set up and save many frequently used searches.
  • Reports: Check what kind of reports, if any, come with the system. Are the reports useful? Also check that you can generate custom reports that fit the needs of your organization.
  • History: The defect tracking system should track all changes made to a defect, including comments added and state transitions. This information allows the user to easily see the history of changes made to the defect.
  • Configurable fields: Does the system support configuration or customization? You may want to add fields that are unique to the type of project you are working on. You will probably also want control over the values that certain fields can take.
  • Configurable workflow: As mentioned earlier, each organization may have a slightly different workflow. Does the application allow you to add new states or customize which state transitions are allowed?
  • Email notifications: Not a must, but it is nice to have email notifications sent to users when defects get assigned to them or when defects assigned to them are modified.

When I began working as an independent software developer, I used Bugzilla. It is free, mature, full-featured, and covered all of my needs. I am currently using Jira, which has a few more features and a much nicer user interface. Although it isn't free, at $10 a year (for 1 to 10 users) I think it's worth it. For a list of some of the more common defect tracking systems, see the references section below.




Source: Software Defect Tracking


The Right Attitude toward Defects

The president sets a goal of reducing unemployment, but not of eliminating it. Why is that? Well, because having nobody in the country unemployed is simply impossible outside of a planned economy – people will quit and take time off between jobs or get laid off and have to spend time searching for new ones. Some unemployment is inevitable.

Management, particularly in traditional 'waterfall' shops, tends to view defects in the same light: we clearly can't avoid defects, but if we worked really hard, we could reduce them by half. This attitude is a core part of the problem.

It's often met with initial skepticism, but what I tell clients is that they should shoot for having no escaped defects (defects that make it to production, as opposed to ones that are caught by the team during testing). In other words, don't shoot for a 20% or 50% reduction – shoot for not having defects.

It's not that shooting for 100% will stretch teams further than shooting for 20% or 50%. There's no psychological gimmickry to it. Instead, it's about ceasing to view defects as "just part of writing software." Defects are not inevitable, and coming to view them as preventable mistakes rather than facts of life is important because it leads to a reaction of "oh, wow, a defect – that's bad, let's figure out how that happened and fix it" instead of a reaction of "yeah, defects, what are you going to do?"

When teams realize and accept this, they turn an important corner on the road to defect reduction.

What Won’t Help

Once the mission is properly set to one of defect elimination, it's important to understand what won't help at all and what will help only superficially. This set includes a lot of the familiar levers that dev managers like to pull.

First and probably most critical to understand is that the core cause of defects is NOT developers not trying hard enough or taking care. In other words, it’s not as though a developer is sitting at his desk and thinking, “I could make this code I’m writing defect free, but, meh, I don’t feel like it because I want to go home.”

It is precisely for this reason that exhortations for developers to work harder or to be more careful won’t work. They already are, assuming they aren’t overworked or unhappy with their jobs, and if those things are true, asking for more won’t work anyway.

And, speaking of overwork, increasing the workload in a push to get defect free will backfire. When people are forced to work long hours, the work becomes grueling and boring, and "grueling and boring" is a breeding ground for mistakes – not a fix for them. Resist the urge to make large, effort-intensive quality pushes. That solution should seem too easy, and, in fact, it is.

Finally, resist any impulse to forgo the carrot in favor of the stick and threaten developers or teams with consequences for defects. This is a desperate gambit, and, simply put, it never works. If developers’ jobs depend on not introducing defects, they will find a way to succeed in not introducing defects, even if it means not shipping software, cutting scope, or transferring to other teams/projects. The road to quality isn’t lined by fear.

Understand Superficial Solutions

Once managers understand that eliminating defects is possible and that draconian measures will be counterproductive, the next danger is a tendency to seize on the superficial. Unlike the ideas in the last section, these won’t be actively detrimental, but the realized gains will be limited.

The first thing that everyone seems to seize on is mandating unit test coverage, since this forces the developers to write automated tests, which catch issues. The trouble here is that high coverage doesn't actually mean that the tests are effective, nor does coverage account for all possible defect scenarios. Hiring or logging additional QA hours will be of limited efficacy for similar reasons.

Another thing folks seem to love is the “bug bash” concept, wherein the team takes a break from delivering features and does their best to break the software and then repair the breaks. While this certainly helps in the short term, it doesn’t actually change anything about the development or testing process, so gains will be limited.

And finally, coding standards enforced at code review certainly don't hurt anything, but they are not a game changer either. To the chagrin of managers everywhere, a list of "here are all of the mistakes one could make, so don't make them," drawn from the past experience of the tenured developers on the team, cannot anticipate the mistakes that haven't been made yet.

Change the Game

So what does it take to put a serious dent into defect counts and to fundamentally alter the organization’s views about defects? The answers here are more philosophical.

The first consideration is to make integration continuous and to make deployments to test and production environments trivial. Defects hide and fester in the speculative gap between written code and the environment in which it will eventually run. If, on the other hand, developers see the effects their code will have on production immediately, the defect count will plummet.

Part and parcel with this tight feedback loop strategy is having an automated regression and problem detection suite. Notice that I'm not talking about test coverage or even unit tests, but about a broader concept. Your suite will include these things, but it might also include smoke/performance tests or tests to see if resources are starved. The idea is to have automated detection for things that could go wrong: regressions, integration mistakes, performance issues, etc. These will allow you, rather than your customers, to discover the defects.
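
As one small, hypothetical illustration of problem detection beyond unit tests, here is a smoke test sketch in Python; the URL, endpoint, and latency threshold are invented for illustration:

    import time
    import urllib.request

    def test_health_endpoint_is_up_and_fast():
        """Smoke test: the service answers, and answers quickly."""
        start = time.monotonic()
        with urllib.request.urlopen("http://localhost:8000/health", timeout=5) as resp:
            assert resp.status == 200
        assert time.monotonic() - start < 0.5  # crude performance check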

And, finally, on the code side, you need to reduce or eliminate error prone practices and parts of the code. Is there a file that’s constantly being merged and could lead to errors? Do your developers copy, paste, and tweak? Are there config files that require a lot of careful, confusing attention to detail? Does your team have an established code review process, or is it something that is still happening ad-hoc? Recognize these mistake-inviters for what they are and eliminate them.



Source: How to Actually Reduce Software Defects


Much like we gain knowledge about the behavior of the physical universe via the scientific method, we gain knowledge about the behavior of our software via a system of assertion, observation, and experimentation called “testing.”

There are many things one could desire to know about a software system. It seems that most often we want to know if it actually behaves like we intended it to behave. That is, we wrote some code with a particular intention in mind, does it actually do that when we run it?

In a sense, testing software is the reverse of the traditional scientific method, where you test the universe and then use the results of that experiment to refine your hypothesis. Instead, with software, if our “experiments” (tests) don’t prove out our hypothesis (the assertions the test is making), we change the system we are testing. That is, if a test fails, it hopefully means that our software needs to be changed, not that our test needs to be changed. Sometimes we do also need to change our tests in order to properly reflect the current state of our software, though. It can seem like a frustrating and useless waste of time to do such test adjustment, but in reality it’s a natural part of this two-way scientific method–sometimes we’re learning that our tests are wrong, and sometimes our tests are telling us that our system is out of whack and needs to be repaired.

This tells us a few things about testing:

  1. The purpose of a test is to deliver us knowledge about the system, and knowledge has different levels of value. For example, testing that 1 + 1 still equals two no matter what time of day it is doesn’t give us valuable knowledge. However, knowing that my code still works despite possible breaking changes in APIs I depend on could be very useful, depending on the context. In general, one must know what knowledge one desires before one can create an effective and useful test, and then must judge the value of that information appropriately to understand where to put time and effort into testing.
  2. Given that we want to know something, in order for a test to be a test, it must be asserting something and then informing us about that assertion. Human testers can make qualitative assertions, such as whether or not a color is attractive. But automated tests must make assertions that computers can reliably make, which usually means asserting that some specific quantitative statement is true or false. We are trying to learn something about the system by running the test–whether the assertion is true or false is the knowledge we are gaining. A test without an assertion is not a test.
  3. Every test has certain boundaries as an inherent part of its definition. Much like you couldn’t design a single experiment to prove all the theories and laws of physics, it would be prohibitively difficult to design a single test that actually validated all the behaviors of any complex software system at once. If it seems that you have made such a test, most likely you’ve combined many tests into one and those tests should be split apart. When designing a test, you should know what it is actually testing and what it is not testing.
  4. Every test has a set of assumptions built into it, which it relies on in order to be effective within its boundaries. For example, if you are testing something that relies on access to a database, your test might make the assumption that the database is up and running (because some other test has already checked that that part of the code works). If the database is not up and running, then the test neither passes nor fails–it instead provides you no knowledge at all. This tells us that all tests have at least three results–pass, fail, and unknown. Tests with an “unknown” result must not say that they failed–otherwise they are claiming to give us knowledge when in fact they are not. (See the sketch after this list.)
  5. Because of these boundaries and assumptions, we need to design our suite of tests in such a way that the full set, when combined, actually gives us all of the knowledge we want to gain. That is, each individual test only gives us knowledge within its boundaries and assumptions, so how do we overlap those boundaries so that they reliably inform us about the real behavior of the entire system? The answer to this question may also affect the design of the software system being tested, as some designs are harder to completely test than others.
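
Point 4 maps directly onto test frameworks that support a skipped result. A minimal sketch, assuming pytest; `database_is_up` and `lookup_user` are hypothetical stand-ins for real code:

    import pytest

    def database_is_up() -> bool:
        """Stand-in for a real connectivity check (hypothetical)."""
        return False  # pretend the database is down

    def lookup_user(name):
        """Stand-in for the code under test (hypothetical)."""
        raise NotImplementedError

    def test_user_lookup_returns_email():
        # The test's assumption, checked explicitly: if it does not hold,
        # the result is "unknown" (skipped), not a failure.
        if not database_is_up():
            pytest.skip("database unavailable: result unknown, not a failure")
        user = lookup_user("alice")
        assert user.email == "alice@example.com"  # the assertion under test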

The last point in the list above leads us into the many methods of testing being practiced today, in particular end to end testing, integration testing, and unit testing.

End to End Testing

“End to end” testing is where you make an assertion that involves one complete “path” through the logic of the system. That is, you start up the whole system, perform some action at the entry point of user input, and check the result that the system produces. You don’t care how things work internally to accomplish this goal, you just care about the input and result. That is generally true for all tests, but here we’re testing at the outermost point of input into the system and checking the outermost result that it produces, only.

An example end to end test for creating a user account in a typical web application would be to start up a web server, a database, and a web browser, and use the web browser to actually load the account creation web page, fill it in, and submit it. Then you would assert that the resulting page somehow tells us the account was created successfully.
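
A sketch of that end to end test, assuming Selenium WebDriver and a locally running instance of the application; the URL and element ids are invented for illustration:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_account_creation_end_to_end():
        driver = webdriver.Firefox()  # a real browser: nothing is faked
        try:
            driver.get("http://localhost:8000/signup")  # hypothetical URL
            driver.find_element(By.ID, "email").send_keys("new@example.com")
            driver.find_element(By.ID, "password").send_keys("s3cret!")
            driver.find_element(By.ID, "submit").click()
            # Assert only on the outermost result: what the page tells the user.
            assert "Account created" in driver.page_source
        finally:
            driver.quit()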

The idea behind end to end testing is that we gain fully accurate knowledge about our assertions because we are testing a system that is as close to “real” and “complete” as possible. All of its interactions and all of its complexity along the path we are testing are covered by the test.

The problem of using only end to end testing is that it makes it very difficult to actually get all of the knowledge about the system that we might desire. In any complex software system, the number of interacting components and the combinatorial explosion of paths through the code make it difficult or impossible to actually cover all the paths and make all the assertions we want to make.

It can also be difficult to maintain end to end tests, as small changes in the system’s internals lead to many changes in the tests.

End to end tests are valuable, particularly as an initial stopgap for a system that entirely lacks tests. They are also good as sanity checks that your whole system behaves properly when put together. They have an important place in a test suite, but they are not, by themselves, a good long-term solution for gaining full knowledge of a complex system.

If a system is designed in such a way that it can only be tested via end-to-end tests, that is a symptom of broad architectural problems in the code. These issues should be addressed through refactoring until one of the other testing methods can be used.

Integration Testing

This is where you take two or more full “components” of a system and specifically test how they behave when “put together.” A component could be a code module, a library that your system depends on, a remote service that provides you data–essentially any part of the system that can be conceptually isolated from the rest of the system.

For example, in a web application where creating an account sends the new user an email, one might have a test that runs the account creation code (without going through a web page, just exercising the code directly) and checks that an email was sent. Or one might have a test that checks that account creation succeeds when one is using a real database–that “integrates” account creation and the database. Basically this is any test that is explicitly checking that two or more components behave properly when used together.

Compared to end to end testing, integration testing involves a bit more isolation of components as opposed to just running a test on the whole system as a “black box.”

Integration testing doesn’t suffer as badly from the combinatorial explosion of test paths that end to end testing faces, particularly when the components being tested are simple and thus their interactions are simple. If two components are hard to integration test due to the complexity of their interactions, this indicates that perhaps one or both of them should be refactored for simplicity.

Integration testing is also usually not a sufficient testing methodology on its own, as doing an analysis of an entire system purely through the interactions of components means that one must test a very large number of interactions in order to have a full picture of the system’s behavior. There is also a maintenance burden with integration testing similar to end to end testing, though not as bad–when one makes a small change in one component’s behavior, one might have to then update the tests for all the other components that interact with it.

Unit Testing

This is where you take one component alone and test that it behaves properly. In our account creation example, we could have a series of unit tests for the account creation code, a separate series of unit tests for the email sending code, a separate series of unit tests for the web page where users fill in their account information, and so on.

Unit testing is most valuable when you have a component that presents strong guarantees to the world outside of itself and you want to validate those guarantees. For example, a function’s documentation says that it will return the number “1” if passed the parameter “0.” A unit test would pass this function the parameter “0” and assert that it returned the number “1.” It would not check how the code inside of the component behaved–it would only check that the function’s guarantees were met.
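
That guarantee translates directly into a test. A minimal sketch in Python, where `f` stands in for the documented function:

    def f(x: int) -> int:
        """Documented guarantee: returns 1 when passed 0."""
        return 1 if x == 0 else x

    def test_f_returns_one_for_zero():
        # Checks only the documented guarantee, not how f computes it.
        assert f(0) == 1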

Usually, a unit test is testing one behavior of one function in one class/module. One creates a set of unit tests for a class/module that, when you run them all, cover all behavior that you want to verify in that module. This almost always means testing only the public API of the system, though–unit tests should be testing the behavior of the component, not its implementation.

Theoretically, if all components of the system fully define their behavior in documentation, then by testing that each component is living up to its documented behavior, you are in fact testing all possible behaviors of the entire system. When you change the behavior of one component, you only have to update a minimal set of tests around that component.

Obviously, unit testing works best when the system’s components are reasonably separate and are simple enough that it’s possible to fully define their behavior.

It is often true that if you cannot fully unit test a system, but instead have to do integration testing or end to end testing to verify behavior, some design change to the system is needed. (For example, components of the system may be too entangled and may need more isolation from each other.) Theoretically, if a system were well-isolated and had guarantees for all of the behavior of every function in the system, then no integration testing or end to end testing would be necessary. Reality is often a little different, though.

Reality

In reality, there is a scale of testing that has infinite stages between Unit Testing and End to End testing. Sometimes you’re a bit between unit testing and integration testing. Sometimes your test falls somewhere between an integration test and an end to end test. Real systems usually require all sorts of tests along this scale in order to understand their behavior reliably.

For example, sometimes you’re testing only one part of the system but its internals depend on other parts of the system, so you’re implicitly testing those too. This doesn’t make your test an Integration Test, it just makes it a unit test that is also testing other internal components implicitly–slightly larger than a unit test, and slightly smaller than an integration test. In fact, this is the sort of testing that is often the most effective.

Fakes

Some people believe that in order to do true “unit testing” you must write code in your tests that isolates the component you are testing from every other component in the system–even that component’s internal dependencies. Some even believe that this “true unit testing” is the holy grail that all testing should aspire to. This approach is often misguided, for the following reasons:

  • One advantage of having tests for individual components is that when the system changes, you have to update fewer unit tests than you have to update with integration tests or end to end tests. If you make your tests more complex in order to isolate the component under test, that complexity could defeat this advantage, because you’re adding more test code that has to be kept up to date anyway.

    For example, imagine you want to test an email sending module that takes an object representing a user of the system and sends an email to that user. You could invent a “fake” user object–a completely separate class–just for your test, out of the belief that you should be “just testing the email sending code and not the user code.” But then when the real User class changes its behavior, you have to update the behavior of the fake User class–and a developer might even forget to do this, making your email sending test now invalid because its assumptions (the behavior of the User object) are invalid.

  • The relationships between a component and its internal dependencies are often complex, and if you’re not testing its real dependencies, you might not be testing its real behavior. This sometimes happens when developers fail to keep “fake” objects in sync with real objects, but it can also happen via failing to make a “fake” object as genuinely complex and full-featured as the “real” object.

    For example, in our email sending example above, what if real users could have seven different formats of username but the fake object only had one format, and this affected the way email sending worked? (Or worse, what if this didn’t affect email sending behavior when the test was originally written, but it did affect email sending behavior a year later and nobody noticed that they had to update the test?) Sure, you could update the fake object to have equal complexity, but then you’re adding even more of a maintenance burden for the fake object.

  • Having to add too many “fake” objects to a test indicates that there is a design problem with the system that should be addressed in the code of the system instead of being “worked around” in the tests. For example, it could be that components are too entangled–the rules of “what is allowed to depend on what” or “what are the layers of the system” might not be well-defined enough.

In general, it is not bad to have “overlap” between tests. That is, you have a test for the public APIs of the User code, and you have a test for the public APIs of the email sending code. The email sending code uses real User objects and thus also does a small bit of implicit “testing” on the User objects, but that overlap is okay. It’s better to have overlap than to miss areas that you want to test.

Isolation via “fakes” is sometimes useful, though. One has to make a judgment call and be aware of the trade-offs above, attempting to mitigate them as much as possible via the design of your “fake” instances. In particular, fakes are worthwhile to add two properties to a test–determinism and speed.

Determinism

If nothing about the system or its environment changes, then the result of a test should not change. If a test is passing on my system today but failing tomorrow even though I haven’t changed the system, then that test is unreliable. In fact, it is invalid as a test because its “failures” are not really failures–they’re an “unknown” result disguised as knowledge. We say that such tests are “flaky” or “non-deterministic.”

Some aspects of a system are genuinely non-deterministic. For example, you might generate a random string based on the time of day, and then show that string on a web page. In order to test this reliably, you would need two tests:

  1. A test that uses the random-string generation code over and over to make sure that it properly generates random strings.
  2. A test for the web page that uses a fake random-string generator that always returns the same string, so that the web page test is deterministic.

Of course, you would only need the fake in that second test if verifying the exact string in the web page was an important assertion. It’s not that everything about a test needs to be deterministic–it’s that the assertions it is making need to always be true or always be false if the system itself hasn’t changed. If you weren’t asserting anything about the string, the size of the web page, etc. then you would not need to make the string generation deterministic.
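
A sketch of that second test in Python, assuming the page-rendering code accepts the string generator as a parameter; all names here are illustrative:

    def render_page(generate_string):
        """Hypothetical page renderer that embeds a generated string."""
        return f"<html><body>{generate_string()}</body></html>"

    def test_page_embeds_generated_string():
        fake_generator = lambda: "FIXED-STRING"  # deterministic fake
        html = render_page(fake_generator)
        assert "FIXED-STRING" in html  # always true or always false

Because the fake generator always returns the same string, the assertion's result can only change if the page-rendering code itself changes.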

Speed

One of the most important uses of tests is that developers run them while they are editing code, to see if the new code they’ve written is actually working. As tests become slower, they become less and less useful for this purpose. Or developers continue to use them but start writing code more and more slowly because they keep having to wait for the tests to finish.

In general, a test suite should not take so long that a developer becomes distracted from their work and loses focus while they wait for it to complete. Existing research indicates this takes somewhere between 2 and 30 seconds for most developers. Thus, a test suite used by developers during code editing should take roughly that length of time to run. It might be okay for it to take a few minutes, but that wouldn’t be ideal. It would definitely not be okay for it to take ten minutes, under most circumstances.

There are other reasons to have fast tests beyond just the developer’s code editing cycle. At the extreme, slow tests can become completely useless if they only deliver their result after it is needed. For example, imagine a test that took so long, you only got the result after you had already released the product to users. Slow tests affect lots of processes in a software engineering organization–it’s simplest for them just to be fast.

Sometimes there is some behavior that is inherently slow in a test. For example, reading a large file off of a disk. It can be okay to make a test “fake” out this slow behavior–for example, by having the large file in memory instead of on the disk. Like with all fakes, it is important to understand how this affects the validity of your test and how you will maintain this fake behavior properly over time.
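
For instance, if the code under test accepts any file-like object, the large file on disk can be replaced by an in-memory buffer. A sketch in Python:

    import io

    def count_lines(f):
        """Code under test: works on any file-like object."""
        return sum(1 for _ in f)

    def test_count_lines_without_touching_disk():
        fake_file = io.StringIO("line 1\nline 2\nline 3\n")  # in memory, fast
        assert count_lines(fake_file) == 3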

It is sometimes also useful to have an extra suite of “slow” tests that aren’t run by developers while they edit code, but are run by an automated system after code has been checked in to the version control system, or run by a developer right before they check in their code. That way you get the advantage of a fast test suite that developers can use while editing, but also the more-complete testing of real system behavior even if testing that behavior is slow.

Coverage

There are tools that run a test suite and then tell you which lines of system code actually got run by the tests. They say that this tells you the “test coverage” of the system. These can be useful tools, but it is important to remember that they don’t tell you if those lines were actually tested, they only tell you that those lines of code were run. If there is no assertion about the behavior of that code, then it was never actually tested.

Overall

There are many ways to gain knowledge about a system, and testing is just one of them. We could also read its code, look at its documentation, talk to its developers, etc., and each of these would give us a belief about how the system behaves. However, testing validates our beliefs, and thus is particularly important out of all of these methods.

The overall goal of testing is to gain valid knowledge about the system. This goal overrides all other principles of testing–any testing method is valid as long as it produces that result. However, some testing methods are more efficient–they make it easier to create and maintain tests which produce all the information we desire. These methods should be understood and used appropriately, as your judgment dictates and as they apply to the specific system you’re testing.



Source: The Philosophy of Testing


Model–view–controller (MVC) is a software design pattern for implementing user interfaces on computers. It divides a given software application into three interconnected parts, so as to separate internal representations of information from the ways that information is presented to or accepted from the user.[1][2]

Traditionally used for desktop graphical user interfaces (GUIs), this architecture has become popular for designing web applications.

As with other software architectures, MVC expresses the "core of the solution" to a problem while allowing it to be adapted for each system.[3] Particular MVC architectures can vary significantly from the traditional description here.[4]

Components

[Figure: A typical collaboration of the MVC components]

The central component of MVC, the model, captures the behavior of the application in terms of its problem domain, independent of the user interface.[5]

  • The model directly manages the data, logic, and rules of the application.
  • A view can be any output representation of information, such as a chart or a diagram. Multiple views of the same information are possible, such as a bar chart for management and a tabular view for accountants.
  • The third part, the controller, accepts input and converts it to commands for the model or view.[6]

 

Interactions

In addition to dividing the application into three kinds of components, the model–view–controller design defines the interactions between them.[7]

  • The model stores data that is retrieved according to commands from the controller and displayed in the view.
  • The view generates new output to the user based on changes in the model.
  • The controller can send commands to the model to update the model's state (e.g., editing a document). It can also send commands to its associated view to change the view's presentation of the model (e.g., by scrolling through a document).
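
A deliberately minimal sketch of these interactions in Python; real MVC implementations vary considerably, and this only illustrates the division of responsibilities:

    class Model:
        """Manages the data and rules; knows nothing about the UI."""
        def __init__(self):
            self.document = []

        def append_line(self, line):
            self.document.append(line)

    class View:
        """Generates output for the user based on the model's state."""
        def render(self, model):
            print("\n".join(model.document))

    class Controller:
        """Accepts input and converts it to commands for the model or view."""
        def __init__(self, model, view):
            self.model, self.view = model, view

        def handle_input(self, text):
            self.model.append_line(text)  # command to the model: update state
            self.view.render(self.model)  # ask the view to re-present the model

    controller = Controller(Model(), View())
    controller.handle_input("hello")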

History

One of the seminal insights in the early development of graphical user interfaces, MVC became one of the first approaches to describe and implement software constructs in terms of their responsibilities.[8]

Trygve Reenskaug introduced MVC into Smalltalk-76 while visiting the Xerox Palo Alto Research Center (PARC)[9][10] in the 1970s. In the 1980s, Jim Althoff and others implemented a version of MVC for the Smalltalk-80 class library. Only later did a 1988 article in the Journal of Object-Oriented Programming (JOOP) express MVC as a general concept.[11]

The MVC pattern has subsequently evolved,[12] giving rise to variants such as hierarchical model–view–controller (HMVC), model–view–adapter (MVA), model–view–presenter (MVP), model–view–viewmodel (MVVM), and others that adapted MVC to different contexts.

The use of the MVC pattern in web applications exploded in popularity after the introduction of NeXT's WebObjects in 1996, which was originally written in Objective-C (a language that borrowed heavily from Smalltalk) and helped enforce MVC principles. The MVC pattern later became popular with Java developers when WebObjects was ported to Java. Later frameworks for Java, such as Spring (released in 2002), continued the strong bond between Java and MVC. The introduction of the frameworks Rails (December 2005, for Ruby) and Django (July 2005, for Python), both of which had a strong emphasis on rapid deployment, increased MVC's popularity outside the traditional enterprise environment in which it has long been popular. MVC web frameworks now hold large market shares relative to non-MVC web toolkits.[13]

Use in web applications

Although originally developed for desktop computing, model–view–controller has been widely adopted as an architecture for World Wide Web applications in major programming languages. Several commercial and noncommercial web frameworks have been created that enforce the pattern. These software frameworks vary in their interpretations, mainly in the way that the MVC responsibilities are divided between the client and server.[14]

Early web MVC frameworks took a thin client approach that placed almost the entire model, view, and controller logic on the server. This is still reflected in popular frameworks such as Ruby on Rails, Django, and ASP.NET MVC. In this approach, the client sends either hyperlink requests or form input to the controller and then receives a complete and updated web page (or other document) from the view; the model exists entirely on the server.[14] As client technologies have matured, frameworks such as AngularJS, EmberJS, JavaScriptMVC, and Backbone have been created that allow the MVC components to execute partly on the client (see also Ajax).


Source: Wikipedia MVC

 

by HENDRO EKO PRABOWO 5116201006 - Friday, 23 December 2016, 08:36

Architectural design is concerned with understanding how a system should be organized and designing the overall structure of that system. Architectural design is the first stage in the software design process. It is the critical link between design and requirements engineering, as it identifies the main structural components in a system and the relationships between them. The output of the architectural design process is an architectural model that describes how the system is organized as a set of communicating components.

In agile processes, it is generally accepted that an early stage of the development process should be concerned with establishing an overall system architecture. Incremental development of architectures is not usually successful. While refactoring components in response to changes is usually relatively easy, refactoring a system architecture is likely to be expensive.

 

Picture 1 shows an abstract model of the architecture for a packing robot system, showing the components that have to be developed. This robotic system can pack different kinds of objects. It uses a vision component to pick out objects on a conveyor, identify the type of object, and select the right kind of packaging. The system then moves objects from the delivery conveyor to be packaged. It places packaged objects on another conveyor. The architectural model shows these components and the links between them.

In practice, there is significant overlap between the processes of requirements engineering and architectural design. Ideally, a system specification should not include any design information, but this is unrealistic except for very small systems. Architectural decomposition is usually necessary to structure and organize the specification. Therefore, as part of the requirements engineering process, you might propose an abstract system architecture in which you associate groups of system functions or features with large-scale components or sub-systems. You can then use this decomposition to discuss the requirements and features of the system with stakeholders.

You can design software architectures at two levels of abstraction, which I call architecture in the small and architecture in the large:

1. Architecture in the small is concerned with the architecture of individual programs. At this level, we are concerned with the way that an individual program is decomposed into components. This chapter is mostly concerned with program architectures.

2. Architecture in the large is concerned with the architecture of complex enterprise systems that include other systems, programs, and program components. These enterprise systems are distributed over different computers, which may be owned and managed by different companies.

[Picture 1: The architecture of a packing robot control system]

 

Software architecture is important because it affects the performance, robustness, distributability, and maintainability of a system. As Bosch (2000) discusses, individual components implement the functional system requirements, but the non-functional requirements depend on the system architecture--the way in which these components are organized and communicate. In many systems, non-functional requirements are also influenced by individual components, but there is no doubt that the architecture of the system is the dominant influence.

Bass, et al. (2003) discuss three advantages of explicitly designing and documenting software architecture :

1. Stakeholder communication. The architecture is a high-level presentation of the system that may be used as a focus for discussion by a range of different stakeholders.

2. System analysis. Making the system architecture explicit at an early stage in the system development requires some analysis. Architectural design decisions have a profound effect on whether or not the system can meet critical requirements such as performance, reliability, and maintainability.

3. Large-scale reuse. A model of a system architecture is a compact, manageable description of how a system is organized and how the components interoperate. The system architecture is often the same for systems with similar requirements and so can support large-scale software reuse.

by HENDRO EKO PRABOWO 5116201006 - Friday, 23 December 2016, 08:33

Testing is intended to show that a program does what it is intended to do and to discover program defects before the program is put into use. When we test software, we execute the program using artificial data. We check the results of the test run for errors, anomalies, or information about the program's non-functional attributes. The testing process has two distinct goals:

1. To demonstrate to the developer and the customer that the software meets its requirements. For custom software, this means that there should be at least one test for every requirement in the requirements document. For generic software products, it means that there should be tests for all of the system features, plus combinations of these features, that will be incorporated in the product release.

2. To discover situations in which the behavior of the software is incorrect, undesirable or does not conform to its specifications. These are a consequence of software defects. Defect testing is concerned with rooting out undesirable system behavior such as system crashes, unwanted interactions with other systems, incorrect computations, and data corruption.

 

The first goal leads to validation testing, where you expect the system to perform correctly using a given set of test cases that reflect the system's expected use. The second goal leads to defect testing, where the test cases are designed to expose defects. The test cases in defect testing can be deliberately obscure and need not reflect how the system is normally used. Of course, there is no definite boundary between these two approaches to testing: during validation testing you will find defects in the system, and during defect testing some of the tests will show that the program meets its requirements.
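
A small sketch of the two goals in Python, using the standard library's square-root routine as the system under test: the first test reflects expected use (validation testing), while the second deliberately feeds an input outside normal use (defect testing):

    import math

    def test_sqrt_of_typical_input():
        # Validation testing: expected use, expected result.
        assert math.isclose(math.sqrt(9.0), 3.0)

    def test_sqrt_of_obscure_input():
        # Defect testing: need not reflect normal use; tries to expose
        # incorrect behavior at the edges of the input set.
        try:
            math.sqrt(-1.0)
            assert False, "expected an error for a negative input"
        except ValueError:
            pass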

 

[Picture 1: An input-output model of program testing]

 

Looking at Picture 1 helps to explain the difference between validation testing and defect testing. Think of the system being tested as a black box. The system accepts inputs from some input set I and generates outputs in an output set O. Some of the outputs will be erroneous. These are the outputs in set Oe, generated by the system in response to inputs in the set Ie. The priority in defect testing is to find inputs in the set Ie, because these reveal problems with the system.


Let us get one thing straight. The only estimation I have done earlier is during my engineering days where I worked out quantities and rates for construction work. The key advantage then was that most of the design and specifications were finalized before the estimation began. If not, one could quote a vague figure called the 'accepted market rate', and then go on changing that figure [provided one had the skill set ;) to do so] as and when the design and specs were finalized.

For others with my kind of background, estimating a software project based upon a few discussions with the customer and stakeholders is nonsense. Estimating a software project is like estimating what it would take to invent something without knowing what that something is. Function point analysis and COCOMO(?) are for geeks. Quite often, software estimates are way off the mark, with the end result that some geeks are then given pink slips.

And so, I did a sensible thing. I stayed well clear of estimating software projects. But situations change and it is getting more difficult to avoid this dumb task. The inability to estimate projects, resources, risks etc. also means inability to get perks, broaden our horizons, and rise up the geek ladder. Meanwhile, there are more mouths to feed, more financial commitments, more taxes to handle. And anyways, there is no point doing low level coding all your life. One should strive to reach a level where one can really screw things up.

Brass Tacks

Right then, enough cribbing. It doesn't pay, and my superiors are better at it. What I aim to do is figure out a way to make estimating a regular software project similar to estimating a regular engineering project. A typical engineering project estimate contains several parts, as follows:

  • Identifying the items of work.
  • Fixing a unit for each item of work.
  • Calculating the quantity of work.
  • Calculating the rate for each item of work.
  • Fine tuning the final cost.

The rate is the cost of resources involved per unit of work. The resources are usually:

  • Man-hours.
  • Materials.
  • Tools and plants.
  • Overheads.
  • Profit margin.

The overheads involve:

  • The establishment charges.
  • Depreciation of tools and plants.
  • Interest factor etc.
  • Licenses, permits, taxes.

The same concept can be extended to estimating software projects. We start by identifying and describing items of work, then the unit for each item, the quantity, and so on. Describing items of work can be fairly irritating from a techie point of view, thanks to its verbose nature. The rate part should look normal enough. Usually, overheads and profit margin [decided by authorized persons only ;)] are taken as a percentage. By now, many would be smirking and/or condemning this kind of an approach. The main defense in its favor is that most engineering works have to last for years, the project durations range from months to years, and there are many non-technical people who believe they know things better than engineers (in some cases, it's true). Many engineering projects have critics ranging from politicians, journalists, lawyers, social workers, and citizens to idiots and even engineers, and engineering projects include everything from the Egyptian pyramids to space exploration to manufacturing paper clips. So, why can't engineering techniques be applied to software development - a field that includes engineers of software, hardware, computers, and networking, and where reliability, robustness, and scalability are required?
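
The arithmetic behind this rate-based approach is straightforward. A sketch in Python, where the items, quantities, rates, and percentages are invented purely for illustration:

    # (item of work, unit, quantity, rate per unit)
    items = [
        ("Data access components", "no.s", 6,  800.0),
        ("GUI forms (simple)",     "no.s", 12, 300.0),
        ("Stored procedures",      "no.s", 20, 150.0),
    ]

    base_cost = sum(quantity * rate for _, _, quantity, rate in items)
    overheads = 0.15 * base_cost  # establishment charges, depreciation, taxes, etc.
    profit    = 0.10 * (base_cost + overheads)
    estimate  = base_cost + overheads + profit
    print(f"Estimated cost: {estimate:,.2f}")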

Identifying items of work

Okay, now for the first part. What does an item of work consist of? In business applications, this could be a workflow, a sub-workflow, or the development of components or sub-components. We also need to classify the items of work (i.e., identify types of work).

In the following example, I will create some categories of work in software projects. A typical classification tree could be as shown in the figure below:

[Figure: estimation category tree]

We can continue working out more categories and subcategories, but for now we will attempt to estimate within the categories shown above. All the items under the hardware category and its sub-categories are easier to estimate, as we can get the costing from their purchase/market value. The same applies to the operating systems, run-time software, and tools. The workstation cost includes establishment charges like office space, electricity, water, air conditioning, etc. Other costs like renting, licenses, work permits, and taxes for acquiring each item should be added as applicable. Some overheads and hidden costs are not reflected in the tree; these are usually estimated and added as either a lump-sum cost or a percentage of the overall cost.

The actual software development costing starts from the category 'Application'. Most business applications have several modules. A module is a feature in an application. Now, we start to identify each item of work for each of its categories.

The category 'Framework' provides the backbone on which all the other components in the application link to or depend upon to do common tasks.

The framework development could include the following items of work:

  • Data access components
  • Exception handling
  • Logging
  • Email/Fax
  • Printing
  • Data Export
  • Services (web services, remoting, DCOM, COM+, application server etc.)
  • Security (authorization, authentication, code access, encryption)
  • Resource Files
  • Deployment
  • Miscellaneous (utilities, constants, enumerations)

The category 'Presentation' could include the following items of work:

  • GUI (Forms and controls)
  • Custom controls
  • Validation
  • Animation and Graphics

The category 'Business' could include the following items of work:

  • Coding (for work flows and business requirements)
  • Validating data
  • Formulas and calculations
  • Data manipulations
  • Implementing business rules

The category 'Data formatter' is used to convert data into acceptable formats before sending and after receiving. The formats can be XML, value objects (an object that is used to carry data from tier to tier), and encryption.

The category 'Data storage' could have the following items of work:

  • Database design
  • Writing SQLs, procedures, triggers
  • Indexing
  • Performance tuning
  • Flat Files (including text, XML, graphics, multimedia, etc.)

The Application itself is a category. In the category Application, an item of work is a workflow. A workflow may itself consist of other workflows and processes. The most common factors when fixing a rate for an item of work are the expected amount of work to be done (GUI objects, lines of code, non-private methods) and the expected complexity. This depends upon the experience of the bidding company, the technology being used, and the support available. The amount of work increases with the size of the workflow. If the workflow has a lot of rules, branches, sub-workflows, and processes, that indicates complexity. Complex workflows should be broken down into simpler workflows. Therefore, the basis of estimation would be to develop as many workflows as possible and to put the maximum amount of detail into each workflow. In case developing all the workflows is not possible (often the case), the business analyst should be able to map expected workflows to existing workflows so that estimation can be done with better accuracy. Trying to estimate a project without doing the workflows is practically impossible, except in the case of static web pages.

Now that we have identified some of the items of work, we need to describe them in detail: determine exactly what is going to be provided (and perhaps what is not), the corresponding specifications, the cost of manpower, hardware, documents and other deliverables, and overheads up to completion (or delivery). In software, this also includes the cost of testing, quality assurance, user acceptance testing, and warranties.

After describing an item of work, fix the unit for each item. For form development, the unit could be numbers (nos.). Note that the forms may be further divided into categories depending upon their complexity and content, so do not put all the forms into one single item of work unless all forms are similar to develop. In the Business category, we should identify the main business objects, components, or features, and categorize them depending upon content and complexity. Again, the unit for Business category items could be nos.

The database could be estimated based upon the number of database objects expected and the complexity of the tables, views, SQL, procedures, normalization, and the level of data integrity required.
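As an illustration of that idea, the sketch below estimates database work from expected object counts weighted by a complexity multiplier; the per-object rates and counts are assumed placeholders:

```python
# Hypothetical per-object base rates (INR) and a complexity multiplier.
DB_RATES = {"table": 2000, "view": 1000, "procedure": 3000, "trigger": 1500}

def estimate_database(counts, complexity=1.0):
    """counts maps object kind to expected number, e.g. {'table': 12}."""
    return complexity * sum(DB_RATES[kind] * n for kind, n in counts.items())

# A schema with 12 tables, 5 views, 8 procedures, 3 triggers,
# and above-average complexity (factor 1.3).
print(estimate_database({"table": 12, "view": 5, "procedure": 8, "trigger": 3},
                        complexity=1.3))
```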

Once a workflow is done, it should be studied carefully to identify the items of work involved. Below is a set of workflows to request and provide a resource.

Sample Work Flows

Categorization

Let us study the resource allotment workflow for different categories.

Category Data Storage

This requires resource details to be stored so that resources can be easily searched and retrieved, which involves a relational database management system. The database should contain tables to store resources based upon certain characteristics, say human resources, items, and documents. To retrieve these faster, they can be further categorized and have keywords associated with them. All these categories would themselves be stored in another table (or tables) and mapped to their parent categories.

The request for a resource should be mapped to the resources found. Therefore, all requests are stored in a table and can be categorized based upon priority, impact, the department/person placing the request, and resource availability.

Finally, the different criteria for accepting or rejecting a resource should be recorded, along with the department/person who accepted or rejected it and an explanation. The criteria for accepting/rejecting a resource could be stored in another table and also categorized.

Placing requests and searching for and allocating resources require writing SQL. Viewing the functionality of the screens (if prototypes are available), as well as the workflow itself, gives a fair idea of the complexity of the SQL to be developed.
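As a rough illustration of the kind of SQL involved, here is a sketch using Python's built-in sqlite3; the table and column names are invented for this example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE resource (
    id INTEGER PRIMARY KEY, name TEXT, category TEXT,
    keywords TEXT, status TEXT, location TEXT)""")
conn.execute("INSERT INTO resource VALUES "
             "(1, 'Projector', 'items', 'display,meeting', 'available', 'HQ')")

# Search for available resources matching a category and a keyword.
rows = conn.execute(
    """SELECT id, name FROM resource
       WHERE category = ? AND status = 'available' AND keywords LIKE ?""",
    ("items", "%meeting%"),
).fetchall()
print(rows)  # -> [(1, 'Projector')]
```

Even a toy query like this hints at the estimation signal the text describes: joins across category, request, and criteria tables would push the complexity, and therefore the rate, upward.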

So right away, we get an idea of what is required from the data storage point of view.

Category Presentation

The persons in charge of resource allocation would need to view the requests for resources. They should also be able to sort requests based upon status, date, priority, impact, department, costing, etc. This would help them decide how to handle each request. They should be able to search for resources based upon categories and subcategories, keywords, status, location, etc. Finally, a resource should be mapped to a request. Screens should be developed to display request details, resource details, resource-to-request mappings, searches for requests and resources, etc. The display can be WinForms, browser, or console.

Category Business

The Business category usually contains the interfaces and business objects. A business object may perform calculations (say, a time period calculated from the dates provided), implement business rules (like whether a particular department can be given a certain resource), sequence calls to business methods, and interact with the data tier.

In the resource allocation workflow, we could have business objects such as requester, resource manager, resource, and search engine. Each object required can be considered while estimating the cost of the workflow.

Category Data Formatter

The Data formatter ensures that the format of the data is correct. If a particular piece of data were of type float, the data formatter would convert the data passed to it into a float before assigning it; otherwise it would not allow the data to be assigned at all. In the case of XML, it would ensure that the XML document is built to a predefined format. The data formatter could also be part of the framework, and it can also be thought of as an object that carries data from one tier to another.
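A minimal sketch of such a formatter follows; the field names and schema are hypothetical, and Python is used purely for illustration. It coerces each value to its declared type before assignment and rejects anything it cannot convert:

```python
class ValueObject:
    """Carries data between tiers; coerces each field to its declared type."""

    FIELDS = {"amount": float, "quantity": int, "code": str}  # assumed schema

    def __init__(self, **values):
        for name, raw in values.items():
            expected = self.FIELDS.get(name)
            if expected is None:
                raise AttributeError(f"unknown field: {name}")
            try:
                setattr(self, name, expected(raw))  # convert before assigning
            except (TypeError, ValueError):
                raise ValueError(f"{name!r} cannot be formatted as {expected.__name__}")


vo = ValueObject(amount="42.5", quantity="3", code=101)
print(vo.amount, vo.quantity, vo.code)  # -> 42.5 3 101
```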

Category Framework

The framework components required by the resource allotment workflow could consist of tier-to-tier communication, security features, custom components, libraries, resource files, data-tier access, error handling, and possibly logging, email, fax, and third-party interaction. The extent to which each of these would be developed should be considered after looking at the entire application, but for our item of work we should take only a fraction of the framework development cost into account. This is because the framework is a development requirement, not a business requirement. So, consider only the features of the framework that the flow would require and add an approximate cost to the rate.

Item of Work

Let us call the item of work 'resource allotment'. Now that we have looked at it with respect to the different categories, it is easier to describe. A critical aspect is the labor, or man-hours, involved in developing the workflow. Most companies have their own metrics on development cost based upon the effort put into previous projects, and they use those metrics to calculate the number of man-hours.

Now we put the item of work into an estimation table as shown below:

Serial No.: 1

Item Code: wf_res_01

Description: Developing and deploying the workflow 'resource allotment', including the cost of coding for sub-workflows, components, and display screens; inserting, modifying, deleting, and retrieving data; passing data between client systems, servers, databases, and external services; validating and formatting data; implementing business rules; and including database design, application design, quality assurance, unit/integration/black-box/load/user-acceptance testing, hardware, software tools, operating systems, compilers, labor, licenses, permits, shipping, and deliverables.

Rate (INR): 60000.00

Unit: Nos.

Qty: 1.00

Amount (INR): 60000.00

For an inexperienced estimator, it would be difficult to fix the cost for the item of work shown in the table above. The best way is to describe the item of work in as much detail as possible, and then try to arrive at the cost of each detail. Take the help of experienced people (architects, managers, developers, testers, technical writers, etc.) to identify more details (including hardware, software, man-hours, and overheads) and their corresponding costs. Then simply add it all up and compare the final figure against the cost of similar work done earlier.
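For instance, a bottom-up roll-up might look like the following sketch; every figure here is an assumed placeholder, not real project data, and the total is simply compared against the 60000.00 rate quoted in the table:

```python
# Hypothetical detail costs for the 'resource allotment' item of work (INR).
details = {
    "development man-hours": 80 * 500,  # 80 hours at an assumed 500/hour
    "testing and QA":        20 * 500,
    "database design":       8000,
    "hardware/tools share":  4000,
}
subtotal = sum(details.values())   # 62000
overheads = 0.10 * subtotal        # assumed 10% for hidden costs
print(f"Bottom-up estimate: INR {subtotal + overheads:.2f}")  # vs. 60000.00 quoted
```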

Associated Kursus: KI142303B
[ Modified: Friday, 23 December 2016, 01:18 ]
 
Anyone in the world

Scrum?

Scrum is a subset of Agile and one of the most popular process frameworks for implementing it. It is an iterative software development model used to manage complex software and product development. Fixed-length iterations, called sprints, typically lasting one to two weeks, allow the team to ship software on a regular cadence. At the end of each sprint, stakeholders and team members meet to plan the next steps.

Scrum follows a set of roles, responsibilities, and meetings that never change. For example, Scrum calls for four ceremonies that provide structure to each sprint: sprint planning, daily stand-up, sprint demo, and sprint retrospective. During each sprint, the team will use visual artifacts like task boards or burndown charts to show progress and receive incremental feedback.

Jeff Sutherland created the Scrum process in 1993, taking the term “Scrum” from an analogy in a 1986 study by Takeuchi and Nonaka published in the Harvard Business Review. In the study, Takeuchi and Nonaka compare high-performing, cross-functional teams to the scrum formation used by rugby teams. The original context was manufacturing, but Sutherland, along with John Scumniotales and Jeff McKenna, adapted the model for software development.

Advantages of Scrum

Scrum is a highly prescriptive framework with specific roles and ceremonies. While it can be a lot to learn, these rules have a lot of advantages. The benefits of Scrum include:

  • More transparency and project visibility: With daily stand-up meetings, the whole team knows who is doing what, eliminating many misunderstandings and confusion. Issues are identified in advance, allowing the team to resolve them before they get out of hand.
     
  • Increased team accountability: There is no project manager telling the Scrum Team what to do and when. Instead, the team collectively decides what work they can complete in each sprint. They all work together and help each other, improving collaboration and empowering each team member to be independent. 
     
  • Easy to accommodate changes: With short sprints and constant feedback, it’s easier to cope with and accommodate changes. For example, if the team discovers a new user story during one sprint, they can easily add that feature to the next sprint during the backlog refinement meeting.
     
  • Increased cost savings: Constant communication ensures the team is aware of all issues and changes as soon as they arise, helping to lower expenses and increase quality. By coding and testing features in smaller chunks, there is continuous feedback and mistakes can be corrected early on, before they get too expensive to fix.

Disadvantages of Scrum

While Scrum offers some concrete benefits, it also has some downsides. Scrum requires a high level of experience and commitment from the team and projects can be at risk of scope creep. 

Here are the disadvantages of Scrum: 

  • Risk of scope creep: Some Scrum projects can experience scope creep due to the lack of a specific end date. With no completion date, stakeholders may be tempted to keep requesting additional functionality.
     
  • Team requires experience and commitment: The team needs to be familiar with Scrum principles to succeed. Because there are no narrowly specialized roles within the Scrum Team (everyone does everything), it requires team members with broad technical experience. The team also needs to commit to the daily Scrum meetings and to stay on the team for the duration of the project.
     
  • The wrong Scrum Master can ruin everything: The Scrum Master is very different from a project manager. The Scrum Master does not have authority over the team; he or she needs to trust the team they are managing and never tell them what to do. If the Scrum Master tries to control the team, the project will fail.
     
  • Poorly defined tasks can lead to inaccuracies: Project costs and timelines won’t be accurate if tasks are not well defined. If the initial goals are unclear, planning becomes difficult and sprints can take more time than originally estimated.

Roles in Scrum

 

There are three specific roles in Scrum. They are:

  • Product Owner: The Scrum Product Owner has the vision of what he or she wants to build and conveys that vision to the team. The Product Owner focuses on business and market requirements, prioritizing all the work that needs to be done. He or she builds and manages the backlog, provides guidance on which features to ship next, and interacts with the team and other stakeholders to make sure everyone understands the items in the product backlog. The Product Owner is not a project manager. Instead of managing the status and progress, his or her job is to motivate the team with a goal and vision. 
     
  • Scrum Master: Often considered the coach for the team, the Scrum Master helps the team do their best possible work. This means organizing meetings, dealing with roadblocks and challenges, and working with the Product Owner to ensure the product backlog is ready for the next sprint. The Scrum Master also makes sure the team follows the Scrum process. He or she doesn’t have authority over the team members, but he or she does have authority over the process. For example, the Scrum Master can’t tell someone what to do, but could propose a new sprint cadence. 
     
  • Scrum Team: The Scrum Team is comprised of five to seven members. Everyone on the project works together, helps each other, and shares a deep sense of camaraderie. Unlike traditional development teams, there are not distinct roles like programmer, designer, or tester. Everyone completes the set of work together. The Scrum Team owns the plan for each sprint; they anticipate how much work they can complete in each iteration. 

Steps in the Scrum Process

 


There are a specific, unchanging set of steps in the Scrum flow. They include:

  • Product backlog: The Product Owner and Scrum Team meet to prioritize the items on the product backlog (the work on the product backlog comes from user stories and requirements). The product backlog is not a list of things to be completed, but rather it is a list of all the desired features for the product. The development team then pulls work from the product backlog to complete during each sprint.
     
  • Sprint planning: Before each sprint, the Product Owner presents the top items on the backlog to the team in a sprint planning meeting. The team then chooses which work they can complete during the sprint and moves that work from the product backlog to the sprint backlog (a list of tasks to complete in the sprint). A toy sketch of this selection step follows this list.
     
  • Backlog refinement/grooming: At the end of one sprint, the team and Product Owner meet to make sure the backlog is ready for the next sprint. The team may remove user stories that aren’t relevant, create new stories, reassess the priority of stories, or split user stories into smaller tasks. The purpose of this “grooming” meeting is to ensure the backlog only contains items that are relevant and detailed, and that meet the project’s objectives.
     
  • Daily Scrum meetings: The Daily Scrum is a 15-minute stand-up meeting where each team member talks about their goals and any issues that have come up. The Daily Scrum happens every day during the sprint and helps keep the team on track.
     
  • Sprint review meeting: At the end of each sprint, the team presents the work they have completed at a sprint review meeting. This meeting should feature a live demonstration, not a report or a PowerPoint presentation. 
     
  • Sprint retrospective meeting: Also at the end of each sprint, the team reflects on how well Scrum is working for them and talks about any changes that need to be made in the next sprint. The team may talk about what went well during the sprint, what went wrong, and what they could do differently.
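To make the sprint-planning selection above concrete, here is a minimal sketch in which a team pulls top-priority backlog items until its sprint capacity is reached; the story names, point values, and capacity figure are all made up:

```python
# Product backlog, already ordered by the Product Owner's priority.
product_backlog = [
    ("user login", 5), ("password reset", 3),
    ("profile page", 8), ("audit logging", 5),
]
CAPACITY = 13  # assumed team velocity, in story points

sprint_backlog, remaining = [], CAPACITY
for story, points in product_backlog:
    if points <= remaining:      # pull the story into the sprint
        sprint_backlog.append(story)
        remaining -= points

print(sprint_backlog)  # -> ['user login', 'password reset', 'audit logging']
```

In practice the team decides collectively rather than applying a greedy rule, but the sketch shows why a well-groomed, prioritized backlog makes the planning meeting fast.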

Tools, Artifacts, and Methods in Scrum

 


In addition to roles and ceremonies, Scrum projects also include certain tools and artifacts. For example, the team uses a Scrum board to visualize the backlog or a burndown chart to show outstanding work. The most common artifacts and methods are:

  • Scrum board: You can visualize your sprint backlog with a Scrum task board. The board can have different forms; it traditionally involves index cards, Post-It notes, or a whiteboard. The Scrum board is usually divided into three categories: to do, work in progress, and done. The Scrum Team needs to update the board throughout the entire sprint. For example, if someone comes up with a new task, she would write a new card and put it in the appropriate column. 
     
  • User stories: A user story describes a software feature from the customer’s perspective. It includes the type of user, what they want, and why they want it. These short stories follow a similar structure: as a <type of user>, I want to <perform some task> so that I can <achieve some goal>. The development team uses these stories to create code that meets the requirements of the stories.
     
  • Burndown chart: A burndown chart represents all outstanding work. The backlog is usually on the vertical axis, with time along the horizontal axis. The work remaining can be represented by story points, ideal days, team days, or other metrics. A burndown chart can warn the team if things aren’t going according to plan and helps to show the impact of decisions; a small computed example follows this list.
     
  • Large-Scale Scrum (LeSS): If you want to scale elements of Scrum to hundreds of developers, the Large-Scale Scrum (LeSS) framework helps extend the rules and guidelines without losing the core of Scrum. Its principles are taken directly from Scrum; however, LeSS focuses on scaling up without adding additional overhead (like more roles, artifacts, or processes).
     
  • Timeboxing: A timebox is a set period of time during which a team works towards completing a goal. Instead of letting a team work until the goal is reached, the timebox approach stops work when the time limit is reached. Time-boxed iterations are often used in Scrum and Extreme Programming.
     
  • Icebox: Any user stories that are recorded but not moved to development are stored in the icebox.
    The term “icebox” was created by Pivotal Tracker, an Agile project management tool. 
     
  • Scrum vs RUP: While both Scrum and Rational Unified Process (RUP) follow the Agile framework, RUP involves more formal definition of scope, major milestones, and specific dates (Scrum uses a project backlog instead of scope). In addition, RUP involves four major phases of the project lifecycle (inception, elaboration, construction, and transition), whereas Scrum dictates that the whole “traditional lifecycle” fits into one iteration. 
     
  • Lean vs Scrum: Scrum is a software development framework, while Lean helps optimize that process. Scrum’s primary focus is on the people, while Lean focuses on the process. Both are considered Agile techniques; however, Lean introduces two major concepts: eliminating waste and improving flow.
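To illustrate the burndown chart mentioned in the list above, the sketch below compares the story points remaining each day against an ideal straight-line burn; all the numbers are invented:

```python
SPRINT_DAYS = 10
TOTAL_POINTS = 30
# Assumed story points still open at the end of each day (day 0 = sprint start).
actual = [30, 28, 27, 24, 20, 19, 15, 10, 6, 2, 0]

for day, remaining in enumerate(actual):
    ideal = TOTAL_POINTS * (1 - day / SPRINT_DAYS)  # straight-line burn
    flag = "behind" if remaining > ideal else "on track"
    print(f"day {day:2d}: remaining={remaining:2d}  ideal={ideal:5.1f}  ({flag})")
```

The "behind"/"on track" flag is the same early-warning signal a plotted burndown chart gives at a glance.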

How to Get Started with Scrum

Working with Scrum often means changing the team’s habits. They need to take more responsibility, increase the quality of the code, and boost speed of delivery. This level of commitment acts as a change agent; as the teams commit to sprint goals, they are more and more motivated to get better and faster to deliver a quality product.

A good place to start with Scrum is to talk about the roles. Every project must have a Scrum Master, Product Owner, and Scrum Team. You may want to talk about who should be the Scrum Master and Product Owner, or if these roles are already assigned, you may want to clarify their roles and responsibilities. 

Depending on how familiar your team is with Scrum, you may also want to look into training sessions. Certified Scrum Coaches and Trainers and Scrum Alliance Registered Education Providers can help your team learn and embrace Scrum.


Associated Kursus: KI142303B
[ Modified: Friday, 23 December 2016, 00:20 ]
 
Anyone in the world

1) Code Simplicity

Code Simplicity is a companion blog to author Max Kanat-Alexander’s application design book Code Simplicity: The Science of Software Development. Max is a software engineer at Google, and the chief architect of the Bugzilla Project – and his blog draws upon this experience to offer advice on simplifying software design. His mantra is ‘Complexity is stupid. Simplicity is smart’ – and after reading the blog, I’m inclined to agree.

Follow on Twitter

2) Joel on Software

In addition to being a former Microsoft program manager, Joel Spolsky is a co-founder of programming Q&A site StackExchange, the man behind software development company Fog Creek Software, and the awesome little browser-based workflow tool Trello. He’s been blogging since 2000, and his site is a goldmine of insight on software dev, management and business.

Follow on Twitter

3) Scott Berkun

Scott Berkun’s eponymous blog is one of the most multi-faceted on this list, offering advice and insight into creativity, leadership and philosophy – alongside his experiences as a manager at giants Microsoft and WordPress. If you’re tired of reading the same old blog content, Scott’s blog offers a plethora of engaging info, all of which is designed to help you become a better person, as well as a better programmer.

Follow on Twitter

4) Coding Horror

Coding Horror is the outlet of seasoned web application developer (and, like Joel Spolsky above, co-founder of StackExchange) Jeff Atwood. The blog tackles all manner of software development and security topics, but it’s Jeff’s interest in the human component of development that makes the blog stand out. As Jeff himself says:

‘In the art of software development, studying code isn't enough; you have to study the people behind the software, too.’

Follow on Twitter

5) Scott Hanselman

Scott Hanselman’s blog tackles the full pantheon of software developer interests, covering technology, code, gadgets, dev culture and the web. As a former professor, and current employee of Microsoft, his hands-on advice is clear, concise and helpful. Unlike many of his contemporaries, Scott’s writing is also bursting with personality. If you’re a fan of Scott’s insight, you can also check out his three podcasts and YouTube channel.

Follow on Twitter

6) /\ndy

Andy Hunt is a prolific author, a co-founder of the Agile Alliance, and part of the team that developed the Agile Manifesto. Andy’s blog tackles a diverse range of development topics, and unsurprisingly, offers some of the most interesting and unique insight into agile development anywhere on the web.

Follow on Twitter

7) Paul Graham’s Essays

Paul Graham was one-half of the duo behind Viaweb, arguably the very first (started in 1995) software as a service company. Since then, he’s gone on to co-found Y Combinator, a start-up incubator that’s funded the likes of Dropbox, Reddit and Airbnb. Paul Graham’s Essays collates his long-form insights into developing SaaS businesses, and provides developers a wonderful insight into their role within the wider business world.

Follow on Twitter

8) Federico Cargnelutti

Federico is a professional mobile and web developer, and regularly blogs around coding (particularly PHP), software architecture and agile development. With a mixture of straight-to-the-point tutorials and, courtesy of his Twitter, a ton of tech news and insight, Federico’s blog is a great read for any software developer.

Follow on Twitter

9) DailyJS

Courtesy of author Alex Young, DailyJS provides exactly what you might expect – daily insights and advice on all things Javascript. The site contains all manner of hands-on tips and worked examples, alongside information on the field’s latest news and developments. For users of Vim, Alex also runs the equally useful usevim blog.

Follow on Twitter

10) David Walsh

David Walsh is Mozilla’s senior web developer, and the core developer for the MooTools Javascript Framework. David’s blog reflects his skills in HTML/5, JS and CSS, and offers a ton of engaging advice and insight into front-end technologies. Even more obvious is his passion for open source contribution and trial-and-error development, making his blog one of the most honest and engaging around.

Follow on Twitter

11) Pontikis

Pontikis is a blog of two halves, offering the latest in web technology, business and news, alongside a plethora of how-tos and guides. Author Christos Pontikis offers seriously in-depth instructions on all manner of languages and frameworks, with his expert insights into PHP, jQuery and MySQL a serious incentive for any knowledge-hungry developer.

Follow on Twitter

12) Six Revisions

Six Revisions is a blog resource for web developers and designers, offering hands-on tutorials, news and advice for anyone involved in website and web app development. Alongside some great commentary on all things HTML, CSS and JavaScript, the site offers excellent guidance on UX and UI design.

Follow on Twitter

13) Web Appers

WebAppers dedicates itself to sourcing and collating free open-source tools and resources, with the professional web dev and web designer in mind. In addition to a pantheon of almost 700 plugins, the blog shares a ton of actionable guidance and helpful advice, with a view to helping web developers use the tools in the most beneficial way possible.

Follow on Twitter

14) Ajaxian

Despite the name, Ajaxian offers a ton of engaging, insightful advice on a huge range of development topics, covering everything from .Net development to XML. Unsurprisingly, some of the best insights look at Javascript and AJAX - but with contributions coming from a core team of 12 developers (including devs with decades of professional experience working for industry giants like Google), the site is a must-read resource for any software developer.

Follow on Twitter

15) ProgrammableWeb

Since its inception in 2005, ProgrammableWeb has been at the forefront of the evolving API economy. It offers a staggering amount of hands-on content, and manages to maintain its quality across an incredible publication schedule ranging as high as 10 posts per day. In addition to its fantastic blog content, ProgrammableWeb has a huge directory of APIs for web and mobile development, and a plethora of whitepapers and research.

Follow on Twitter

16) Martin Fowler

Software developer Martin Fowler is a prolific author (having penned no less than seven programming books), and an even more prolific blogger. He writes primarily around agile, refactoring and project delivery – with a particular focus on the design of software systems, and ways to maximise the productivity of development. Whilst the blog is a great resource for all types of developer, it should be of special interest to those managing a development team.

Follow on Twitter

17) Eric Sink

Eric Sink is a software developer at SourceGear – but prior to his current role, he served as project lead for the browser development team that prototyped a little-known browser called ‘Internet Explorer’. Since then, Eric has been blogging consistently around software development, with his advice, news roundups and opinions stretching all the way back to 2001.

Follow on Twitter

18) The Daily WTF

If you’re looking to break up the monotony of personal development, The Daily WTF should provide ample relief. The site pairs genuinely helpful development insights with an awesome sense of humour, creating a blog that’s as funny to read as it is useful. The site has a particular focus on how-not-to guides, and the disastrous development stories it shares will easily consume your lunch break.

Follow on Twitter

19) UIE Brainsparks

User Interface Engineering is a research and training company focused on web and application usability. Its Brainsparks blog is an industry-leading resource, covering all aspects of UI and UX development – with founder Jared Spool offering his expert insight on a weekly basis. In addition to the blog, UIE offers podcasts, long-form articles, events and seminars for devs interested in improving their UI skills.

Follow on Twitter

20) PragDave

Programmer turned publisher Dave Thomas blogs and tweets about all manner of development news and advice. Alongside tutorials, guides and opinions, Dave has developed his own Zen-like approach to the art of coding – creating the martial arts inspired CodeKata to help developers change their attitude to coding, and develop an always-learning mindset.

Follow on Twitter

Associated Kursus: KI142303B
 
Anyone in the world

 

Open philosophies

Open development breaks the data center down into its lowest-level components, which fit together by open standards. Still, with less than 2% of enterprise applications designed for horizontal scaling, enterprise IT should avoid lifting legacy apps onto open infrastructure.

Instead, put new workloads on building-block infrastructure, and renegotiate your hardware contracts to get ready for more open-standard hardware and software.

Automation

The problem, however, is that IT administrators love scripts. They love creating the best scripts, fiddling with scripts that come from colleagues, and leaving little documentation when they move on to another job. IT automation must evolve from scripting to deterministic design (defined workloads for tasks) and then to heuristic design (automation based on data fed in from operations). There are banks today that use heuristic automation because they have all the hardware you could want, Govekar said, but they lack the ability to automatically place workloads where they run best at any given moment.

Software-defined everything

The control plane is abstracted from the hardware, and this is happening with every piece of equipment a data center can buy. Software-defined servers are established, software-defined networking is maturing, and software-defined storage won't have much impact until at least 2017, Govekar said.

Don't approach software-defined everything as a cost-saving venture, because the real point is agility. Avoid vendor lock-in in this turbulent vendor space, and look for interoperable application programming interfaces that enable data-center-wide abstraction. Also, keep in mind that the legacy data center won't die without a fight.

Big data

Big data analysis is used in a number of ways to solve problems today. For example, police departments reduce crime without blanketing the city with patrol cars, by pinpointing likely crime hot spots at a given point in time based on real-time and historical data.

Build new data architectures to handle unstructured data and real-time input, which are disruptive changes today. The biggest inhibitor to enterprise IT adoption of big data analytics, however, isn't the data architecture; it's a lack of big data skills.

Internet of Everything

Is IT in charge of the coffee pot? If it has an IP address and connects to the network, it might be.

Internet-connected device proliferation combined with big data analytics means that businesses can automate and refine their operations. It also means security takes on a whole new range of endpoints. In data center capacity management, the Internet of Everything means demand shaping and customer priority tiering, rather than simply buying more hardware.

Build a data center that can change; don't build one to last, Govekar said.

Webscale IT

For better or worse, business leaders want to know why you can't do what Google, Facebook and Amazon do.

Conventional hardware and software are not built for webscale IT, which means this trend relies on software-defined everything and open philosophies like the Open Compute Project. It also relies on a major attitude adjustment in IT where experimentation and failure are allowed.

Mobility

Your workforce is mobile. Your company's customers are mobile. Bring your own device has morphed into bring your own toys. The IT service desk can't fall behind this trend and risk giving IT a reputation of being out of touch.

Bring data segregation -- personal and business data and applications isolated from each other on the same device -- onto your technology roadmap now.

Bimodal IT

No one's congratulating IT on keeping the lights on and the servers humming, no matter how difficult it can be. Bimodal IT means maintaining traditional IT practices while simultaneously introducing innovative new processes -- safely.

Take the pace layering concept from application development and apply it to IT's roadmap, and find ways to get close to customers. Bimodal IT will make your team more diverse.

Business value dashboards

By 2017, the majority of infrastructure and operations teams will use dashboards to communicate with the outside world. Govekar made the analogy of business-value dashboards vs. IT metrics to cruise ship reviews vs. cruise ship boiler calibration reports: they serve different purposes.

Organizational disruption

All the trends above feed shadow IT, where the business units steer around IT to gain agility.

Some IT teams are trying a new approach: rather than quashing all shadow IT operations they find, these companies allow business users to set up shadow IT for projects and track performance as in a proof-of-concept trial. If the deployment succeeds, IT formally folds the shadow IT into the organization.

 

Associated Kursus: KI142303B
[ Modified: Thursday, 22 December 2016, 23:44 ]
 