Site blog

Anyone in the world


Open philosophies

Open development breaks the data center down into its lowest-level components, which fit together by open standards. Still, with less than 2% of enterprise applications designed for horizontal scaling, enterprise IT should avoid lifting legacy apps onto open infrastructure.

Instead, put new workloads on building-block infrastructure, and renegotiate your hardware contracts to get ready for more open-standard hardware and software.


The problem, however, is that IT administrators love scripts. They love creating the best scripts, fiddling with scripts that come from colleagues, and leaving little documentation when they move on to another job. IT automation must evolve from scripting to deterministic design (defined workflows for tasks) and then to heuristic design (automation based on data fed in from operations). There are banks today that could use heuristic automation because they have all the hardware you could want, Govekar said. But they lack the ability to automatically place workloads where they run best at any given moment.

Software-defined everything

The control plane is abstracted from the hardware, and this is happening with every piece of equipment a data center can buy. Software-defined servers are established, software-defined networking is maturing, and software-defined storage won't have much impact until at least 2017, Govekar said.

Don't approach software-defined everything as a cost-saving venture, because the real point is agility. Avoid vendor lock-in in this turbulent vendor space, and look for interoperable application programming interfaces that enable data-center-wide abstraction. Also, keep in mind that the legacy data center won't die without a fight.

Big data

Big data analysis is used in a number of ways to solve problems today. For example, police departments reduce crime without blanketing the city with patrol cars, by pinpointing likely crime hot spots at a given point in time based on real-time and historical data.

Build new data architectures to handle unstructured data and real-time input, which are disruptive changes today. The biggest inhibitor to enterprise IT adoption of big data analytics, however, isn't the data architecture; it's a lack of big data skills.

Internet of Everything

Is IT in charge of the coffee pot? If it has an IP address and connects to the network, it might be.

Internet-connected device proliferation combined with big data analytics means that businesses can automate and refine their operations. It also means security takes on a whole new range of end points. In data center capacity management, Internet of Everything means demand shaping and customer priority tiering, rather than simply buying more hardware.

Build a data center that can change; don't build one to last, Govekar said.

Webscale IT

For better or worse, business leaders want to know why you can't do what Google, Facebook and Amazon do.

Conventional hardware and software are not built for webscale IT, which means this trend relies on software-defined everything and open philosophies like the Open Compute Project. It also relies on a major attitude adjustment in IT where experimentation and failure are allowed.


Mobility

Your workforce is mobile. Your company's customers are mobile. Bring your own device has morphed into bring your own toys. The IT service desk can't fall behind this trend and risk giving IT a reputation for being out of touch.

Bring data segregation -- personal and business data and applications isolated from each other on the same device -- onto your technology roadmap now.

Bimodal IT

No one's congratulating IT on keeping the lights on and the servers humming, no matter how difficult it can be. Bimodal IT means maintaining traditional IT practices while simultaneously introducing innovative new processes -- safely.

Take the pace layering concept from application development and apply it to IT's roadmap, and find ways to get close to customers. Bimodal IT will make your team more diverse.

Business value dashboards

By 2017, the majority of infrastructure and operations teams will use dashboards to communicate with the outside world. Govekar made the analogy of the business-value dashboard vs. IT metrics to cruise ship reviews vs. cruise ship boiler calibration reports. They serve different purposes.

Organizational disruption

All the trends above feed shadow IT, where the business units steer around IT to gain agility.

Some IT teams are trying a new approach; rather than quash all shadow IT operations they find, these companies allow business users to set up shadow IT for projects and track the performance like a proof-of-concept trial. If the deployment succeeds, IT formally folds shadow IT into the organization.


Associated Course: KI142303B
[ Modified: Thursday, 22 December 2016, 23:44 ]
  • Definitions

DevOps is a new term emerging from the collision of two major related trends. The first was also called “agile system administration” or “agile operations”; it sprang from applying newer Agile and Lean approaches to operations work. The second is a much expanded understanding of the value of collaboration between development and operations staff throughout all stages of the development lifecycle when creating and operating a service, and of how important operations has become in our increasingly service-oriented world.

One definition Jez Humble explained to me is that DevOps is “a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing resilient systems at scale.”

That’s good and meaty, but it may be a little too esoteric and specific to Internet startup types. I believe you can define DevOps more practically: DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.


DevOps means a lot of different things to different people because the discussion around it covers a lot of ground.  People talk about DevOps being “developer and operations collaboration,” or it’s “treating your code as infrastructure,” or it’s “using automation,” or “using kanban,” or “a toolchain approach,” or “culture,” or a variety of seemingly loosely related items.  The best way to define it in depth is to use a parallel method to the definition of a similarly complex term, agile development.  Agile development, according to Wikipedia and the agile manifesto, consists of four different “levels” of concern. I’ve added a fifth, the tooling level – talk about agile and devops can get way too obsessed with tools, but pretending they don’t exist is also unhelpful.

  • Agile Values – Top level philosophy, usually agreed to be embodied in the Agile Manifesto. These are the core values that inform agile.
  • Agile Principles – Generally agreed upon strategic approaches that support these values.  The Agile Manifesto cites a dozen of these more specific principles. You don’t have to buy into all of them to be Agile, but if you don’t subscribe to many of them, you’re probably doing something else.
  • Agile Methods – More specific process implementations of the principles.  XP, Scrum, your own homebrew process – this is where the philosophy gives way to operational playbooks of “how we intend to do this in real life.” None of them are mandatory, just possible implementations.
  • Agile Practices – highly specific tactical techniques that tend to be used in conjunction with agile implementations.  None are required to be agile but many agile implementations have seen value from adopting them. Standups, planning poker, backlogs, CI, all the specific artifacts a developer uses to perform their work.
  • Agile Tools – Specific technical implementations of these practices used by teams to facilitate doing their work according to these methods. JIRA Agile (aka Greenhopper), et al.

The different parts of DevOps that people are talking about map directly to these same levels.

  • DevOps Values – I believe the fundamental DevOps values are effectively captured in the Agile Manifesto – with perhaps one slight emendation to focus on the overall service or software fully delivered to the customer instead of simply “working software.” Some previous definitions of DevOps, like Alex Honor’s “People over Process over Tools,” echo basic Agile Manifesto statements and urge dev+ops collaboration.
  • DevOps Principles – There is not a single agreed upon list, but there are several widely accepted attempts – here’s John Willis coining “CAMS” and here’s James Turnbull giving his own definition at this level. “Infrastructure as code” is a commonly cited DevOps principle. I’ve made a cut at “DevOps’ing” the existing Agile manifesto and principles here. I personally believe that DevOps at the conceptual level is mainly just the widening of Agile’s principles to include systems and operations instead of stopping its concerns at code checkin.
  • DevOps Methods – Some of the methods here are the same; you can use Scrum with operations, Kanban with operations, etc. (although usually with more focus on integrating ops with dev, QA, and product in the product teams). There are some more distinct methods, like Visible Ops-style change control and using the Incident Command System for incident response. The set of these methodologies is growing; a more thoughtful approach to monitoring is an area where common methodologies haven't been well defined, for example.
  • DevOps Practices – Specific techniques used as part of implementing the above concepts and processes. Continuous integration and continuous deployment, “Give your developers a pager and put them on call,” using configuration management, metrics and monitoring schemes, a toolchain approach to tooling… Even using virtualization and cloud computing is a common practice used to accelerate change in the modern infrastructure world.
  • DevOps Tools – Tools you’d use in the commission of these principles. In the DevOps world there’s been an explosion of tools in release (jenkins, travis, teamcity), configuration management (puppet, chef, ansible, cfengine), orchestration (zookeeper, noah, mesos), monitoring, virtualization and containerization (AWS, OpenStack, vagrant, docker) and many more. While, as with Agile, it’s incorrect to say a tool is “a DevOps tool” in the sense that it will magically bring you DevOps, there are certainly specific tools being developed with the express goal of facilitating the above principles, methods, and practices, and a holistic understanding of DevOps should incorporate this layer.
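The "infrastructure as code" principle that runs through these levels can be made concrete with a small sketch. This is not any real tool's API; the resource names and the `apply` function are invented to illustrate the core idea behind tools like Puppet or Ansible: desired state is declared as data, and an idempotent step converges actual state toward it.

```python
# Hypothetical sketch of declarative, idempotent configuration management.
# Desired state is data; apply() converges actual state and is safe to re-run.

def apply(desired, actual):
    """Return the actions needed to converge `actual` onto `desired`."""
    actions = []
    for name, state in desired.items():
        if actual.get(name) != state:
            actions.append((name, state))   # e.g. install/configure/restart
            actual[name] = state            # converge, so re-runs are no-ops
    return actions

desired = {"nginx": "running", "ntp": "running"}
actual = {"nginx": "stopped"}

first = apply(desired, actual)   # two actions: fix nginx, add ntp
second = apply(desired, actual)  # no actions: already converged
```

Running `apply` twice illustrates idempotency, the property that distinguishes this style from ad hoc scripting.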


Associated Course: KI142303B
[ Modified: Thursday, 22 December 2016, 23:29 ]

Mark Zuckerberg's challenge for 2016 was to build a simple AI to run his home -- like Jarvis in Iron Man. His goal was to learn about the state of artificial intelligence -- where we're further along than people realize and where we're still a long way off. These challenges always lead him to learn more than he expected, and this one also gave him a better sense of all the internal technology Facebook engineers get to use, as well as a thorough overview of home automation.
So far this year, he has built a simple AI that he can talk to on his phone and computer, that can control his home, including lights, temperature, appliances, music and security, that learns his tastes and patterns, that can learn new words and concepts, and that can even entertain his daughter Max. It uses several artificial intelligence techniques, including natural language processing, speech recognition, face recognition, and reinforcement learning, written in Python, PHP and Objective C. In this note, he explains what he built and what he learned along the way.


  • Getting Started: Connecting the Home

In some ways, this challenge was easier than he expected. In fact, his running challenge (he also set out to run 365 miles in 2016) took more total time. But one aspect that was much more complicated than he expected was simply connecting and communicating with all of the different systems in his home.
Before he could build any AI, he first needed to write code to connect these systems, which all speak different languages and protocols. He uses a Crestron system for lights, thermostat and doors, a Sonos system with Spotify for music, a Samsung TV, a Nest cam for Max, and of course his work is connected to Facebook's systems. He had to reverse engineer APIs for some of these just to get to the point where he could issue a command from his computer to turn the lights on or get a song to play.
Further, most appliances aren't even connected to the internet yet. It's possible to control some of them using internet-connected power switches that let you turn the power on and off remotely. But often that isn't enough. For example, one thing he learned is that it's hard to find a toaster that will let you push the bread down while it's powered off so you can automatically start toasting when the power goes on. He ended up finding an old toaster from the 1950s and rigging it up with a connected switch. Similarly, he found that connecting a food dispenser for Beast or a grey t-shirt cannon would require hardware modifications to work.
For assistants like Jarvis to be able to control everything in homes for more people, we need more devices to be connected, and the industry needs to develop common APIs and standards for the devices to talk to each other.
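The adapter layer the post describes can be sketched briefly. The class and method names below are invented for illustration (real Crestron or Sonos integration would look quite different); the point is that each system speaks its own protocol, so a thin wrapper maps one common command surface onto device-specific calls.

```python
# Hypothetical stand-ins for two incompatible home systems.
class CrestronLights:
    def send(self, raw):            # stands in for a proprietary protocol
        return f"CRESTRON:{raw}"

class SonosSpeaker:
    def post(self, payload):        # stands in for an HTTP-style API
        return f"SONOS:{payload}"

class Home:
    """Common API: every device is driven through the same do() surface."""
    def __init__(self):
        self.devices = {
            "lights": lambda cmd: CrestronLights().send(cmd.upper()),
            "music":  lambda cmd: SonosSpeaker().post({"action": cmd}),
        }

    def do(self, device, command):
        return self.devices[device](command)

home = Home()
result = home.do("lights", "on")   # one vocabulary, many protocols
```

The AI layer then only ever talks to `Home.do`, which is exactly the kind of common API the post argues the industry needs to standardize.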

  • Natural Language

This was a two-step process: first he made it so he could communicate with it using text messages, and later he added the ability to speak, with his speech translated into text for it to read.
It started simply, by looking for keywords like "bedroom", "lights", and "on" to determine that he was telling it to turn on the lights in the bedroom. It quickly became clear that it needed to learn synonyms, for example that "family room" and "living room" mean the same thing in his home. This meant building a way to teach it new words and concepts.
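The keyword-plus-synonyms approach can be sketched in a few lines. This is a minimal illustration, not Jarvis's actual code; the vocabulary and the `teach` helper are invented.

```python
# Minimal keyword command parser with a learnable synonym table.
SYNONYMS = {"family room": "living room"}

def normalize(text):
    """Lowercase and rewrite known synonyms to their canonical form."""
    text = text.lower()
    for phrase, canonical in SYNONYMS.items():
        text = text.replace(phrase, canonical)
    return text

def parse(text):
    """Match keywords to recover (room, device, action) from a command."""
    text = normalize(text)
    rooms = [r for r in ("bedroom", "living room") if r in text]
    action = "on" if "on" in text else "off" if "off" in text else None
    return (rooms[0] if rooms else None, "lights", action)

def teach(phrase, canonical):
    """Teach a new word at runtime, as the post describes."""
    SYNONYMS[phrase] = canonical

cmd = parse("Turn the lights on in the family room")
```

After `teach("lounge", "living room")`, a command mentioning the lounge resolves to the same room, which is the "learn new words and concepts" behavior in miniature.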

  • Voice and Speech

Even though he thinks text will be more important for communicating with AIs than people realize, he still thinks voice will play a very important role too. The most useful aspect of voice is that it's very fast. You don't need to take out your phone, open an app, and start typing -- you just speak.
To enable voice for Jarvis, he needed to build a dedicated Jarvis app that could listen continuously to what he says. The Messenger bot is great for many things, but the friction of using speech is way too much. His dedicated Jarvis app lets him put his phone on a desk and just have it listen. He could also put a number of phones with the Jarvis app around his home so he could talk to Jarvis in any room. That seems similar to Amazon's vision with Echo, but in his experience, it's surprising how frequently he wants to communicate with Jarvis when he's not home, so having the phone be the primary interface rather than a home device seems critical.


  • Facebook Engineering Environment

As the CEO of Facebook, he doesn't get much time to write code in Facebook's internal environment. He has never stopped coding, but these days he mostly builds personal projects like Jarvis. He expected he'd learn a lot about the state of AI this year, but he didn't realize he would also learn so much about what it's like to be an engineer at Facebook. And it's impressive.
His experience of ramping up in the Facebook codebase is probably pretty similar to what most new engineers there go through. He was consistently impressed by how well organized the code is, and how easy it was to find what you're looking for -- whether it's related to face recognition, speech recognition, the Messenger Bot Framework or iOS development. The open source Nuclide packages Facebook has built to work with GitHub's Atom make development much easier. The Buck build system Facebook has developed to build large projects quickly also saved him a lot of time. Facebook's open source FastText AI text classification tool is also a good one to check out, and if you're interested in AI development, the whole Facebook Research GitHub repo is worth taking a look at.
One of Facebook's values is "move fast". That means you should be able to come there and build an app faster than you can anywhere else, including on your own, using its infra and AI tools to build things that would take you a long time to build alone. Building internal tools that make engineering more efficient is important to any technology company, but this is something Facebook takes especially seriously. So he wants to give a shout out to everyone on the infra and tools teams that make this so good.



Associated Course: KI142303B
[ Modified: Thursday, 22 December 2016, 23:32 ]

The Difference Between a QA Engineer and a QA Tester

At startup companies we often hear the terms QA engineer and QA tester. Both roles are very important in maintaining software quality. So what is the difference between a QA engineer and a QA tester?

QA Engineer

A QA engineer focuses more on how well the software development process is carried out than on the final product that comes out of it. The QA engineer's job is to make sure every step of the software development process runs well, in order to safeguard the quality of development. Maintaining the quality of the programming and build/test processes is one of the main focuses of a QA engineer's work.

Some of a QA engineer's responsibilities include:

  • Function – test planning, test design and execution.
  • Prepares test plans, develops test cases and executes tests with a focus on coverage.
  • Engineers quality. Good at answering: what is the quality of the product?
  • Logical thinker, able to resolve issues using abstraction; capable of analysis, prediction and improvement.
  • Can reconcile conflicting constraints.
  • Process and metrics/measurement driven.
  • Cost sensitive.
  • Good for system/functionality testing.
  • Best involved in the complete SDLC cycle.
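The "test planning, design and execution" duty in the list above can be made concrete with a small sketch. The `login` function and its rules are invented for illustration; the point is that the plan is data, execution is uniform, and the report answers the QA engineer's question, "what is the quality of the product?"

```python
# Hypothetical function under test.
def login(user, password):
    return user == "admin" and password == "secret"

# Test plan as data: each case has an id, inputs, and an expected result.
test_plan = [
    {"id": "TC-01", "input": ("admin", "secret"), "expected": True},
    {"id": "TC-02", "input": ("admin", "wrong"),  "expected": False},
    {"id": "TC-03", "input": ("guest", "secret"), "expected": False},
]

def execute(plan):
    """Run every case and record pass/fail per test id."""
    return {case["id"]: login(*case["input"]) == case["expected"]
            for case in plan}

report = execute(test_plan)
passed = sum(report.values())    # quality summary for the whole plan
```

The same structure scales to real test management: the plan becomes a document reviewed before coding starts, and the report becomes the quality metric the process is measured by.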

QA Tester

Unlike a QA engineer, a QA tester focuses more on the quality of the software itself. The testing a QA tester performs focuses mainly on black-box methods. The QA tester's job is to make sure the delivered software matches what the user wants.

Some of a QA tester's responsibilities include:

  • Function – strong in test execution.
  • Writes and executes test cases; may not be coverage driven. Requirements-driven testing.
  • Determines quality. Good at answering: did you find any bugs?
  • Linear thinker, with less emphasis on analysis and on reusing effort and resources.
  • Requires a defined environment; typically weaker at finding solutions in ambiguous or constrained environments.
  • Less process oriented.
  • May not be cost sensitive (time, effort, monetary, etc.).
  • Good for UI testing.
  • Typically involved in the later stages of the SDLC.

The conclusion we can draw is that maintaining software quality requires QA both during the development process (the SDLC) and after the software is finished and ready for release. To meet both needs, the QA role is split into QA engineer and QA tester: the QA engineer focuses on the SDLC itself, while the QA tester focuses on what happens after the SDLC, before delivery to the user.
Because a QA engineer tests at every stage of the SDLC, while a QA tester focuses only on testing the finished application, the QA engineer carries a heavier workload, and the salary is accordingly higher than a QA tester's.

So which would you choose: QA engineer or QA tester?



by THIAR HASBIYA DITANAYA 5116201048 - Thursday, 22 December 2016, 23:12

Software Framework

A software framework is an abstraction used to make building software easier. A framework aligns every developer around the same abstraction, and this shared view helps new developers understand the whole application more quickly. The most popular paradigm used in software frameworks is MVC, which separates an application into three functional parts: the model, the view, and the controller.
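The MVC separation can be sketched without any framework at all. The names below are illustrative, not taken from any particular framework: the model holds data, the view only renders it, and the controller translates user actions into model updates.

```python
class Model:
    """Holds application data; knows nothing about presentation."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

def view(items):
    """Renders the model's state; contains no business logic."""
    return "\n".join(f"- {item}" for item in items)

class Controller:
    """Mediates: turns a user action into a model update, then renders."""
    def __init__(self, model):
        self.model = model

    def handle_add(self, item):
        self.model.add(item)
        return view(self.model.items)

page = Controller(Model()).handle_add("first post")
```

Frameworks like Laravel or Django provide richer versions of exactly these three roles (ORM models, template views, routed controllers), which is why the separation matters more than any one framework's syntax.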

Some commonly used software frameworks include:

  • PHP
    1. Laravel: a full-stack web framework using the MVC paradigm, with a very complete feature set and ORM (Object Relational Mapping) database modeling.
    2. CodeIgniter: a full-stack web framework using the MVC paradigm, with a common feature set; it runs faster than other PHP frameworks.
    3. Lumen: a REST API web framework derived from Laravel; Lumen is the REST API version of Laravel, with a simpler feature set.
    4. Yii: a full-stack web framework using the MVC paradigm, with complete features and some scaffolding.
  • Python
    • Flask: a minimal web framework, used to build web applications or APIs.
    • Django: a full-stack web framework with the MVC paradigm, used to build complete web applications.
  • Ruby
    • Ruby on Rails: a competitor to Laravel with a similarly complete feature set, written in Ruby; its strength is its simple syntax.
  • Node.js
    • Express.js: a minimal web framework built in JavaScript; Express can serve as an API or a full-stack web framework.


Associated Course: KI142303B
by SOLEH ELFRIANTO HARDIYONO 5116201028 - Thursday, 22 December 2016, 22:43

Several definitions of e-learning:

        The use of electronic technology to deliver, support and enhance teaching and learning (Learning Skills Development Agency [LSDA]).
        The use of multimedia technology and the Internet to improve the quality of learning by facilitating access to resources and services as well as collaboration among members (EU).
        Learning that takes place in a way that uses information and communication technology (ICT), i.e. using e-learning (DfES).

Advantages of e-learning include:

        E-moderating facilities allow teachers and students to communicate easily over the internet, regularly or at any time, without being limited by distance, place or time.
        Teachers and students can use structured, scheduled learning materials delivered over the internet.
        Students can study (review) the materials at any time and anywhere, since they are stored on a computer.
        When students need additional information related to the material they are studying, they can look it up on the internet.
        Both teachers and students can hold internet discussions that a large number of participants can join.
        The student's role changes from passive to active.
        It is relatively more efficient, for example for those who live far from a conventional university or school.

Drawbacks of e-learning include:

        Reduced interaction between teachers and students, or even among the students themselves, can slow the formation of values in the teaching and learning process.
        There is a tendency to neglect the academic or social aspects and to encourage the business or commercial ones instead.
        The teaching and learning process tends toward training rather than education.
        The teacher's role changes: having mastered conventional teaching techniques, teachers are now required to master teaching with ICT (Information and Communication Technology).
        Students without strong motivation to learn tend to fail.
        Internet facilities are not available everywhere (a matter of the availability of electricity, telephone lines and computers).
        There is a shortage of people who know about and are skilled in using the internet.
        Mastery of computer languages is limited.

        Computer Based Training (CBT)
        E-learning with one-way communication marked the first appearance of e-learning applications, which ran on a standalone PC or were packaged on CD-ROM. The content consisted of material in written or multimedia form (video and audio) in MOV, MPEG-1 or AVI format. Using the tools provided, users could attempt practice questions without limits on their number or difficulty. Examples:
        - ToolBook (from Asymetrix, now called Click2learn)
        - Authorware (from Macromedia).

        LMS (Learning Management System)
        As internet technology developed around the world, people began to be connected to the internet. Quick access to information became essential, and distance and location were no longer obstacles. This is where the Learning Management System, or LMS, emerged. The rapid growth of LMSs prompted new thinking about how to solve interoperability problems between existing LMSs through standards, such as those issued by the AICC (Aviation Industry CBT Committee), IMS, IEEE LOM, ARIADNE, and so on. An example of such an application is:
        - ATutor, which provides facilities for writing and uploading materials, assignments, question banks, testing and grading, as well as communication facilities (chat, forums and blogs) and other modules (a calendar and photo album).

        Web-based e-learning applications
        LMSs then developed into fully web-based e-learning applications, both for learners and for the administration of teaching and learning. LMSs began to be combined with portal sites, which at the time arguably set the benchmark for information sites, magazines and newspapers worldwide. Content also became richer, combining multimedia, video streaming, and interactive presentation in a wider choice of data formats that were more standard, smaller and more stable. An example of such an application is Dokeos, free software released under the GNU GPL whose development is supported internationally. It can be used as a content management system for education, covering the distribution of course materials, a calendar, learning progress, text/audio/video conversation, test administration, and record keeping. Dokeos's main goal is to be a user-friendly, flexible and easy-to-use system.


Associated Course: KI142303B

Here are some tools commonly used for modeling in software development:

  •     Offline:

        Power Designer
        Star UML
        Microsoft Visio

  •     Online:

        webSequenceDiagrams
        colorcombos
        mockflow

Associated Course: KI142303B
[ Modified: Friday, 23 December 2016, 00:26 ]
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 17:55

In systems engineering and software engineering, requirements analysis covers the work of determining the needs or conditions to be met by a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders. The requirements that result from this analysis must be actionable, measurable, testable, related to identified business needs, and defined to a level of detail sufficient for system design.


Requirements analysis is the first step in establishing a picture of the software that will be produced when a developer carries out a software project. Whether the software turns out well and fits users' needs depends heavily on the success of the requirements analysis. For large software projects, requirements analysis is carried out after the information engineering and software project planning activities.

A good requirements analysis does not necessarily produce good software, but an inaccurate requirements analysis produces software that is useless. Discovering a requirements error at an early stage is far better; a requirements error discovered once coding or testing has begun, or when the project is nearly complete, is a disaster for the software maker, and the cost and time spent become wasted effort.


Three factors must be satisfied when performing this requirements analysis: it must be complete, detailed, and correct. Complete means the analysts have obtained everything the client expects. Detailed means they have gathered thorough information, down to the small things. And all the data from the analysis must be correct: correct according to what the client means, not according to what the analysts think. An anonymous quote often cited on this point is: "I believe you fully understand what I am saying, but I am not sure that what you heard is the same as what I meant."


Requirements analysis consists of five main steps:

1. Problem identification
2. Evaluation and synthesis


Goals of requirements analysis

There are three main goals of the requirements analysis process, which can be formulated as follows:

  1. Manage the results of requirements elicitation to produce a requirements specification document whose entire content matches what the users want (Liu and Yen, 1996).
  2. Develop quality requirements that are adequate and detailed, from which managers can plan realistic project work and technical staff can proceed with design, implementation and testing (Wiegers, 2003).
  3. Build an understanding of the characteristics of the problem domain and of the set of requirements, in order to find a solution.

Requirements engineers achieve these three goals through a series of activity stages.

Stages of Requirements Analysis

Domain understanding. In this stage, the software requirements engineer must learn how the organization operates and what the problems are in the currently running system, focusing on 'what' the problem is. The engineer should not stop at the 'symptoms' of the problem, but find the root cause of the problems in the running system.

Requirements collection. This is the stage of gathering the requirements for the system to be built. It requires intensive interaction with stakeholders, especially end users.

Classification. The requirements gathered in the previous stage are still unstructured, so related requirements are grouped, both by usage class and by kind of requirement, and organized into coherent groups. The engineer also needs to separate the users' needs from their wants.

Conflict resolution. This stage finds and resolves requirements that conflict with one another.

Prioritisation. In this stage the engineer interacts with stakeholders to identify the priority of each requirement, so that the organization's available resources are allocated to implementing the stakeholders' most important requirements.

Requirements checking. The set of requirements from the previous stages is analyzed to verify and validate it for completeness, consistency, and real need.
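The classification and prioritisation stages can be sketched as simple data transformations. The requirement records and the MoSCoW-style priority labels below are invented for illustration; real requirements tooling would carry far more attributes.

```python
# Hypothetical requirements after elicitation, still a flat list.
requirements = [
    {"id": "R1", "group": "security", "priority": "must",  "text": "login"},
    {"id": "R2", "group": "ui",       "priority": "could", "text": "dark mode"},
    {"id": "R3", "group": "security", "priority": "must",  "text": "audit log"},
]

def classify(reqs):
    """Classification: organize requirements into coherent groups."""
    groups = {}
    for r in reqs:
        groups.setdefault(r["group"], []).append(r["id"])
    return groups

def prioritise(reqs):
    """Prioritisation: order requirements so resources go to 'must' items first."""
    order = {"must": 0, "should": 1, "could": 2, "wont": 3}
    return [r["id"] for r in sorted(reqs, key=lambda r: order[r["priority"]])]

grouped = classify(requirements)
ranked = prioritise(requirements)
```

Even at this toy scale, the two functions mirror the stages above: grouping makes the unstructured list coherent, and ranking makes the allocation of scarce resources explicit.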

In requirements engineering, good requirements analysis should concentrate on the problem domain rather than the solution domain. Its main goal is to reach an understanding of the nature of the problem domain and the problems within it. Only then does requirements analysis move on to the specification (services, attributes, properties, qualities, constraints) of the solution system to be built.

The purpose of analysis is to model the real-world problem so that it can be understood; the problem must be understood and studied before the software requirements specification can be expressed. The goal of this activity is to determine the product space and the users who will use the product. A good analysis reveals the important aspects of the problem and ignores the unimportant ones.
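
As a sketch, the Classification and Prioritisation stages above can be expressed as a small data model. Everything here — the `Requirement` fields, the numeric priority scale, and the one-requirement-per-unit-of-budget assumption — is an illustrative assumption, not part of any requirements-engineering standard.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One elicited requirement (fields are illustrative)."""
    id: str
    text: str
    kind: str      # e.g. "functional" or "quality"
    priority: int  # 1 = highest priority ... 4 = lowest

def classify(reqs):
    """Group related requirements by kind (the Classification stage)."""
    groups = {}
    for r in reqs:
        groups.setdefault(r.kind, []).append(r)
    return groups

def prioritise(reqs, budget):
    """Allocate a limited budget to the highest-priority requirements
    first (the Prioritisation stage); each requirement costs one unit."""
    return sorted(reqs, key=lambda r: r.priority)[:budget]

reqs = [
    Requirement("R1", "User can log in", "functional", 1),
    Requirement("R2", "Response under 2 s", "quality", 2),
    Requirement("R3", "Dark mode", "functional", 4),
]
print([r.id for r in prioritise(reqs, 2)])  # → ['R1', 'R2']
```

With only two units of budget, the low-priority requirement R3 is deferred — which is exactly the point of prioritising before allocating resources.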



Associated Kursus: KI142303B
by OZZY SECIO RIZA 5116201030 - Thursday, 22 December 2016, 17:54

White-box testing is a verification technique software engineers can use to examine whether their code works as expected. White-box testing is testing that takes into account the internal mechanism of a system or component (IEEE, 1990). White-box testing is also known as structural testing, clear box testing, and glass box testing (Beizer, 1995). The connotations of “clear box” and “glass box” appropriately indicate that you have full visibility of the internal workings of the software product, specifically, the logic and the structure of the code.

Using the white-box testing techniques outlined in this chapter, a software engineer can design test cases that (1) exercise independent paths within a module or unit; (2) exercise logical decisions on both their true and false side; (3) execute loops at their boundaries and within their operational bounds; and (4) exercise internal data structures to ensure their validity (Pressman, 2001).
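
As an illustration of points (1) through (4), consider the hypothetical `count_positives` function below: it contains one decision and one loop, so white-box cases can exercise the decision on both its true and false sides and the loop at zero, one, and many iterations. The function and its test values are made up for this sketch.

```python
def count_positives(values):
    """Return how many numbers in `values` are greater than zero."""
    count = 0
    for v in values:        # loop under test
        if v > 0:           # decision under test
            count += 1
    return count

# Loop boundary cases: zero, one, and many iterations.
assert count_positives([]) == 0
assert count_positives([5]) == 1
assert count_positives([1, 2, 3]) == 3
# Decision cases: true side and false side of `v > 0`.
assert count_positives([7]) == 1
assert count_positives([-7, 0]) == 0
```

Note that the tester could only choose these inputs by looking at the code: the boundary value 0 in the last case exists precisely because the decision uses a strict `>`.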

There are six basic types of testing: unit, integration, function/system, acceptance, regression, and beta. White-box testing is used for three of these six types:

  • Unit testing, which is testing of individual hardware or software units or groups of related units (IEEE, 1990). A unit is a software component that cannot be subdivided into other components (IEEE, 1990). Software engineers write white-box test cases to examine whether the unit is coded correctly. Unit testing is important for ensuring the code is solid before it is integrated with other code. Once the code is integrated into the code base, the cause of an observed failure is more difficult to find. Also, since the software engineer writes and runs unit tests him or herself, companies often do not track the unit test failures that are observed, making these types of defects the most “private” to the software engineer. We all prefer to find our own mistakes and to have the opportunity to fix them without others knowing. Approximately 65% of all bugs can be caught in unit testing (Beizer, 1990).
  • Integration testing, which is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them (IEEE, 1990). Test cases are written which explicitly examine the interfaces between the various units. These test cases can be black box test cases, whereby the tester understands that a test case requires multiple program units to interact. Alternatively, white-box test cases are written which explicitly exercise the interfaces that are known to the tester.
  • Regression testing, which is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (IEEE, 1990). As with integration testing, regression testing can be done via black-box test cases, white-box test cases, or a combination of the two. White-box unit and integration test cases can be saved and rerun as part of regression testing.
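
One common way to keep white-box test cases saved and rerunnable for regression is to write them with a test framework. The sketch below uses Python's built-in `unittest`; the `simple_interest` function is a hypothetical unit under test invented for this example, not something from the text above.

```python
import unittest

def simple_interest(principal, rate, years):
    """Unit under test; its internal error branch is known to the tester."""
    if years < 0:
        raise ValueError("years must be non-negative")
    return principal * rate * years

class InterestRegressionTests(unittest.TestCase):
    """Saved white-box cases, rerun after every change to the unit."""

    def test_normal_path(self):
        self.assertEqual(simple_interest(1000, 0.05, 2), 100.0)

    def test_error_branch(self):
        # Exercises the internal `years < 0` branch directly.
        with self.assertRaises(ValueError):
            simple_interest(1000, 0.05, -1)

# Rerunning the saved suite after a modification is the essence
# of regression testing.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(InterestRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is stored alongside the code, a change that breaks the error branch is caught the next time the suite runs, not weeks later in system testing.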

White Box Testing Techniques:

  • Statement Coverage - This technique is aimed at exercising all programming statements with minimal tests.
  • Branch Coverage - This technique is running a series of tests to ensure that all branches are tested at least once.
  • Path Coverage - This technique corresponds to testing all possible paths which means that each statement and branch is covered.
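
The difference between these coverage levels shows up even on a tiny hypothetical function: a single test with a negative input executes every statement (full statement coverage) yet never takes the false side of the branch, so branch coverage demands a second test.

```python
def absolute(x):
    """Return |x|; one decision, hence two paths."""
    result = x
    if x < 0:
        result = -x
    return result

# Statement coverage: this one case alone executes every statement.
assert absolute(-3) == 3
# Branch coverage additionally requires the false side of `x < 0`:
assert absolute(4) == 4
# With a single decision there are only two paths, so here
# path coverage coincides with branch coverage.
```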

Advantages of White Box Testing:

  • Forces test developer to reason carefully about implementation.
  • Reveals errors in "hidden" code.
  • Spots the Dead Code or other issues with respect to best programming practices.

Disadvantages of White Box Testing:

  • Expensive as one has to spend both time and money to perform white box testing.
  • Every possibility that few lines of code are missed accidentally.
  • In-depth knowledge about the programming language is necessary to perform white box testing.



Laurie Williams, “White-Box Testing,” 2006.


Often software engineering projects and products are not precise about the targets that should be achieved. Software requirements are stated, but the marginal value of adding a bit more functionality cannot be measured. The result could be late delivery or too-high cost. The “good enough” principle relates marginal value to marginal cost and provides guidance to determine criteria when a deliverable is “good enough” to be delivered.

These criteria depend on business objectives and on prioritization of different alternatives, such as ranking software requirements, measurable quality attributes, or relating schedule to product content and cost. The RACE principle (reduce accidents and control essence) is a popular rule towards good enough software. Accidents imply unnecessary overheads such as gold-plating and rework due to late defect removal or too many requirements changes. Essence is what customers pay for. Software engineering economics provides the mechanisms to define criteria that determine when a deliverable is “good enough” to be delivered. It also highlights that both words are relevant: “good” and “enough.” Insufficient quality or insufficient quantity is not good enough.

Agile methods are examples of “good enough” development that try to optimize value by reducing the overhead of delayed rework and the gold plating that results from adding features that have low marginal value for the users (see Agile Methods in the Software Engineering Models and Methods KA and Software Life Cycle Models in the Software Engineering Process KA). In agile methods, detailed planning and lengthy development phases are replaced by incremental planning and frequent delivery of small increments of a deliverable product that is tested and evaluated by user representatives.

The five key process ideas (KPIs) of good enough software:

1. Utilitarian Strategy

The utilitarian strategy applies to problems, projects, and products. The term is one that I've coined out of necessity (or possibly ignorance, as I just haven't found a suitable alternative). It refers to the art of qualitatively analyzing and maximizing net positive consequences in an ambiguous situation. It encompasses ideas from systems thinking, risk management, economics, decision theory, game theory, control theory, and fuzzy logic.

2. Evolutionary strategy

An evolutionary strategy, applied either to problems, projects, or products, alternates observation with action to effect ongoing improvement. On the project level, this means ongoing process education, experimentation and adjustment, rather than clinging to a notion of the One Right Way to develop software.

On the problem level, it means keeping track of history, and learning about failure and success over time. Here are some of the elements of using the evolutionary approach:

  • Don't even try to plan everything up front.
  • Converge on good enough in successive, self-contained stages.
  • Integrate early and often.
  • Encourage disciplined evolution of feature set and schedule over the course of the project.
  • Salvage, reuse, or purchase components where feasible.
  • Record and review your experience.

3. Heroic Teams

For some reason, the most fundamental key to good enough development also seems to be the most controversial. There is a strong disdain, among many methodologists, for the very word "hero". I'm not sure why that is, since evidence supporting the role of heroes in computing is just a shade less compelling than evidence supporting the role of electricity. I think it's because there are several definitions of hero.

4. Dynamic Infrastructure

Dynamic infrastructure means that the company rapidly responds to the needs of the project. It backs up responsibility with authority and resources. Dynamic infrastructure provides life support for the other four key process ideas. Some of its elements are:

  • Upper management pays attention to projects.
  • Upper management pays attention to the market.
  • The organization identifies and resolves conflicts between projects.
  • In conflicts between projects and organizational bureaucracy, projects win.
  • Project experience is incorporated into the organizational memory.

5. Dynamic Processes

Three other important dynamic process attributes are portability, scalability, and durability. Portability is how the process lends itself to being carried into meetings, shared with others, and applied to new problems. Scalability is how readily the process may be expanded or contracted in scope. A highly scalable process is one that can be operated by one person, manually, or by a hundred people, with tool support, without dramatic redesign. Durability is how well the process tolerates neglect and misuse.
