Tuesday, August 19, 2014

How does Test Driven Development benefit developers?

Test driven development (TDD) as a process has proven to be a boon for developers time and again. Businesses change rapidly, and so do their requirements from the software they use. Software built with traditional development methodologies tends to become harder to maintain as those requirements change, and modifying it can break things in unpredictable ways. Because of this risk, organizations often avoid changing existing software altogether, even when the change would improve their productivity and effectiveness. This is far less of a problem for products developed using TDD, because every piece of behaviour is backed by tests that are run continuously, much like in a continuous integration model. It then becomes easy for organizations to modify their products without fearing a breakdown.
The first benefit of TDD to developers is that the software stays easy to maintain because it remains extensible and flexible. Since testing and development go hand in hand at the lowest level, every logical piece of the code is tested and can therefore be changed safely. Once development is done, the application is verified again by the full suite, often thousands of tests. After making a change, the associated tests are run to see whether any other part of the application has been affected. With this approach, modifying existing legacy applications ceases to be a problem. Apart from benefiting developers, this helps organizations seeking growth by making it easier for them to update their systems.
The codebase is streamlined and the test coverage is unparalleled. In TDD, writing a test before writing code is mandatory, which is how that coverage is achieved. Further, regression testing and refactoring keep the code as minimal and economical as possible, which plays a big role in streamlining the codebase. If there is no use case for a piece of functionality, no test is written and therefore no code, so the codebase does not grow needlessly. This is another reason maintenance stays easy.
TDD also produces clean interfaces throughout the development process. Since the tests are written first, the resulting APIs are shaped from the perspective of an API user, which makes them much easier to use than APIs designed purely from the implementer's perspective.
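To give a rough idea of what this looks like in practice, here is a minimal sketch in Python's built-in unittest framework. The ReportGenerator class and its summary() method are names invented for this illustration; the point is simply that the test is written the way a caller would want to use the API, before the class even exists.

```python
import unittest

# Written first, from the caller's point of view: this is the API we wish existed.
class ReportGeneratorTest(unittest.TestCase):
    def test_summary_totals_orders_per_customer(self):
        generator = ReportGenerator(orders=[("alice", 30), ("bob", 20), ("alice", 15)])
        self.assertEqual(generator.summary(), {"alice": 45, "bob": 20})

# Written second, with just enough behaviour to satisfy the test above.
class ReportGenerator:
    def __init__(self, orders):
        self.orders = orders

    def summary(self):
        totals = {}
        for customer, amount in self.orders:
            totals[customer] = totals.get(customer, 0) + amount
        return totals

if __name__ == "__main__":
    unittest.main()
```

Because the test came first, summary() ends up returning a plain dictionary the caller can use directly, rather than whatever internal structure the implementer happened to have at hand.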
The refactoring step is central to the success of TDD; it keeps the codebase strong and prevents the software from becoming outdated and monolithic.

TDD aims at improving the code and is particularly useful in the following situations:
> Adding a new feature or functionality: With TDD the programmer can confidently change part of a large application (without tests the programmer might still feel confident, but that feeling is rarely shared by other stakeholders). If the code were not this flexible, new functionality would end up bolted onto the application without proper integration, which would certainly cause problems.
> Changes in the technical infrastructure: Developers are always looking to make technical changes to the code to increase its elegance and make it more extensible, and the tests make such changes safe.
The use cases in TDD are themselves tests, so other developers can read them as examples of how the code works and how it is meant to be used. TDD thus provides executable documentation, and it is what makes updating the software possible without painstaking effort.
An organization can be successful only if it embraces change and keeps improving. Test driven development makes all this possible through the extensibility, maintainability and flexibility it brings.


Saturday, August 16, 2014

Test Driven Development - Some benefits

According to one study, test driven development means writing more tests, and programmers who write more tests tend to be more productive. The study's hypotheses regarding code quality and a more direct relation between TDD and productivity were inconclusive. Programmers using pure TDD on projects reported that they rarely felt the need to invoke the debugger. When TDD is used together with a version control system, code that fails unexpectedly can simply be reverted to the last version that passed all the tests; reverting often proves more productive than debugging.
TDD offers more than just correctness validation; it can also drive the design process. Because the focus shifts to the test cases first, the programmer sees how clients will actually use the functionality, and so pays more attention to the interface. This benefit is complementary to design by contract, since the code is approached through test cases rather than through mathematical assertions or preconceptions.
TDD lets you program in steps as small as you are comfortable with, so you can concentrate on the task at hand, where the only goal is to make the code pass the test. Exception handling and other error cases are usually not the initial concern; tests that create these extraneous situations are implemented separately, in their own cycle. In this way it is ensured that each part of the code is covered by at least one test, which boosts the programmer's and the users' confidence in the software.
Even though programming with TDD clearly requires writing more code overall than other techniques, total implementation time can be shorter according to a model by Müller and Padberg. Defects are caught early because of the frequent testing throughout the development cycle, which prevents errors from turning into expensive, endemic bugs and shortens the debugging phase. Code produced with test driven development is also more extensible, flexible and modular, because the methodology forces programmers to think in terms of smaller units. The result is cleaner, more focused and loosely coupled code.
The modularization of the code is also influenced to some extent by the use of mock objects, because that pattern requires modules to be written so that it is easy to switch between the real and test versions of a dependency for deployment and testing. And since no more code is written than is needed to pass a failing test, the automated tests tend to cover every code path. For example, if the programmer wants to add an else branch to an existing if statement, they must first write a failing test that exercises that branch. As a result, the tests produced in TDD are quite thorough, and even the most unexpected changes are detected.
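As a minimal sketch of that switchable style (the PriceService and FakeRateSource names are invented here, not taken from any real project), the dependency is injected through the constructor, so the test substitutes a fake while the deployed code passes in the real client:

```python
import unittest

class PriceService:
    """Computes a total; the rate source is injected so it can be swapped per environment."""
    def __init__(self, rate_source):
        self.rate_source = rate_source

    def total_in_eur(self, amount_usd):
        return round(amount_usd * self.rate_source.usd_to_eur(), 2)

class FakeRateSource:
    """Test double standing in for a real client that would call a remote exchange-rate API."""
    def usd_to_eur(self):
        return 0.9

class PriceServiceTest(unittest.TestCase):
    def test_total_uses_the_injected_rate(self):
        service = PriceService(rate_source=FakeRateSource())
        self.assertEqual(service.total_in_eur(100), 90.0)

if __name__ == "__main__":
    unittest.main()
```

In deployment the same PriceService is constructed with the real rate source, which is exactly the easy switching between versions that the mock object pattern encourages.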
Experiments have also indicated that the TDD approach is superior to the traditional test-last approach in achieving lower coupling between objects (CBO), and that it results in better modularization, easier testing and easier reuse of the developed products. The effect of TDD on the unit tests it generates was measured using the mutation score indicator (MSI) and branch coverage (BC), which indicate the effectiveness and thoroughness of the tests. The mean effect size in the meta-analysis of these experiments represents a medium effect; for branch coverage the effect size was likewise medium and was therefore taken to be substantive.


Tuesday, August 12, 2014

Test Driven Development - The Process

Continuing from the previous post on the basics of TDD (link), this post looks at the process of Test Driven Development.

What is the process of Test Driven Development?
For test driven development to work well on a software artifact, its units should be kept small. By units we mean a group or class of related functions, sometimes also referred to as modules. There are a couple of benefits to using small units:
> The debugging effort is reduced: when a test failure is detected, smaller units make it easier to track down the fault.
> Tests are often self-documenting: small tests are easier to read and understand.

TDD can be extended into ATDD (acceptance test driven development) by combining it with more advanced practices. The criteria specified by the customer are converted into acceptance tests, which are then used to drive the traditional unit TDD (UTDD) process. This ensures there is an automated mechanism the customers can use to decide whether their requirements have been met. ATDD also gives the development team a fixed target, the acceptance tests, which keeps them steadily focused on the customers' requirements. Now let us examine the TDD cycle itself, which consists of the following phases (a small worked example in Python follows the list):

1. Adding a test: Every new feature begins with writing a test, which must fail because it is written before the feature is implemented. If this test passes, then either the test is defective or the feature has already been added to the software. Before writing the test, the developer must fully understand the requirements and specification, typically through user stories and use cases. This step puts the developer's focus on the requirements before they begin writing the code, which makes a subtle yet important difference.
2. Running all tests and checking that the new one fails: This step validates that the test harness is working correctly. It also tests the test itself, ruling out the possibility that the new test would always pass, and so increases confidence in the software.
3. Writing just enough code to make the test pass: The code written at this step need not be perfect, as later steps will improve it. It is written only to pass the test.
4. Running the tests again: If all the tests pass, the programmer can be confident that the code meets the tested requirements.
5. Refactoring the code: The code is now cleaned up as required. This involves moving code to where it logically belongs, removing any duplication, making sure function and variable names properly represent their current purpose, and clarifying any constructs that could be misinterpreted. The tests are then re-run to make sure the refactoring has not changed any other functionality.
6. Repeat: A new test is then written for the next piece of functionality. The steps should be kept small; as a rule of thumb there should be no more than about ten edits between test runs. If the new code fails the new tests, the changes should be undone rather than debugged excessively; using continuous integration makes it easy to maintain revertible checkpoints. When external libraries are being used, the increments should not be so small that they are effectively just testing the library itself, unless there is reason to believe the library is buggy or insufficient. This cycle continues until all the functionality has been implemented and tested.
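To make the cycle concrete, here is a small worked example of one pass through it, using an invented leading_digits() helper (the name and behaviour are made up for this sketch; only the red/green/refactor rhythm matters):

```python
import unittest

# Step 1 (red): the tests are written first and fail, because leading_digits() does not exist yet.
class LeadingDigitsTest(unittest.TestCase):
    def test_keeps_only_the_leading_digits(self):
        self.assertEqual(leading_digits("42nd street"), "42")

    def test_returns_empty_string_when_there_is_no_leading_digit(self):
        self.assertEqual(leading_digits("street 42"), "")

# Step 3 (green): just enough code is written to make the failing tests pass.
def leading_digits(text):
    digits = []
    for ch in text:
        if not ch.isdigit():
            break
        digits.append(ch)
    return "".join(digits)

# Step 5 (refactor): with the tests green, the loop could later be rewritten more
# elegantly (for example with itertools.takewhile) and the tests re-run to confirm
# that nothing else changed.

if __name__ == "__main__":
    unittest.main()
```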


Monday, August 11, 2014

What is Test Driven Development?

Test driven development (TDD for short) is a software development process that relies on the repetition of very short development cycles, much shorter than those followed in other development processes. First the developer writes an automated test case, which initially fails, defining a new function or a required improvement. Next, the minimum amount of code needed to pass that test is produced. Lastly, this code is refactored to bring it up to acceptable standards. Kent Beck, who is credited with having developed (or rediscovered) the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.
Test driven development is related to the test-first programming concepts of extreme programming, but lately it has attracted interest in its own right. The idea behind TDD is also applied to the debugging and improvement of legacy code produced with older methods. TDD can be used in a number of ways; for instance, principles associated with it such as YAGNI (you aren't gonna need it) and KISS (keep it simple, stupid) are used in a more general context too. Because the focus is on producing only the code necessary to pass the test, the resulting code is much cleaner and clearer than code developed by other methods. Another famous principle of TDD is "fake it till you make it": to achieve an advanced design concept such as a design pattern, the tests that would force that design are written first. The code may be simpler than the final target code but still passes all the tests. This may seem counter-intuitive at first, but on the positive side it lets the developer focus on what is important right now.
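A tiny sketch of "fake it till you make it", using an invented roman() converter: the first implementation is an obvious fake that hard-codes the answer to the only test written so far, and it is generalised only when further tests force it to be.

```python
import unittest

class RomanTest(unittest.TestCase):
    def test_one(self):
        self.assertEqual(roman(1), "I")

# The fake: wrong in general, but it passes every test that exists so far.
def roman(number):
    return "I"

# Later tests such as roman(2) == "II" and roman(4) == "IV" are added one at a time,
# and the fake is replaced by a real algorithm only when those tests demand it.

if __name__ == "__main__":
    unittest.main()
```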

The first thing to do is to write the tests, even before the functionality they are meant to test. This approach has a number of benefits:
- It helps ensure that the application is written to be testable, since developers have to consider how to test it from the outset rather than worrying about it later.
- It makes sure that for every feature there is a test.
- It forces an early understanding of the product requirements, which makes the test code more effective and keeps attention continually focused on product quality.

By contrast, a feature-first mindset in development organizations tends to push the developer on to the next feature, and full testing gets neglected. Note that the first test may not even compile, because the classes and functions it requires may not yet exist; even so, that first test acts as an executable specification.
Each test case is made to fail first, to ensure that the test is really working and is able to catch a fault. Once this is established, the underlying functionality is implemented. This rhythm is called the test driven development mantra, also known as red/green/refactor, where red stands for failing and green for passing. In TDD this process of adding a failing test, making it pass, and refactoring is repeated continuously. When the programmer gets the expected result at each stage, their mental model of the code keeps being reinforced, which boosts productivity and instills a sense of self-confidence. This is the development style that TDD follows.


Friday, January 17, 2014

What are the different modes of unauthorized access?

You or your company will have a number of resources, whether personnel, physical or informational, that need to be protected from unauthorized people so that only those with a right to these resources can access and use them. All of this is done using access control, which is about much more than creating user names and passwords. There are many models, techniques and methods that can be implemented to maintain security, and there are also many types of attack against each of them. In the authorization process, access is granted to authenticated subjects to carry out specific operations, based on the predefined access rights in the access criteria. These criteria are based on the following factors (a rough sketch of such a check follows the list):
- Clearance: Security level of the subject.
- Need-to-know: The approved formal access level.
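As a rough sketch of how these two factors might combine into an access decision (the numeric levels and the can_access() helper below are invented for illustration, not taken from any particular access control standard):

```python
# Hypothetical ordering of security levels, from lowest to highest.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_access(subject_clearance, subject_need_to_know, resource_level, resource_topic):
    """Grant access only if the subject's clearance dominates the resource's level
    AND the subject has an approved need-to-know for that topic."""
    cleared = LEVELS[subject_clearance] >= LEVELS[resource_level]
    needs_to_know = resource_topic in subject_need_to_know
    return cleared and needs_to_know

# Cleared for "secret", but only approved to know about the "payroll" topic:
print(can_access("secret", {"payroll"}, "confidential", "payroll"))   # True
print(can_access("secret", {"payroll"}, "confidential", "mergers"))   # False
```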
Attackers employ a number of tricks and techniques to gain unauthorized access to a company's resources and information, so the necessary countermeasures need to be taken to identify and eliminate these threats.
The different modes of unauthorized access are discussed below:
- Unauthorized disclosure of information: The disclosure of sensitive information might be intentional or accidental, but whatever the cause, the result is the same: individuals get information they were never meant to access. A large part of access control is about preventing such incidents. People use different kinds of media to share information around the organization, such as hard drives, floppy disks, shares on servers and so on, and these media might carry sensitive information into the hands of people for whom it is not intended. New employees might also be assigned old computers that still contain sensitive information stored by former employees. This is object reuse: an object containing sensitive information ends up being used by other subjects.
- Emanation security: Attackers can also steal information by intercepting electrical signals. Computers and other devices radiate signals that can be picked up with specialized equipment, and with the right software and hardware the information can be reconstructed without the users' knowledge. The main countermeasures include control zones, white noise and TEMPEST shielding.
- Man-in-the-middle attacks: An intruder inserts itself into a conversation between two hosts and intercepts the messages. Sequence numbers and digital signatures can be used as countermeasures.
- Sniffing: This is a passive attack in which the intruder monitors the network to gather information about the victim, to be used in a later attack. Encrypting the data in transit prevents this.
- War dialing: This is a kind of brute force attack in which the attacker uses a program to dial a large bank of phone numbers, checking which of them are answered by modems. Through such a modem the attacker may gain access to the network. Not publicizing telephone numbers is one countermeasure.
- Ping of death: In this denial-of-service attack, the attacker sends oversized ICMP packets to the victim host. If the host does not know how to handle such large packets, it may freeze or reboot. Patching systems and implementing ingress filtering that detects oversized ICMP packets are typical countermeasures; a sketch of such a size check follows below. Another DoS attack of this type is WinNuke.
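As a rough illustration of the size check such filtering relies on (this sketch is not tied to any real filtering product): an IP datagram, and therefore a reassembled ping, may not exceed 65,535 bytes, so a filter can flag ICMP fragments whose reassembled length would break that limit.

```python
MAX_IP_DATAGRAM = 65_535  # maximum legal size of an IP datagram, in bytes

def is_oversized_icmp(fragment_offset_bytes, fragment_length, protocol):
    """Flag an ICMP fragment whose reassembled datagram would exceed the IP limit,
    which is the condition a ping-of-death packet relies on."""
    if protocol != "icmp":
        return False
    return fragment_offset_bytes + fragment_length > MAX_IP_DATAGRAM

# A final fragment that pushes the reassembled total past 65,535 bytes is suspicious:
print(is_oversized_icmp(fragment_offset_bytes=65_528, fragment_length=100, protocol="icmp"))  # True
print(is_oversized_icmp(fragment_offset_bytes=0, fragment_length=84, protocol="icmp"))        # False
```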


Friday, December 13, 2013

Some white box testing techniques – part 2

This post is a continuation of the discussion on white box testing techniques from part 1 of this article (link).

- Path testing: A typical application program contains hundreds of paths between two points of the program, the entry and the exit. The number of paths multiplies with every decision encountered during execution, and increases again with every loop. Each of these paths has to be tested to ensure there are no bugs. That seems easy enough for a straight-line program of 50-100 lines of code, but for a large program the counting alone gets difficult, and most software projects have only a limited number of resources with which to cover all the possible paths. Path testing can be made a little easier by leaving redundant code out of the program as far as possible. Path testing checks whether or not every path gets executed at least once. Today there is also a newer form, basis path testing, which is a mixture of branch testing and path testing. Path testing also makes use of the control flow graph and involves calculating the cyclomatic complexity; a test case then has to be designed for each basis path (a small combined illustration of these techniques follows at the end of this post).

- Statement coverage: This white box technique is also known as segment coverage or line coverage. It only exercises the conditions that evaluate to true; the false outcomes are neglected. It helps identify which statements are actually executed in the program and reveals areas that are never executed, and therefore never exercise their data flow, even though all lines of code are examined. Statement coverage is important for verifying whether the code serves its desired purpose, so it gives us one measure of code quality, and it can be seen as a partial form of path testing. Its disadvantages include: false conditions are left untested, it says nothing about loop termination, and it does not consider logical operators. Note too that one line of code might be a full statement, a partial statement, blank, or several statements, and statements can be nested. Statement coverage should be measured first, and then the test cases designed.

- Decision coverage: Another name for decision coverage is "all-edges coverage". Here all the outcomes of every logical decision are considered, whether true or false, which is exactly where statement coverage falls short. The outcome of a decision is a branch, so branch coverage measures the extent to which the decisions in a program have been tested; this is why branch coverage and decision coverage are often confused and used interchangeably. Either way, it is stronger than statement coverage. Decision statements include loop control statements, if statements, if-else statements and switch-case statements, i.e., statements that evaluate to either true or false based on a logical condition. This technique validates branch coverage, that is, whether or not all branches have been covered, and helps ensure that no branch produces an abnormal result that could disturb the operation of the whole program. The drawbacks of statement coverage that we saw above are thus eliminated by decision testing.
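To see how the three ideas above differ on the same piece of code, here is a small combined illustration (the discount() function is invented for this sketch): one test achieves full statement coverage while still missing two branches, a second test adds full decision/branch coverage, and the cyclomatic complexity tells us how many basis paths there are to design test cases for.

```python
import unittest

def discount(price, is_member):
    """Hypothetical pricing rule containing two independent decisions."""
    rate = 0.0
    if is_member:       # decision 1
        rate = 0.1
    if price > 100:     # decision 2
        rate += 0.05
    return round(price * (1 - rate), 2)

# Cyclomatic complexity = number of decisions + 1 = 3, so there are three basis paths to cover.

class DiscountTest(unittest.TestCase):
    def test_member_with_large_order(self):
        # This one test executes every line, i.e. 100 percent statement coverage...
        self.assertEqual(discount(200, True), 170.0)

    def test_guest_with_small_order(self):
        # ...but only this second test exercises the false outcome of both decisions,
        # which is what decision/branch coverage demands.
        self.assertEqual(discount(50, False), 50.0)

if __name__ == "__main__":
    unittest.main()
```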


Thursday, December 12, 2013

Some white box testing techniques – part 1

In this article we shall give a brief description of the various techniques that are used in white box testing.

- Control flow testing: This is a form of structural testing based on the control flow model of the application under test. In control flow testing, certain paths are selected judiciously from all the possible ones so as to provide maximum coverage while keeping the testing process manageably thorough; the selected paths should at least be enough to cover every statement once. It is usually applied to new software put under unit testing (unit level white box testing). The technique assumes that the specifications are correct, that the data is correctly assessed and defined, and that the program contains no bugs other than those in the control flow. Control flow bugs are less common in programs written in object-oriented or structured programming languages. The technique makes use of a flow graph, usually called the control flow graph, which represents the control structure of the program.

- Data flow testing: Data flowing through a program is what keeps it active; without data flow the program cannot perform any operation, so the program's data flow needs to be tested to ensure it is consistent and efficient. Data flow testing requires a data flow graph, which shows where data is going, why, and whether it reaches the correct destination. The testing uncovers anomalies that restrict the flow of data in a program, and knowing these anomalies also helps with branch and path testing. A number of methods and techniques go into data flow testing to explore the events taking place in the program, whether right or wrong. It checks whether all data objects have been initialized and makes sure they are used at least once in the program. Arrays and pointers are the two elements that play the most critical role in data flow testing and cannot be neglected. Data flow testing may include static anomaly detection, dynamic anomaly detection and anomaly detection through compilers (a small example of such an anomaly follows at the end of this post).

- Branch testing: As the name suggests, this technique tests all the branches of all the decisions and loops within a program, so branch coverage plays the central role. It has to be made sure that each and every branch in the program is executed at least once, and the test cases are designed so that all the branches are covered. The technique complements white box testing at the unit testing level. Programmers aim for test cases that provide 100 percent branch coverage, and the coverage has to be measured properly, otherwise it can lead to problems such as removing code that was actually correct or inserting faulty code. In practice 100 percent coverage is rarely achievable, and some bugs and errors never come to light. Branch testing lets you uncover errors that lie in the parts of the program that are rarely or never executed. Its drawback is that it is ineffective at uncovering errors in the interactions between structures and decisions, which is why testers often prefer path testing.
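As a small example of the kind of anomaly data flow testing looks for (the function below is invented purely for illustration), one variable is redefined before its first value is ever used on one path, and another is used before it is defined on a different path; these are classic define/use anomalies that static anomaly detection can flag without running the code:

```python
def shipping_label(order, express):
    carrier = "standard-post"            # defined here...
    if express:
        carrier = "overnight-courier"    # ...and redefined before the first value is ever used
    if order.get("digital"):
        return note                      # 'note' is used here but never defined on this path
    note = "ship via " + carrier
    return note

# A data flow graph of this function exposes a define-redefine anomaly on 'carrier'
# (whenever express is True) and a use-before-define anomaly on 'note'
# (whenever the order is digital), which are exactly the defects data flow testing targets.
```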

Read more about this in Part 2 - White Box testing techniques.

