

Friday, January 17, 2014

What are different modes of unauthorized access?

You or your company might have a number of resources that need to be protected from unauthorized people, so that only those with a right to access them can use them. These resources might be personnel, physical, or informational, and protecting them is the job of access control. Access control is not just about creating user names and passwords for resources; there are many models, techniques, and methods that can be implemented to maintain security, and there are many types of attacks against each of these methods. In the process of authorization, access is granted to authenticated subjects: specific operations are permitted based on predefined access rights listed in the access criteria. These criteria are based on the following factors:
- Clearance: The security level of the subject.
- Need-to-know: The approved formal access level.
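These two criteria can be sketched as a tiny access check. Everything here is illustrative: the level names, their ordering, and the function are hypothetical, not taken from any particular standard.

```python
# Hypothetical ordered clearance levels, lowest to highest.
LEVELS = ["public", "confidential", "secret", "top_secret"]

def may_access(subject_clearance: str, object_level: str, need_to_know: bool) -> bool:
    """Grant access only when the subject's clearance dominates the
    object's security level AND the subject has an approved need to know."""
    return (LEVELS.index(subject_clearance) >= LEVELS.index(object_level)
            and need_to_know)

# A 'secret' subject with need-to-know may read 'confidential' material...
assert may_access("secret", "confidential", need_to_know=True)
# ...but not without need-to-know, and never above its own clearance.
assert not may_access("secret", "confidential", need_to_know=False)
assert not may_access("confidential", "secret", need_to_know=True)
```

The point of the sketch is that both factors are conjunctive: a high clearance alone is not sufficient.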
Attackers employ a number of tricks and techniques to gain unauthorized access to a company's resources and information, and countermeasures need to be in place so that these threats can be identified and eliminated.
The different modes of unauthorized access are discussed below:
- Unauthorized disclosure of information: The disclosure of sensitive information might be intentional or accidental, but whatever the cause, the result is the same: individuals get information they were never intended to access. A large part of access control is about preventing such incidents. People use different kinds of media for sharing information around an organization, such as hard drives, floppy disks, shares on servers, and so on, and these media might contain sensitive information that ends up in the hands of people for whom it was not intended. New employees might also be assigned old computers that still hold sensitive information stored by former employees. Object reuse, where an object containing sensitive information is handed to other subjects, is one example of this problem.
- Emanation security: Attackers can intercept electrical signals to steal information. Computers and other devices radiate signals that can be captured with specialized equipment, and with the right software and hardware the information can be reconstructed without the users ever knowing. The main countermeasures are control zones, white noise, and TEMPEST shielding.
- Man-in-the-middle attacks: An intruder drops into a conversation between two hosts and intercepts the messages. Sequence numbers and digital signatures can be used as countermeasures.
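As one concrete instance of such a countermeasure, a shared-key message authentication code lets the receiver detect messages altered in transit. This sketch uses Python's standard hmac module; the key and messages are made up for illustration, and a real deployment would also need key exchange and replay protection (e.g. the sequence numbers mentioned above).

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical pre-shared key

def sign(message: bytes) -> bytes:
    """Attach an HMAC tag so tampering in transit is detectable."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"transfer 100 to alice"
tag = sign(msg)
assert verify(msg, tag)                              # untouched message passes
assert not verify(b"transfer 100 to mallory", tag)   # altered message fails
```

An intruder who rewrites the message cannot produce a matching tag without the shared key, so the modification is caught at verification time.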
- Sniffing: A passive attack in which the intruder monitors the network to gain information about the victim, which is then used in a later attack. Encrypting data in transit prevents this.
- War dialing: A brute-force attack in which the attacker uses a program to dial a large bank of phone numbers, looking for ones answered by a modem. Through such a modem the attacker can gain access to the network. Not publicizing telephone numbers is one countermeasure.
- Ping of death: In this denial-of-service attack, the attacker sends oversized ICMP packets to the victim host. If the host does not know how to handle such large packets, it may freeze or reboot. Implementing ingress filtering to detect oversized ICMP packets and patching the systems are some countermeasures. Another DoS attack of this type is WinNuke.


Friday, December 13, 2013

Some white box testing techniques – part 2

This post is a continuation of the discussion on white box testing techniques from part 1 of this article (link).

- Path testing: A typical application program contains hundreds of paths between its two endpoints, the entry and the exit. The number of paths multiplies with each decision encountered during execution, and increases again with each loop. Each of these paths has to be tested to ensure there are no bugs. That seems easy enough for a straight-line program of 50 to 100 lines of code, but in a large program the counting quickly gets difficult, and with a limited number of resources (and most software projects have limited resources) all the possible paths still have to be covered. Path testing can be made a little easier by not considering redundant code in the program as far as possible. Path testing is responsible for checking that each and every path is executed at least once. Today we also have a newer form, basis path testing, which is a mixture of branch testing and path testing. Path testing also makes use of the control flow graph and involves calculating the cyclomatic complexity; a test case then has to be designed for each path.
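The cyclomatic complexity mentioned above can be computed directly from the control flow graph as V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components), and it gives the number of basis paths that need test cases. A minimal sketch, using a made-up graph for a function with a single if/else:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control flow graph."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical CFG of a function with one if/else decision:
#   entry -> cond, cond -> then, cond -> else, then -> exit, else -> exit
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]

print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2 basis paths
```

Two basis paths for one decision matches intuition: one test through the true branch and one through the false branch.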

- Statement coverage: This white box technique is also known by other names such as segment coverage or line coverage. It only exercises the conditions that evaluate to true; the false conditions are neglected. It helps identify the statements that are executed in the program, and it helps in finding areas of the program where there is no data flow because code never executes, since all the lines of code are examined and executed. Statement coverage is important for verifying whether or not the code serves its desired purpose; in other words, it gives us a measure of code quality, and we can consider it a partial form of path testing. Its disadvantages are that false conditions are left untested, it gives no report about loop termination, and it does not consider logical operators. One line of code might be a full statement, a partial statement, a blank line, or several statements, and there are nested statements too. The statement coverage has to be measured first, and then the test cases should be designed.

- Decision coverage: Another name for decision coverage is "all-edges coverage". Here all the outcomes that result from a logical decision are considered, regardless of whether the conditions evaluate to true or false; in that sense it is the opposite of statement coverage. The outcome of a decision is a branch, so branch coverage measures the extent to which the decisions in a program have been tested, which is why branch coverage and decision coverage are often confused and used interchangeably. Decision coverage is stronger than statement coverage. The decision statements include loop control statements, if statements, if-else statements, and switch-case statements, i.e., statements that produce either a true or a false value based on their logical condition. This white box technique helps validate branch coverage, that is, whether or not all branches have been covered, and helps ensure that no branch gives an abnormal result that could disturb the operation of the whole program. The drawbacks of statement coverage noted above are eliminated by decision testing.
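The gap between the two criteria shows up in even a tiny example: a single if with no else can reach 100 percent statement coverage with one test case, while decision coverage still demands a second test for the false outcome. The function and the values below are invented purely for illustration.

```python
def apply_discount(price, is_member):
    # A single if with no else branch: one test with is_member=True
    # executes every statement, yet the False outcome of the decision
    # is never exercised.
    if is_member:
        price = price * 0.9
    return price

# Statement coverage: this one test case touches every statement.
assert apply_discount(100, True) == 90.0

# Decision coverage additionally requires the condition to be False:
assert apply_discount(100, False) == 100
```

If the second test were omitted, a bug on the non-member path (say, an accidental surcharge) would pass a suite that reports full statement coverage.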


Thursday, December 12, 2013

Some white box testing techniques – part 1

In this article we shall give a brief description of the various techniques that are used in white box testing.

- Control flow testing: This is similar to structural testing and is based on the control flow model of the application being tested. In control flow testing, certain paths are selected judiciously from among all paths so as to provide maximum coverage while maintaining a decided thoroughness in the testing process: the selected paths should be enough to cover every statement at least once. It is usually applied to new software systems undergoing unit testing (unit-level white box testing). The technique assumes that the specifications are correct, that the data is correctly assessed and defined, and that there are no bugs in the program other than those in the control flow. The number of control flow bugs is lower in programs written in object-oriented or structured programming languages. This technique makes use of a flow graph, usually called the control flow graph, that represents the control structure of the program.

- Data flow testing: Flowing data is what keeps a program active; if there is no data flow, the program cannot perform any operation. The program's data flow needs to be tested to ensure that it is consistent and efficient. Data flow testing also requires a data flow graph, which shows where and why the data is going and whether it reaches the correct destination. This testing helps uncover anomalies that might restrict the flow of data in a program, and knowing these anomalies also helps in branch and path testing. A number of methods and techniques go into data flow testing for exploring the events taking place in the program, whether right or wrong. It is used for checking whether all data objects have been initialized and for making sure that they are used at least once in the whole program. Arrays and pointers are considered the two elements that play the most critical role in data flow testing, and they cannot be neglected. Data flow testing might include static anomaly detection, dynamic anomaly detection, and anomaly detection through compilers.
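A toy version of the static anomaly detection mentioned above can be built on Python's ast module: the sketch below flags names that are read before any assignment reaches them in a straight-line function body. It is deliberately simplified (it ignores branches, loops, builtins, and globals), and the demo function is made up.

```python
import ast

def flag_use_before_def(func_source: str):
    """Report names read before any assignment: a classic data flow
    anomaly. Straight-line bodies only; this is a teaching sketch,
    not a real analyzer."""
    func = ast.parse(func_source).body[0]          # the FunctionDef node
    defined = {a.arg for a in func.args.args}      # parameters start defined
    anomalies = []
    for stmt in func.body:
        # Within one statement, reads happen before the write takes effect.
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                if node.id not in defined:
                    anomalies.append(node.id)
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                defined.add(node.id)
    return anomalies

src = """
def demo(a):
    b = a + c   # 'c' is read but never defined: a data flow anomaly
    return b
"""
print(flag_use_before_def(src))  # ['c']
```

Compilers and linters perform essentially this check (among many others) over the full control flow graph rather than a single straight-line body.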

- Branch testing: As the name suggests, this technique is used for testing all the branches of all the loops and decisions within a program, so branch coverage plays a great role here. It has to be ensured that each and every branch in the program is executed at least once, and the test cases are designed so that all branches are covered. This technique complements white box testing at the unit testing level. Programmers aim for test cases that provide 100 percent branch coverage, and the coverage has to be measured properly, otherwise it can lead to other potential problems such as removal of code that was actually correct or insertion of faulty code. But 100 percent coverage is rarely achievable, and we are always left with some bugs and errors that never come to light. Branch testing lets you uncover errors in those parts of the program that are least executed or never executed. There is a potential drawback too: it is very ineffective at uncovering errors in the interactions of structures and decisions. Because of this drawback, testers usually prefer to go for path testing.

Read more about this in Part 2 - White Box testing techniques.


Sunday, December 8, 2013

What are some of the different cyber security standards?

Over time, many security standards have been developed that have led to organizations increasing their level of security and becoming more capable of safely practicing security techniques. These standards are termed cyber security standards and are meant to minimize the chance of successful attacks on organizations and increase their cyber security. These guides give a general outline of cyber security along with the specific techniques that should be implemented. For certain standards, an accredited body can grant a cyber security certification, which brings many advantages, one of them being benefits in terms of cyber security insurance.
Nowadays a lot of sensitive and critical information is stored on networks, clouds, and computers, and this is one of the reasons behind the creation of these standards. Some of the different cyber security standards are:

- ISO/IEC 27002: This standard calls for assurance and security of information and covers part of security management practice. It originated from BS 7799 and serves as a very high-level explanatory guide to good cyber security management. The standard emphasizes that confidentiality, integrity, and availability characterize information security. It consists of 11 control areas, namely:
- Security policy
- Organizing information security
- Asset management
- Human resources security
- Physical and environmental security
- Communications and operations
- Access controls
- Information systems acquisition, development and maintenance
- Incident handling
- Business continuity management
- Compliance

- ISO/IEC 27001: This standard offers guidance on a framework for certification; it replaced part 2 of the BS 7799 standard. It is backward compatible, so an organization using BS 7799 part 2 faces no problem in adopting it. The framework is a management system used to implement the control objectives of ISO 27002, which are incorporated into ISO 27001.

- SoGP (Standard of Good Practice): This standard is essentially a list of best information security practices published by the ISF, i.e., the Information Security Forum, which also provides a comprehensive benchmark program based on the SoGP.

- NERC (North American Electric Reliability Corporation): Offers many standards, such as NERC 1300, NERC 1200, and CIP-002-1, to provide security to bulk electric systems.

- NIST: Has provided the following standards:
- 800-12: An overview of computer security, covering the control areas, the importance of these controls, and ways of implementing them.
- 800-14: Lists the most common security principles followed everywhere, describes computer security policy, and gives suggestions for improving existing practices and developing new ones.
- 800-26: Offers advice for the management of IT security, risk assessments, and self-assessments.
- 800-37: Introduced a new approach to applying the risk management framework to federal information systems.
- 800-53: Provides a guide for assessing the security controls of federal information systems.

- ISO 15408: This standard developed the Common Criteria, which allow different software products to be evaluated, integrated, and tested in a secure way.

- RFC 2196: This is more like a memorandum on the development of security procedures and policies for information systems that are connected to the Internet.

- ISA/IEC 62443: Provides related standards and technical reports defining the implementation of secure control systems, especially IACS (industrial automation and control systems). The guidance is aimed at security practitioners, end users, control system manufacturers, and so on.


Thursday, December 5, 2013

What are the advantages of network security?

The major advantage of having network security in place is that you keep things such as personal information, data, and other files safe from people who want to steal or destroy them (not necessarily somebody directly against you; often just people looking for networks where security is weak so they can get in), and from unauthorized people who want to misuse this information. Unauthorized users may be on the same network or on some other network. The advantages of having strong network security and proper security protocols are listed below:
- It protects clients' personal data on the network.
- It protects information exchanged between hosts during transmission from eavesdroppers.
- It protects computer systems that could otherwise be rendered useless by a malicious virus or by a Trojan that keeps passing out information.
- It prevents attempts to harm your system through spyware and malware attacks or hacking.
- It takes care of the access rights assigned to users at different levels in a network, such as in accounting systems.
- It is because of network security that private networks can exist even though their information is passed over public networks.
- It helps in closing private networks and protecting them against intruders and other attacks.

Data in a private network is also not necessarily safe, since it can be altered and tampered with by people on the same network, who may be doing so for many different reasons. The possibility of attack grows with the size of the network. Nowadays various organizations offer anti-virus software free of cost to people accessing their networks, which has helped a great deal in reducing the threat of attacks.
When a large number of users are exposed to viruses or other attacks, the danger also increases for the organizations whose websites these users access regularly; this is why organizations distribute free anti-virus software to keep the danger at bay to some extent. Network security is important because it protects against malicious viruses, spyware, worms, and Trojans, and guards the system against its potential vulnerabilities. A network security policy is a systematic process for enforcing protection policies for data, applications, hosts, and so on, and it provides guidance on how digital identities should be maintained. The security infrastructure may vary from one host to another and from one network to another, but with network security the network administrator gets centralized control over all of them when they are based in one virtual organization.
There are a number of issues that network security must address to keep viruses and similar attacks at bay. To prevent a virus from infecting your system or network, the security software must automatically keep its database updated on all user machines. Another measure is to install scanners on every machine and device accessing the network, including newer devices like tablets. These scanners work well for keeping out e-mails infected with Trojans, worms, and viruses.
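A crude sketch of one such scanning rule holds attachments with risky file extensions for closer inspection. The blocklist below is hypothetical and far simpler than what real scanners do (they rely on signatures, heuristics, and content analysis, not file names alone).

```python
# Hypothetical blocklist of executable attachment types.
RISKY_EXTENSIONS = {".exe", ".scr", ".vbs", ".js", ".bat"}

def quarantine(filename: str) -> bool:
    """Return True if an attachment should be held for deeper scanning."""
    name = filename.lower()
    # Double extensions like 'invoice.pdf.exe' still end in a risky suffix.
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)

assert quarantine("invoice.pdf.exe")
assert quarantine("UPDATE.SCR")       # case-insensitive match
assert not quarantine("report.pdf")
```

Name-based filtering is only a first line of defense, which is exactly why the database updates and user education discussed here matter as well.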
At the same time, it is also important that users are educated about the need for network security and about what not to do. Without appropriate knowledge you won't know which security options should be selected for enforcement, and you might end up with a security policy that barely protects your system. For example, if you receive an email whose source you don't know or don't trust, just don't open it; it might contain a malicious file which, if downloaded, can eat up your data.
It is true that anti-virus software is effective in guarding against viruses, but defenses are developed only after a virus has appeared: anti-virus tools lag behind the viruses themselves and cover only those that already exist, not those newly created. Hence user awareness and security safeguards are very important.


Tuesday, December 3, 2013

What is Orthogonal Array testing? - An explanation

There are a number of black box testing techniques; the one discussed in this post is orthogonal array testing. This technique provides a systematic as well as statistical strategy for testing software: the number of inputs to the system is kept small, yet chosen so that the combinations are covered almost as effectively as in exhaustive testing. The technique has proved quite helpful in discovering errors that indicate faulty logic in software systems. Orthogonal arrays can be applied in various testing types, such as the following:
- User interface or UI testing
- System testing
- Regression testing
- Configuration testing
- Performance testing and so on.

The permutations of factor levels that make up a single treatment have to be chosen in an uncorrelated way, so that every single treatment gives you a piece of information that is different from the others. The advantage of organizing testing in this way is that a minimum number of experiments is required to gather the same information. Orthogonality is a property exhibited by orthogonal vectors, which have the following properties:
- The information conveyed by each vector is different from that of every other vector in the sequence; as mentioned, the information conveyed by each treatment is unique to it. This is important, otherwise there would be redundancy.
- It is easy to separate these signals through linear addition.
- All the vectors are statistically independent of each other, meaning there is no correlation between them.
- When the individual components are added linearly, the result is the arithmetic sum.

Suppose a system has 3 parameters, each of which has 3 values. Testing all parameter combinations would require 27 test cases, which is quite time consuming, so we use an orthogonal array to select a subset of these combinations. As a result, the test coverage area is maximized while the number of test cases to consider is minimized. The technique works on the assumption that the selected pairs contain the maximum number of defects, and that these combinations are sufficient to catch the faults. The interaction of the input parameters among themselves also has to be considered. The array is said to be orthogonal because every pairwise combination of values occurs exactly once. The results of the test cases are assessed as follows:
- Single mode faults
- Double mode faults
- Multimode faults
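The 3-parameter, 3-value example can be made concrete: a standard L9 orthogonal array needs only 9 of the 27 combinations, yet for every pair of parameters, every pair of values appears together exactly once. A small sketch verifying that property (the array is the textbook L9 layout; parameter meanings are left abstract):

```python
from itertools import combinations, product

# A standard L9(3^3) orthogonal array: 9 runs instead of 3^3 = 27.
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

def covers_all_pairs(runs, levels=3):
    """Check that every value pair appears for every pair of columns."""
    for c1, c2 in combinations(range(len(runs[0])), 2):
        seen = {(run[c1], run[c2]) for run in runs}
        if seen != set(product(range(levels), repeat=2)):
            return False
    return True

print(len(L9), covers_all_pairs(L9))  # 9 True
```

With 9 runs and 9 possible value pairs per column pair, each pair appears exactly once, which is the defining property of an orthogonal (as opposed to merely pairwise-covering) array.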

Below mentioned are the major benefits of using this technique:
- The testing cycle time is reduced.
- The analysis process gets simpler.
- Test cases are balanced, which means that defect isolation and performance assessments are straightforward.
- It saves costs compared with exhaustive testing. Coverage of all defects can only be guaranteed by testing every possible combination, but schedule and budget seldom permit this, so we are forced to select only a sample of combinations from the test domain. Orthogonal array testing is a means of generating samples that provide high coverage for validating the test domain effectively. This has made the technique particularly useful in integration testing and in testing configurable options. Software testers often face a dilemma when selecting test cases: the quality of the software cannot be tested directly, only its defects can be detected, and exhaustive testing is difficult even in small systems.


Monday, December 2, 2013

Some advantages and disadvantages of white box testing

White box testing can be applied at the following levels in software testing process:
- Unit testing
- Integration testing
- System testing

However, it is commonly carried out at the unit level. At this level white box testing is used for exercising the paths through program units; at the integration level it tests paths between different units; and at the system level it tests paths between the various sub-systems. In this article we discuss the advantages and disadvantages of using this testing methodology.

Advantages: This testing methodology ranks among the most widely used, with an increasing number of people adopting it, although it does require technical knowledge and capability. It has three major advantages.
First, in this technique the tester benefits from knowledge of the source code, unlike the other methodologies where the tester need not know much about it. This helps in thoroughly testing the application.
Second, this methodology makes it possible to optimize the code by uncovering hidden errors and removing them.
Third, it provides an opportunity for introspection.
Another plus point is that testing can start with just one developed unit at hand; we don't have to wait for the whole program to be ready with a GUI. The majority of the paths are covered in white box testing.

Disadvantages: Just as white box testing has advantages, it has its minus points too.
The first major disadvantage is that it gets very complex once you start. Every path in the program has to be tested, so all the paths have to be identified first, which can become very time consuming and difficult indeed. For this, the programmer as well as the tester must have a great deal of knowledge at this level of detail.
The second major disadvantage is that white box testing consumes too much time. It is not possible in every project to test each and every path, so it is certain that some paths will go unnoticed. This is why white box test cases are very complex and can only be implemented with a thorough knowledge of the application and its code. Maintenance of the test scripts also proves to be a burden as complexity increases and changes have to be made to the implementation.

This testing methodology also requires many testing tools, which might not be instantly available. The work of a mechanic is analogous to white box testing: the programmer examines the source code to see why it is not working.
First, the tester or programmer must analyze the source code with the help of comprehensive software documentation, the code itself, and samples.
Second, the tester needs to think of ways to disrupt the normal functioning of the application, and of the input factors that could cause the program to go awry. Based on these assessments, the white box testing techniques can be implemented, and the assessments have to be made carefully for white box testing to be successful. In simple words, white box testing is a means of verifying the source code.
The logic and structure of the code must be known to the person testing it; logical decisions are exercised, and each logical decision tests a different path. In white box testing, only programmers can readily substitute for testers, i.e., they alone can test the application without preparation. If other testers are hired, they will need time to understand the source code of the program, or a high degree of technical knowledge.

