
Friday, December 13, 2013

Some white box testing techniques – part 2

This post is a continuation of the discussion on white box testing techniques from part 1 of this article (link).

- Path testing: A typical application program contains hundreds of paths between its two end points, the entry and the exit. The number of paths multiplies with every decision encountered during execution, and loops increase this number further. Each of these paths has to be tested to ensure there are no bugs. This seems easy enough for a straight-line program of 50-100 lines of code, but what about a large program? Counting the paths gets difficult, and most software projects have only a limited number of resources with which all the possible paths have to be covered. Path testing can be made a little easier by not considering redundant code as far as possible. Path testing is responsible for checking whether or not each and every path gets executed at least once. Today we also have a newer variant, called basis path testing, which is a mixture of branch testing and path testing. Path testing also makes use of the control flow graph and involves calculating the cyclomatic complexity. For each and every path, a test case has to be designed.
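
As a rough illustration (the function below is hypothetical, not from any real project), consider how quickly paths multiply: two independent decisions already yield four execution paths, and the cyclomatic complexity (decisions + 1 for structured code) tells us how many basis paths need test cases.

#include <stdio.h>

/* Hypothetical example: two independent decisions give 2 x 2 = 4
 * execution paths. Cyclomatic complexity = decisions + 1 = 3, so
 * basis path testing needs 3 test cases, while full path testing
 * needs 4 here (and loops would multiply the paths further). */
int classify(int age, int income)
{
    int score = 0;
    if (age > 18)          /* decision 1 */
        score += 1;
    if (income > 50000)    /* decision 2 */
        score += 2;
    return score;          /* possible results: 0, 1, 2, 3 */
}

int main(void)
{
    printf("%d\n", classify(25, 60000)); /* exercises the true/true path */
    return 0;
}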

- Statement coverage: We also know this white box testing technique by other names such as segment coverage or line coverage. This coverage methodology only guarantees that the conditions which evaluate to true are exercised; the false outcomes may be neglected. It helps identify the statements that are executed in the program, and also reveals the areas of the program where there is no data flow because no execution reaches them. All the lines of code are examined and executed at least once. Statement coverage is important for verifying whether the code serves its desired purpose, so it gives us a measure of code quality, and we can consider it a partial form of path testing. Its disadvantages include: the false outcomes of conditions are left untested, it reports nothing about loop termination, and it does not consider the logical operators. One line of code might be a full statement, a partial one, a blank, or may contain many statements, and there are nested statements too. The statement coverage has to be measured first and then the test cases should be designed.
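
A minimal sketch (hypothetical code) of the main weakness: a single test can execute every statement while never taking the false outcome of a decision, leaving a crash undiscovered.

#include <stdio.h>

/* The single test f(5) executes every statement, giving 100%
 * statement coverage, yet the false outcome of the 'if' is never
 * exercised -- so the null-pointer dereference that occurs when
 * x <= 0 goes undetected. */
void f(int x)
{
    int *p = NULL;
    if (x > 0)
        p = &x;         /* only the true branch assigns p */
    printf("%d\n", *p); /* crashes for x <= 0: missed by statement coverage */
}

int main(void)
{
    f(5);  /* 100% statement coverage, bug still hidden */
    return 0;
}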

- Decision coverage: Another name for decision coverage is "all-edges coverage". Here every outcome that results from a logical decision is considered, whether the condition is true or false; in this respect it is just the opposite of statement coverage. The outcome of a decision is nothing but a branch, so branch coverage gives a measure of the extent to which the decisions in a program have been tested. This is why many of us confuse branch coverage with decision coverage and use the terms interchangeably. Decision coverage is stronger than statement coverage. The decision statements include loop control statements, if statements, if-else statements and switch-case statements, i.e., the statements that yield either a true or a false value based upon the logical condition they contain. This white box testing technique helps validate branch coverage, that is, whether or not all branches have been covered, and ensures that no branch produces an abnormal result that may disturb the operation of the whole program. The drawbacks of statement coverage that we saw above are eliminated by decision testing.
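
A small self-contained sketch (again hypothetical) of what decision coverage demands: one test per outcome of each condition, so both edges of the if are exercised.

#include <stdio.h>

/* Decision coverage demands a test for each outcome of the
 * condition. Two tests cover both edges of the 'if'; the
 * false-outcome test exercises the path that a single
 * statement-coverage test might never reach. */
int safe_div(int n, int d)
{
    int r;
    if (d != 0)
        r = n / d;
    else
        r = 0;          /* false edge: must be exercised too */
    return r;
}

int main(void)
{
    printf("%d\n", safe_div(10, 2)); /* decision true  */
    printf("%d\n", safe_div(10, 0)); /* decision false */
    return 0;
}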


Thursday, December 12, 2013

Some white box testing techniques – part 1

In this article we shall give a brief description of the various techniques that are used in white box testing.

- Control flow testing: This is similar in spirit to structural testing and is based on the control flow model of the application being tested. In control flow testing, certain paths are selected judiciously from among all paths so as to provide maximum coverage; the paths are chosen so that a decided thoroughness in the testing process is maintained, which means the selected paths should be enough to cover all the statements at least once. It is usually applied to new software systems that have to be put under unit testing (unit-level white box testing). The technique assumes that the specifications are correct, that the data is correctly assessed and defined, and that there are no bugs in the program except those in the control flow. The number of control flow bugs is lower in programs written using object-oriented or structured programming languages. This technique makes use of a flow graph, usually called the control flow graph, that represents the control structure of the program.
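
To make the idea concrete, here is a minimal sketch (all names are hypothetical) of a control flow graph for a single if-else, stored as an adjacency matrix: nodes are basic blocks, edges are the possible transfers of control, and test paths are chosen from this graph so that every statement is covered at least once.

#include <stdio.h>

#define NODES 5

/* edges[i][j] == 1 means control can flow from block i to block j */
static const int edges[NODES][NODES] = {
    /* entry -> decision */          {0, 1, 0, 0, 0},
    /* decision -> then / else */    {0, 0, 1, 1, 0},
    /* then -> exit */               {0, 0, 0, 0, 1},
    /* else -> exit */               {0, 0, 0, 0, 1},
    /* exit (no successors) */       {0, 0, 0, 0, 0},
};

int main(void)
{
    for (int i = 0; i < NODES; i++)
        for (int j = 0; j < NODES; j++)
            if (edges[i][j])
                printf("block %d -> block %d\n", i, j);
    return 0;
}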

- Data flow testing: Flowing data is what keeps a program active; if there is no data flow, the program cannot perform any operation. The program's data flow needs to be tested to ensure that it is consistent and efficient. Data flow testing requires a data flow graph, which shows where the data goes and why, and whether it reaches the correct destination. This testing helps uncover any anomalies that might restrict the flow of data in a program, and knowing these anomalies also helps in branch and path testing. A number of methods and techniques go into data flow testing for exploring the various events taking place in the program, whether right or wrong. It is used for checking that all data objects have been initialized and that they are used at least once in the whole program. Arrays and pointers are considered the two elements that play the most critical role in data flow testing, and they cannot be neglected. Data flow testing might include static anomaly detection, dynamic anomaly detection and anomaly detection through compilers.
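
A deliberately buggy sketch (hypothetical code) of the classic define/use anomalies this technique flags: a variable used before it is defined, and a variable redefined before its value is ever used.

#include <stdio.h>

/* Data flow analysis follows each variable from its definition (d)
 * to its uses (u) and flags suspicious patterns. This program is
 * intentionally broken to show two of them. */
int main(void)
{
    int a;               /* declared but never defined                   */
    int b = 42;          /* defined here ...                             */
    b = 7;               /* ... and redefined before any use: d-d anomaly */
    printf("%d\n", a);   /* 'a' used before definition: u without d      */
    printf("%d\n", b);
    return 0;
}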

- Branch testing: As the name suggests, this technique tests all the branches of all the decisions and loops within a program, so branch coverage plays a great role here. It has to be made sure that each and every branch in the program is executed at least once; thus the test cases are designed in such a way that all the branches are covered. This technique complements white box testing at the unit testing level. Programmers aim for test cases that provide 100 percent branch coverage. The coverage also has to be measured properly, otherwise it can lead to other potential problems such as removal of code that was actually correct or insertion of faulty code. In practice, complete coverage is not achievable and we are always left with some bugs and errors that never come to light. Branch testing lets you uncover errors in those parts of the program that are rarely or never executed. But there is a potential drawback too: it is very ineffective at uncovering errors in the interactions of structures and decisions, as the sketch below illustrates. Because of this drawback, testers usually prefer to go on to path testing.
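
Here is a hedged illustration (hypothetical code) of that blind spot: the two tests take every branch both ways, so branch coverage is 100 percent, yet the one combination of decisions that actually fails is never executed.

#include <stdio.h>

/* The two tests below cover all four branches (each decision taken
 * both ways), yet the true/true combination -- where the
 * divide-by-zero actually occurs -- is never executed. Path testing
 * would require that combination. */
int combine(int a, int b)
{
    int d = 1;
    if (a > 0)
        d = 0;              /* branch 1 true  */
    if (b > 0)
        return 100 / d;     /* branch 2 true: crashes only if a > 0 too */
    return d;
}

int main(void)
{
    printf("%d\n", combine(1, 0));  /* true,  false */
    printf("%d\n", combine(0, 1));  /* false, true  */
    /* combine(1, 1) -- the faulty interaction -- was never tested */
    return 0;
}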

Read more about this in Part 2 - White Box testing techniques.


Sunday, December 8, 2013

What are some of the different cyber security standards?

Over a period of time, many security standards have been developed that have led to organizations increasing their level of security and becoming more capable of safely practicing security techniques. These standards are termed cyber security standards and are meant to minimize the chance of successful attacks on organizations and increase their cyber security. In these guides, a general outline of cyber security is given along with the specific techniques that should be implemented. For certain standards, an accredited body can grant a cyber-security certification, and such certification brings many advantages, one of them being benefits in terms of cyber security insurance.
Nowadays a lot of sensitive and critical information is stored on networks, clouds and computers and this is one of the reasons behind the creation of these standards. Different cyber security standards are:

- ISO/IEC 27002: This standard calls for assurance and security of information; part of security management practice is given in ISO/IEC 27002, which grew out of the standard also known as BS7799. It serves as a guide for good cyber security management and is a very high-level, explanatory guide. The standard emphasizes that confidentiality, integrity and availability characterize information security. It consists of 11 control areas, namely:
- Security policy
- Organizing information security
- Asset management
- Human resources security
- Physical and environmental security
- Communications and operations
- Access controls
- Information systems acquisition, development and maintenance
- Incident handling
- Business continuity management
- Compliance

- ISO/IEC 27001: This standard replaced part 2 of BS7799, which offered guidance on a framework for certification. It is backward compatible, so any organization using BS7799 part 2 faces no problem in adopting it. The framework is a management system used to implement the control objectives of ISO 27002, which are incorporated into ISO 27001.

- SoGP (Standard of Good Practice): This standard is essentially a list of best information security practices published by the ISF, i.e., the Information Security Forum, which also provides a comprehensive benchmark program based on the SoGP.

- NERC (North American Electric Reliability Corporation): Offers many standards, such as NERC 1200, NERC 1300 and CIP-002-1, to provide security for bulk electric systems.

- NIST: The National Institute of Standards and Technology has provided the following standards:
- 800-12: An overview of computer security, covering the control areas, the importance of these controls, and ways of implementing them.
- 800-14: Lists the most commonly followed security principles, describes computer security policy, and offers suggestions for improving existing practices and developing new ones.
- 800-26: Offers advice for the management of IT security, risk assessments and self-assessments.
- 800-37: Introduced a new approach for applying the risk management framework to federal information systems.
- 800-53: Provides a guide for assessing the security controls of federal information systems.

Beyond NIST, other standards include:
- ISO 15408: This standard develops the Common Criteria, which permit different software applications to be integrated and tested in a secure way.
- RFC 2196: This is more like a memorandum about the development of security procedures and policies for information systems that are connected to the internet.
- ISA/IEC 62443: Provides standards and technical reports that define the implementation of secure control systems, especially IACS (industrial automation and control systems). The guidance is aimed at security practitioners, end users, control system manufacturers and so on.


Thursday, December 5, 2013

What are the advantages of network security?

The major advantage of having network security in place is that you keep your personal information, data and other files safe against people who are looking to steal or destroy them (and these may not be people who are directly against you, but simply people looking for networks where security is weak enough to get in), or against unauthorized people who want to misuse this information. Unauthorized users may be on the same network or some other network. We have listed below the advantages of having strong network security and proper security protocols:
- It provides protection to the client’s personal data on the network.
- It provides protection to information that is exchanged between hosts during transmission, from eavesdroppers.
- It provides protection to computer systems which can be otherwise rendered useless if attacked with a malicious virus or a trojan that keeps on passing out information.
- Prevents any attempts of doing harm to your system by spyware and malware attacks or hacking.
- Takes care of the access rights assigned to the users at different levels in a network such as in accounting systems.
- It is because of network security that private networks actually exist even if their information is passed over public networks.
- It helps in closing private networks and protecting them against intruders and other attacks.

Data in a private network is also not entirely safe, since it can be altered and tampered with by people on the same network, who may do so for many different reasons. The possibility of attack grows in proportion to the size of the network. Nowadays various organizations offer anti-virus software free of cost to the people accessing their networks, which has helped a great deal in reducing the threat of attacks.
When a large number of users are exposed to viruses or other attacks, the danger also increases for the organizations whose websites these users access regularly; this is why organizations distribute free anti-virus software to keep the danger at bay to some extent. Network security is important because it protects against malicious viruses, spyware, worms and trojans, and guards the system against its potential vulnerabilities. A network security policy is a systematic process for enforcing protection policies for data, applications, hosts and so on, and it provides guidance on how digital identities should be maintained. The security infrastructure may vary from one host to another and from one network to another; with network security, the administrator gets centralized control over all of them when they are based in one virtual organization.
There are a number of issues that network security must address to keep viruses and similar attacks at bay. To prevent a virus from infecting your system or network, the security software must automatically keep its database updated on all the user machines. Another measure is to install scanners on every machine and device accessing the network, including newer devices like tablets. These scanners work well for keeping out e-mails infected with trojans, worms and viruses.
At the same time, it is also important that users are educated about the need for network security and about what not to do. Without appropriate knowledge you won't know which security options should be selected for enforcement, and you might end up with a security policy that barely protects your system. For example, if you receive an email whose source you don't know or don't trust, just don't open it; the chances are it might contain some malicious file which, if downloaded, can eat up your data.
It is true that anti-virus software is effective in guarding against viruses, but signatures are developed only after a virus has appeared; anti-virus tools inevitably lag behind the viruses themselves and cover only those that already exist, not those newly created. Hence user awareness and security safeguards are very important.


Tuesday, December 3, 2013

What is Orthogonal Array testing? - An explanation

There are a number of black box testing techniques; the one discussed in this post is orthogonal array testing. This technique provides a systematic as well as statistical strategy for testing software. It is useful when the number of inputs to the system is relatively small, yet still too large to allow exhaustive testing of every possible input. The technique has proved quite helpful in discovering errors that indicate faulty logic in software systems. Orthogonal arrays can be applied in various testing types, such as the following:
- User interface or UI testing
- System testing
- Regression testing
- Configuration testing
- Performance testing and so on.

The permutations of factor levels that make up a single treatment have to be chosen in an uncorrelated way, so that every single treatment gives you a piece of information that is different from the others. The advantage of organizing testing in such a way is that a minimum number of experiments is required for gathering the same information. Orthogonality is a property exhibited by orthogonal vectors, which have the following properties:
- The information conveyed by each vector is different from the information conveyed by the other vectors in the sequence; that is, as mentioned, the information conveyed by each treatment is unique to it. This is important, as otherwise there would be redundancy.
- The individual signals are easy to separate out of a linear addition of them.
- All the vectors are statistically independent of each other, which means that there is no correlation between them.
- When the individual components are added linearly, the result is an arithmetic sum.

Suppose a system has 3 parameters, each of which can take 3 values. Testing all the parameter combinations would require a total of 27 test cases, which is quite time consuming, so we use an orthogonal array to select a subset of these combinations (see the sketch after the list below). As a result of using orthogonal array testing, the test coverage area is maximized while the number of test cases that have to be considered is minimized. The technique rests on the assumption that the selected pairwise combinations will catch the maximum number of defects, and that these combinations are sufficient for catching the faults; the interaction of the input parameters among themselves is thereby also considered. The array is said to be orthogonal because every pairwise combination occurs exactly once. The results of the test cases are assessed in terms of:
- Single mode faults
- Double mode faults
- Multimode faults
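
For the 3-parameter, 3-value example above, a minimal sketch of the standard L9 orthogonal array can be generated with the Latin-square rule below (a hypothetical little program; in practice the arrays are usually looked up from published tables):

#include <stdio.h>

/* The L9 orthogonal array for 3 parameters with 3 levels each.
 * The rule c = (a + b) % 3 yields 9 rows in which every PAIR of
 * columns contains each of the 9 level combinations exactly once:
 * full pairwise coverage with 9 tests instead of the exhaustive 27. */
int main(void)
{
    printf("test  A B C\n");
    int t = 1;
    for (int a = 0; a < 3; a++)
        for (int b = 0; b < 3; b++) {
            int c = (a + b) % 3;  /* third column derived from first two */
            printf("%4d  %d %d %d\n", t++, a, b, c);
        }
    return 0;
}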

Below mentioned are the major benefits of using this technique:
- The testing cycle time is reduced.
- The analysis process gets simpler.
- Test cases are balanced, which means that defect isolation and performance assessments are straightforward.
- It saves on costs when compared with pair-wise testing.

Coverage of all possible defects could only be provided by testing every possible combination, but our schedule and budget often do not permit this, so we are forced to select only a sample of combinations from the test domain. Orthogonal array testing is a means of generating samples that provide high coverage for validating the test domain effectively, which has made the technique particularly useful in integration testing and in testing configurable options. Software testers often face a dilemma when selecting test cases: the quality of the software cannot be tested directly, only its defects can be detected, and exhaustive testing is difficult even in small systems.


Monday, December 2, 2013

Some advantages and disadvantages of white box testing

White box testing can be applied at the following levels in software testing process:
- Unit testing
- Integration testing
- System testing

However, it is most commonly carried out at the unit level, where it is used for exercising the paths through individual program units; at the integration level it tests the paths between different units, and at the system level the paths between the various sub-systems. In this article we discuss the advantages and disadvantages of using this testing methodology.

Advantages: This testing methodology ranks among the most widely used, with an increasing number of people adopting it, although it does require technical knowledge and capability. It has three major advantages.
First, the tester works with knowledge of the source code, unlike in other methodologies where the tester is not supposed to know much about it. This helps in testing the application thoroughly.
Second, this methodology makes it possible to optimize the code by uncovering hidden errors and removing them.
Third, it provides an opportunity for introspection.
Another plus point is that testing can start with just one developed unit at hand; we don't have to wait for the whole program, complete with GUI, to be ready. The majority of the paths are covered in white box testing.

Disadvantages: Just as white box testing has advantages, it has its minus points too.
The first major disadvantage is that it gets very complex once you start, because every path in the program has to be tested, and identifying all the paths can become very time consuming and difficult indeed. It requires the programmer as well as the tester to have a great deal of knowledge at this level of detail.
The second major disadvantage is that white box testing consumes too much time. In most projects it is not possible to test each and every single path; it is certain that some paths will go unnoticed. This is why white box test cases are very complex and can only be implemented with a thorough knowledge of the application and code. Maintaining the test scripts also proves to be a burden as complexity increases and changes are made to the implementation.

This testing methodology also requires many testing tools which might not be instantly at hand. The work is analogous to that of a mechanic: the programmer examines the source code to see why it is not working.
First, the tester or the programmer must analyze the source code with the help of comprehensive software documentation, code and samples.
Second, the tester needs to think of ways to disrupt the normal functioning of the application, and of the input factors that could cause the program to go awry. Based upon these assessments, the white box testing techniques can be implemented, and the assessments have to be made carefully for white box testing to be successful. In simple words, white box testing is a means of verifying the source code.
The logic and structure of the code must be known to the person testing it. Logical decisions are exercised, each testing a different path. In white box testing, programmers often have to stand in for testers, since any other testers who are hired will need time to understand the source code of the program, or must already have a high degree of technical knowledge.


Sunday, December 1, 2013

Testing: A brief summary of white box testing

We know white box testing by many other names as well, such as glass box testing, clear box testing, structural testing, transparent box testing etc. This methodology is used for testing the internal structure of the software, or to be precise, how the application actually works and whether it works as desired. This is just the opposite of black box testing, which tests only the system's functionality without getting into the internal structure. The test cases for white box testing are designed based upon programming skill and the internal perspective from which the system is viewed. From the many possible inputs, some are selected to exercise all paths through the program and determine whether they produce the desired outputs.
A number of errors can be discovered using this testing methodology. But it is not capable of detecting the parts of the specification that have not been implemented or are missing. The following testing techniques are included under the white box testing:

- Data flow testing
- Path testing
- Decision coverage
- Control flow testing
- Branch testing
- Statement coverage

This testing methodology tests the source code of the application using test cases derived through the above-mentioned techniques, all of which serve as guidelines for creating an error-free environment. Any fragile piece of code in the whole program can be examined using white box testing. These techniques are the building blocks of white box testing, and its essence lies in carefully testing the source code so that no errors occur later on. Using these different techniques, all the visible paths through the source code can be exercised and an error-free environment created; the tester, however, must be able to tell which path of the code is being tested and what the output should be. Now we shall discuss the various levels of white box testing:

- Unit testing: White box testing done during this phase ensures that the code works as desired before it is integrated with units that have already been tested. This helps catch errors at an early stage, so that it does not become difficult to find them later, when the units are integrated and the system's complexity increases.
- Integration testing: Test cases at this level are meant to test how the interfaces of the various units interact with each other, i.e., the system's behavior is tested in an open environment. During such testing, any interaction unfamiliar to the programmer comes to light.
- Regression testing: This phase reuses the recycled test cases from white box testing.

For white box testing to be effective, the tester must understand the purpose of the source code so that it is tested well, and must know the application well enough to create effective and appropriate test cases that handle every visible path. Once the source code has been understood, that analysis is what the team uses to prepare the test cases. Making these test cases involves the following steps in white box testing:
- Input, which may include functional requirements, specifications, source code etc.
- Processing, which carries out the risk analysis, prepares a test plan, executes the tests and gives the results.
- Output, which provides the final result.


Saturday, November 30, 2013

Security - What are the principal ways to secure a wireless network?

Securing a wireless network is as important as securing a wired network, and in many cases more so, since it can be easier to tap into a wireless network. At one time or another all of us have probably used a WiFi network that was insecure (perhaps highly insecure, or with recent holes not yet patched). This would not do much harm if you are just honestly looking to connect to the internet, but if you own an insecure wireless network, you should know that not everyone is as honest as you are. Attackers with bad intentions can learn what activities are taking place on your network and how your network resources can be exploited. This problem can be addressed by following some basic principles of securing your wireless network:
- WEP and WPA encryption: Encryption is the first line of defense that you can call upon for the security of your network; the data that your PC transmits to the wireless router is encoded. In many routers, however, this option is disabled by default, so first check whether it is enabled. Leaving it disabled exposes your network to several vulnerabilities. Keep encryption enabled and use the strongest form your equipment supports. WPA2 is more sophisticated than WPA, and WEP can be cracked easily, which is why it has been superseded by WPA and its most recent version, WPA2. One thing to take care of is that all devices must use the same protocol, either WEP or WPA; the two cannot be mixed. WEP uses the same key every time, whereas WPA changes keys dynamically, which makes it far harder to crack. The encryption key must have a strong password, such as a combination of numbers and letters more than 14 characters long. If you have an old router that supports only WEP, use a 128-bit WEP key, the strongest WEP offers, and keep checking the manufacturer's website for a firmware update that may add WPA support. If no update is available, you can replace the old routers and adapters with newer models that support WPA. It is better to go with hybrid routers that support both WPA and WPA2; this provides stronger encryption while maintaining compatibility with other adapters.
It should also be made sure that the default network name and password have been changed; doing so makes it more difficult for hackers to break into the system and change its configuration. Even if you do have a firewall in the router, additional security measures have to be taken. The firewall keeps outsiders from breaking into the system over the internet, but it does not stop people within the geographical range of the WiFi signal from accessing the network, and there are readily available tools for sniffing the traffic passing through wireless networks. To supplement the security, a software firewall should also be installed on the computer. Public hotspots are typically very insecure; without precautions, you should assume that your internet traffic, whether incoming or outgoing, is visible to attackers. Before connecting to a network, always make sure it is a legitimate one, that your firewall is enabled, and that file sharing is turned off. You can verify that the appropriate security options are selected in your firewall settings. These are some tips to increase your security level when dealing with WiFi.


Thursday, November 28, 2013

Security - What are some of the different ranges of wireless security measures?

When you get to be serious about wireless security, there are several mechanisms / measures that you can take, here are some details of the problem and solutions:
The shortcomings of first-generation wireless networking (rampant threats, protocol vulnerabilities and so on) made it hard to decide whether or not you should deploy a wireless local area network (WLAN) at all. Sometimes you might feel like banning WLANs altogether, neglecting their business advantages, out of fear of rogue APs (access points) cropping up. Either way it is a no-win situation. Over a period of time, however, wireless protocols have been revised with improvements that have made them more secure. Given the various threats (some of which can be quite innovative), wireless security has to be taken as seriously as other types of network threat.
A WLAN security suite should be installed to provide security. Wireless security can be further enhanced by proper knowledge of how to correctly integrate wireless devices with wired networks, by upgrading the existing security tools, and by due selection of the appropriate security technologies. We should make sure that security solutions for virtual private networks are based on the present generation of encryption and authentication protocols. Because threats keep coming in new and improved forms, you need to keep monitoring the health of your network to keep it secure. Attackers are always waiting to spot an unprotected WLAN, invade it, and turn it to their own ends.
It is quite easy to record wireless traffic and eventually break in, obtaining valuable information such as proprietary data, login details, server addresses and so on (nowadays, stealing credit card details seems to have become a business for attackers). In addition to stealing information, attackers can also take control of networks and use them for transmitting spam, stealing bandwidth, or serving as a launch pad for attacks on other networks. The traffic can be recorded and modified, and the consequences can be legal or financial.
A business can be disrupted even by an attacker with low technical skill, using packaged scripts that make it easy to attack networks and hunt for weak points (for example, a known security hole that has not been fixed, which the script uses to get inside and eventually gain access). The attacker can flood your internet uplinks, wired networks and access points with wireless packets. You should know what you are defending your systems from and why each possible point of entry needs protecting. If you don't know this, you don't really have a chance: at some time or other your network will be left without protection, and all the security measures will be in vain.
The identification of assets and of the impact of their loss is critical for security analysis. If you are using connection methods such as DSL, dial-up or wireless, the access requirements should be defined by your security policy. If your organization follows a remote access policy for telecommuters, it should be expanded to incorporate wireless; if there is no such policy, one should be created, and the scenarios unique to wireless networks must be included. The rules of the wireless network are different for employees and for office visitors. Public areas have network jacks that are typically associated with known addresses and are sometimes disabled, but PDAs and laptops can easily connect to wireless stations and access points in the vicinity. This serves as both an opportunity and a threat.
For guests, peer-to-peer networking should be prohibited, and sessions should be permitted only through certain access points with limited bandwidth and duration. After the identification of the assets, the risks should be enumerated; the last step is quantifying the risks. In security it is always important to weigh the risk against the cost. Once you have got this right, the other WLAN alternatives can be considered. Before setting up access points, you should survey the WLAN using a discovery tool. Some set-up wizards have made it possible for employees to deploy rogue access points through which corporate information and assets can be exposed to the outside world; they can also introduce interference into the WLAN. These rogue access points must be eliminated. With such surveys, you can also find workstations that are not authorized to access the internet.


Wednesday, November 27, 2013

How are Smart cards, USB tokens, and software tokens used for security?

In this article we discuss how smart cards, USB tokens and software tokens are used for implementing security.

Smart card: This is a type of ICC (integrated circuit card), a pocket-sized card with embedded circuits, usually made of plastic (typically polyvinyl chloride). Smart cards are used for authentication, identification, application processing and data storage. They serve as a strong means of authentication within large organizations for SSO, i.e., single sign-on. They are also used as ATM cards, SIMs in mobile phones, fuel cards, pre-payment cards, access control cards, high-security identification cards, phone payment cards, public transport payment cards and so on. Sometimes they are also used as electronic wallets, i.e., funds can be loaded onto the card for paying merchants, retailers, vending machines, parking meters and so on when needed, without establishing a connection to the bank; the exchange of money is protected by cryptographic protocols, and the card can even be used by someone who is not its owner. Some cards, such as the German Geldkarte, are used for age verification. Some commonly known card brands are:
- Visa
- MasterCard
- American Express
- Discover

Security token or USB token: This is a physical device used for user authorization by a security system, easing the authentication process. These devices verify the identity of the user electronically; they normally replace passwords (or are used along with a password) and use a key for gaining access. Such tokens might store cryptographic keys, biometric data, digital signatures etc. Some come in tamper-resistant packaging, while others have a small keypad for entering a PIN. Some tokens have a USB connector and are therefore called USB tokens; others come with a wireless Bluetooth interface, through which the generated key-number sequence can be transferred to the system. A token can store four types of passwords:
- Static password token
- Synchronous dynamic password token
- Asynchronous password token
- Challenge response token

Tokens contain chips whose functions range from the very simple to the very complex; in the complex case they use multiple authentication methods. Simple tokens do not need to be connected to the system at all.
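
To illustrate just the challenge-response idea from the list above, here is a toy sketch. Everything in it is illustrative: FNV-1a is not a cryptographic hash and no real token protocol works exactly this way, but the shape (server issues a fresh challenge, token mixes it with a shared secret, server verifies) is the essence of a challenge-response token.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy 32-bit FNV-1a hash, seeded with the challenge.
 * NOT cryptographically secure -- for illustration only. */
static uint32_t fnv1a(const char *data, size_t len, uint32_t seed)
{
    uint32_t h = seed ^ 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= (uint8_t)data[i];
        h *= 16777619u;
    }
    return h;
}

int main(void)
{
    const char *secret = "shared-secret"; /* provisioned into the token */
    uint32_t challenge = 0x5EED1234u;     /* fresh nonce from the server */

    /* token side: response = H(secret, challenge) */
    uint32_t response = fnv1a(secret, strlen(secret), challenge);

    /* server side: recompute with the same secret and compare */
    uint32_t expected = fnv1a(secret, strlen(secret), challenge);
    printf("authenticated: %s\n", response == expected ? "yes" : "no");
    return 0;
}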

Software tokens: This is a type of two-factor authentication security device used for authorizing the use of computer services. These tokens are stored on general-purpose electronic devices such as mobile phones, PDAs, PCs, laptops etc., the opposite of hardware tokens, which live on a dedicated hardware device. Both types of token are quite vulnerable to man-in-the-middle attacks and other phishing attacks. However, software tokens do have some benefits over smart cards and USB tokens: you don't have to carry them around, they don't run on batteries that might run out, and they are less expensive than hardware tokens. These tokens have two primary architectures, namely public-key cryptography and shared secret. In the shared secret architecture, the administrator gives each end user a configuration file containing the user ID, PIN and secret key. This type is open to many kinds of vulnerabilities: a stolen file can be compromised by attackers, and the configuration files are also subject to offline attacks and are difficult to distribute. The latest software tokens use the public-key cryptography architecture to overcome most of the drawbacks of the shared secret architecture.


Tuesday, November 26, 2013

Security - What is meant by a spoofing attack?

A spoofing attack, in the area of network security, is a situation in which a person or program successfully masquerades as another by falsifying inbound data, and thereby gains an illegitimate advantage. A number of TCP/IP protocols have no mechanism for authenticating the source or destination of a message, which makes them highly vulnerable to spoofing attacks; applications therefore have to take extra precautions to verify the identity of the sending and receiving host. In IP address spoofing, IP packets are created with a forged source IP address in order to impersonate another computer system and to conceal the sender's identity. IP is the basic protocol for sending data across networks, and each packet carries numerical addresses in its header; the header field is simply forged so that the packet appears to come from someone else.
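
To see why this forging is possible at all, here is a simplified sketch of the IPv4 header as a C struct (the layout is abridged and the program purely illustrative): the source address is just a field the sender fills in, and classic IP does nothing to verify it.

#include <stdint.h>
#include <stdio.h>

/* Abridged IPv4 header: several fields are elided for brevity. */
struct ipv4_header {
    uint8_t  version_ihl;
    uint8_t  tos;
    uint16_t total_length;
    /* ... identification, flags, TTL, protocol, checksum ... */
    uint32_t source_address;      /* set by the sender -- unauthenticated */
    uint32_t destination_address;
};

int main(void)
{
    struct ipv4_header h = {0};
    h.source_address = 0x0A000001; /* 10.0.0.1 -- any value can be written */
    printf("claimed source: 0x%08X\n", (unsigned)h.source_address);
    return 0;
}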
The man-in-the-middle attacks against the network’s hosts are often carried out with the help of two types of spoofing namely ARP spoofing and the IP spoofing.
Firewalls capable of deep packet inspection can prevent spoofing attacks from taking advantage of the TCP/IP protocols, as can measures for verifying the identity of a message's sender or recipient. Some pay sites allow access only through a certain log-in page that they approve, and enforce this by checking the referrer header of the HTTP request. That referrer header, however, can be changed by unauthorized users to gain access to the site content; this is called referrer spoofing.
Sometimes copyright holders also use spoofing, inserting unlistenable, distorted versions of works onto file-sharing networks; this is termed poisoning the file-sharing network. Another type of spoofing attack is caller ID spoofing. Public telephone networks often provide caller ID information, including the name and number of the caller. VoIP (voice over IP) is one technology in which callers can forge the caller ID information so as to present false names and numbers, and the gateways that connect VoIP to the public networks forward this false information, allowing the spoofing.
A spoofed call may also originate in another country, in which case the laws of the recipient's country may not apply to the caller; this limits the effectiveness of laws against caller ID spoofing, resulting in a lot of scams. Another type is email spoofing, or email address spoofing. The sender information you see in an email can easily be spoofed, a technique spammers use quite often to hide their identity, and it creates problems such as spam backscatter, misdirected bounces and so on.
A GPS receiver can be deceived by a GPS spoofing attack, in which counterfeit GPS signals, structured to appear identical to normal GPS signals, are broadcast; it can also be done by capturing genuine signals and rebroadcasting them at some other point. Either way, the receiver estimates its position wrongly. One variant of the GPS spoofing attack is the carry-off attack, which begins by broadcasting counterfeit signals synchronized with the genuine signals, then gradually increases the power of the counterfeit signals, causing the receiver to drift away from the genuine ones.


Monday, November 25, 2013

Security - What is meant by smurf attack?

The smurf attack is a type of denial-of-service attack. It involves broadcasting a large number of ICMP (internet control message protocol) packets to a computer network via an IP broadcast address, with the spoofed source IP address of the victim. Most of the devices online on that network respond to the broadcast by replying to the source IP address, and since the number of devices replying is very large, the victim's system gets flooded with incoming traffic. This slows the victim's system down until it becomes impossible to work on. The attack was named after the source file of its exploit program, 'smurf.c', released by TFreak in 1997. At that time a lot of IP networks were vulnerable to this attack; today most networks are immune, and very few remain vulnerable.
Now let us talk about the mitigation of these attacks. The problem can be addressed in two steps, as mentioned below:
- Individual hosts and routers should be configured not to respond to ICMP requests sent to broadcast addresses.
- Routers should be configured not to forward packets directed at broadcast addresses. The earlier standards required routers to forward such packets by default; in 1999 the standards were changed (RFC 2644) so that directed broadcasts are no longer forwarded by default.

Another solution to this problem is network ingress filtering, which rejects ICMP packets whose source address has been forged. An example of a Cisco router configuration that disables directed-broadcast forwarding on an interface is:
Router(config-if)# no ip directed-broadcast

Even though this configuration prevents a network from participating in a smurf attack, it does not prevent the network from becoming a target. There are computer networks whose configuration lends them to being used in such attacks; these are termed smurf amplifiers. They worsen a smurf attack because they generate a great many ICMP replies, all sent to the spoofed IP address of the victim computer.

A variation of the smurf attack is the 'fraggle attack'. Here the attacker sends a large amount of UDP traffic, carrying the victim's spoofed IP address, to an IP broadcast address at ports 7 and 19 (echo and chargen respectively). The attack works much like the original smurf attack: all the devices on the network send their traffic to the victim's address, causing the same kind of flooding. The source code for this attack, fraggle.c, was also released by TFreak.
Smurf attacks are thus a way of exploiting IP broadcast addressing to create a denial-of-service attack that renders the affected network inoperable. ICMP is ordinarily used by network administrators to exchange information about the state of the network, for example to ping devices and check whether they are functional; a functional device returns a response to the ping. When there are a large number of pings, and replies to them, heavy traffic is created which renders the network unusable. Since IP broadcast addressing is seldom needed, it can simply be disabled at the network routers; this is the suggestion given by CERT for coping with the problem of smurf attacks.


Thursday, November 21, 2013

Security: What is meant by heap overflow?

Two common kinds of overflow in computer programming are the stack-based buffer overflow and the heap overflow, and our focus in this post is on the second. A heap overflow is a variant of the buffer overflow that occurs in the heap data area, and the manner in which it can be exploited is quite different from the exploitation of stack-based overflows. Heap memory is allocated dynamically during the execution of the application and usually stores program data. Exploitation works by corrupting this data in specific ways so that the application overwrites internal structures such as the pointers in linked lists.
The canonical heap overflow technique overwrites the malloc metadata, i.e., the dynamic memory allocation linkage, and uses the resulting pointer exchange to overwrite a program function pointer. As an example, consider two buffers allocated adjacent to each other on the heap of a Linux process. When data is written across the boundary of the first buffer, the metadata of the second gets overwritten: the in-use bit of the second buffer can be cleared to 0 and its length set to a small negative value, allowing null bytes to be copied. When the program then calls free() on the first buffer, the allocator tries to merge the two buffers into one. The buffer being freed is then treated as holding the FD and BK pointers in its first 8 bytes, and this unlinking can be abused to overwrite a chosen pointer, though there are several reasons (chiefly the integrity checks in modern allocators) why this classic technique is no longer straightforwardly possible.
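
A deliberately broken sketch of the setup just described (illustrative only; the behavior is undefined and varies by allocator):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Two heap buffers allocated back to back, then a write past the end
 * of the first. This is undefined behavior: depending on the
 * allocator it tramples the second buffer's contents or the
 * allocator's own metadata, which is what heap exploitation abuses.
 * Do not expect stable behavior from this program. */
int main(void)
{
    char *first  = malloc(16);
    char *second = malloc(16);
    if (!first || !second)
        return 1;

    strcpy(second, "sentinel");
    /* 32 bytes into a 16-byte buffer: overruns 'first', trampling
     * whatever the allocator placed after it */
    memset(first, 'A', 32);

    printf("second now holds: %.16s\n", second);  /* may print AAAA... */
    free(second);
    free(first);    /* glibc may abort here on corrupted metadata */
    return 0;
}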
The consequences of a heap overflow include the following:
- An accidental overflow can corrupt data or make the program behave in an unexpected way, affecting any process that uses the memory area touched by the overflow.
- On operating systems without memory protection, any process can be affected.
- A deliberate exploitation of the overflow can alter data and change how a program using that data executes. An example is the Microsoft JPEG GDI+ vulnerability MS04-028. Heap overflows have also often been used in iOS jailbreaking to gain code execution for kernel exploitation.

Windows and Linux both provide three mechanisms that help prevent heap overflows or blunt their exploitation (other operating systems do not provide all three):
- Preventing the payload execution by means of code and data separation with hardware features.
- Introducing randomization so that there is no fixed offset for the heap.
- Introducing sanity checks in the heap manager.

The GNU libc, from version 2.3.6 onwards, comes with built-in protection against heap overflows and is capable of detecting them; for example, when the unlink function is called, it checks the consistency of the pointers. However, these protections only counter the older styles of exploitation and so are not perfect for modern operating systems. Linux has included support for the NX bit and ASLR since 2004. Microsoft shipped protection against heap overflows in Windows XP and Server 2003 service packs; later versions of the OS include the following:
- Heap entry metadata randomization
- Removal of the data structures that are commonly targeted.
- Randomized heap base address
- Algorithm variation etc. 


Wednesday, November 20, 2013

Security - What is meant by buffer overflow?

You might have heard of hacks happening from time to time that are caused by buffer overflow. Buffer overflow is also known as buffer overrun in computer security and programming terminology. It is an anomaly in which a program, while writing data to a buffer, overruns the buffer's boundary and writes into adjacent memory; it is a special case of violating memory safety rules. Inputs can be crafted to trigger buffer overflows so as to execute code or change the way the program works, causing the program to behave erratically: memory access errors, incorrect outputs, crashes and breaches of the security system. Buffer overflows are therefore considered a source of many software vulnerabilities that can be badly exploited. C and C++ are the programming languages that most commonly suffer from buffer overflow problems, because these languages provide no built-in protection against overwriting data or accessing memory in another part of the program.
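
As a minimal illustrative sketch (hypothetical code, not from any real product), here is the missing bounds check in action, together with a bounded alternative:

#include <stdio.h>
#include <string.h>

/* strcpy() copies until the terminating NUL with no regard for the
 * destination size; snprintf() bounds the write to the buffer. */
void unsafe_copy(const char *input)
{
    char buf[8];
    strcpy(buf, input);            /* overflows buf if input >= 8 chars */
    printf("%s\n", buf);
}

void safe_copy(const char *input)
{
    char buf[8];
    snprintf(buf, sizeof buf, "%s", input); /* truncates, never overruns */
    printf("%s\n", buf);
}

int main(void)
{
    safe_copy("this input is far too long for the buffer");
    /* unsafe_copy() on the same string would smash adjacent stack memory */
    return 0;
}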
As noted, these languages do not automatically check that data written to an array (their built-in form of buffer) stays within the array's boundaries, so buffer overflows must be prevented by implementing bounds checks. When data is written to a buffer without sufficient boundary checking, it can corrupt the data stored in adjacent memory addresses; overflows can also occur when data is copied from one buffer to another without first checking that it will fit. Techniques for exploiting a buffer overflow vulnerability vary by architecture, memory region and operating system; for example, there is a lot of difference between exploitation on the call stack and exploitation on the heap. The following protective countermeasures can be taken:
- Choice of programming language: The language being used has a profound impact on the occurrence of buffer overflows. As mentioned above, C and C++ have no built-in protection against this problem, though their libraries do provide a number of ways to buffer data safely and techniques to avoid overflows. Other languages provide runtime checking as well as compile-time checking, which flags the places where a program might overwrite data; examples are Eiffel, Ada, Smalltalk etc.
- Use of safe libraries: Avoiding buffer overflows is necessary for maintaining the correctness of the code, so standard library functions that are not bounds-checked should be avoided. There are abstract data type libraries that are well tested and centralized enough to perform buffer management automatically.
- Buffer overflow protection: This mechanism checks whether the stack has been altered when a function returns; if some modification has been made, the program exits with a segmentation fault. Examples of such systems are StackGuard, Libsafe, ProPolice and so on.
- Pointer protection: Buffer overflow exploitation involves manipulating pointers and their stored addresses. A compiler extension called PointGuard was developed to reliably prevent attackers from manipulating pointers and the addresses stored in them; this extension was never released commercially, but a similar version was implemented in Microsoft Windows.
- Executable space protection: This approach prevents the execution of code on the heap or the stack. Attackers use buffer overflows to insert arbitrary code into a program's memory; with executable space protection in place, any attempt to execute that code halts the program with an exception.


Tuesday, November 19, 2013

What are the different types of attacks that network face?

Without security measures and checks in the right place, we put our data at risk from various types of attack, many of them serious enough to cause significant data loss or outright theft of data (and when that data is something sensitive such as credit card numbers or social security numbers, it is a very serious matter indeed).
Attacks are of two types, namely active attacks and passive attacks. Active attacks involve altering information with the intention of corrupting or destroying the data or the network itself. If you do not have a security plan in place, your network and data are vulnerable to these types of attacks. In this article we discuss a few such attacks:
- Eavesdropping: Most network communication occurs in a very insecure format, i.e., clear text, which gives an attacker who has gained access to the data paths in a network the chance to listen in on, or interpret, the traffic. Eavesdropping on someone's communication is referred to as snooping or sniffing. The eavesdropper's ability to monitor the whole network is a great cause of concern for the administrator of an enterprise. Services based on cryptography can prevent this type of attack; without strong encryption, your data can be read by others as it traverses the network.
- Data modification: After the data has been read by an attacker or eavesdropper, altering it is his or her next step. The data in a packet can be modified without the knowledge of the sender or the receiver. Even where confidentiality is not required in every communication, no message should ever be modified in transit.
- IP address spoofing (identity spoofing): The computer's IP address is used by most operating systems and networks to identify whether an entity is valid. In some cases it is possible to assume a false IP address; this is called identity spoofing. An attacker might use special programs to construct IP packets that appear to come from systems inside the corporate intranet. After gaining access to the network with a valid IP address, the attacker can reroute, delete or modify data.
- Password-based attacks: Password-based access control is a common denominator of many network security plans and operating systems; your user ID and password determine your access rights. However, older applications do not always protect this identity information as it is passed through the network for validation, which may let an eavesdropper pose as an authorized user to gain access to the data. Whenever an attacker finds a valid user account, he or she gets exactly the rights possessed by the real user; if that user is an admin of the network, the attacker gets admin rights and can create accounts for subsequent use. After gaining access to an account, the attacker can obtain lists of authorized users and network information, and can make changes to the configurations, routing tables and access controls of networks and servers.
- Denial-of-service attack: This attack prevents valid users from using the network or the computer. It can divert the attention of the staff from the internal information systems so that they don't notice the intrusion while the attacker makes further attacks, send invalid data to network services or applications, or simply overload the whole network so that it shuts down.


Thursday, November 14, 2013

How is security management done in medium sized businesses?

There are a number of security risks that affect businesses, whether small, medium or large. Common to handling such risks and preventing them from causing major loss is the application of proper risk management principles. This happens in several stages: first, the risks are identified along with their causes; second, the consequences of each risk coming true are worked out (which could even mean going to the worst case scenario); third, the impact of each risk on security is determined and the risks are prioritized based upon this assessment (a minimal sketch of such prioritization follows below).
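To make these stages concrete, here is a minimal sketch of such prioritization in Python; the risk names and the 1-5 scoring scale are invented purely for illustration:

# Minimal sketch of risk prioritization: score each identified risk by
# likelihood and impact (1-5 scales here, an illustrative assumption),
# then rank so the highest-exposure risks are mitigated first.
risks = [
    {"name": "hard disk crash",        "likelihood": 3, "impact": 4},
    {"name": "external hacker attack", "likelihood": 2, "impact": 5},
    {"name": "employee misuse",        "likelihood": 4, "impact": 2},
]

for risk in risks:
    risk["exposure"] = risk["likelihood"] * risk["impact"]

# Highest exposure first drives the order of mitigation work.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["name"]}: exposure {risk["exposure"]}')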
There are two types of security threats, namely external security threats and internal security threats.

External security threats include:
- Attacks from competitors who want access to intellectual property or want to determine other secrets of the organization
- Hackers who want to get into the company and can then cause a huge amount of damage
- In today's world, risks also include external worms or other attackers getting access to the internal infrastructure of the organization.

The internal threats include:
- Employees trying to get access to areas of the organization that they should not have access to.
- Usage by employees of buggy software, or software containing trojans, which increases the risk to the infrastructure of the company.
- Data being lost to hard disk crashes or the like.
- Insecure data transfers, such as those increasingly used for cloud based transactions.

Now let us see how security management is done in medium sized businesses. They can use the following:
- A unified threat management system can be designed & implemented with an expert in charge.
- A strong firewall can be used.
- For the purpose of authentication, strong passwords should be used, and changed on a monthly or bi-weekly basis as required.
- A robust password must be used for a wireless connection.
- An optional network analyzer or network monitoring software can be used.
- A virtual private network or VPN can be used for maintaining communication between satellite offices and the main office. There are many advantages of using a VPN: the expense of leased data lines is avoided, while the VPN still provides a very secure channel for communication that imitates a leased private line. What makes the network private is the encryption of the links, which makes it very convenient to use. This is a very good choice for medium sized businesses that need such connectivity and want security.
- Clear employee guidelines should be in place for accessing the internet and non-work related websites, and for sending and receiving information.
- All accounts must be monitored for accountability, so that the individuals logging on to the company intranet can be tracked.
- A backup policy should be created for recovering data in case hardware or software fails, or a security breach affects the data; a backup could be as simple as the timestamped archive sketched below.
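For the backup point above, a minimal sketch of a timestamped backup in Python (both paths are placeholders for illustration); a script like this could be scheduled to run daily:

import shutil
from datetime import datetime

SOURCE_DIR = "/srv/company-data"   # placeholder: the data to protect
BACKUP_ROOT = "/mnt/backup"        # placeholder: the backup destination

# Copy the data directory into a timestamped compressed archive so that
# data can be recovered after a hardware failure or a breach.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
archive = shutil.make_archive(f"{BACKUP_ROOT}/data-{stamp}", "gztar", SOURCE_DIR)
print(f"Backup written to {archive}")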


Saturday, November 9, 2013

How is security management done in large businesses?

Security management is very much required, in fact essential, if you are running a large scale business or are responsible for its security. In this article we discuss some steps that can be considered for increasing security (you might take issue with some of the steps, or perform additional ones):
- There might be a lot of unwanted people from whom you wish to keep your network and database safe. For this purpose, a strong network guard must be in place, with an equally strong firewall and proxy.
- Basic anti-virus software would not work here; you have to go for strong antivirus packages. There are also separate internet security software packages.
- Strong passwords should be used for authentication, changed on a bi-weekly or weekly basis if a wireless connection is being used. The password must be robust and follow the protocols that prevent it from being guessed.
- A network analyzer can be deployed for the purpose of monitoring the network, and used as and when required.
- There are certain physical security precautions that can be exercised:
a) Physical security management techniques such as closed circuit television can be implemented for restricted zones, with security staff viewing these feeds.
b) The perimeter of the company can be marked by security fencing, backed up by closed circuit television cameras.
c) The security rooms and server rooms are fire-sensitive, so they should be equipped with fire extinguishers.
d) Physical security can be maximized with security guards who have been given specific protocols to follow.
Some of the above points hold good for large government institutions and schools too. School networks can put up a firewall and proxy, adjustable to restrict outsiders from accessing the database. Schools too need strong internet security software packages, also because students tend to be the most curious and the most prone to using software that may carry viruses or worms. Librarians, administrators and teachers should constantly supervise the network to guarantee protection against security threats. An internet usage policy that is easy to understand, accept and enforce should differentiate between personally owned and school owned devices; institutes that provide higher education must also implement FERPA compliance.
Large government agencies should likewise use stronger firewalls and proxies to keep intruders at bay. Strong encryption must be used to safeguard communication. Wireless connections must be authorized via a whitelist, and all others blocked. All networking hardware must be deployed in secure zones, and a private network should be created upon which all the hosts reside, so that they are not visible to outsiders.
Security management procedures used by various organizations include risk analysis, risk assessment, classification of information, categorization of assets, and rating the vulnerabilities of the system. These measures are followed to implement effective controls. The principles of risk management are followed for managing the security threats, which fall into two broad categories, namely external security threats and internal security threats.
Avoiding the creation of any opportunity for attackers is the best thing to do in the first place. The effectiveness of the controls used against these threats is assessed, as are the consequences of each risk coming true, and the risks are then prioritized as per the impact they can have on the security system.


Security management practices followed in home and small businesses

As there are different kinds and scales of networks, there are different types of security management for them. In this article we shall talk about how security management is done in home and small businesses. Given that the complexity is lower in these cases, only basic security is required for a small office or at home; compare this with larger scales, where a lot of effort and maintenance is required for large businesses and institutions. In home and small businesses, regularly used hardware and software suffices (rather than the sophisticated hardware and software used for the prevention of spamming, hacking and other kinds of malicious attacks in larger installations). Here we list some basic points for security management at home and in a small office:
- A basic firewall can be installed or even a unified threat management system can be used.
- Basic antivirus software will do the task if you are working in the Windows environment (as long as regular data patches and software updates are installed).
- Other software that can be installed for security includes anti-spyware programs. A number of anti-virus and anti-spyware packages are available in the market.
- If you are using a wireless connection, you must take care to secure it with a robust password. Wireless devices support a number of security methods; try to use the strongest of them, such as WPA2 with AES. TKIP is supported by a wider range of devices, but should only be used where AES is not supported.
- When using wireless networks, the default SSID name of the network must be changed. Another measure is to disable the SSID broadcast, since it is not required for home use; note, however, that this can be easily bypassed by an attacker with some knowledge of how wireless traffic can be detected.
- You can enable MAC address filtering to keep track of all the MAC devices connected to your router on that network. Even though this is not strictly a security feature, it can be used to limit and monitor the DHCP address pool, controlling AP association through both inclusion and exclusion. However, it does add configuration work for the home or small business, which can start to become complex.
- Static IP addresses can be assigned to the devices connected to the network. This complements the other security features and makes the AP less attractive to attackers.
- The ICMP ping on the router must be disabled.
- You can also review the logs of the router and the firewall to identify any abnormal traffic or connections.
- Passwords must be set for all accounts (and not common passwords such as pass1234; make them hard to guess with a combination of upper and lower case letters, numbers and special characters). You can set these up randomly - for example, one of my passwords is 5Gtf$&^hsTF23%3G. Such random passwords cannot be guessed, and more sophisticated techniques would be needed to break them (and don't use the same password for multiple services); a small sketch of generating one follows after this list.
- If you are using a Windows operating system, you can create separate accounts for the family members to limit each member's activities.
- Children in the family must be given lessons about information security.
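As promised above, here is a minimal sketch of generating such a random password in Python; the length of 16 and the character set are illustrative choices, not requirements:

import secrets
import string

# Character set for generated passwords: upper and lower case letters,
# digits and a handful of special characters (an illustrative choice).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def random_password(length=16):
    # secrets is cryptographically secure, unlike the random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # prints something like p4XtR9bQ2hZ7mW3!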

Security management is about identifying the important assets of the user, which of course include the information assets, and checking whether the policies protecting these assets are implemented properly. It is also about protecting these assets from loss: the critical assets are identified and protected first, potential threats to the system are assessed, and measures are then taken to eliminate or minimize those threats. The security risks are managed by virtue of risk management principles, which involve identification of the risks, assessment of the effectiveness of the control strategies, and determination of the consequences. The risks are weighed by the impact they can have, classified, and an appropriate response is selected for each.


Friday, November 8, 2013

Quick detail of some network security tools

Every web application and site can face pretty intense security threats, such as cross site scripting, account hacking and so on, with new ones emerging on a regular basis. The load on security vendors is increasing day by day: they must build products that offer more security while being able to respond quickly to new threats, since as we develop new security measures and tools, the attackers also develop new methods for defeating them. Some network security tools have to be paid for, while others are open source (and these can help you a lot and are effective). To a great extent these tools perform the task exactly as you like, but sometimes their settings have to be customized to the security needs of the network's structure. Some examples of such tools are Ettercap, Nikto, Nessus etc.; several are discussed below:
1. Wireshark: This is a multi-platform network protocol analyzer which is available as an open source tool. Using it, data can be examined from a capture file on disk or from a live network, browsed in detail, and exact packet contents obtained. It comes with very useful features such as a rich display filter language and a view of the reconstructed TCP session stream, along with support for a large number of media types and protocols (for a tiny programmatic flavour of packet capture, see the sketch after this list).
2. Metasploit: This one is also an open source tool, but with advanced features for the development and testing of exploit code. The Metasploit framework is now used as an exploitation research outlet because of its extensible model, which integrates encoders, exploits, payloads and no-op generators. This tool makes it easy for you to write your own exploits. An official Java based GUI is now included with the framework.
3. Nessus: This tool provides excellent capabilities for scanning Unix systems for potential vulnerabilities. It was an open source tool until 2008; it now comes at a good price and is still ahead of many of its competitors (a licensed version is also available for use in a home network). The tool boasts a whopping 46000 plugins. Its features include an embedded scripting language that allows you to write your own plugins, a client-server architecture with a web-based interface, and local as well as remote security checks.
4. Aircrack: This is a tool suite developed especially for cracking 802.11 a/b/g WEP and WPA keys. It makes use of well-known cracking algorithms to recover wireless keys once enough encrypted packets have been gathered. Some of the tools in this suite are airodump, aircrack, airdecap, aireplay and so on.
5. Snort: This tool has proved very good at detecting and preventing network intrusions, and is very effective for traffic analysis and packet logging on networks. It is capable of detecting thousands of worms by means of content searching, protocol analysis, pre-processors and so on, as well as port scans, vulnerability exploit attempts, etc. It is based upon a quite flexible rule-based language.
6. Cain and Abel: This is a tool that has been developed for Windows-only password recovery, along with various other tasks. It is capable of performing the following functions:
- Recovery of the password by sniffing the network.
- Cracking encrypted passwords by means of dictionary attacks.
- Cryptanalysis and brute – force attacks.
- Recording VoIP conversations.
- Revealing the password boxes.
- Decoding the scrambled passwords.
- Analysis of routing protocols.
The tool comes with proper documentation.
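To give a small flavour of what analyzers such as Wireshark and Snort do at a far larger scale, here is a minimal packet-capture sketch using the third-party scapy library (an assumption on my part - scapy must be installed separately, capture normally needs root privileges, and this is not how Wireshark itself is implemented):

# Minimal sketch of packet capture in the spirit of a protocol analyzer.
# Assumes the third-party "scapy" library (pip install scapy) and
# sufficient privileges to capture on the default interface.
from scapy.all import sniff

def show(packet):
    # Print a one-line summary of each captured packet.
    print(packet.summary())

# Capture ten TCP packets, printing a summary of each, then stop.
sniff(filter="tcp", prn=show, count=10)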

There are others as well; this is a quick summary of some of them. If you use others or have some feedback, do let me know via comments.


Thursday, October 31, 2013

Supporting a previous version of the product - level of support

Suppose you are part of a team whose application has released several versions. What do you do about support for previous versions ? There is a strong temptation to focus most of the effort towards creating great new versions, and to aim whatever support exists for previous versions at moving users to the latest version rather than really supporting them while they are on the previous one. After all, the money is made when users move to the latest version, and the more the support for the previous version, the more the cost of providing it.
However, this philosophy has undergone some changes in the last many years. No organization will directly admit that they do not provide good support to users of previous versions, but at the same time, there are sizable costs associated with supporting users who are on previous versions. What are some of these costs ?

- One of these costs involves releasing patches and updates for previous versions. When software has been developed and released, you would hope that there would be no real defects left, and hence no need for software patches on a regular basis. However, this is far from true. Once a software version has been released, defects that were not found during the testing phase will pop up, either found by the team in the new version (but also existing in the previous version), or found by users and affecting some of their workflows. Some of these defects will be serious enough that they need to be fixed - and in today's world, where users can share problems and report lack of support on various social forums, it can be critical to continue to interact with users. As a result, the organization needs to evaluate such issues and be seen to be responsive.
And then there is the overall environment. There can be updates issued by the operating system, or updates to other files, that cause problems. Once, for example, an update issued by a popular anti-virus product caused some files in the application to get frozen, which crippled a section of the software (which had been released 2 versions back). Like many such problems, these were reported by users who came across them (typically the users are able to see the problem, but to figure it out, some intense technical research is needed from the support team, sometimes backed up by the product team). If the product team allows such problems to fester on user forums without some visible research or effort undertaken, it can lead to user dissatisfaction. The organization could claim that these were for previous versions of the software and that such problems are fixed in the latest version, but such a defense is not easy to sustain and few users are convinced by it. In fact, such a defense tends to put off users, who conclude that they too will not be supported when they face such a problem later on.

Read more about this complex topic in the next post on this series (TBD).


Wednesday, October 30, 2013

Ensuring coordination with respect to 3rd party icons such as from Facebook / Youtube .. (contd)

I wrote about this topic in a previous post (using icons from external services such as Facebook or Twitter). One of the unsaid conclusions from that post was that such planning must be done well in advance, so that one does not run into issues near the end of the schedule - tracking items where multiple parties are involved can be time consuming and frustrating in the ending stage of a schedule. So what are some of the points to keep in mind when using icons from 3rd party services such as Youtube, Facebook and Twitter:
- If you have used the icons in a previous version of the software, verify when starting a new version whether there is a requirement to update the icons. If not, it is easiest to just continue with the old icons.
- When you are doing some re-design on your end and need a different icon, note that many of these services make multiple icons available, in different sizes. However, if none of those icons really fit the UI of your application, things can get tricky: the terms and conditions of most of these services do not allow you to use an icon other than the ones they have supplied, so modify your UI accordingly.
- If there are multiple products within the organization that use these services, it makes sense for these products to collaborate and understand each other's use of the services. We had a classic case where one team had a relationship with the product manager of one of these services (through one of the team members), and this contact helped in refining the use of some icons.
- Make sure that the legal team is well conversant with the usage of these icons (and overall with the incorporation of connectivity with these services). Even though these services seem present everywhere, they come with terms and conditions that must be met by the products interacting with them.
- Most of these services have an active development community. It is important that at least one of the development team members is part of this community, since these communities are the first to be notified of any change in the policies of the external service provider. This is pretty important. We were connecting with one of these services, and it turned out that there was a notification about a change in the API that we were using, and we did not know about it; when did we get to know ? When our connectivity to the service was lost.


Thursday, October 24, 2013

How is security management done in home and small businesses?

As there are different kinds of networks, so there are different types of security management for them. In this article we shall talk about how security management is done in home and small businesses. Only basic security is required for a small office or at home; on the other hand, a lot of maintenance is required for large businesses and large institutions. Here, normally used hardware and software suffices, compared to the sophisticated hardware and software used for the prevention of spamming, hacking and other kinds of malicious attacks. Here we list some basic points for security management at home and in a small office:

- A basic firewall can be installed or even a unified threat management system can be used.
- Basic antivirus software will do the task if you are working in the Windows environment.
- Other software that can be installed for security includes anti-spyware programs. A number of anti-virus and anti-spyware packages are available in the market.
- If you are using a wireless connection, you must take care to secure it with a robust password. Wireless devices support a number of security methods; try to use the strongest of them, such as WPA2 with AES. TKIP is supported by a wider range of devices, but should only be used where AES is not supported.
- When using wireless, the default SSID name of the network must be changed. Another measure is to disable the SSID broadcast, since it is not required for home use; note, however, that this can be easily bypassed by an attacker with some knowledge of how wireless traffic can be detected.
- You can enable MAC address filtering to keep track of all the MAC devices connected to your router on that network. Even though this is not strictly a security feature, it can be used to limit and monitor the DHCP address pool, controlling AP association through both inclusion and exclusion.
- Static IP addresses can be assigned to the devices connected to the network. This complements the other security features and makes the AP less attractive to attackers.
- The ICMP ping on the router must be disabled.
- You can also review the logs of the router and the firewall to identify any abnormal traffic or connections; a minimal log-review sketch follows after this list.
- Passwords must be set for all the accounts.
- If you are using a Windows operating system, you can create separate accounts for the family members to limit each member's activities.
- Children in the family must be given lessons about information security.
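For the log-review point above, here is a minimal sketch in Python; the log path, the BLOCKED keyword and the position of the source address in each line are all assumptions for illustration, and would need to match your router or firewall's actual log format:

from collections import Counter

# Minimal sketch: count blocked connections per source address in a
# firewall log and flag repeat offenders. Path and format are assumed.
blocked = Counter()
with open("/var/log/firewall.log") as log:      # placeholder path
    for line in log:
        if "BLOCKED" in line:                   # assumed log keyword
            source = line.split()[-1]           # assumed: source IP is the last field
            blocked[source] += 1

for source, count in blocked.most_common():
    if count >= 10:                             # arbitrary threshold
        print(f"Suspicious: {source} blocked {count} times")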

Security management is about identifying the important assets of the user, which of course include the information assets, and checking whether the policies protecting these assets are implemented properly. It is also about protecting these assets from loss: the critical assets are identified and protected first, potential threats to the system are assessed, and measures are then taken to eliminate or minimize those threats. The security risks are managed by virtue of risk management principles, which involve identification of the risks, assessment of the effectiveness of the control strategies, and determination of the consequences. The risks are weighed by the impact they can have, classified, and an appropriate response is selected for each.


Tuesday, October 22, 2013

What are different types of attacks that network face?

Without security measures and checks in the right place, we put our data at risk of various types of attacks. Attacks are of two types, namely active attacks and passive attacks. Active attacks involve altering information with the intention of corrupting or destroying the data or the network itself. If you do not have a security plan in place, your network and data are vulnerable to such attacks. In this article we discuss a few of them:
- Eavesdropping: Most network communication occurs in a very insecure format (i.e., clear text). This gives an attacker the chance to gain access to the data paths in the network and listen in on, or interpret, the traffic. Eavesdropping on someone’s communication is referred to as snooping or sniffing. The eavesdropper can monitor the whole network, which is a great cause of concern for an enterprise administrator. There are services based upon cryptography that prevent this; without strong encryption, the data of these services can be read by the eavesdropper as it traverses the network.
- Data modification: After the data has been read by the attacker or eavesdropper, altering it is his/her next step. The data in a packet can be modified without the knowledge of the sender or the receiver. Even when confidentiality is not required for every communication, it is essential that no message gets modified in transit.
- IP address spoofing (identity spoofing): The computer’s IP address is used by most operating systems and networks to identify whether an entity is valid. In some cases it is possible to assume a false IP address; this is called identity spoofing. Special programs may be used by the attacker to construct IP packets that appear to come from systems inside the corporate intranet. After the attacker gains access to the network with a valid IP address, he/she can reroute, delete or modify data.
- Attacks based upon passwords: Password based access control is a common denominator of most network security plans and operating systems; your user ID and password determine your access rights. Older applications do not always protect this identity information as it is passed through the network for validation, giving an eavesdropper the chance to pose as an authorized user. Whenever a valid user account is captured, the attacker gets exactly the rights possessed by the real user. If that user is an admin of the network, the attacker gets admin rights too and can create accounts for subsequent use. After gaining access to an account, the attacker can obtain lists of authorized users and network information, and can change the configurations, routing tables and access controls of the networks and servers (one standard mitigation, salted password hashing, is sketched after this list).
- Denial – of – service attack: This attack prevents valid users from using the network or the computer. It can also divert the attention of the staff from the internal information systems so that they do not notice an intrusion, while the attacker mounts further attacks in the meantime. Invalid data can be sent to network services or applications, or the whole network can be overloaded so that it shuts down.
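As mentioned in the password item above, one standard mitigation for password based attacks is to never store or send passwords in clear text. Here is a minimal sketch of salted password hashing using only the Python standard library; the iteration count is an illustrative choice, not a recommendation:

import hashlib
import os

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed (rainbow
    # table) attacks; PBKDF2 makes brute forcing each guess expensive.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == expected

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True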


Sunday, October 20, 2013

Ensuring coordination with respect to 3rd party icons such as from Facebook / Youtube ..

The process of software product development is a complex one, with a number of different items to manage. In addition to the many internal complexities (requirements, development and testing schedules, etc), there are a whole load of external dependencies to be managed, and the complications involved in these external dependencies are greater than those of the internal kind. One complication in particular has grown with the usage of more and more 3rd party services: with the increasing use of social networks by people all over the world - Twitter, Facebook, Flickr, Google+, and numerous other social and sharing sites - life is more complicated now.
One cannot build applications without connecting to such sharing sites, but the actual logistics of doing so can be complex. One of these areas revolves around the icons and other graphics used in the application. For the most part, applications with a large number of user interfaces use their own custom icons and graphics, made to fit the application (in terms of the image that the application seeks to present, the customer profile it targets, and the functionality involved - an application dealing with money or finances would have more icons representing money or cash, while applications dealing with images would show more cameras or pictures, etc). Another benefit of custom icons is that they present the same set whether the application is installed on Windows or on the Mac (while the system icons on these different OS's are very different).
However, when interfacing with these 3rd party services, there needs to be careful coordination with the team that is developing the custom icons. As part of the public APIs available for most of these networks, there is also a list of icons that client applications are required to use, and in some cases such an icon may drastically differ from the icons used in the application. It is typically not possible to get variations of the icons provided by these 3rd party services; some of them provide multiple icon sets at different sizes in the hope that one of them is suitable, but that may not be the case. Further, the team designing the custom icons may not be aware that there is a legal requirement to use only the service icons, and they should be kept informed so that they do not try to develop custom icons for these networks when only the service provided icons can be used.
In addition, if the icons provided by the services are striking on their own and different from those in the application, it helps if the design team knows this in advance, so that the icon set they are designing for the application is made in a way that the service provided icons fit in. And from time to time, these 3rd party services go in for a rebranding and require all clients using their services to change the icons they are using, which means that somebody needs to keep track of communication from the external services regarding their branding and icons.


Friday, October 18, 2013

How to cope with team members (or functional teams) who do not easily follow deadlines / schedules ?

Aah, this is one of the most difficult posts to write, since there is no magic bullet answer. Let me take a situation where you have some team members or functional folks who are not really up to working with schedules, or not as disciplined about schedules as the development or testing members of the team. What happens typically ? You have a schedule that has been worked out pretty diligently, that takes into account the work of the different team members and functional folks, and integrates them all with each other - a schedule that promises that, if you follow it, you will have your software application ready.
So if everything is going well - if the project management structure of the team is tracking the schedule and the entry and exit of each of the team members as per the schedule - then everything can work out. Further, a lot can be done to help the process by setting up systems that provide advance schedule information to team members before their tasks are due to start, as well as close to when their tasks are about to end. This can be followed by status meetings and the like to ensure that team members have all the information they need to get their tasks done, and that any problems they are having can be followed up.
However, if everything went so smoothly, schedules would always work and there would never be any kind of problems regarding risks, and so on. One of the biggest elements of risk in the entire schedule, just from the scope of delivery from the different functional teams, is adherence to the schedule. When you consider the functional requirements of the schedule, you have requirements and feature details from product management, followed by elements of design (architectural, workflow and others) from the development and user interface designers, and then the actual development and testing. One of the biggest problems I have seen is with the more creative elements of this process, namely the user interface designer or the workflow designer. We had an interesting interface designer who would just tell us to give him a final date by which the entire design would be needed, and not bother him about interim dates, since that was not the way he worked. There is some justification, since user interface designers tend to be creative folks who are not as bound to a schedule as the rest of the development and testing teams. However, this can screw up the entire schedule, since the assumption is that there are dates by which the designer needs to submit a first draft, iterations through which this draft is discussed, and then final agreement on the draft to be used for product development.
So, what do you do ? Well, here are some techniques that we used:
- Ensure that there is ongoing communication with the designer. Typically, it was during a regular phone call that we would find out about some issue that the designer was running into, but which the designer had not told us about via email.
- Remind the designer on a regular basis when the schedule for either the start or the end of his tasks is coming up, to ensure that this stays top of mind
- Add some buffer to the schedule of the user interface designer, and start following up from the first scheduled date, so that you know you can expect the work by the buffered date


Thursday, October 17, 2013

What happens when you find a serious defect right before you release ??

The end game of a software development schedule can be very critical. It is the timeframe when you are hoping that no critical problems pop up - the time available for turning around a critical problem is low, and the amount of tension such a problem causes can be enough to give a coronary to everybody involved. Once we had a defect come up 2 days before we were supposed to release the product, and the complications were very bad in terms of decision making: we had to decide whether the defect needed to be taken, get somebody very dependable to diagnose it, evaluate the fix to see that nothing else could get broken, and then roll the fix into the product and test the hell out of it to ensure that nothing was getting broken. All of this caused a huge amount of tension, and we had management on our heads, wanting to know the progress - and, more worryingly, why this defect was not caught before, and whether we were confident that we had done enough testing to have caught all other defects such as this one.
Typically, when you reach a situation like this, you need to ensure that you are thinking through all the options. It is all right to brazen it out, hope that everything will go well, and say that you are fine with releasing the product; but without a proper analysis, that is not the correct option. If you release the product in haste, you might reach a situation where users find serious defects after the product has been released, and that is something no one wants. Such a situation, if it happens more than once, can cause loss of user confidence in the product, and to some extent in the organization, with a lot of serious consequences. However, in most cases, I know teams tend to brazen it out even if they face many more problems later.
But, on the other hand, you cannot suddenly decide that you are willing to delay the product release date to get some extra confidence from the testing (which would have been shaken by the recent serious defect). This sounds good, but there are costs involved in such a decision. A product delay causes a loss in revenue, can cause customer confidence problems if the organization suddenly has to announce a delay in release, and has a huge impact on team morale because of management involvement. Still, it may be less of an impact than if the product is released and the customers find many problems.
So how do you make such a decision ? Well, that is the million dollar question. And there are no easy answers. To a large extent, whatever decision is made carries a number of risks, but it is important to get genuine feedback from the testing team about what they feel, especially from the test managers (and this needs to be done in an environment where there are fewer recriminations). Finally, the team manager needs to own the decision and be able to justify it in front of management.


Tuesday, October 15, 2013

What are uses of WiMax technology?

- The WiMax technology has been used for a long time to provide assistance to the communication process.
- WiMax saw major deployment, especially in Indonesia, during the tsunami calamity of 2004.
- The WiMax technology brought in the possibility of providing broadband access, which helped a great deal in the regeneration of communications.
- Organizations such as FEMA and the FCC (Federal Communications Commission) felt the need for WiMax in their communication processes.
- WiMax applications with high efficiency are available today.
- It is known to offer a broad base of customers, and its services have been improved by the addition of mobility features.
- Service providers use WiMax technology for providing various services such as mobile and Internet access, voice, video and data.
There are other advantages of using WiMax technology:
- You get to save a lot of prospective cost and at the same time get efficient services.
- It is even capable of supporting video, VoIP calling and data transfers at high speeds.
- The mobile community has been upgraded greatly with the coming of WiMax technology.
- There are three main classes of application offered by WiMax, namely backhaul, consumer connectivity and business connectivity.
- Real augmentation has been brought to communications through WiMax technology, which can benefit from data transmission and video apart from voice.
- This has facilitated quick response from applications as per the situation.
- Temporary communication services can be deployed by a client using WiMax technology.
- It can even speed up the network according to the circumstances and events.
- This has given access to visitors, employees and media on a temporary basis.
- If we are located in the range of the tower, it is quite easy to gain access to the equipment on the premises for events.

The factors that make WiMax technology so powerful are the following:
> high bandwidth
> high quality services
> security
> deployment
> full duplex operation (like DSL)
> reasonable cost

For some applications, WiMax technology is used exclusively, as in the following:

1. A means of connectivity for small and medium sized businesses
- This technology has enabled these businesses to progress day by day.
- The connectivity offered by WiMax technology is good enough to attract clients.
- It then provides them a number of services such as hotspots and so on.
- Therefore, this application has come into the spotlight.

2. Backhaul
- The most important strength of the WiMax technology here is its range.
- This is because a WiMax tower can connect with other WiMax towers through line-of-sight communication using microwave links.
- This connectivity between two towers is called backhaul.
- It is capable of covering distances of up to 30 miles.
- The WiMax network is even sufficient for covering remote and rural areas.


3. Nomadic broadband is another application of WiMax technology, which can be considered an extended form of WiFi.
- The access points provided by WiMax technology might be fewer in number, but they offer very high security.
- Many companies use WiMax base stations for the development of their business.

