Showing posts with label Principles.

Sunday, October 13, 2013

What are two fundamental cryptography principles?

In this article we shall discuss the two fundamental principles that govern a cryptographic system.

1. Redundancy
- All encrypted messages must contain some redundancy.
- By redundancy we mean information that is not strictly required to understand the message; it reduces the chances of an intruder passing off fabricated data as a genuine message.
- Such attacks involve putting stolen or fabricated information to misuse without understanding it.
- This is more easily understood with the example of a credit card.
- A credit card number is not sent over the internet on its own; it is accompanied by other side information such as the card holder's date of birth, the expiry date, and so on.
- Including such information alongside the card number cuts down the chances of someone simply making up a valid-looking number.
- Adding a good amount of redundancy prevents active intruders from sending garbage values and having them accepted as valid messages.
- The recipient must be able to determine whether a message is valid by doing some inspection and simple calculation.
- Without redundancy, an attacker could simply send junk and the recipient would decode it as a valid message.
- However, there is a caveat.
- The redundancy must not consist of a run of zeroes at the beginning or end of the message, because such predictable padding makes the cryptanalyst's work easier.
- Instead of zeroes, a CRC polynomial can be used, because forging it requires more work from the attacker.
- Using a cryptographic hash may be even better (a small sketch follows this list).
- Redundancy also has a role to play in quantum cryptography.
- Some redundancy is required in the messages for Bob to determine whether a message has been tampered with.
- Repeating the message twice is a crude form of redundancy.
- If the two copies are not identical, Bob knows that either somebody is interfering with the transmission or there is a lot of noise on the line.
- But such repetition proves to be expensive.
- Therefore, Reed-Solomon codes and Hamming codes are used instead for error detection and correction.
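As a small illustration of the hash-based redundancy mentioned above, here is a minimal sketch in Java. It only shows the redundancy check itself; the encryption of the combined message is assumed to happen separately, and the message text is made up for the example.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Minimal sketch: append a SHA-256 digest to the plaintext before encryption.
// On receipt (after decryption), the digest is recomputed and compared, so a
// message of random junk is rejected instead of being accepted as valid.
public class RedundancyCheck {

    // Sender side: plaintext || SHA-256(plaintext)
    static byte[] addRedundancy(byte[] plaintext) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(plaintext);
        byte[] out = new byte[plaintext.length + digest.length];
        System.arraycopy(plaintext, 0, out, 0, plaintext.length);
        System.arraycopy(digest, 0, out, plaintext.length, digest.length);
        return out;
    }

    // Receiver side: recompute the digest and compare it with the trailer.
    static boolean isValid(byte[] received) throws Exception {
        int digestLen = 32; // SHA-256 produces 32 bytes
        if (received.length < digestLen) return false;
        byte[] body = Arrays.copyOfRange(received, 0, received.length - digestLen);
        byte[] trailer = Arrays.copyOfRange(received, received.length - digestLen, received.length);
        byte[] expected = MessageDigest.getInstance("SHA-256").digest(body);
        return MessageDigest.isEqual(expected, trailer);
    }

    public static void main(String[] args) throws Exception {
        byte[] msg = addRedundancy("PAY $100 TO ALICE".getBytes(StandardCharsets.UTF_8));
        System.out.println(isValid(msg));   // true
        msg[3] ^= 0x01;                     // simulate tampering or junk
        System.out.println(isValid(msg));   // false
    }
}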

2. Update
- Measures must be taken to prevent attacks by active intruders who play back old messages.
- The longer an encrypted message is held by an active intruder, the greater the chance that he can break into it.
- One good example of this is the UNIX password file.
- The password file is readable by anybody who has an account on the host.
- Intruders can obtain a copy of this file and then work on cracking the passwords at their leisure.
- Note also that adding redundancy can make it easier for a cryptanalyst to break the messages, so it must be used with care.
- It must be checked whether a message has been sent recently or is an old one.
- One measure for doing so is to include in each message a timestamp that is valid for only a few seconds.
- The recipient can hold on to messages for that many seconds and compare incoming messages against them to filter out duplicates (a small sketch follows this list).
- Messages older than this time period are rejected as too old.
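As a small illustration of the timestamp idea, here is a minimal sketch of a freshness filter. The ten-second window and the use of a message identifier are assumptions made for the example, not details from the original post.

import java.util.HashSet;
import java.util.Set;

// Minimal sketch of a freshness check: a message carries a timestamp, and the
// receiver rejects anything older than MAX_AGE_MS as well as duplicates seen
// within that window (identifiers of recent messages are remembered).
public class FreshnessFilter {
    private static final long MAX_AGE_MS = 10_000;   // assumed validity window
    private final Set<String> recentlySeen = new HashSet<>();

    // Returns true if the message should be accepted.
    boolean accept(String messageId, long timestampMs) {
        long now = System.currentTimeMillis();
        if (now - timestampMs > MAX_AGE_MS) return false;   // too old: possible replay
        if (!recentlySeen.add(messageId)) return false;     // duplicate within the window
        return true;
    }
    // A real implementation would also purge entries older than MAX_AGE_MS.
}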

Apart from the above two principles the following are some other principles of cryptography:
- Authentication: ensuring that the message was generated by the sender itself and no one else, so that no outsider can claim to be the owner of the message.
- Integrity: the integrity of a message must be preserved while sending it from one host to another. This involves ensuring that the message is not altered on the way. Using a cryptographic hash is one way to achieve this.
- Non-repudiation: ensuring that the sender cannot later deny having sent the message.


Tuesday, August 27, 2013

What are general principles of congestion control?

- Problems such as the loss of data packets occur when the buffers of the routers overflow.
- Such overflow is a symptom of network congestion, which in the worst case escalates into congestive collapse.
- If packets have to be retransmitted more than once, it is an indication that the network is congested.
- Retransmission, however, only treats this symptom; it does not address the underlying congestion.
- In congestive collapse, a number of sources attempt to send data at the same time, and at quite a high rate.
- Preventing network congestion therefore requires mechanisms that can throttle the sending node when congestion occurs.
- Network congestion is a serious problem because it directly degrades the performance that upper-layer applications receive from the network.
- Various approaches are available for preventing and avoiding network congestion and thus implementing proper congestion control.
- Congestion is said to occur when the demand for resources exceeds the capacity of the network and excessive queuing in the network causes packets to be lost.
- During congestion, the throughput of the network may drop to zero while the path delay rises sharply.
- A network can recover from a state of congestive collapse using a congestion control scheme.
- A congestion avoidance scheme, on the other hand, lets a network operate in a region of high throughput and low delay.
- These schemes keep the network from falling into a state of congestive collapse.
- There is widespread confusion between congestion control and congestion avoidance; many assume they are the same thing, but they are not.
- Congestion control provides a recovery mechanism, whereas congestion avoidance provides a prevention mechanism (a small sketch of one such mechanism follows this list).
- Today's technological advances in networking have led to a rise in the bandwidth of network links.
- Around 1970, the ARPANET came into existence, built on leased telephone lines with a bandwidth of 50 kbit/s.
- Local area networks (LANs) were first developed in the 1980s using token rings and Ethernet, offering a bandwidth of 10 Mbit/s.
- During the same period, efforts were made to standardize LANs over optical fiber, providing bandwidths of 100 Mbit/s and higher.
- Attention to congestion control has increased because of the growing mismatch between the various links that compose a network.
- Routers, IMPs, gateways, intermediate nodes, and their links are the hot spots for congestion problems.
- It is at these spots that the available bandwidth falls short of accommodating all the incoming traffic.
- In networks using connection-less protocols, it is even more difficult to cope with congestion problems.
- It is comparatively easy in networks using connection-oriented protocols.
- This is because, in such networks, network resources are reserved in advance while the connection is being set up.
- One way to control congestion is to prevent new connections from being set up once congestion is detected anywhere in the network, but this also prevents the reserved resources from being used, which is a disadvantage.
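The post does not name a concrete throttling scheme. Purely as an illustration, here is a minimal sketch of AIMD (additive increase, multiplicative decrease), the rule used by TCP-style senders: grow the sending window slowly while the network is fine, and cut it sharply when a loss signals congestion.

// Minimal AIMD sketch; window sizes are in segments and the growth/decrease
// factors follow the classic additive-increase, multiplicative-decrease rule.
public class AimdWindow {
    private double windowSegments = 1.0;   // current congestion window

    void onAckReceived()  { windowSegments += 1.0 / windowSegments; }                  // additive increase
    void onLossDetected() { windowSegments = Math.max(1.0, windowSegments / 2.0); }    // multiplicative decrease

    double window() { return windowSegments; }
}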


Thursday, March 28, 2013

What is the basic principle behind Dynamic synchronous transfer mode (DTM)?


- Dynamic synchronous transfer mode (DTM) is one of the most interesting networking technologies.
- The basic objective behind this technology is to achieve high-speed networking along with top-quality transmission.
- It also has the ability to adapt its bandwidth quickly to varying traffic conditions.
- DTM was designed to be used in integrated-services networks, covering both one-to-one communication and distribution.
- Furthermore, it can be used for application-to-application communication.
- Nowadays, it is also used as a carrier for higher-layer protocols such as IP.
- DTM is a combination of two basic technologies, namely packet switching and circuit switching.
- It is because of this that DTM has many advantages to offer.
- It also comes with a number of service access solutions for the following fields:
  - City networks
  - Enterprises
  - Residential and other small offices
  - Content providers
  - Video production networks
  - Mobile network operators

Principles of Dynamic synchronous transfer mode (DTM)

 
- This mode has been designed to operate on a unidirectional medium.
- The medium supports multiple access, i.e., all connected nodes can share it.
- It can be built on various topologies such as:
  1. Ring
  2. Double ring
  3. Point-to-point
  4. Dual bus, and so on.
- DTM is based on TDM, or time division multiplexing.
- Here, a fiber link's transmission capacity is broken down into smaller units of time.
- The total link capacity is divided into frames of a fixed duration of 125 microseconds.
- The frames are further divided into 64-bit time slots.
- The number of time slots in a frame is determined by the link's bit rate (a short worked example follows this list).
- These time slots comprise separate control slots and data slots.
- If more control slots are required, data slots can be converted into control slots, or vice versa.
- The nodes attached to the link have the right to write into both kinds of slots.
- As a consequence, the slots assigned to a node occupy the same positions within every frame.
- Each node has the right to at least one control slot, which it can use for transmitting control messages to the other nodes.
- These messages can be sent on a user's request, as a response to messages from other nodes, or for network-management purposes.
- The control slots constitute only a small fraction of the total capacity, while the major part is taken by the data slots that carry payload.
- The signaling overhead in DTM varies with the number of control slots, though it is usually very low.
- Whenever a communication channel is established, the node allocates a portion of the available data slots to that channel.
- The demand for network transfer capacity keeps increasing because of the globalization of network traffic and the integration of audio, video, and data transmission.
- The transmission capacity of optical fibers is increasing by great margins compared to processing power.
- DTM still holds the promise of providing full control over network resources.
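As a short worked example of the slot arithmetic described above (125-microsecond frames divided into 64-bit slots), here is a small sketch. The link speeds used are illustrative, not figures from the original post.

// Back-of-the-envelope sketch: with 125-microsecond frames and 64-bit slots,
// the number of slots per frame is  bitRate * 125e-6 / 64.
public class DtmSlots {
    static long slotsPerFrame(double bitRatePerSecond) {
        double bitsPerFrame = bitRatePerSecond * 125e-6; // bits carried in one 125 us frame
        return (long) (bitsPerFrame / 64);               // 64-bit slots
    }

    public static void main(String[] args) {
        System.out.println(slotsPerFrame(512e6));  // 512 Mbit/s link -> 1000 slots per frame
        System.out.println(slotsPerFrame(2.4e9));  // 2.4 Gbit/s link -> 4687 slots per frame
    }
}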


Thursday, March 21, 2013

What are principles of autonomic networking?


The complexity, dynamism, and heterogeneity of networks are ever on the rise. All these factors are making our network infrastructure insecure, brittle, and unmanageable. Today's world is so dependent on networking that its security and management cannot be put at risk. The networking response to this problem is 'autonomic networking'.
The goal is to build network systems that are capable of managing themselves according to high-level guidance provided by humans. Meeting this goal calls for a number of scientific advances and newer technologies.

Principles of Autonomic Networking

A number of principles, paradigms and application designs need to be considered.

Compartmentalization: a highly flexible structure that the makers of autonomic systems prefer over a layering approach. This is the first target of autonomic networking.

Function re-composition: an architectural design is envisioned that would allow highly dynamic, autonomic, and flexible formation of large-scale networks. In such an architecture, functionality is composed in an autonomic fashion.

Atomization: functionality is broken down into smaller atomic units. These atomic units make maximum freedom of re-composition possible.

Closed control loop: this is one of the fundamental concepts of control theory, and it is now also counted among the fundamental principles of autonomic networking. The loop controls and maintains the properties of the controlled system within desired bounds by constantly monitoring the target parameters (a minimal sketch of such a loop appears at the end of this post).

The autonomic computing paradigm is inspired by the human autonomic nervous system. An autonomic system must therefore have a mechanism by which it can change its behavior in response to changes in essential variables of its environment and bring itself back into a state of equilibrium.
Survivability can be viewed in the terms of following in case of autonomic networking:
  1. Ability to protect itself
  2. Ability to recover from the faults
  3. Ability to reconfigure itself as per the environment changes.
  4. Ability to carry out its operation at an optimal limit.
The following two factors affect the equilibrium state of an autonomic network:
  1. The internal environment: This includes factors such as CPU utilization, memory usage, and so on.
  2. The external environment: This includes factors such as safety against external attacks etc.
There are 2 major requirements of an autonomic system:
  1. Sensor channels: These sensors are required for sensing the changes.
  2. Motor channels: These channels would help the system in reacting and overcoming the effects of the changes.
The changes sensed by the sensor channels are analyzed to check the variables against their viability limits. If a variable is detected outside its limits, the system plans what changes to introduce in order to bring it back within them, thus returning the system to its equilibrium state.
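As a rough illustration of the closed control loop and the sensor/motor channels described above, here is a minimal sketch. The variable being monitored, the thresholds, and the one-second control period are assumptions made for the example.

import java.util.function.DoubleConsumer;
import java.util.function.DoubleSupplier;

// Minimal closed-control-loop sketch: a sensor channel reads an essential
// variable, the value is compared with its viability limits, and a motor
// channel applies a corrective action to move the system back toward equilibrium.
public class ControlLoop {
    public static void run(DoubleSupplier sensor,      // e.g. CPU utilisation reading
                           DoubleConsumer motor,       // e.g. adjust admitted load
                           double low, double high,
                           int iterations) throws InterruptedException {
        for (int i = 0; i < iterations; i++) {
            double value = sensor.getAsDouble();        // monitor
            if (value > high)      motor.accept(-1.0);  // plan + execute: reduce load
            else if (value < low)  motor.accept(+1.0);  // allow more work
            // otherwise: within viability limits, nothing to do
            Thread.sleep(1000);                          // control period
        }
    }
}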


Wednesday, January 23, 2013

What kind of testing engines are supported by Fitnesse?


FitNesse is a testing tool designed to provide a highly usable interface around the framework called FIT. It is therefore mostly intended to support agile-style regression testing and acceptance testing. In this style of testing, the functional testers collaborate with the developers in order to come up with a good test suite.

- FitNesse testing revolves around the idea of black-box testing.
- The system under test is treated as a black box.
- Tests are performed against this black box by examining the output it generates for already defined inputs.
- The job of the functional tester is to design the tests in terms of the required functionality and to express them in the FitNesse tool.
- The software developer, on the other side, is assigned the job of connecting the SUT to the FitNesse tool so that the tests can be executed and the two outputs compared.
- The basic idea behind this whole testing process is to force collaboration between developers and testers so that their mutual understanding of the system's requirements improves.
- Four components constitute the FitNesse testing process, namely:
  1. A wiki page
  2. A testing engine
  3. A test fixture
  4. System under test

Types of Testing Engines supported by FitNesse

- A piece of Java code is used to establish the link between the SUT and the generic testing engine.
- The testing engine is responsible for carrying out most of the mapping and for invoking the fixture methods.
- Two engines are supported by the FitNesse testing tool:

FIT Testing Engine 
- This engine serves as more than just a testing engine.
- It is more like a testing framework in itself, combining the functionality for invoking tests, interpreting the wiki pages, and generating the output pages.
- This is the engine around which FitNesse was originally built, initially to serve as a user interface to it.
- That is the story behind the name of the tool 'FitNesse'.
- This testing engine combines a number of responsibilities into one, unlike others that divide one responsibility into many smaller ones.
- But software developers pay a price for this, since a FIT fixture requires inheritance from the base classes of the FIT framework.
- In Java this is a genuine constraint, because the framework claims the developer's single chance at class inheritance.
- This also conveys the fact that FIT test fixtures are rather heavyweight constructs.
- Because of these considerations, an alternative to the FIT testing engine, called the SLIM testing engine, was adopted.

SLIM Testing Engine
- SLIM is short for Simple List Invocation Method, and it serves as an alternative to the FIT testing engine.
- It does not focus on combining the wiki-based testing elements; rather, it emphasizes the invocation of the test fixtures.
- Unlike the FIT testing engine, it runs as a separate server that is invoked under the remote control of the FitNesse wiki engine.
- Interpreting a wiki page and generating its result is now considered part of the wiki engine itself.
- Furthermore, the SLIM testing engine allows lightweight fixtures, which are nothing but simple POJOs (a small sketch follows).
- One does not need to extend fixture base classes or use any other framework classes.
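As an illustration, here is a minimal sketch of what such a POJO fixture could look like for a hypothetical decision table with "numerator" and "denominator" input columns and a "quotient?" output column. The class and column names are assumptions made for the example.

// Sketch of a SLIM-style decision-table fixture: a plain POJO whose setters
// receive the input columns and whose query method (the column ending in '?')
// returns the value to be compared with the expected output.
public class DivisionFixture {
    private double numerator;
    private double denominator;

    public void setNumerator(double numerator)     { this.numerator = numerator; }
    public void setDenominator(double denominator) { this.denominator = denominator; }

    // Corresponds to a "quotient?" column in the wiki decision table.
    public double quotient() {
        return numerator / denominator;
    }
}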


Monday, January 21, 2013

How is test execution done by Fitnesse Testing Tool?


The FitNesse testing tool, which is based on the Framework for Integrated Test (FIT), is now widely used for acceptance testing more than unit testing, since it facilitates describing functionality in detail.
In this article, we shall discuss how the execution of tests is carried out using the FitNesse testing tool.
The testing process using FitNesse involves 4 major components for every test:
  1. A wiki page that expresses the test in the form of a decision table.
  2. A testing engine for interpreting this wiki page.
  3. A test fixture that is invoked by the engine and in turn invokes the SUT.
  4. The SUT (system under test) itself.
- Of the 4 components mentioned above, two are produced by the software development team, namely the fixture and the wiki page.
- The team also produces the system under test, but it is treated as a black box for the purposes of the test.
- The wiki page contains the decision tables that express a test.
- The test fixture is the link between the SUT and the generic testing engine, written as a piece of Java code.
- The mapping between the wiki page and the fixture is a simple convert-to-camelCase mapping (a small sketch of this rule follows the list).
- This mapping applies to almost all of the headings and is used to identify the name of the fixture class as well as its methods.
- Whenever a heading ending in a question mark is encountered, its value is read from the fixture, while the values under the other headings are treated as inputs to the fixture.
- The fixture methods are called in the left-to-right column order of the table.
- The testing engine is the component that actually carries out this mapping.
- It also invokes the majority of the fixture methods.
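The post describes the convert-to-camelCase rule but does not show it. The following is an illustrative approximation of that convention, not FitNesse's actual implementation; the heading names used in the comments are hypothetical.

// Sketch of the mapping rule: a decision-table heading such as "first part"
// or "quotient?" is mapped to a fixture method name ("setFirstPart" for
// inputs, "quotient" for outputs read from the fixture).
public class HeadingMapper {

    // "first part" -> "firstPart", "quotient?" -> "quotient"
    static String toCamelCase(String heading) {
        String[] words = heading.replace("?", "").trim().split("\\s+");
        StringBuilder sb = new StringBuilder(words[0].toLowerCase());
        for (int i = 1; i < words.length; i++) {
            sb.append(Character.toUpperCase(words[i].charAt(0)))
              .append(words[i].substring(1).toLowerCase());
        }
        return sb.toString();
    }

    // Headings ending in '?' are outputs read from the fixture; the rest are inputs.
    static String toMethodName(String heading) {
        boolean isOutput = heading.trim().endsWith("?");
        String base = toCamelCase(heading);
        if (isOutput) return base;                                                    // e.g. quotient()
        return "set" + Character.toUpperCase(base.charAt(0)) + base.substring(1);     // e.g. setFirstPart(...)
    }
}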

What kind of engines are supported by Fitnesse Testing Tool?

Two types of engines are supported by the fitnesse testing tool:

The FIT Engine: 
- This is more like a framework than just an engine.
- It serves the following purposes:
a)   It combines the functionality for invoking the tests,
b)   interpretation of the wiki pages,
c)   generation of the output pages.
- This engine was named so because the tool was originally developed around it.
- This engine works by combining responsibilities rather than separating them.

The SLIM Engine: 
- SLIM stands for Simple List Invocation Method and is used as an alternative to the FIT engine.
- This engine implements the SLIM protocol.
- Unlike FIT, SLIM does not combine all of the elements of wiki-based testing; rather, its focus is on the invocation of the fixture.
- This engine runs as a separate server that is invoked remotely by the FitNesse wiki engine.
- Lightweight fixtures, which are simple POJOs, are allowed by this engine.
- These fixtures neither extend nor use any framework classes.
- This leads to a simplified design, allowing the designer to focus on calling the SUT properly and in as simple a way as possible.
- This also keeps the option of inheritance open, so fixtures can be developed in whatever way is necessary.
- Tests in FitNesse are described as inputs and expected outputs coupled together.
- These couplings are expressed as variations of tables.
- A number of such variations are supported by the FitNesse testing tool.


Explain FitNesse testing tool? What are principles of Fitnesse?


- FitNesse is an automated testing tool that acts as a wiki and a web server for the development of software systems and applications.
- The tool is entirely based on the Framework for Integrated Test (FIT) developed by Ward Cunningham.
- It has been designed to support acceptance testing more than unit testing.
- It facilitates describing the functions of a system in detail.
- With FitNesse, users of the system being developed can enter inputs in a specially formatted way, i.e., a format that is accessible to non-programmers.
- FitNesse interprets this input and automatically creates the tests.
- The system then executes these tests and returns the output to the users.
- The main advantage of this approach is that very fast feedback can be obtained from the users.
- Support in the form of classes called 'fixtures' is provided by the developer of the SUT, i.e., the system under test.
- The FitNesse tool was written in Java by Robert C. Martin and his colleagues.
- Since the program was developed in Java, it initially supported only Java, but over time versions have appeared for a number of languages such as Python, C++, Delphi, Ruby, C#, and so on.

Principles of FitNesse Testing Tool

This software works on certain principles which we shall discuss now:

FitNesse as a testing method: 
- Originally, FitNesse was designed as an interface on top of the FIT framework, and it proved to be highly usable.
- As such, it is known for supporting regression tests and black-box acceptance tests in an agile style.
- This style of testing involves the functional testers on a software development project working in collaboration with the software developers to build a test suite.
- FitNesse testing revolves around the notion of black-box testing.
- This means treating the system as a black box and testing it in terms of the outputs it produces for given inputs.
- The responsibility of the functional tester is to design tests in terms of functionality and to express them in the FitNesse tool.
- The responsibility of the software developer, on the other hand, is to connect the tool to the SUT so that the tests can be executed and the actual output compared with the expected one.
- The idea that drives this tool is to force the functional testers and software developers to develop a common language, which improves collaboration and eventually leads to an improved mutual understanding of the SUT.

Fitnesse as a testing tool:
- FitNesse defines a test as inputs and expected outputs coupled together.
- These couplings are expressed as variations of a decision table.
- It supports a number of variations, ranging from literal decision tables to tables that execute queries and tables that express testing scripts.
- The most generic variation is a free-form table that the designers can interpret in any way they like.
- However, some form of table is always used to express the tests.
- The primary focus of FitNesse is on easy creation of tests, allowing the testers to concentrate on keeping the tests of high quality rather than on how the tests are to be executed.
- Three factors are involved in the creation of tests through FitNesse:
a)   Easy creation of the tables.
b)   Easy translation of the tables into calls to the SUT.
c)   Flexibility in documenting the tests.


Tuesday, January 15, 2013

What is a Cleanroom approach?


In this article we discuss the cleanroom approach in detail. The team is usually small and is divided into the following three sub-teams:
  1. Specification team: This team is responsible for the development and maintenance of the specifications.
  2. Development team: This team is responsible for the development and verification of the software.
  3. Certification team: This team is responsible for the development of statistical tests and reliability growth models. 
Incremental development is always carried out under statistical quality control, so that performance can be assessed at the end of every iteration using the following measures (a small sketch of these measures follows the list):
  1. Errors per KLOC
  2. Rate of growth in MTTF
  3. Number of sequential error free tests.
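As a small illustration of the first two measures, here is a sketch of how they could be computed at the end of an iteration. The figures and names in the example are made up, not data from any project mentioned in the post.

// Sketch of iteration-end measures: errors per KLOC and MTTF growth rate.
public class IncrementMetrics {

    // Errors per KLOC for this increment.
    static double errorsPerKloc(int errorsFound, int linesOfCode) {
        return errorsFound / (linesOfCode / 1000.0);
    }

    // Rate of growth in MTTF between two consecutive increments.
    static double mttfGrowthRate(double previousMttfHours, double currentMttfHours) {
        return (currentMttfHours - previousMttfHours) / previousMttfHours;
    }

    public static void main(String[] args) {
        System.out.println(errorsPerKloc(7, 2500));        // 2.8 errors per KLOC
        System.out.println(mttfGrowthRate(40.0, 55.0));    // 0.375 -> MTTF grew by 37.5%
    }
}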
In the cleanroom approach, software development is based purely on mathematical principles, whereas testing is based on statistical principles.
- First, the system to be developed is formally specified and an operational profile is created. The profile and the formal specification are then used to define software increments, which serve two purposes, namely:
  1. Construction of a structured program.
  2. Design of statistical tests, which also contribute to the first purpose.
- The constructed program is then formally verified and integrated with the increment.
The flow of the cleanroom approach is as follows:
  1. Software requirements specification
  2. Software design and development
  3. Incremental software delivery
  4. Incremental statistical testing
  5. Regression testing
  6. Software reliability measurement
  7. Process error diagnosis and correction
- Incremental development planning is divided into two parts, namely:
  1. Functional specification: involves formal verification of design correctness.
  2. Usage specification: involves statistical test-case generation.
- Both of these processes then feed into statistical testing, which follows a quality certification model and MTTF estimates.
- The whole cleanroom project develops around the incremental strategy.
- Requirements are gathered from the customers, elicited, and refined via traditional methods.
- Box structures isolate and separate the definition of data, behavior, and procedures at every level of refinement.
- Specifications, or black boxes, are iteratively refined into state boxes (architectural designs) and clear boxes (component-level designs).
- Formal inspections are carried out to make sure that the code conforms to standards, is syntactically correct, and has had its correctness verified.
- Statistical usage planning involves creating test cases that match the probability distribution of the usage pattern.
- In place of exhaustive testing, a sample of all possible test cases is employed.
- Once verification, inspection, usage testing, and defect removal are complete, the increment is considered certified and ready to be integrated.
- Customer feedback and involvement throughout the process are two elements necessary for developing the right system.
- Increment planning is required so that the customer's system requirements can be clarified.
- Management of resources and control of complexity are also achieved through incremental planning.
- Developing a quality product requires control over the software development cycle and measurement of the process.
- Following are the benefits of concurrent planning:
  1. Concurrent engineering
  2. Step wise integration
  3. Continuous quality feedback
  4. Continuous customer feedback
  5. Risk management
  6. Change management
- All of the above benefits are achieved respectively by:
  1. Certification and scheduling parallel development
  2. Testing cumulative increments
  3. Statistical process control
  4. Through actual use
  5. Treatment of the high risk elements in early phases
  6. Systematic accommodation of the changes
The design verification advantage allows cleanroom teams to verify every line of code.


Tuesday, January 8, 2013

What is Cleanroom Software Engineering?


Cleanroom software engineering is one of the fastest-emerging software development processes and has been designed for producing software systems and applications with certifiable reliability. The credit for the development of this process goes to Harlan Mills and a couple of his colleagues, among them Alan Hevner, at the IBM Corporation.

What is Cleanroom Software Engineering?

- Cleanroom software engineering focuses on defect prevention rather than defect removal.
- The process was named after the cleanrooms used by the electronics industry to prevent defects from entering semiconductors during fabrication.
- The cleanroom process was first used in the late 1980s.
- It began to be used for military demonstration projects in the early 1990s.

Principles of Cleanroom Approach

Cleanroom process has its own principles which we have discussed below:
  1. Development of software systems and applications based on formal methods: The box structure method is used by cleanroom development for specifying and designing a software product. Team review is then used to verify the design, i.e., to check whether it has been correctly implemented.
  2. Statistical quality control through incremental implementation: The cleanroom process follows an iterative approach, i.e., the software system is evolved through increments in which the implemented functionality gradually increases. The quality of every increment is measured against pre-established standards to verify that the process is making acceptable progress. If an increment fails to meet the quality standards, testing of the current increment is stopped and the process returns to the design phase.
  3. Statistically sound testing: Software testing in the cleanroom development process is carried out as a statistical experiment. A representative subset of the software's input/output trajectories is selected and subjected to testing. The resulting sample is then statistically analyzed to estimate the software's reliability and the level of confidence in that estimate.

Features of Cleanroom Software Engineering

Software products developed using the cleanroom software engineering process are intended to have zero defects at delivery time. Some of the characteristic features of cleanroom software engineering are listed below:
  1. Statistical modeling
  2. Usage scenarios
  3. Incremental development and release
  4. Separate acceptance testing
  5. No requirement of unit testing and debugging
  6. Formal reviews with verification conditions
The defect rates recorded were as follows:
  • Fewer than 3.5 per KLOC in delivered software.
  • 2.7 per KLOC between first execution and first delivery.
The basic technologies used can be listed as:
    1. Incremental development: Each increment is carried out end-to-end, and in some cases the development of increments overlaps. The whole process takes around 12-18 weeks, and partitioning, though critical, proves to be difficult.
    2. Function-theoretic verification: A parser may check the constructed program for syntax errors, but the program is not executed by its developer. Team review for verification is driven by verification conditions. Verification is 3 to 5 times more effective than debugging. Formal inspections also fall under this category.
    3. Formal specifications: This further includes:
    a)    Box structured specification, which includes 3 types of boxes, namely:
      - Black box
      - State box
      - Clear box
    b)    Verification properties
    c)    Program functions
    4. Statistical usage testing: This helps in implementing cost-effective testing and process control. It provides a stratification mechanism to deal with critical situations (a small sketch follows).
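As a rough illustration of statistical usage testing, the following sketch draws test scenarios from an assumed operational profile instead of enumerating them exhaustively. The scenario names and probabilities are invented for the example.

import java.util.Random;

// Minimal sketch: test inputs are sampled according to an operational profile
// (a probability distribution over usage scenarios), so the test sample
// reflects how the software will actually be used.
public class UsageProfileSampler {
    private static final String[] SCENARIOS     = { "login", "browse", "purchase", "admin" };
    private static final double[] PROBABILITIES = {   0.30,    0.50,      0.15,     0.05  };

    static String nextScenario(Random rng) {
        double r = rng.nextDouble();
        double cumulative = 0.0;
        for (int i = 0; i < SCENARIOS.length; i++) {
            cumulative += PROBABILITIES[i];
            if (r < cumulative) return SCENARIOS[i];
        }
        return SCENARIOS[SCENARIOS.length - 1];
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int i = 0; i < 10; i++) {
            System.out.println(nextScenario(rng));  // scenarios appear roughly per the profile
        }
    }
}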

