

Showing posts with label Functions. Show all posts

Saturday, October 5, 2013

What is a transposition cipher method?

- The transposition cipher method is one of the cryptography methods used to secure communication against eavesdroppers.
- This method of encryption shifts the positions of the units or letters of the plain text according to some regular system, so that a permutation of the plain text is generated.
- This permuted plain text is termed the cipher text.
- Thus, the cipher text is generated by changing the order of the units.
Mathematically, the following functions are used:
Ø  Bijective function: for encrypting the characters' positions, and
Ø  Inverse function: for decrypting the message

Now we shall see about some of the implementations of the transposition cipher:

1. Rail fence cipher: 
- This form of the transposition cipher is named for the way the plain text is written out during encoding.
- Here, the characters of the plain text are written downwards on the successive rails of an imagined fence.
- On reaching the bottom rail, we move upwards again.
- The cipher text is then read off in rows.
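The zig-zag scheme described above can be sketched in Python. This is a minimal illustration (the function name is mine, and it assumes at least two rails):

```python
def rail_fence_encrypt(plaintext, rails):
    # write characters diagonally down and up across the imagined fence rails
    # (assumes rails >= 2)
    fence = [[] for _ in range(rails)]
    rail, step = 0, 1
    for ch in plaintext:
        fence[rail].append(ch)
        if rail == 0:
            step = 1            # reached the top rail: go downwards
        elif rail == rails - 1:
            step = -1           # reached the bottom rail: go upwards
        rail += step
    # the cipher text is read off row by row
    return ''.join(''.join(row) for row in fence)
```

For example, encrypting the classic message "WEAREDISCOVEREDFLEEATONCE" on 3 rails yields "WECRLTEERDSOEEFEAOCAIVDEN".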

2. Route cipher: 
- In this form of transposition cipher, a grid of given dimensions is taken on which the characters of the plain text are written out. 
- Then, the message is read following the pattern given in the key.
- For example, the pattern might be an inward spiral in the clockwise direction, starting from the top-right corner.
- Unlike the rail fence cipher, route ciphers may use many keys.
- In fact, for messages of reasonable length, the number of possible routes may be too great to enumerate even with modern machinery.
- However, not all routes are equally good.
- A badly chosen route can leave excessive chunks of the plain text intact.
- Also, the plain text might simply end up reversed, giving cryptanalysts a clue about the route.
- The Union Route Cipher is a variation of the traditional route cipher.
- The difference between the two is that it transposed whole words, unlike the route cipher, which transposed individual letters.
- But since transposing whole words could expose them, they were first hidden by a code.
- Entire null words might also be added, sometimes making the cipher text humorous.
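A route cipher using the example pattern above can be sketched as follows. The grid width, the 'X' padding convention and the spiral route are all illustrative assumptions, not part of any fixed standard:

```python
def route_encrypt(plaintext, cols):
    # write the message row by row into a grid 'cols' characters wide,
    # padding the last row with 'X' nulls (an assumed convention)
    rows = -(-len(plaintext) // cols)
    padded = plaintext.ljust(rows * cols, 'X')
    grid = [list(padded[r * cols:(r + 1) * cols]) for r in range(rows)]
    # read the grid in an inward clockwise spiral, starting top-right
    out = []
    top, bottom, left, right = 0, rows - 1, 0, cols - 1
    while top <= bottom and left <= right:
        for r in range(top, bottom + 1):      # down the rightmost column
            out.append(grid[r][right])
        right -= 1
        if left > right:
            break
        for c in range(right, left - 1, -1):  # leftwards along the bottom row
            out.append(grid[bottom][c])
        bottom -= 1
        if top > bottom:
            break
        for r in range(bottom, top - 1, -1):  # up the leftmost column
            out.append(grid[r][left])
        left += 1
        if left > right:
            break
        for c in range(left, right + 1):      # rightwards along the top row
            out.append(grid[top][c])
        top += 1
    return ''.join(out)
```

On a 3x3 grid, "abcdefghi" spirals out as "cfihgdabe".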

3. Columnar transposition: 
- In this form of transposition cipher, a fixed length is determined for the rows in which the message is written. 
- For reading the message, however, a column-by-column approach is followed, with the columns chosen in some scrambled order.
- A keyword is chosen which is used for defining the permutation of the columns as well as the width of the rows. 
- In the regular columnar transposition, the spare spaces are filled with null characters.
- In the irregular columnar transposition cipher, on the other hand, these spaces are left empty.
- The keyword specifies the order in which the message is read column-wise.
- To decipher the message, the recipient has to work out the column lengths.
- This is done by dividing the length of the message by the length of the key.
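The regular columnar transposition can be sketched like this. The 'X' null character and the convention of reading columns in the alphabetical order of the keyword's letters are common illustrative choices, assumed here:

```python
def columnar_encrypt(plaintext, keyword):
    cols = len(keyword)
    # write the message row by row under the keyword; the keyword's length
    # defines the width of the rows, and spare spaces are filled with 'X' nulls
    rows = -(-len(plaintext) // cols)
    padded = plaintext.ljust(rows * cols, 'X')
    # the keyword defines the permutation: columns are read in the
    # alphabetical order of the keyword's letters
    order = sorted(range(cols), key=lambda i: keyword[i])
    return ''.join(padded[r * cols + c] for c in order for r in range(rows))
```

With keyword "BAC" the columns are read in the order 2nd, 1st, 3rd, so "ABCDEF" becomes "BEADCF". The recipient recovers the column length as message length divided by key length (6 / 3 = 2 here).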

4. Double transposition: 
- A single columnar transposition is vulnerable to attack, since the possible column lengths can be guessed and the text anagrammed.
- Therefore, a stronger version of it, the double transposition, is used.
- This is a two-time application of the columnar transposition. 
- For both the transpositions, either the same key might be used or different keys.
- Before the advent of the VIC cipher, this was the most complicated cipher in widespread use.
- It offered reliable operation under difficult conditions. 
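Since the double transposition is just the columnar transposition applied twice, it can be sketched directly (the helper below repeats a regular columnar transposition with assumed 'X' padding and alphabetical column order):

```python
def columnar_encrypt(text, keyword):
    # one regular columnar transposition, padding with 'X' nulls and
    # reading columns in the alphabetical order of the keyword's letters
    cols = len(keyword)
    rows = -(-len(text) // cols)
    padded = text.ljust(rows * cols, 'X')
    order = sorted(range(cols), key=lambda i: keyword[i])
    return ''.join(padded[r * cols + c] for c in order for r in range(rows))

def double_transposition(text, key1, key2):
    # apply the columnar transposition twice; the two keys
    # may be the same or different
    return columnar_encrypt(columnar_encrypt(text, key1), key2)
```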


Tuesday, July 16, 2013

What are the characteristics of network layer?

- The network layer is the third layer in the OSI model of networking.
- The duty of this layer is to forward and route packets via intermediate routers.
- It provides the functional and procedural means for transferring variable-length data sequences from a source host to a destination host across one or more networks.
- During the transfer, it is also responsible for maintaining the quality of service functions.

There are many other functions of this layer such as:

Ø Connection-less communication: In IP, a datagram can be transmitted from one host to another without any need for the receiving host to send an acknowledgement. Connection-oriented protocols are used at the higher levels of the OSI model.

Ø  Host addressing: Every host in the network is assigned a unique address that determines its location. These addresses are assigned by a hierarchical system and are known as IP (Internet Protocol) addresses.

Ø  Message forwarding: Networks are sometimes divided into a number of sub-networks, which are then connected to other networks to facilitate wide-area communication. Here, specialized hosts called routers or gateways are used for forwarding packets from one network toward their destination hosts.

Characteristics of Network Layer

Encapsulation:
- One of the characteristics of the network layer is encapsulation. 
- The network layer ought to provide encapsulation facilities.
- Devices must be identified with addresses, and so must the network layer PDUs.
- During encapsulation, the layer 4 PDU is supplied to layer 3.
- To create the layer 3 PDU, a layer 3 label or header is added to it.
- At the network layer, the PDU thus created is referred to as a packet.
- On creation of a packet, the address of the receiving host is included in the header. 
- This address is commonly known as the destination address. 
- Apart from this address the address of the source or the sender host is also stored in the header. 
- This address is termed as the source address. 
- Once the encapsulation process is complete, the layer 3 sends this packet to the data link layer for preparing it to be transmitted over the communication media.
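The encapsulation steps above can be sketched with a hypothetical packet structure. The field names and addresses below are illustrative assumptions, not any real protocol's layout:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    # a layer 3 PDU: the header fields carry the source and destination
    # addresses, and the payload is the layer 4 PDU handed down by transport
    source_address: str
    destination_address: str
    payload: bytes

def encapsulate(segment: bytes, source: str, destination: str) -> Packet:
    # adding the layer 3 header (the two addresses) to the
    # layer 4 PDU creates the packet
    return Packet(source_address=source,
                  destination_address=destination,
                  payload=segment)

packet = encapsulate(b"layer-4 segment", "192.0.2.1", "198.51.100.7")
```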

Routing: 
- The services provided by the network layer for directing the packets to the destination addresses define this characteristic. 
- It is not necessary that the destination and the source hosts must always be connected to the same network.
- In fact, the packet might have to pass through a number of networks before reaching its destination.
- During this journey, the packet has to be guided toward the proper address.
- This is where routers come into action.
- They help in selecting the paths that guide the packets to their destination.
- This is called routing.
- During the course of routing of the packet, it may need to traverse a number of devices.
- We call the route taken by the packet to reach one intermediate device a "hop".
- The contents of the packet remain intact until the destination host has been reached.
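As a rough illustration of path selection, here is a minimal forwarding-table lookup. The table entries and next-hop addresses are made up, and real routers use longest-prefix matching and much more:

```python
import ipaddress

# a hypothetical forwarding table: destination network -> next-hop address
forwarding_table = {
    ipaddress.ip_network("192.0.2.0/24"): "10.0.0.1",
    ipaddress.ip_network("198.51.100.0/24"): "10.0.0.2",
}

def next_hop(destination: str):
    # the router examines the packet's destination address and selects
    # the path (the next hop) whose network contains that address
    address = ipaddress.ip_address(destination)
    for network, hop in forwarding_table.items():
        if address in network:
            return hop
    return None  # no matching route: the packet cannot be forwarded
```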


De-capsulation: 
- On the arrival of the packet at the destination address, it is sent for processing at the third layer. 
- The host system examines the destination address to verify whether the packet is meant for it.
- If the address is found to be correct, the decapsulation process is carried out at the network layer. 
- This layer passes the layer 4 PDU to the transport layer for appropriate servicing. 


Tuesday, April 16, 2013

What are the basic functions of an operating system?


The operating system is the program that takes care of all computer operations. It acts as a software link between the computer hardware and you; the link it provides is an interface via which several other programs are managed. Computer systems come pre-installed with an OS, which is stored on the hard disk drive. As soon as you boot up or turn on the computer, the operating system is the first thing loaded into memory. The bootstrap loader is the program responsible for carrying out this task, and the whole process is termed booting. The bootstrap loader resides permanently in the electronic circuitry of the computer, on the ROM chip to be more precise. There are various functions of an operating system, which we will discuss in this article.

Every system has an OS, and every OS has some basic functions that do not depend upon its size or complexity.

1. Management of the resources: 
- Every OS manages all the resources attached to the computer, such as the keyboard, mouse and monitor, and it also manages the memory.
- A file structure is created on the hard drive of the system, which becomes the place for storage and retrieval of data.
- Whenever a file is created, it is named by the OS and assigned an address so that the OS can remember where it has been stored.
- This makes it easy to access in the future.
- This system is called the file system, and it is usually hierarchical in nature.
- Here the files are organized in directories or folders.

2. Provides a user interface: 
- Through user interface, the user is able to interact with the hardware resources and other software applications in a system.
- Almost all the operating systems that we have today come with a GUI or graphical user interface.
- In such an interface, icons are the graphical objects that represent most of the features.

3. Execution of the processes: 
- It is the operating system that is responsible for the execution of the applications. 
- Multi-tasking is a major feature of today's operating systems.
- Multi-tasking is the ability of an OS to run a number of tasks simultaneously.
- Whenever a program is requested by the user, it is located by the OS and loaded into the system's main memory, i.e., RAM.
- As more and more programs are requested, OS allocates resources to them.

4. Provides support for the utility programs: 
- Utilities are the programs that perform the repair and maintenance tasks on a computer system. 
- With these programs, data can be backed up, damaged files repaired, lost files located, and other problems identified.
- One example of such a utility is the disk defragmenter.

5. Controls the hardware: 
- The operating system lies between the application software and the basic input/output system, or BIOS.
- It is the OS that maintains control over the hardware resources and their functioning.
- All hardware processes are handled via the OS.
- Device drivers help the OS access the hardware via the BIOS.

The nature of the OS required depends upon the application for which it is needed. For example, the OS required for running an airline seat reservation system differs from that required for scientific experiments. The design of an OS is thus also defined by its application.


Sunday, March 24, 2013

What are types of artificial neural networks?


In this article we discuss the types of artificial neural networks. These models simulate the biological nervous system.
1. Feed forward neural network: 
- This is the simplest type of neural network ever devised.
- In these networks the information flow is unidirectional; the data moves only in the forward direction.
- Data flows from the input nodes to the output nodes via the hidden nodes (if there are any).
- In this model there are no loops or cycles.
- Different types of units can be used for constructing feed forward networks, for example McCulloch–Pitts neurons.
- Continuous neurons with sigmoidal activation are used in error back-propagation networks.
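The one-way flow described above can be sketched in a few lines. The weights below are illustrative assumptions (untrained values chosen only to show the data flow), not a real model:

```python
import math

def sigmoid(z):
    # sigmoidal activation, as used with error back-propagation
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    # each unit takes a weighted sum of its inputs, then applies the activation
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def feed_forward(inputs, w_hidden, w_output):
    # information flows in one direction only: input -> hidden -> output,
    # with no loops or cycles
    return layer(layer(inputs, w_hidden), w_output)

# illustrative (untrained) weight matrices
w_hidden = [[0.5, -0.2], [0.3, 0.8]]
w_output = [[1.0, -1.0]]
y = feed_forward([1.0, 2.0], w_hidden, w_output)
```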
2. Radial basis function network: 
- Radial basis functions are powerful tools for interpolation in a multi-dimensional space.
- These functions are built into a distance criterion with respect to some center.
- They can be applied in neural networks, where they replace the sigmoidal transfer characteristic of the hidden layer.
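A common choice of radial basis function is the Gaussian, sketched below; the response depends only on the distance of the input from the unit's center (the center and width values are up to the modeler):

```python
import math

def gaussian_rbf(x, center, width):
    # the unit's response depends only on the distance of x from its center;
    # it peaks at 1.0 when x == center and decays with distance
    distance_sq = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-distance_sq / (2 * width ** 2))
```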
3. Kohonen self–organization network: 
- Un-supervised learning is performed with the help of the self-organizing map, or SOM.
- This map was an invention of Teuvo Kohonen.
- A set of neurons learns to map points in the input space to coordinates in the output space.
- The dimensions and topology of the input space can differ from those of the output space; the SOM attempts to preserve them.
4. Learning vector quantization or LVQ: 
- This can also be considered a neural network architecture.
- This one, too, was a suggestion of Teuvo Kohonen.
- In these networks, prototypical representatives are parameterized, together with two important things: a distance measure and a distance-based classification scheme.
5. Recurrent neural network: 
- These networks are somewhat contrary to the feed forward networks.
- They offer a bi-directional flow of data.
- In a feed forward network, data propagates linearly from input to output.
- A recurrent network also transfers data from later stages of processing back to earlier stages.
- Sometimes these networks also double up as general sequence processors.
- Recurrent neural networks have a number of types as mentioned below:
Ø  Fully recurrent network
Ø  Hopfield network
Ø  Boltzmann machine
Ø  Simple recurrent networks
Ø  Echo state network
Ø  Long short term memory network
Ø  Bi – directional RNN
Ø  Hierarchical RNN
Ø  Stochastic neural networks
6. Modular neural networks: 
- Studies have shown that the human brain actually works as a collection of several small networks rather than as one huge network. This insight led to the modular neural networks, in which smaller networks cooperate in solving a problem.
- Modular networks are also of many types such as:
Ø  Committee of machines: Different networks that work together on a given problem are collectively termed a committee of machines. The result achieved through this kind of networking is considerably better and more stable than what the individual networks achieve.
Ø  Associative neural network or ASNN: This is an extension of the previous one, and it goes a little beyond the weighted average of various models. It combines the k-nearest neighbor technique (kNN) with feed forward neural networks, and its memory is coincident with the training set.
7. Physical neural network: 
- It consists of some electrically adjustable resistance material capable of simulating artificial synapses.
There are other types of ANNs that do not fall in any of the above categories:
Ø  Holographic associative memory
Ø  Instantaneously trained networks
Ø  Spiking neural networks
Ø  Dynamic neural networks
Ø  Cascading neural networks
Ø  Neuro – fuzzy networks
Ø  Compositional pattern producing networks
Ø  One – shot associative memory


Monday, March 4, 2013

What are Software Process Improvement resources?


A supportive and effective infrastructure is required to facilitate the coordination of the various activities that take place during the course of the whole program. In addition, the infrastructure should be quite flexible, so as to be able to support the changing demands of software process improvement over time.
Resources for this program include:
  1. Infrastructure and building support
  2. Sponsorship
  3. Commitment
  4. Baseline activities
  5. Technologies
  6. Coordinate training resources
  7. Planning expertise
  8. Baseline action plan and so on.
- When this program is initiated, a primitive infrastructure is put into place for managing the activities that the organization will carry out under SPI.
- The resources mentioned above are also the initial accomplishments that tell how well the infrastructure has been performing.
- It is the purpose of the infrastructure to establish a link between the program's vision and mission, to monitor and guide the program, and to obtain and allocate resources.
- Once the SPI program is under way, a number of improvement activities will be taking place across the different units of the organization.
- These improvement activities cannot be performed serially rather they take place in parallel. 
- The configuration management, project planning, requirements management and reviews etc. are addressed by the TWGs (technical working groups). 
- But all these activities are tracked by the infrastructure.
- Support for the following issues must be provided by the infrastructure:
  1. Introducing a new technology
  2. Providing sponsorship
  3. Assessing the organizational impact
- As the program progresses, the functions to be performed by the infrastructure increase. 
- There are 3 major components of the SPI program:
  1. SEPG or software engineering process group
  2. MSG or management steering group
  3. TWG or technical work group
- It is the third component from which most of the resources are obtained, including:
  1. Human resources
  2. Finance
  3. Manufacturing
  4. Development
- However, the most important is the first one, often simply called the process group.
- It provides sustaining support for the SPI program and reinforces the sponsorship.
- The second component, the MSG, charters the SEPG.
- This is actually a contract between the SEPG and the management of the organization. 
- Its purpose is to outline the roles, the responsibilities and the authority of the SEPG.
- The third component is also known as the process improvement team or process action team. 
- Different work groups created focus on different issues of the SPI program. 
- A software engineering domain is addressed by the technical work group. 
- It is not necessary for the TWGs to address the technical domains; they can address issues such as software standardization, purchasing, travel reimbursement and so on. 
- The team usually consists of people who have both knowledge and experience regarding the area under improvement.
- The life of TWGs is however finite and is defined in the charter. 
- Once they complete their duties, they return to their normal work.
- In the early stages of an SPI program, the TWGs tend to underestimate the time required to complete the objectives assigned to them.
- In that case the TWGs have to request more time from the MSG.
- Another important component could be the SPIAC or software process improvement advisory committee. 
- This is created in organizations where there are multiple functioning SEPGs. 


Wednesday, January 16, 2013

What kinds of functions are used by Cleanroom Software Engineering approach?


Around 1980, Harlan Mills and his colleagues Linger, Poore and Dyer developed a software process at IBM that promised to build zero-defect software. This process is now popularly known as Cleanroom software engineering. It was named by analogy with the cleanrooms used in semiconductor manufacturing.

The Cleanroom software engineering process makes use of statistical process control features. The software systems and applications thus produced have certified reliability, and productivity is also increased because the software has no defects at delivery.
Below mentioned are some key features of the Cleanroom software engineering process:
  1. Usage scenarios
  2. Incremental development
  3. Incremental release
  4. Statistical modeling
  5. Separate development
  6. Acceptance testing
  7. No unit testing
  8. No debugging
  9. Formal reviews with verification conditions
Basic technologies used by the CSE approach are:
  1. Incremental development
  2. Box structured specifications
  3. Statistical usage testing
  4. Function theoretic verification
- In the incremental development phase of CSE, the increments overlap, and each takes around 12-18 weeks from the beginning of specification to the end of test execution.
- Partitioning of the increments is critical as well as difficult. 
Formal specification of the CSE process involves the following:
  1. Box structured Designing: Three types of boxes are identified namely black box, state box and clear box.
  2. Verification properties of the structures and
  3. Program functions: These are one kind of functions that are used by the clean room approach.
- State boxes are the description of the state of the system in terms of data structures such as sequences, sets, lists, records, relations and maps. 
- Further, they include the specification of operations and state invariants.
- Each and every operation carried out needs to preserve the invariant.
- In Cleanroom, a constructed program is checked for syntax errors by a parser, but it is not run by the developer.
- A team review is responsible for performing verification which is driven by a number of verification conditions. 
- Productivity is increased by 3–5 times in the verification process as compared to the debugging process. 
- Proving the program is always an option for the developers, but it calls for a lot of math-intensive work.
- As an alternative, the Cleanroom software engineering approach prefers a team code inspection in terms of two things, namely:
  1. Program functions and
  2. Verification conditions
- After this, an informal review is carried out which confirms whether all conditions have been satisfied or not. 
- Program functions are simply functions describing the function computed by a prime program.

- Functional verification steps are:
1.    Specifying the program by pre- and post-conditions.
2.    Parsing the program into prime programs.
3.    Determining the program functions for the SESEs (single-entry, single-exit regions).
4.    Defining the verification conditions.
5.    Inspecting all the verification conditions.
- Program functions also define the conditions under which a program can be executed legally. Such program functions are called pre-conditions.
- Program functions can also express the effect that program execution has upon the state of the system. Such program functions are called post-conditions.
- Program functions are mostly expressed in terms of the input arguments, instance variables and return values of the program.
- However, they cannot be expressed by local program variables. 
- The concept of nested blocks is supported by a number of modern programming languages, and structured programs always require proper nesting.
- The process determining SESE’s also involves parsing rather than just program functions.


Monday, January 14, 2013

What are requirements for Cleanroom Software Engineering? What is the need for CSE?


The process got its name from the cleanrooms used in semiconductor fabrication. The basic ideology behind the Cleanroom software engineering process is that it focuses on avoiding defects rather than removing them. It makes use of a combination of software quality methods and formal methods. Cleanroom software engineering shifted development from individual craftsmanship to a peer-reviewed process, from sequential development to incremental development, from informal design to disciplined specification and design, from informal coverage to statistical usage testing, and so on.
Cleanroom software engineering is actually an integration of the following three practices:
  1. Program verification
  2. Software engineering modeling
  3. Statistical software quality assurance
The correctness of the design specifications is verified via mathematically based proofs. The role of statistical usage testing here is to dig out the high-impact errors.
The following are the stages of incremental development:
1.   Establishing the requirements
2.   Formal specifications
3.   Development of the increment
4.   Delivery of the software
5.   Requirements change request

Why is Cleanroom Software Engineering required?

- Cleanroom software engineering is required to develop software systems and applications with zero defects.
- Though it takes a lot of time to implement, the payoff is quite high: both quality and productivity are increased.
- The actual cleanroom process begins after the formal specification has been done. 
- The focus then shifts towards building a more explicit design. 
- Next, the design is verified as per the specifications. 
- Two things namely statistical and mathematical reasoning are combined and used together by the cleanroom approach for test generation and testing as well. 
- Cleanroom software engineering has proven to be the first practical attempt for developing the software with statistical quality control and delivering the software product with a known MTTF.

Requirements of Cleanroom Software Engineering

- The key requirements of cleanroom approach are nothing but specifications, requirements and verification methods.
- These formal specifications and verifications are used for developing software of high quality and with a minimum of errors. 
- Statistical usage testing is also required for the evaluation of the reliability of the software product. 

Cleanroom software engineering is followed by a number of benefits:
  1. Zero failures: This is the objective itself; the software is developed with a minimum number of errors if zero is not possible.
  2. Short development cycles: These result from the incremental process. Rework is avoided, and new teams therefore see a two-fold increase in productivity.
  3. Longer product life: The product life is extended with the help of a usage model and investment in detailed specifications.
- It is also a fact that Cleanroom techniques are not much used, since some people believe them to be too mathematical, theoretical or radical for real development processes.
- Also, these techniques rely heavily on the statistical quality control and correctness verification rather than relying on unit testing. 
- This means that there is a large deviation from the traditional software development approach. 
- Proving the program is always on the list of options, but it requires a lot of sophisticated, math-intensive work.
- Therefore, in place of this, Cleanroom uses an alternative:
- a team code inspection structured in terms of verification conditions and program functions.
-  Then an informal review is carried out which confirms whether or not all the verification conditions have been satisfied. 


Monday, October 22, 2012

What is the built-in recovery system in Silk Test?


With the built-in recovery system of the Silk Test automation tool, it has been made possible to restore the application under test to the stable state it was in before it crashed, failed or hung. The stable state to which the recovery system of Silk Test restores the application under test, or AUT, is called the base state of the application.

The test automation counterpart of Silk Test, i.e., WinRunner, does not support this feature. The recovery system comes into action whenever the application under test fails ungracefully.

For client-server based applications, there are 4 major tasks carried out by the built-in recovery system of Silk Test, as listed below:
  1. Ensuring the continuous running of the web browser.
  2. Restoring the application to its default size if it has been minimized by the user.
  3. Setting the working status of the application as active.
  4. Closing the other dialog boxes or child windows that have popped up.
On the other hand, the tasks mentioned below are carried out by the built-in recovery system of Silk Test for application software that is browser based:
  1. Waiting for the browser to restore itself to the active mode if it has been inactive for a long time.
  2. Ensuring that the various below mentioned elements are displayed:
a)   Status bar
b)   Text field
c)   Location
d)   Browser tool bars and so on.

- The data regarding this built-in recovery system is stored in a file named defaults.inc, which can be found in the same directory where Silk Test has been installed.
- Most of the actions carried out on the application under test happen when the application is in the default base state.
- In this state, all the actions are based upon the default properties.
- So, whenever a test case or script is executed, the recovery system is invoked automatically.
- However, the flow of control is different when the tests are run based up on some main function or a test plan. 
- Here, when a test case starts executing via the Silk organizer, that test case gets control first.
- But before the execution of any test case, a function named "default test case enter" is called.
- This function is called with the purpose of calling the set app state function, which in turn invokes the default function "default base state".
- Following this the execution of the test case begins. 
- In either of the following cases, control is passed to the default test case exit function:
  1. If the test case finishes with its execution or
  2. If the test case encounters an error while execution
- The default test case exit function logs the exceptions raised during test case execution, and later calls the set base state function, which in turn calls the default base state function.
- Whenever the tests are run via a main function instead of a test plan, the two recovery functions are invoked in the same way.
- The difference here is that, before the scripts start running, the function called is "default script enter" instead of "default test case enter".
- The value of this function is NULL by default. 
- When the last test case has finished executing the “default script exit” function is called. 
- The purpose of this function is to log the errors or faults that occurred outside the test case. 

