
Friday, September 13, 2013

What is Portability Testing?

- Portability testing is the testing of a software component or application to determine the ease with which it can be moved from one machine platform to another.
- In other words, it is the process of verifying the extent to which the software behaves the same way on platforms other than the one it was developed on.
- It can also be understood as the amount of work or effort needed to move software from one environment to another without making any changes to the source code; in the real world this is seldom entirely possible.
For example, moving a computer application from a Windows XP environment to a Windows 7 environment, measuring the effort and time required to make the move, and thereby determining whether it can be reused with ease.

- Portability testing is also considered a sub-part of system testing, since it covers the complete testing of the software as well as its reusability across different computing environments, including different operating systems and web browsers.

What needs to be done before portability testing is performed (prerequisites/preconditions)?
1.   Portability requirements must be kept in mind while designing and coding the software.
2.   Unit and integration testing must have been performed.
3.   The test environment must have been set up.

Objectives of Portability Testing
  1. To partially validate the system, i.e., to determine whether the system under consideration fulfills the portability requirements and can be ported to environments with different:
a). RAM and disk space
b). Processor and processor speed
c). Screen resolution
d). Operating system and its version
e). Browser and its version
This includes ensuring that the look and feel of the web pages is similar and functional across the various browser types and their versions.

2.   To identify the causes of failures against the portability requirements, which in turn helps in identifying flaws that were not found during unit and integration testing.
3.   The failures must be reported to the development teams so that the associated flaws can be fixed.
4.   To determine the extent to which the software is ready for launch.
5.   Help in providing project status metrics (e.g., percentage of use case paths that were successfully tested).
6.   To provide input to the defect trend analysis effort.
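As a rough illustration of how portability requirements can be checked repeatedly across environments, the sketch below uses pytest to run the same functional check over a matrix of hypothetical target platforms. The environment names and the launch_app helper are assumptions made for this example, not part of any real product or test suite.

# Hypothetical portability smoke test: the same functional check is run
# against every (OS, browser) combination named in the portability requirements.
import pytest

TARGET_ENVIRONMENTS = [
    ("Windows 7", "IE 10"),
    ("Windows 7", "Firefox"),
    ("Ubuntu 12.04", "Firefox"),
    ("Mac OS X 10.8", "Safari"),
]

def launch_app(os_name, browser):
    """Placeholder: a real suite would provision the environment
    (e.g. via a VM or a Selenium grid) and return a handle to the running app."""
    return {"os": os_name, "browser": browser, "started": True}

@pytest.mark.parametrize("os_name,browser", TARGET_ENVIRONMENTS)
def test_app_starts_on_all_target_environments(os_name, browser):
    app = launch_app(os_name, browser)
    assert app["started"], f"App failed to start on {os_name}/{browser}"

Running this with pytest reports a pass or fail per environment, which feeds directly into the launch-readiness and defect-trend objectives listed above.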



Wednesday, September 11, 2013

What are transport and application gateways?

- Hosts and routers are separated in the TCP/IP architecture.
- Private networks require additional protection to maintain access control over them.
- The firewall is one of the components used in this architecture.
- The Internet is separated from the intranet by the firewall.
- This means all incoming traffic must pass through the firewall.
- Only traffic that is authorized is allowed to pass through.
- It is not possible to simply penetrate the firewall.
A firewall has two kinds of components, namely:
- a filtering router, and
- two types of gateways, namely application and transport gateways.
- All packets are checked by the router and filtered based on attributes such as protocol type, port numbers, TCP header fields and so on.
Designing the rules for filtering packets is quite a complex task.
- Packet filtering on its own offers only limited protection, since with the filtering rules on one side it is difficult to cater to the services needed by users on the other side.

About Application Gateways
- Application layer gateways are layer-7 intermediate systems designed mainly for access control.
- However, these gateways are not commonly used in the TCP/ IP architecture. 
- These gateways might be used sometimes for solving some inter-networking issues. 
- Application gateways follow a proxy principle to support authentication, access-control restrictions, encryption and so on.
- Consider two users A and B.
- A generates an HTTP request which is first sent to the application layer gateway rather than being sent directly to its destination.
- The gateway checks the authorization of this request and performs encryption.
- After the request has been authorized, it is sent from the gateway to user B just as it would have been sent by A.
- B responds with a MIME header and data, which might be decrypted or rejected by the gateway.
- If the gateway accepts, it is sent to A as if from B. 
- Such gateways can be designed for any application-level protocol.
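To make the A-to-B exchange above concrete, here is a minimal sketch of a client sending an HTTP request through an application-level gateway (an HTTP proxy) using Python's standard library. The gateway address and target URL are placeholders, and a real gateway would add its own authentication and filtering rules.

# Minimal sketch: route an HTTP request through an application-level gateway
# (an HTTP proxy). The gateway sees the full request, so it can authenticate,
# filter, or log it before forwarding it to the real destination.
import urllib.request

PROXY_ADDRESS = "http://gateway.example.local:8080"   # placeholder gateway
TARGET_URL = "http://intranet.example.local/page"     # placeholder destination

proxy_handler = urllib.request.ProxyHandler({"http": PROXY_ADDRESS})
opener = urllib.request.build_opener(proxy_handler)

# The request goes to the gateway first; only if the gateway authorizes it
# is it forwarded to the destination host, as described above.
with opener.open(TARGET_URL, timeout=10) as response:
    print(response.status, response.read(200))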


About Transport Gateways
- The working of the transport gateway is similar to application gateway but it works at the TCP connection level. 
- These gateways do not depend on the application code, but they do require client software that is aware of the gateway.
Transport gateways are intermediate systems at layer 4. 
- An example is the SOCKS gateways. 
- IETF has defined it as a standard transport gateway.
- Again, consider two clients A and B. 
- A TCP connection is opened by A to the gateway. 
- The destination port used is the SOCKS server's port.
- A sends a request to this port asking the gateway to open a connection to B, indicating the destination port number.
- After checking the request, the gateway either accepts or rejects A's connection request.
- If accepted, a new connection is opened to B. 
- The server also informs A that the connection has been established successfully. 
- The data relay between the clients is kept transparent. 
- But in reality there are two TCP connections, each with its own sequence numbers and acknowledgements.
- The transport gateways are simpler when compared with the application layer gateways. 
- This is because transport gateways are not concerned with data units at the application layer.
- Once the connection has been established, the gateway simply relays packets.
This is also why it gives higher performance than an application layer gateway.
- However, the client must be aware of the gateway's presence, since there is no full transparency here.
- If the application gateway is the only border between the two networks, it alone can act as the firewall.
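For illustration, the sketch below shows the client side of the A-to-B exchange against a SOCKS4 gateway: the client opens a TCP connection to the gateway and asks it to connect onward to a destination. The gateway and destination addresses are placeholders, and modern deployments more commonly use SOCKS5; this is only a sketch of the idea.

# Sketch of the client side of a SOCKS4 transport gateway (layer-4 proxy).
# The client opens a TCP connection to the gateway and asks it to connect
# onward to the destination; after that, data is relayed transparently.
import socket
import struct

GATEWAY = ("socks-gw.example.local", 1080)   # placeholder SOCKS gateway
DEST_IP, DEST_PORT = "192.0.2.10", 80        # placeholder destination (B)

with socket.create_connection(GATEWAY, timeout=10) as sock:
    # SOCKS4 CONNECT request: version 4, command 1, dest port, dest IPv4, user id, NUL
    request = struct.pack(">BBH", 4, 1, DEST_PORT) + socket.inet_aton(DEST_IP) + b"user\x00"
    sock.sendall(request)

    reply = sock.recv(8)                      # 8-byte SOCKS4 reply
    if len(reply) == 8 and reply[1] == 0x5A:  # 0x5A means "request granted"
        print("Gateway accepted; relay to B established")
        sock.sendall(b"GET / HTTP/1.0\r\n\r\n")  # data now flows via the gateway
        print(sock.recv(512))
    else:
        print("Gateway rejected the connection request")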


Thursday, August 8, 2013

Flooding - a kind of static algorithm

An algorithm designed to distribute a piece of data to each and every part of a graph is referred to as a flooding algorithm.
- The algorithm has got its name from the concept that involves inundation caused by a flood. 
- The main application of these algorithms is in graphics and computer networking. 
- These algorithms also come very handy in solving a number of mathematical problems. 
- Examples of such problems are graph theory problems, maze problems and so on.
- Though it sounds complicated, the flooding algorithm is quite a simple one.
- Here, every incoming packet is sent out via every outgoing link.
- The only link that is skipped is the one through which the packet arrived.

This algorithm has many applications in the following:
  1. Systems that require bridging, such as Usenet.
  2. Peer to peer file sharing.
  3. Used as a part of the routing protocols such as the DVMRP, OSPF etc.
  4. Used in protocols for ad hoc wireless networks.
Nowadays the flooding algorithm is available with its many variants. The following are two main steps that each variant follows while working:
  1. Each node in the network might act as both receiver and the transmitter.
  2. The incoming message is forwarded by the receiving node to each of its neighboring nodes, except the one from which the message arrived.
- This causes the message to be delivered to each and every part of the network that is reachable. 
- Precautions are required to avoid the waste caused by infinite loops and duplicate deliveries, and to allow messages to expire.
- All these issues are addressed partially by a variant of the flooding algorithm called the selective flooding.
- The usual flooding algorithm sends packets out on every outgoing line.
- The selective flooding algorithm does not send the packet on every line; rather, it selects only the few lines that lie approximately in the right direction.

Advantages and Disadvantages of Flooding Algorithm

Advantages:
  1. If it is possible to deliver a packet, it will be delivered, though possibly a number of times.
  2. Flooding naturally utilizes every path in the network, so the shortest path is also used.
  3. The implementation of the flooding algorithm is quite simple.
Disadvantages:
  1. The cost of the flooding algorithm can be very high because a lot of bandwidth is wasted. Even if a message has a single destination, it will be sent to all the hosts on the network unnecessarily. If a denial-of-service attack or a ping flood occurs, the reliability of the whole network is badly affected.
  2. In a computer network the message might get duplicated, which in turn increases the load on the network's bandwidth and calls for more complex processing to reject duplicate messages.
  3. Duplicate packets might keep circulating forever if the following precautions are not taken (a small simulation combining the first two is sketched below):
- Include a time-to-live or hop count with every packet, set roughly to the number of nodes the packet must pass through on the way to its destination.
- Have each node keep track of every packet that passes through it and forward a given packet only once.
- Enforce a network topology that contains no loops.
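A minimal simulation of flooding that applies the first two precautions above, a hop count and per-node duplicate suppression, might look like this. The topology is made up purely for illustration.

# Minimal flooding simulation: each node forwards an incoming packet on every
# link except the one it arrived on, a hop count limits its lifetime, and a
# per-node "seen" set suppresses duplicates.
from collections import deque

TOPOLOGY = {                      # made-up example network
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def flood(source, packet_id, hop_limit):
    seen = {source}                             # nodes that have already handled the packet
    queue = deque([(source, None, hop_limit)])  # (node, arrived_from, hops_left)
    while queue:
        node, arrived_from, hops = queue.popleft()
        if hops == 0:
            continue                            # time-to-live exhausted
        for neighbour in TOPOLOGY[node]:
            if neighbour == arrived_from or neighbour in seen:
                continue                        # don't send back or re-flood duplicates
            print(f"packet {packet_id}: {node} -> {neighbour}")
            seen.add(neighbour)
            queue.append((neighbour, node, hops - 1))

flood("A", packet_id=1, hop_limit=3)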


Wednesday, July 17, 2013

What are network layer design issues?

- The network layer, i.e., the third layer of the OSI model, is responsible for the exchange of individual pieces of data between hosts over the network.
- This exchange only takes place between end devices that are identified.
For accomplishing this task, 4 processes are used by the network layer and these are:
- Addressing
- Encapsulation
- Routing
- Decapsulation
In this article we focus on the design issues of the network layer.

- To accomplish this task, the network layer also needs to know the topology of the communication subnet and to select appropriate routes through it.
- Another thing the network layer needs to take care of is choosing routes that avoid overloading some routers and communication lines while leaving others idle.

Below mentioned are some of the major issues with the network layer design:
  1. Services provided to the layer 4 i.e., the transport layer.
  2. Implementation of the services that are connection oriented.
  3. Store-and-forward packet switching.
  4. Implementation of the services that are not connection oriented.
  5. Comparison of datagram subnets and virtual circuits.
- The sending host transmits the packet to the router nearest to it, either over a point-to-point carrier link or over a LAN.
- The packet is stored there until it has arrived completely and its checksum has been verified.
- Once verified, the packet is transmitted to the next intermediate router.
- This process continues till the packet reaches its destination.
- This mechanism is termed store-and-forward packet switching, sketched below.
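As a rough sketch of that behaviour, the code below has each router on a route buffer the whole packet, verify a checksum, and only then hand it to the next hop. The checksum choice and the route are simplified assumptions for illustration.

# Store-and-forward sketch: each router receives the complete packet,
# verifies its checksum, and only then forwards it to the next router.
import zlib

def make_packet(payload):
    return {"payload": payload, "checksum": zlib.crc32(payload)}

def store_and_forward(packet, route):
    for router in route:
        # The whole packet must have arrived before verification can happen.
        if zlib.crc32(packet["payload"]) != packet["checksum"]:
            print(f"{router}: checksum failed, packet dropped")
            return False
        print(f"{router}: packet verified, forwarding to next hop")
    print("packet delivered to destination")
    return True

store_and_forward(make_packet(b"hello"), route=["R1", "R2", "R3"])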

The services provided to the transport layer are designed based on the following goals:
  1. They should be independent of the router technology.
  2. Shielding from the type, number and topology of the routers must be provided to the transport layer.
  3. The network addresses that are provided to the transport layer must exhibit a uniform numbering plan irrespective of whether it’s a LAN or a WAN.
Based on the type of service offered, two different organizations are possible.

Offered service is Connection-less: 
- The packets are individually introduced into the subnet and routed independently of each other.
- It does not require any advance setup.
- The subnet is referred to as a datagram subnet and the packets are called datagrams.

Offered service is connection-oriented: 
- In this case a route between the source and the destination must be established before the transmission of packets begins.
- Here, the connection is termed a virtual circuit and the subnet a "virtual circuit subnet", or simply VC subnet.

- The basic idea behind virtual circuits is to avoid choosing a new route for every packet.
- Whenever we establish a connection, a route is selected from source to destination.
- This is counted as part of the connection setup.
- This route is saved in tables maintained by the routers and is then used by all traffic flowing over the connection.
- On release of the connection, the VC is terminated.
- In a connection-oriented service, each packet contains an identifier telling which virtual circuit it belongs to.

- In a datagram subnet no circuit setup is required, whereas it is required in a VC subnet.
- Routers in a datagram subnet hold no per-connection state, whereas in a VC subnet table space is required in each router for each connection; the contrast is sketched below.
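To contrast the two organizations, the sketch below uses small, made-up tables: a datagram router looks up each packet by its full destination address, while a virtual-circuit router looks up a short VC identifier that was installed at connection setup.

# Datagram forwarding: every packet carries a full destination address and is
# routed independently using the routing table.
ROUTING_TABLE = {"host-B": "line-2", "host-C": "line-3"}      # made-up entries

def forward_datagram(packet):
    return ROUTING_TABLE[packet["destination"]]

# Virtual-circuit forwarding: the route was chosen at connection setup; each
# packet carries only a short VC identifier, which indexes the per-connection
# state held in the router's table.
VC_TABLE = {7: ("line-2", 12)}   # incoming VC 7 -> outgoing line-2, outgoing VC 12

def forward_vc(packet):
    out_line, out_vc = VC_TABLE[packet["vc_id"]]
    packet["vc_id"] = out_vc     # the label is rewritten hop by hop
    return out_line

print(forward_datagram({"destination": "host-B", "data": b"..."}))
print(forward_vc({"vc_id": 7, "data": b"..."}))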


Wednesday, June 26, 2013

Encouraging a creative team member to assist in the UI designer duties

There are a certain number of resources that any particular software team can have assigned to it. Based on the amount of work under consideration, the needs of the team are estimated in terms of developers and testers versus the amount of work to be done. If there is a lot of work and there are not enough developers or testers for this quantity of work, then the team has only 3 choices:
- Ask for and get more testers and/or developers, and take on this quantity of work
- Ask for and not get more testers and/or developers, and decline work beyond the amount that can be done with the team that you have
- The third one is the most problematic. The team does not get additional testers or developers, but a lot of pressure builds to take on the additional work. One would not like to think of such a scenario, but it does happen, and eventually the team either gives up the additional work or takes on a lot of stress, and maybe even delivers with reduced quality.
However, twice in the past 4 years, we have come across a situation which is not easily solvable, and for which we did not get any additional support. What was this case? This was the case where the release we were doing had a number of features that required the support of a workflow designer / UI designer. In a typical release, we have a certain number of such resources assigned to the team, based on an expectation that the amount of workflow and UI work required will be a certain % (let us assume that 60% of the work being done by the team needs the support of the workflow / UI team - the reason for it being 60% is that the remaining 40% is work where the team does some kind of tweaking / modification which does not require any workflow or UI changes).
However, this got badly affected in a release where the estimate of the amount of work needing the workflow / UI designer was around 80%, and it was pretty clear that the team doing the workflow / UI design was not staffed for this extra work; even if we had got a budget allocation for the extra work (which was not 100% certain by itself), it takes months to hire somebody with this skill set. Hence, there was no getting around the fact that we had a puzzle on our hands - we had estimated work for which we had enough developers and testers, but we did not have enough designers. What to do?
When we were discussing with the senior members of the team, we came across an interesting suggestion. In the past, the team had noted that there were some members who were more easily able to comprehend the designs put out by the designer team and understood the way they did their reasoning. Given that we really did not have a choice, we went ahead with an open offer to team members who wanted to give free flow to their creative juices: prepare the design, with rounds of review by the designer team (we found that this amount of effort could be accommodated), and then present it to that team. One of the main persons we expected did not volunteer, but another person who was also seen as a prospect volunteered, and we pulled her off her current responsibilities and got her temporarily assigned to the designer team. Over the next few weeks, we kept a close watch on this arrangement, and while the result was not as good as a design done by the designer team, our Product Manager was satisfied with it, and so was the design team, and we went with the design that she produced and the team accepted. Now, this was not a long-term arrangement, but in the scenario described above, it seemed to work.


Friday, June 21, 2013

Explain about the Paged Memory and Segmentation techniques?

Paging and segmentation are both memory management techniques.

What is Paging?

- This technique has been designed so that the system can retrieve data from virtual or secondary memory, load it into main memory, and use it there.
- In this scheme, the operating system retrieves data from secondary memory in blocks of the same size, known as pages.
- This is why the technique is called the paging memory-management scheme.
- This memory management scheme has a major advantage over the segmentation scheme. 
- The advantage is that non-contiguous address spaces are allowed. 
- In segmentation, non-contiguous physical address spaces are not allowed. 
Before paging actually came into use, systems had to fit the whole program into a contiguous memory space.
- This in turn led to a number of issues related to fragmentation and storage.
Paging is very important for the implementation of virtual memory in many general-purpose operating systems.
- With the help of the paging technique, data that cannot fit into physical memory, i.e., RAM, can still be used.
- Paging actually comes into play whenever a program attempts to access pages that are not presently mapped into main memory (RAM).
- Such a situation is termed a page fault.
- At this point control is handed over to the operating system, which handles the page fault.
- This is done in a way that is not visible to the program that raised the interrupt.

The operating system has to carry out the following instructions:
  1. Determining the location of the requested data in auxiliary storage.
  2. Obtaining an empty page frame in main memory to be used for storing the requested data.
  3. Loading the requested data into the page frame obtained above.
  4. Updating the page table so that the newly loaded page is mapped.
  5. Returning control to the interrupted program and retrying the instruction that caused the fault (a simplified sketch of these steps follows).
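The steps above can be sketched as a toy page-table lookup with a page-fault handler. The page size, page table and backing store below are made-up values for illustration only.

# Toy paging sketch: translate a virtual address and handle a page fault by
# loading the missing page from a simulated backing store.
PAGE_SIZE = 4096
page_table = {}                                       # virtual page -> physical frame
backing_store = {0: b"code", 1: b"data", 2: b"heap"}  # made-up pages on disk
free_frames = [3, 4, 5]

def access(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:                    # page fault
        data = backing_store[page]                # step 1: locate data in auxiliary storage
        frame = free_frames.pop(0)                # step 2: obtain an empty frame
        print(f"page fault: loading page {page} into frame {frame} ({data!r})")  # step 3
        page_table[page] = frame                  # step 4: update the page table
        # step 5: the faulting instruction is then retried transparently
    frame = page_table[page]
    return frame * PAGE_SIZE + offset             # physical address

print(hex(access(0x1a2c)))   # first access to this page faults
print(hex(access(0x1f00)))   # same page, no fault this time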

What is Segmentation?

- This memory management technique involves dividing the main memory into various sections or segments.
- In a system that uses this technique, a reference to a memory location contains a value identifying the segment and an offset within it.
- Object files produced during the compilation of programs make use of segments when they have to be linked together to form a program image, and this image has to be loaded into memory.
- For different program modules, different segments might be created. 
- Some programs may even share some of the segments.
- In one approach, memory protection is implemented by means of memory segmentation alone.
- Paging and segmentation can also be combined for memory protection.
- The size of a memory segment is not always fixed and can be as small as a byte.
- Natural divisions such as data tables or individual routines are represented by segments.
This makes segmentation visible to the programmer.
- A length and a set of permissions are associated with every segment.
- A process can refer to a segment only in a way permitted by this set of permissions.
- If it does not, a segmentation fault is raised by the operating system.
Segments also carry a flag that indicates whether the segment is present in the system's main memory (a toy translation sketch follows).
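A similarly simplified sketch of segmented address translation, with per-segment length and permission checks, is shown below. The segment table contents are invented for illustration.

# Toy segmentation sketch: a reference names a segment and an offset; the
# segment's length and permissions are checked before the address is formed.
SEGMENT_TABLE = {                 # made-up segments: base, limit, permissions
    0: {"base": 0x0000, "limit": 0x0FFF, "perms": "rx"},   # code
    1: {"base": 0x2000, "limit": 0x03FF, "perms": "rw"},   # data
}

class SegmentationFault(Exception):
    pass

def translate(segment, offset, access):
    entry = SEGMENT_TABLE.get(segment)
    if entry is None or offset > entry["limit"] or access not in entry["perms"]:
        raise SegmentationFault(f"segment {segment}, offset {hex(offset)}, access '{access}'")
    return entry["base"] + offset

print(hex(translate(1, 0x010, "w")))      # valid write into the data segment
try:
    translate(0, 0x010, "w")              # writing into a read/execute segment
except SegmentationFault as fault:
    print("segmentation fault:", fault)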


Tuesday, May 7, 2013

What is meant by Time sharing system?


In the field of computer science, the sharing of a computer's resources among many users through multi-tasking and multi-programming techniques is termed a time sharing system.
- It was first introduced around 1960 and eventually emerged as the most popular computing model of the 1970s.
- With it came a major shift in the technology of designing efficient computers.
- These systems allowed quite a large number of users to interact with the same computer system at the same time.
- Providing computing capability was a costly affair at that time.
- Time sharing brought this cost down greatly by providing the same capability at a much lower price.
- Since time sharing allows multiple users to interact simultaneously with the same system, it has actually made it possible for the organizations and the individuals to use a system that they do not even own. 
- This has further promoted the interactive use of computers and the development of applications with interactive interfaces.
- The earlier systems apart from being expensive were quite slow. 
- This was the reason why the systems could be dedicated only to one task at a time. 
- Tasks were carried out through control panels, where the operator would manually enter small programs through switches in order to load and execute a new series of programs.
- These programs could take up to weeks to finish executing.
- It was the realization of this interaction pattern that led to the development of time sharing systems.
- Usually, the data entered by a single user came in small bursts of information followed by long pauses.
- But if multiple users were working concurrently on the same system, their activities could fill up the pauses left by any single user.
For a given size of user group, the overall process could thus be made very efficient.
- In the same way, the slices of time spent waiting for network, tape or disk input could be used for the activities of other users.
- A system able to harness this potential advantage was difficult to implement.
- Even though batch processing was dominant at that time, it could only make use of the time delay between two programs.
- In the early days, computer terminals were multiplexed into mainframe computer systems.
- Such implementations were capable of sequentially polling those terminals to check for additional action and data requests made by the user of the system.

- Later came interrupt-driven interconnection technology that made use of IEEE 488, i.e., parallel data transfer technology.
- Time sharing faded for some time with the advent of microcomputing, but it came back into the scene with the rise of the internet.
- The corporate server farms cost in millions and are capable of hosting a large number of customers sharing the same resources.
- The operation of websites, like that of the early serial terminals, happens in bursts of activity followed by idle periods.
- It is because of this burstiness that the services of a web site can be used by a large number of users simultaneously, without the communication delays being noticed by them.
- However, if the server gets too busy, the delays start to become noticeable.
- Earlier, time sharing services, such as service bureaus, were offered by many companies.
- Some examples of systems commonly used for time sharing are listed below, followed by a small round-robin sketch:
  1. SDS 940
  2. PDP – 10
  3. IBM 360
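A very rough sketch of the core idea, interleaving the work of many users in small time slices on one machine, is given below. The job list and slice length are made up for illustration.

# Round-robin time sharing sketch: one CPU is shared among several users by
# giving each runnable job a small time slice in turn.
from collections import deque

jobs = deque([("alice", 3), ("bob", 1), ("carol", 2)])   # (user, slices still needed)
TIME_SLICE = 1

tick = 0
while jobs:
    user, remaining = jobs.popleft()
    print(f"t={tick}: running {user}'s job for {TIME_SLICE} slice")
    tick += TIME_SLICE
    if remaining - TIME_SLICE > 0:
        jobs.append((user, remaining - TIME_SLICE))      # back of the queue
    else:
        print(f"t={tick}: {user}'s job finished")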


Wednesday, April 17, 2013

What are Real-time operating systems?


- An RTOS, or real-time operating system, was developed with the intention of serving application requests as they occur in real time.
- This type of operating system is capable of processing data as and when it comes into the system.
- It does this without adding buffering delays.
- Time requirements are processed in tenths of seconds or on an even smaller scale.
A key characteristic of a real-time operating system is that the amount of time it takes to accept and process a given task remains consistent.
- The variability is so small that it can be ignored.

Real-time operating systems are of two types, as stated below:
  1. The soft real-time operating system: it produces more jitter.
  2. The hard real-time operating system: it produces less jitter than the previous one.
- Real-time operating systems are driven by the goal of giving guaranteed soft or hard performance rather than just producing a high throughput.
- Another distinction is that a soft real-time operating system can generally meet a deadline, whereas a hard real-time operating system meets deadlines deterministically.
- For scheduling purposes, some advanced algorithms are used by these operating systems.
- Flexibility in scheduling has advantages to offer, such as enabling a wider, computer-system-level orchestration of process priorities.
- But a typical real time OS dedicates itself to a small number of applications at a time. 
- There are 2 key factors in any real-time OS, namely:
  1. Minimal interrupt latency and
  2. Minimal thread switching latency.
- Two design philosophies are followed in designing real-time OSs:
  1. Time-sharing design: tasks are switched on a clocked interrupt and on events, at regular intervals. This is also termed round-robin scheduling.
  2. Event-driven design: switching occurs only when another event demands a higher priority. This is why it is also termed priority scheduling or preemptive priority scheduling.
- In the former design, tasks are switched more frequently than strictly required, but this proves good at providing a smooth multi-tasking experience.
- It gives each user the illusion of having the machine entirely to himself or herself.
- Earlier CPU designs needed several cycles to switch tasks, during which the CPU could not perform any other work.
- This is why early operating systems avoided unnecessary switching in order to save CPU time.
- Typically, in any design there are 3 states of a task:
  1. Running or executing on CPU
  2. Ready to be executed
  3. Waiting or blocked for some event
- Many tasks are kept in the second and third states because the CPU can perform only one task at a time.
- The number of tasks waiting in the ready queue may vary depending on the running applications and the type of scheduler used by the CPU.
- On multi-tasking systems that are non-preemptive, a task might have to give up its CPU time voluntarily to let other tasks execute.
- This can lead to a situation called resource starvation, i.e., there are more tasks to execute than there are resources to run them; a small preemptive-priority sketch follows.
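As an illustration of the event-driven (preemptive priority) design and the three task states, the sketch below always picks the highest-priority ready task, preempting the running one when necessary. The task set and priority values are invented for this example.

# Preemptive priority scheduling sketch: the highest-priority READY task runs;
# a newly ready higher-priority task preempts the one currently RUNNING.
import heapq

READY, RUNNING, BLOCKED = "ready", "running", "blocked"   # the three task states

ready_queue = []          # min-heap ordered by priority (lower number = higher priority)

def make_ready(priority, name):
    heapq.heappush(ready_queue, (priority, name))

def schedule(current=None):
    """Return the task that should hold the CPU next."""
    if ready_queue and (current is None or ready_queue[0][0] < current[0]):
        if current is not None:
            heapq.heappush(ready_queue, current)          # preempted task goes back to READY
        current = heapq.heappop(ready_queue)
    return current

make_ready(5, "logging task")
running = schedule()                  # the logging task starts running
make_ready(1, "sensor interrupt handler")
running = schedule(running)           # the higher-priority task preempts immediately
print("RUNNING:", running[1])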


Friday, March 22, 2013

What is an Artificial Neural Network (ANN)?


- An artificial neural network, or ANN (sometimes simply called a neural network), is a mathematical model inspired by biological neural networks.
- This network is supposed to consist of several artificial neurons that are interconnected. 
- This model works with a connectionist approach to computing and processes information on that basis.
- In a number of cases, the neural network can act as an adaptive system that is able to change its structure during a learning phase.
- These networks are particularly used for finding patterns in data and for modeling complex relationships between inputs and outputs.
An analogy for an artificial neural network is the network of neurons in the human brain.
- In an ANN, the artificial nodes are termed neurons, or sometimes neurodes, units or "processing elements".
They are interconnected in such a way that they resemble a biological neural network.
- So far, no formal definition has been given for artificial neural networks.
- These processing elements, or neurons, show a complex global behavior.
The connections between the neurons and their parameters are what determine this behavior.
- Certain algorithms are designed to alter the strength of these connections in order to produce the desired signal flow.
- The ANN operates based on these algorithms.
- As in biological neural networks, functions in an ANN are performed in parallel and collectively by the processing units.
- Here, there is no delineation of the tasks that might be assigned to different units. 
- These neural networks are employed in various fields such as:
  1. Statistics
  2. Cognitive psychology
  3. Artificial intelligence
- There are other neural network models that emulate the biological central nervous system (CNS) and are part of the following fields:
  1. Computational neuroscience
  2. Theoretical neuroscience
- Modern software implementations of ANNs prefer a practical approach over a biologically inspired one.
- This practical approach is based on signal processing and statistics; the biologically inspired approach has been largely abandoned in such implementations.
- Many times, parts of these neural networks serve as components of larger systems that combine adaptive and non-adaptive elements.
- Even though the practical approach is the more suitable one for solving real-world problems, the biologically inspired approach has more to do with the connectionist models of traditional artificial intelligence.
- What they have in common is the principle of distributed, non-linear, local and parallel processing, and adaptation.
- A paradigm shift was marked by the use of neural networks during the late eighties. 
- This shift was from the high level artificial intelligence (expert systems) to low level machine learning (dynamical system). 
- These models are very simple and define functions such as:
f : X → Y
- Three types of parameters are used for defining an artificial neural network:
a)   The interconnection pattern between neuron layers
b)   The learning process
c)   The activation function
- The second parameter updates the weights of the connections, and the third converts the weighted input into an output.
- The ability to learn is what has attracted the most interest in neural networks.
- There are 3 major learning paradigms that are offered by ANN:
  1. Supervised learning
  2. Unsupervised learning
  3. Reinforcement learning
- Training a network means selecting, from the set of allowed models, the one that best minimizes the cost.
- A number of algorithms are available for training, most of which employ some form of gradient descent.
- Other available methods include simulated annealing, evolutionary methods and so on; a minimal single-neuron sketch follows.
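To tie the three defining parameters together (connection weights, an activation function, and a learning rule), the sketch below trains a single artificial neuron on a toy OR dataset with plain gradient descent. All values, including the dataset, learning rate and number of epochs, are illustrative choices, not a prescribed recipe.

# A single artificial neuron: weighted inputs, a sigmoid activation function,
# and supervised learning of the weights by gradient descent on a toy dataset.
import math
import random

def sigmoid(z):                     # activation: converts the weighted input into an output
    return 1.0 / (1.0 + math.exp(-z))

# Toy supervised-learning task: the logical OR of two inputs.
dataset = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.5

for epoch in range(2000):
    for inputs, target in dataset:
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        output = sigmoid(z)
        error = output - target                 # derivative of the squared error (up to a factor)
        gradient = error * output * (1 - output)
        weights = [w - learning_rate * gradient * x for w, x in zip(weights, inputs)]
        bias -= learning_rate * gradient

for inputs, target in dataset:
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, "->", round(output, 2), "(target", target, ")")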

