


Tuesday, October 15, 2013

What are the uses of WiMax technology?

- WiMax technology has long been used to support and restore communications.
- It saw a major deployment in Indonesia in the aftermath of the 2004 tsunami.
- WiMax made it possible to provide broadband access quickly, which helped a great deal in restoring communications.
- Organizations such as FEMA and the FCC (Federal Communications Commission) have also relied on WiMax in their communication efforts.
- Highly efficient WiMax applications are available today.
- It offers a broad base of customers, and the services have been improved by adding mobility features.
- Service providers use WiMax to deliver services such as mobile and Internet access, voice, video and data.
There are other advantages of using WiMax technology.
- You save considerable cost and at the same time gain efficiency in services.
- It supports video, VoIP calling and data transfer at high speeds.
- The mobile ecosystem has been upgraded considerably with the arrival of WiMax.
- Broadly, WiMax offers three main applications, namely backhaul, consumer connectivity and business connectivity.
- WiMax has genuinely augmented communications, allowing users to benefit from data and video transmission in addition to voice.
- This enables applications to respond quickly as the situation demands.
- A client can deploy temporary communication services using WiMax technology.
The network can even be scaled up according to circumstances and events.
- This gives visitors, employees and media access on a temporary basis.
- If we are within range of a tower, it is quite easy to gain access for equipment on the premises or for events.

The factors that make WiMax technology so powerful are the following:
> high bandwidth
> high quality of service
> security
> ease of deployment
> full-duplex operation
> reasonable cost

For some applications, WiMax technology is particularly well suited, as in the following:

1. A means of connectivity for small and medium sized businesses.
- This technology has enabled these businesses to progress day by day.
- The connectivity offered by WiMax is good enough to attract clients.
- It then provides them a number of services, such as hotspots.
- This application has therefore come into the spotlight.

2. Backhaul
- The most important asset of WiMax for this application is its range.
- A WiMax tower can connect to other WiMax towers through line-of-sight microwave links.
- This connectivity between two towers is called backhaul.
- A single link can cover up to about 30 miles (roughly 50 km).
- A WiMax network is therefore sufficient for covering remote and rural areas.


3. Nomadic broadband is another application of WiMax, which can be considered an extension of Wi-Fi.
- The access points provided by WiMax may be fewer in number, but they offer very high security.
- Many companies use WiMax base stations to develop their business.


Saturday, October 12, 2013

What is WiMax technology?

Worldwide Interoperability for Microwave Access, or WiMax, is a wireless communications standard designed to deliver data rates of 30-40 Mbit/s. A 2011 update to the technology raised this to around 1 Gbit/s for fixed stations.
- The WiMax Forum is responsible for naming the technology WiMax.
- The forum was formed in 2001 to promote the conformity and interoperability of the standard.
- The forum defines WiMax as a standards-based technology enabling last-mile wireless broadband delivery as an alternative to DSL and cable.
- Interoperable implementations of IEEE 802.16 are referred to as WiMax.
- The WiMax Forum has ratified this family of standards.
- Through the certification provided by the forum, vendors can sell fixed and mobile products that are WiMax certified.
- Certification ensures a level of interoperability with other products certified for the same profile.
- 'Fixed WiMax' is the name given to the original IEEE 802.16 standard.
- WiMax is sometimes described as 'Wi-Fi on steroids'.

It has a number of applications, such as:
Ø  Broadband connections
Ø  Cellular backhaul
Ø  Hotspots and so on.

- The technology shares some similarity with Wi-Fi; however, WiMax can transmit data over much greater distances.
It is because of its range and bandwidth that WiMax is suitable for the following applications:
Ø  Providing telecommunications services such as IPTV and VoIP.
Ø  Providing mobile broadband connectivity that is portable across cities and countries and can be accessed via many kinds of devices.
Ø  Providing a wireless last-mile broadband alternative to DSL and cable.
Ø  Acting as a source of Internet connectivity.
Ø  Metering and smart grids.

- The technology can be used at home to provide Internet access, and it is available in many countries.
- This has also increased market competition.
- WiMax is also economically feasible.
- Mobile WiMax has been used as a replacement for cellular phone technologies such as CDMA and GSM.
- The technology has also been used as an overlay to increase capacity.
Fixed WiMax is now used as a wireless backhaul technology for 2G, 3G and 4G networks in developed and developing nations alike.
- In parts of North America, backhaul is typically provided over a number of copper wireline connections.
- Remote cellular operations, on the other hand, are backhauled via satellite.
- In other cases, microwave links are used.
- The bandwidth requirements of WiMax demand more substantial backhaul than legacy cellular applications.
- In some cases, operators aggregate sites using wireless technology.
- The traffic is then handed off to fiber networks wherever convenient.
Technologies that provide triple-play services are directly compatible with WiMax.
- Such services may include multicasting and quality of service.
- WiMax has been widely used to support communications in emergencies.
- Intel Corporation, for example, has donated WiMax hardware to assist the FCC (Federal Communications Commission) and FEMA.
- Subscriber stations (SS) are the devices used to connect to a WiMax network.
- These devices may be portable, such as the following:
      > Handsets and smartphones
      > PC peripherals such as USB dongles, PC cards and so on.
      > Devices embedded in notebooks.



Monday, October 7, 2013

What is Wi-Fi technology? How does it work?

- Wi-Fi has emerged as a very popular technology.
- It enables electronic devices to exchange information with one another and share an Internet connection without using any cables or wires.
- It is a wireless technology.
- The technology works by means of radio waves.
- The Wi-Fi Alliance defines Wi-Fi as any WLAN (wireless local area network) product based on the IEEE 802.11 standards.
Most WLANs are based on these standards, so the term Wi-Fi has become almost synonymous with WLAN.
- Only those products that complete Wi-Fi Alliance interoperability certification may use the 'Wi-Fi Certified' trademark.
- Many devices now use Wi-Fi, such as PCs, smartphones, video game consoles, digital cameras, digital audio players, tablet computers and so on.
- All these devices can connect to a network and access the Internet by means of a wireless network access point.
- Such an access point is more commonly known as a 'hotspot'.
- The range of an access point indoors is about 20 m.
- Outdoors, the range is much greater.
- An access point can cover a single room or an area of many square miles.
- The larger coverage is achieved by using a number of overlapping access points.
However, Wi-Fi is less secure than wired connections such as Ethernet.
- This is because an intruder does not need a physical connection.
- Web pages using SSL are protected, but intruders can easily access unencrypted traffic.
- For this reason, various encryption technologies have been adopted for Wi-Fi.
- The earlier WEP encryption was weak and easy to break.
- Later came the higher quality protocols WPA and WPA2.
- WPS (Wi-Fi Protected Setup) was an optional feature added in 2007.
- This option had a serious flaw: it allowed an attacker to recover the router's password.
- The Wi-Fi Alliance has since updated its certification and test plan to ensure that all newly certified devices resist this attack.
- To connect to a Wi-Fi LAN, a wireless network interface controller has to be incorporated into the computer system.
- The combination of the computer and the interface controller is often called a station.
- All stations share the same radio frequency communication channel.
- All stations receive any transmission on this channel.
- The user is not informed whether the data was delivered to the recipient, so this is termed a 'best-effort delivery mechanism' (a toy sketch of this shared, best-effort channel is given at the end of this post).
- A carrier wave is used to transmit the data packets.
- These data packets are commonly known as 'Ethernet frames'.
Each station constantly tunes in to the radio frequency channel to pick up available transmissions.
- A Wi-Fi enabled device can connect to the network if it lies within range of the wireless network.
- One condition is that the network must be configured to permit such a connection.
- Providing coverage over a large area requires multiple hotspots.
- Examples are the wireless mesh networks in London.
- Through Wi-Fi, services can be provided in independent businesses, private homes, public spaces, high street chains and so on.
- These hotspots are set up either commercially or free of charge.
- Free hotspots are provided at hotels, restaurants and airports.
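
To make the shared-channel, best-effort behaviour described above concrete, here is a toy model in Python. It is not real 802.11 code; the channel, station names and loss rate are invented purely for illustration: every station tuned to the channel hears every frame, and any frame may be silently lost with nothing reported back to the sender.

```python
import random

class Channel:
    """Toy radio channel: every attached station hears every frame,
    but delivery is best-effort (frames may silently be lost)."""
    def __init__(self, loss_rate=0.2):
        self.loss_rate = loss_rate
        self.stations = []

    def attach(self, station):
        self.stations.append(station)

    def transmit(self, sender, frame):
        for station in self.stations:
            if station is sender:
                continue                 # a station does not receive its own frame
            if random.random() < self.loss_rate:
                continue                 # lost frame: no error is reported to the sender
            station.receive(frame)

class Station:
    def __init__(self, name, channel):
        self.name = name
        self.channel = channel
        channel.attach(self)

    def send(self, payload):
        self.channel.transmit(self, {"src": self.name, "payload": payload})

    def receive(self, frame):
        print(f"{self.name} heard {frame['src']}: {frame['payload']}")

if __name__ == "__main__":
    ch = Channel()
    a, b, c = Station("A", ch), Station("B", ch), Station("C", ch)
    a.send("hello")   # B and C may both hear this, or either may miss it
```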


Thursday, September 26, 2013

Differentiate between upward and downward multiplexing?

The process of multiplexing is carried out at the transport layer. Several conversations are multiplexed into one connection, physical link or virtual circuit. For example, suppose a host has only one network address available for use; then that address has to be shared by all transport connections originating at that host. Two main multiplexing strategies are followed:
Ø  Upward multiplexing and
Ø  Downward multiplexing

Upward Multiplexing 
- In upward multiplexing, several different transport connections are multiplexed onto one network connection.
- The transport layer groups these transport connections according to their destinations.
- It then maps the groups onto the minimum possible number of network connections.
- Upward multiplexing is particularly useful where network connections are very expensive. A minimal sketch of the idea follows below.
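
As a rough sketch of upward multiplexing (an illustration only; the stream-id tagging shown here is an invented scheme, not a real transport protocol), several logical conversations can be tagged and carried over one shared underlying connection, then demultiplexed at the far end:

```python
import json

class UpwardMux:
    """Carry several logical transport 'conversations' over ONE underlying
    network connection by tagging each message with a stream id."""
    def __init__(self, send_on_network):
        self.send_on_network = send_on_network    # the single shared network connection

    def send(self, stream_id, payload):
        # All streams funnel into the same connection.
        self.send_on_network(json.dumps({"stream": stream_id, "data": payload}))

class UpwardDemux:
    def __init__(self):
        self.handlers = {}                        # stream id -> callback

    def register(self, stream_id, handler):
        self.handlers[stream_id] = handler

    def on_network_data(self, raw):
        msg = json.loads(raw)
        self.handlers[msg["stream"]](msg["data"])

if __name__ == "__main__":
    demux = UpwardDemux()
    demux.register(1, lambda d: print("stream 1 got:", d))
    demux.register(2, lambda d: print("stream 2 got:", d))
    # In a real system the two ends would be joined by a socket;
    # here the sender is wired directly to the receiver for simplicity.
    mux = UpwardMux(send_on_network=demux.on_network_data)
    mux.send(1, "hello from conversation 1")
    mux.send(2, "hello from conversation 2")
```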

Downward Multiplexing 
- Downward multiplexing is used only when connections with high bandwidth are required.
- In downward multiplexing, the transport layer opens multiple network connections and distributes the traffic among them.
- For downward multiplexing to work, the subnet's data links must be able to handle this capacity well.

Another Technique 
- In either case it is not guaranteed that the segments will be delivered in order.
- Therefore, another technique is adopted.
- The segments are numbered sequentially.
- TCP numbers each octet sequentially.
- Each segment is then numbered by the sequence number of the first octet it contains (see the sketch after this list).
- Segments may be damaged in transit, or some may even fail to arrive at the destination.
- Such a failure is not reported to the transmitter.
- However, the successful receipt of a segment is acknowledged by the receiver.
- Cumulative acknowledgements may also be used.
- If no ACK arrives before a timeout fires, the segment is retransmitted.
- Retransmission also occurs when an ACK is lost.
- The receiver must therefore be able to recognize duplicate segments.
- If a duplicate is received before the connection is closed, the receiver assumes that its ACK was lost.
- If a duplicate is received after the connection has closed, the situation is handled differently.
- In this case, the sender and receiver first learn of each other's existence.
- They negotiate the parameters, and transport entity resources are allocated based on mutual agreement.
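
The following simplified Python sketch (not a real TCP implementation; the data, segment size and scenario are made up for illustration) shows the numbering scheme described above: each segment is identified by the sequence number of its first octet, the receiver returns cumulative ACKs, and a retransmitted duplicate is recognized and re-acknowledged rather than accepted twice.

```python
# Toy illustration of octet-based sequence numbers and cumulative ACKs.
data = b"abcdefghijklmnopqrstuvwxyz"
MSS = 8                      # made-up maximum segment size

# Split the byte stream into segments numbered by their first octet.
segments = {seq: data[seq:seq + MSS] for seq in range(0, len(data), MSS)}
print("segment sequence numbers:", sorted(segments))   # 0, 8, 16, 24

def make_receiver():
    """Returns a function that consumes a segment and replies with a cumulative ACK."""
    state = {"expected": 0, "data": b""}
    def deliver(seq, payload):
        if seq == state["expected"]:             # in-order segment: accept it
            state["data"] += payload
            state["expected"] += len(payload)
        # A duplicate or out-of-order segment is not accepted, but it is still ACKed:
        # the cumulative ACK names the next octet the receiver expects.
        return state["expected"]
    return deliver

deliver = make_receiver()
ack = deliver(0, segments[0])          # ACK = 8
ack = deliver(0, segments[0])          # duplicate (sender timed out); ACK is still 8
ack = deliver(8, segments[8])          # ACK = 16
print("cumulative ACK so far:", ack)
```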
Connection release is of two types:

Ø  Asymmetric release:
This is the approach used in telephone systems. However, it does not work well for networks that use packet switching.

Ø  Symmetric release: 
- This is certainly better than the previous one.
- Here, each direction of the connection is released independently of the other.
- A host can continue receiving data even after it has sent the disconnection TPDU.
- Symmetric release, however, has a problem of its own, related to lost and spurious messages.
- There is no proper solution to this problem over an unreliable communication medium.
- Note that this limitation has nothing to do with the particular protocol.
- Putting a reliable protocol on top of an unreliable medium can in fact guarantee eventual delivery of a message.
- What no protocol can guarantee, however, is the time limit within which the message will be delivered.
- Error conditions may prolong the delivery period.
- Crashes and restarts can lose all the state information, and the connection may remain half-open.
- Since no protocol has been designed that completely solves this problem, one has to accept the risks associated with releasing connections.


Wednesday, September 18, 2013

What are the advantages and disadvantages of datagram approach?

- Today's packet-switching networks make use of a basic transfer unit commonly known as the datagram.
- In such packet-switched networks, the arrival order, arrival time and even delivery of data packets are not guaranteed.
- The first packet-switching network to use datagrams was CYCLADES.
Data units are known by different names at different levels of the OSI model.
- For example, at layer 1 the unit is called a chip or symbol, at layer 2 a frame or cell, at layer 3 a packet and at layer 4 a segment.
- The major characteristic of a datagram is that it is independent, i.e., it does not rely on earlier exchanges for the information required for delivery.
- The duration of a connection between any two points is not fixed, unlike in a telephone conversation.
- Virtual circuits are just the opposite of datagrams in this respect.
- Thus, a datagram can be called a self-contained entity.
- It carries information sufficient for routing it from the source to the destination without depending on exchanges made earlier.
- Often, a comparison is drawn between the mail delivery service and the datagram service.
- The user's job is simply to provide the destination address.
- But the user is not guaranteed delivery of the datagram, and if the datagram is successfully delivered, no confirmation is sent back.
- Datagrams are routed to their destination without the help of a predetermined path.
- The order in which data has to be sent or received is given no consideration.
- Because of this, datagrams belonging to a single group may travel over different routes before they reach their common destination (the minimal UDP sketch below illustrates this service model).
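
This datagram service model is essentially what UDP exposes to applications. A minimal Python sketch follows (the loopback address and port 9999 are arbitrary choices for illustration): every send names the full destination address, there is no set-up phase, and the sender receives no confirmation of delivery.

```python
import socket

ADDR = ("127.0.0.1", 9999)          # arbitrary example address and port

# Receiver: binds a socket and waits for whatever datagrams happen to arrive.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(ADDR)

# Sender: no connection set-up phase; every datagram carries the full destination.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"datagram 1", ADDR)
send_sock.sendto(b"datagram 2", ADDR)   # may, in general, take a different route

# The sender gets no confirmation of delivery; the receiver just reads what arrives.
for _ in range(2):
    payload, source = recv_sock.recvfrom(4096)
    print(f"received {payload!r} from {source}")
```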

Advantages of Datagram Approach
  1. Each datagram carries the full destination address rather than just a connection number.
  2. There is no set-up phase for datagram circuits. This means that no resources are consumed in advance.
  3. If one router goes down during a transmission, only the datagrams queued up in that specific router will suffer. The other datagrams will not suffer.
  4. If any fault or loss occurs on a communication line, datagram circuits are capable of compensating for it.
  5. Datagrams play an important role in balancing traffic in the subnet, because the route can be changed halfway through a conversation.
Disadvantages of Datagram Approach

  1. Since each datagram carries the full destination address, it generates more overhead and thus wastes bandwidth. This in turn makes the datagram approach comparatively costly.
  2. A more complicated procedure has to be followed in each router of a datagram circuit to determine where to forward a packet.
  3. In a subnet using the datagram approach, it is very difficult to keep congestion problems at bay.
  4. Any-to-any communication is one of the key disadvantages of datagram subnets. It means that if a system can communicate with any device, then any device can also communicate with that system. This can lead to various security issues.
  5. Datagram subnets are prone to losing or re-sequencing data packets in transit. This puts a great burden on the end systems to monitor, recover and reorder the packets into their original order.
  6. Datagram subnets are less capable of congestion control and flow control. This is because the direction of the incoming traffic is not specified. In virtual circuit subnets, packets flow only along the virtual circuits, which makes controlling them comparatively easy.
  7. The unpredictable nature of the traffic flow makes datagram networks difficult to design.


Monday, September 16, 2013

What are the differences between inter-network routing and intra-network routing?

- Individual networks combined together form an internetwork.
Intermediate internetworking devices are used to make the connections between them.
- All these networking elements work together as a single large unit.
- Internetworking has been made possible by packet-switching technology.
- The router is the most common and important device used for performing both internetwork routing and intranetwork routing.
- Routing across different networks of the internetwork is termed internetwork routing, and routing within the same network is termed intranetwork routing.

In this article we discuss the differences between internetwork routing and intranetwork routing.

- Just like an internetwork, an intranetwork uses IP (Internet Protocol) technology for computing services and sharing information.
- What makes it different from an internetwork is that it is limited to a single organization, whereas an internetwork extends beyond any one organization.
- In other words, an internetwork is spread across organizations and an intranetwork lies within an organization.
- In some cases, the term intranetwork may mean only the internal website of the organization, while in other cases it may cover a larger part of the organization's IT infrastructure.
- Sometimes it may span a number of LANs (local area networks).
- An intranetwork is driven by the goal of minimizing the time, effort and cost at each individual desktop, in order to make the organization more competitive, cost-efficient, timely and productive.
- An intranetwork is capable of hosting multiple websites that are private to the organization, and it may form an important part of the collaboration and communication between the members of the organization.
- An intranetwork also makes use of well-known protocols such as FTP, SMTP and HTTP.
- Intranets are often combined with technologies that lend a modern interface to the systems hosting the corporate data.
- These systems are known as legacy systems.
- An intranetwork can be seen as a private analogue of the internetwork.
- That is, internetwork technology has simply been adopted by an organization for its private use.
- Extranetworks are a modified version of intranetworks.
- Here, the website may also be accessed by non-members, i.e., suppliers, customers or other approved third parties.
- Intranetworks are typically equipped with a special class of protocols called AAA protocols.
- The three As stand for authentication, authorization and accounting.
- Many organizations are concerned about the security of their intranetworks.
- They deploy a firewall and a network gateway to control access to their services.
Intermediate systems connected together form the internetwork, while a bounded part of that internetwork may constitute an intranetwork.
- Intranetwork routing involves routing between two routers which lie in the same network, whereas in internetwork routing, routing is done between routers which reside in different networks.
- Intranetwork routing is quite easy compared to internetwork routing.
- The protocols used in the two types of routing are different.
- An interior gateway protocol is responsible for routing within intranetworks, whereas an exterior gateway protocol takes responsibility for routing across the internetwork.
- The most common example of an interior gateway protocol is OSPF, the Open Shortest Path First protocol (a minimal shortest-path sketch is given after this list).
- The most common example of an exterior gateway protocol is the Border Gateway Protocol, or BGP.
- The routing graphs for the two types are also different.
- In an intranetwork's graph, all the routers are simply linked to one another within the same network.
- The graph is relatively tidy.
- The internetwork's graph, on the other hand, is far more complicated.
- This is because routers of different networks have to be interlinked with one another.
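
OSPF is a full link-state protocol, but the shortest-path computation at its heart can be hinted at with a small sketch. The Python code below runs Dijkstra's algorithm over an invented intranetwork routing graph (the router names and link costs are made up for illustration):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: the kind of shortest-path computation an
    OSPF router runs over its link-state view of the network."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip it
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Invented example topology: routers R1..R4 with link costs.
topology = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 5},
    "R3": {"R1": 4, "R2": 2, "R4": 1},
    "R4": {"R2": 5, "R3": 1},
}
print(shortest_paths(topology, "R1"))    # {'R1': 0, 'R2': 1, 'R3': 3, 'R4': 4}
```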


Wednesday, September 11, 2013

What are transport and application gateways?

- Hosts and routers are separate elements in the TCP/IP architecture.
- Private networks require more protection in order to maintain access control over them.
- The firewall is one such component of the TCP/IP architecture.
- The firewall separates the Internet from the intranet.
- This means all incoming traffic must pass through the firewall.
- Only traffic that is authorized is allowed to pass through.
- It is not possible to simply penetrate the firewall.
A firewall has two kinds of components, namely:
Ø  A filtering router, and
Ø  Gateways of two types, namely application gateways and transport gateways.
- All packets are checked by the router and filtered based on attributes such as protocol type, port numbers, TCP header fields and so on.
Designing the packet-filtering rules is quite a complex task.
- Packet filtering alone offers only limited protection, since with restrictive filtering rules on one side it is difficult to cater to the services needed by the users on the other side.

About Application Gateways
- Application layer gateways are layer-7 intermediate systems designed mainly for access control.
- However, these gateways are not very common in the TCP/IP architecture.
- They are sometimes used to solve certain internetworking issues.
- Application gateways follow a proxy principle to support authentication, access-control restrictions, encryption and so on.
- Consider two users A and B.
- A generates an HTTP request, which is first sent to the application layer gateway rather than directly to its destination.
- The gateway checks the authorization of this request and performs encryption.
- Once the request has been authorized, the gateway sends it to user B just as if it had been sent by A.
- B responds with a MIME header and data, which the gateway may decrypt or reject.
- If the gateway accepts the response, it is sent to A as if it came from B.
- Such gateways can be built for any application-level protocol (a small proxy sketch follows below).
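
A very small Python sketch of this proxy principle for plain HTTP follows. It is an illustration only, not a production gateway: the allowed-host list and listening port are invented, and only simple GET requests are handled. The client sends its request to the gateway, the gateway checks whether the request is authorized, fetches the resource on the client's behalf and relays the response back.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlsplit
from urllib.request import urlopen

ALLOWED_HOSTS = {"example.com", "www.example.com"}   # invented access-control policy

class ApplicationGateway(BaseHTTPRequestHandler):
    """Toy HTTP application gateway: authorize, fetch on behalf of the client, relay back."""
    def do_GET(self):
        # A client configured to use a proxy puts the absolute URL in the request line.
        host = urlsplit(self.path).hostname or ""
        if host not in ALLOWED_HOSTS:
            self.send_error(403, "Blocked by gateway policy")    # request not authorized
            return
        with urlopen(self.path) as upstream:                     # fetch on the client's behalf
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "text/html"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)                               # relay the response to the client

if __name__ == "__main__":
    # Point a browser's HTTP proxy setting at 127.0.0.1:8080 to try it out.
    HTTPServer(("127.0.0.1", 8080), ApplicationGateway).serve_forever()
```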


About Transport Gateways
- A transport gateway works much like an application gateway, but at the level of TCP connections.
- These gateways do not depend on the application code, but the client software does need to be aware of the gateway.
Transport gateways are intermediate systems at layer 4.
- An example is the SOCKS gateway.
- The IETF has defined SOCKS as a standard transport gateway.
- Again, consider two clients A and B.
- A opens a TCP connection to the gateway.
- The destination port is simply the SOCKS server port.
- A sends a request to this port asking the gateway to open a connection to B, indicating the destination port number.
- After checking the request, the gateway either accepts or rejects A's connection request.
- If accepted, a new connection is opened to B.
- The server also informs A that the connection has been established successfully.
- The data relay between the two clients is kept transparent.
- In reality, however, there are two TCP connections, each with its own sequence numbers and acknowledgements.
- Transport gateways are simpler than application layer gateways.
- This is because transport gateways are not concerned with data units at the application layer.
- Once the connection has been established, they simply relay the packets (a minimal relay sketch follows below).
This is also why a transport gateway gives higher performance than an application layer gateway.
- The client, however, must be aware of the gateway's presence, since there is no transparency here.
- If the only border existing between the two networks is the gateway, it alone can act as the firewall.
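
At the transport level the gateway's job reduces to relaying bytes between two independent TCP connections once the destination has been approved. The minimal Python sketch below illustrates this relaying; it is not the real SOCKS protocol, and the listening address and fixed destination are invented for illustration.

```python
import socket
import threading

LISTEN = ("127.0.0.1", 1080)          # invented gateway listening address
DESTINATION = ("example.com", 80)     # in real SOCKS the client names this in its request

def pipe(src, dst):
    """Copy bytes from one TCP connection to the other until the source closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # tell the other side there is no more data to relay
    except OSError:
        pass

def handle(client):
    # The gateway opens a second, independent TCP connection to the destination;
    # each connection has its own sequence numbers and acknowledgements.
    remote = socket.create_connection(DESTINATION)
    threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
    threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(LISTEN)
    server.listen()
    while True:
        conn, _addr = server.accept()
        handle(conn)
```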


Tuesday, August 20, 2013

When is a situation called congestion?

- Network congestion is quite a common problem in queuing theory and data networking.
- Sometimes the data carried by a node or a link is so much that its QoS (quality of service) starts to deteriorate.
- This situation or problem is known as network congestion, or simply congestion.
This problem has the following typical effects:
Ø  Queuing delay
Ø  Packet loss
Ø  Blocking of new connections


- The last two effects lead to two further problems.
- As the offered load increases incrementally, either the throughput of the network is actually reduced, or the throughput increases only by very small amounts.
- Aggressive retransmissions are used by network protocols to compensate for packet loss.
- These protocols thus tend to keep the system in a state of network congestion even after the initial load has fallen to a level that would not by itself cause congestion.
- Thus, networks that use these protocols can exhibit two stable states under the same level of load.
- The stable state in which the throughput is low is called congestive collapse.
- Congestive collapse is also called congestion collapse.
- In this condition, the packet-switched network settles into a state where, because of congestion, little or no useful communication is taking place.
- Even the little communication that does happen is of little use.
Congestion usually occurs at certain points in the network called choke points.
- At these points, the outgoing bandwidth is smaller than the incoming traffic.
Choke points are usually the points which connect a local area network to a wide area network.
- When a network falls into such a condition, it is said to be in a stable state.
- In this state, the demand for traffic is high but the useful throughput is quite low.
- The levels of packet delay are also quite high.
- The quality of service becomes extremely bad, and the routers cause packet loss because their output queues are full and they discard packets.
- The problem of network congestion was identified in 1984.
The problem first appeared in practice when the throughput of the NSFNET phase-I backbone dropped three orders of magnitude below its capacity.
- The problem continued to occur until Van Jacobson's congestion control methods were implemented at the end nodes.

Let us now see what causes this problem.
- When the number of packets being sent to a router exceeds its packet-handling capacity, many packets are discarded by the intermediate routers.
- These routers expect the discarded information to be retransmitted.
- The retransmission behaviour of early TCP implementations was very poor.
- Whenever a packet was lost, the end points sent in extra packets repeating the lost information.
- But this doubled the data rate.
- This is just the opposite of what should be done during congestion.
- The entire network is thus pushed into a state of congestive collapse, resulting in a huge loss of packets and reduced network throughput.
Congestion control as well as congestion avoidance techniques are used by modern networks to avoid the congestive collapse problem.
Various congestion control algorithms are available that can be implemented to avoid network congestion (a minimal sketch of one classic behaviour is given below).
- These algorithms are classified according to various criteria, such as the amount and type of feedback, deployability and so on.
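
As one example of such an algorithm, the additive-increase/multiplicative-decrease (AIMD) behaviour that underlies classic TCP congestion control can be shown with a toy simulation. The Python sketch below uses invented numbers for the link capacity and window values; it only illustrates the shape of the behaviour: grow the sending rate slowly while all is well, and cut it sharply when loss signals congestion.

```python
# Toy AIMD (additive increase, multiplicative decrease) simulation.
# The link capacity, loss model and window values are invented for illustration.
CAPACITY = 50          # packets per round-trip that the "network" can carry

def simulate(rounds=20):
    cwnd = 1.0                             # congestion window, in packets
    for rtt in range(rounds):
        loss = cwnd > CAPACITY             # exceeding capacity stands in for packet loss
        if loss:
            cwnd = max(1.0, cwnd / 2)      # multiplicative decrease: back off sharply
        else:
            cwnd += 1.0                    # additive increase: probe for more bandwidth
        print(f"rtt {rtt:2d}: cwnd = {cwnd:5.1f}  {'(loss)' if loss else ''}")

if __name__ == "__main__":
    simulate()
```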

