


Saturday, September 21, 2013

What are the services provided to upper layers by transport layer?

In computer networking, the purpose of the transport layer (layer 4) is to provide end-to-end communication services to the various running applications. These services are offered within a layered architectural framework of protocols and components, and include conveniences such as the following:
Ø  Connection-oriented data stream support
Ø  Reliability
Ø  Flow control
Ø  Multiplexing and so on.

- Both the OSI (open systems interconnection) model and the TCP/IP model include a transport layer. 
- The internet is built on the TCP/IP model, whereas the OSI model is used for general networking. 
- However, the two models define the transport layer differently. Here we shall discuss the transport layer of the TCP/IP model, since it is what keeps the API (application programming interface) convenient for internet hosts. 
- This is in contrast with the definition of the transport layer in the OSI model. 
- TCP (transmission control protocol) is the most widely used transport protocol, and the internet protocol suite is named after it, i.e., TCP/IP. 
- It is a connection-oriented transport protocol and is therefore quite complex. 
- This is also because it incorporates reliable data stream and transmission services into its stateful design. 
- TCP is not alone; other protocols in the same category include SCTP (stream control transmission protocol) and DCCP (datagram congestion control protocol).

Now let us see what services the transport layer provides to its upper layers:
Ø  Connection-oriented communication: It is much easier for an application to interpret a connection as a data stream than to cope with the connectionless models that underlie it, such as the internet protocol (IP) and UDP's datagrams.
Ø  Byte orientation: Processing a byte stream is easier than processing messages in the underlying communication system's format. This simplification lets applications define their own message formats on top of the stream.
Ø  Same-order delivery: The network layer does not generally guarantee that data packets arrive in the same order in which they were sent, but this is a desired feature, and the transport layer provides it through segment numbering. The segments are thus passed to the receiver in order. Head-of-line blocking is a consequence of implementing this.
Ø  Reliability: During transport, some packets may be lost because of errors or problems such as network congestion. Using an error-detection mechanism such as a CRC (cyclic redundancy check), the transport protocol can check the data for corruption and verify correct reception by having the receiver send an ACK or NACK signal to the sending host. Schemes such as ARQ (automatic repeat request) are then used to retransmit lost or corrupted data.
Ø  Flow control: The rate at which data is transmitted between two nodes is managed to prevent a fast sender from transmitting more data than the receiver's buffer can hold at a time, which would otherwise cause a buffer overrun.

Ø  Congestion avoidance: Congestion control keeps the traffic entering the network below the point of congestive collapse. Automatic repeat requests can themselves hold a network in a state of congestive collapse, which is why congestion avoidance is needed. 
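The CRC-based reliability check described above can be sketched in a few lines. This is an illustrative Python sketch using CRC-32 from the standard library; real transport protocols define their own checksum formats (TCP, for instance, uses a 16-bit checksum rather than a CRC-32):

```python
import zlib

def make_segment(payload: bytes) -> bytes:
    # Append a 4-byte CRC-32 checksum to the payload, as a transport
    # protocol would before handing the segment to the network layer.
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def verify_segment(segment: bytes) -> bool:
    # Recompute the CRC over the payload and compare with the trailer;
    # a mismatch means the segment was corrupted in transit and the
    # receiver would reply with a NACK (or simply withhold its ACK).
    payload, trailer = segment[:-4], segment[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

seg = make_segment(b"hello")
print(verify_segment(seg))            # intact segment verifies: True
print(verify_segment(b"jello" + seg[5:]))  # corrupted byte detected: False
```

On a detected failure, an ARQ scheme would trigger retransmission of the affected segment.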


Saturday, September 14, 2013

Explain Border Gateway Protocol (BGP)?

- BGP, or border gateway protocol, is the set of rules used for making routing decisions at the core of the internet. 
- It maintains a table of IP networks, or prefixes, which designate network reachability among autonomous systems. 
- The protocol is classified as a path vector protocol, or sometimes as a variant of the distance vector routing protocols. 
- BGP does not use the metrics of an IGP (interior gateway protocol); instead, paths, rule sets, and policies are used to make routing decisions. 
- This is why BGP is often called a reachability protocol rather than a routing protocol. 
- BGP ultimately replaced the EGP (exterior gateway protocol). 
- This is because it allows full decentralization of routing, making possible the transition from the ARPANET core model to a decentralized system consisting of the NSFNET backbone and its associated regional networks. 
- The version of BGP in use at present is version 4. 
- The earlier versions were discarded as obsolete. 
- Its major advantages are classless inter-domain routing and a technique called route aggregation, which reduces the size of routing tables. 
- The use of BGP has made the whole routing system decentralized.
- BGP is used by most internet service providers to establish routes between one another. 
- This is done especially when the ISPs are multi-homed. 
- That is why, even though it is not used directly by end users, it is still one of the most important protocols in networking. 
- BGP is also used internally by a number of large private IP networks. 
- For example, it is used to join together several large OSPF (open shortest path first) networks when those networks cannot scale to the required size by themselves. 
- BGP is also used for multi-homing a network to provide better redundancy. 
- This can be either to several ISPs or to multiple access points of a single ISP. 
- Neighbors of a border gateway protocol router are known as peers. 
- A peering is created by manually configuring the two routers to establish a TCP session on port 179. 
- 19-byte keepalive messages are sent over the session periodically by each BGP speaker to maintain the connection. 
- Among routing protocols, BGP is unique in relying upon TCP for transport. 
- When the protocol runs between two peers within the same autonomous system, it is called IBGP (internal border gateway protocol). 
- When it runs between different autonomous systems, it is called EBGP (external border gateway protocol).
- Border edge routers are routers placed on the boundary to exchange information between autonomous systems.
- BGP speakers can negotiate session capabilities such as multi-protocol extensions and a number of recovery modes. 
- If the multi-protocol extensions are negotiated at session creation, the BGP speaker can prefix the NLRI (network layer reachability information) with an address family. 
The address families include the following:
Ø  IPv4
Ø  IPv6
Ø  Multicast BGP
Ø  IPv4/IPv6 virtual private networks

- These days, the border gateway protocol is commonly employed as a generalized signaling protocol to carry information about routes that may not be part of the global internet. 
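The 19-byte keepalive message mentioned above has a fixed layout (a 16-byte all-ones marker, a 2-byte length, and a 1-byte type code), which a short Python sketch can construct. This only illustrates the wire format, it is not a working BGP speaker:

```python
def bgp_keepalive() -> bytes:
    # BGP message header: a 16-byte marker of all ones, a 2-byte total
    # length, and a 1-byte message type. A KEEPALIVE (type 4) carries no
    # body, so the whole message is just this 19-byte header.
    marker = b"\xff" * 16
    length = (19).to_bytes(2, "big")
    msg_type = bytes([4])  # 4 = KEEPALIVE
    return marker + length + msg_type

msg = bgp_keepalive()
print(len(msg))   # 19
print(msg[-1])    # 4 (the KEEPALIVE type code)
```

A BGP speaker sends one of these periodically; if none arrives within the hold time, the peer declares the session down.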


Saturday, August 24, 2013

Explain multicast routing?

- Multicast routing is also known as IP multicast. 
- It is used for sending IP (internet protocol) datagrams to a group of receivers interested in receiving them.
- The datagrams are sent to all the receivers in just one transmission. 
- Multicast routing is especially useful for applications that stream media over private networks as well as the internet. 
- IP multicast is the IP-specific version of a more general concept, multicast networking.
- Multicast address blocks are specially reserved in both IPv4 and IPv6. 
- In IPv6, multicast addressing has replaced the broadcast addressing used in IPv4. 
- IP multicast was first standardized in 1986 and is described in RFC 1112. 

This technique is used for the following types of real-time communication over an IP network infrastructure:
Ø  Many-to-many
Ø  One-to-many

- It scales to large receiver populations and requires no prior knowledge of who or how many the receivers are. 
- Multicast uses the network infrastructure efficiently by requiring the source to send a packet only once, even for a large number of receivers.
- Responsibility for replicating the packet lies with the network nodes, i.e., the routers and network switches.
- The packet is replicated as needed until it reaches all the receivers. 
- It is also important that the message is sent only once over any given link.   
- UDP (user datagram protocol) is the low-level protocol most often used with multicast. 
- This protocol does not guarantee reliability, i.e., packets may be delivered or may be lost. 
- Reliable multicast protocols are also available, such as PGM (pragmatic general multicast). 

PGM was developed to add the following two things atop IP multicast:
Ø  Retransmission and
Ø  Loss detection
The following three things are the key elements of IP multicast:
  1. IP multicast group address
  2. Multicast distribution tree
  3. Receiver-driven tree creation
- The sources and the receivers use the group address for sending as well as receiving multicast messages. 
- For the sources, the group address serves as the destination address of the data packets, whereas the receivers use it to inform the network that they want those packets.
- Receivers need a protocol for joining a group. 
- The protocol most commonly used for this purpose is IGMP, i.e., the internet group management protocol. 
- Once a receiver has joined a group, the PIM (protocol independent multicast) protocol is used to construct a multicast distribution tree for that group. 
- The multicast distribution trees set up by PIM are used for delivering the multicast packets to the members of the multicast group. 
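A receiver's side of the group-join step described above can be sketched with the standard socket API. The group address and port below are made-up examples, and the IP_ADD_MEMBERSHIP option is what causes the host's IP stack to emit the IGMP membership report:

```python
import socket
import struct

def join_multicast_group(group: str, port: int) -> socket.socket:
    """Create a UDP socket and join an IPv4 multicast group.

    Setting IP_ADD_MEMBERSHIP triggers an IGMP membership report,
    telling upstream routers that this host wants the group's packets
    so they graft it onto the distribution tree."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Membership request: 4-byte group address + 4-byte local
    # interface address (0.0.0.0 lets the kernel pick an interface).
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# 224.0.0.0/4 is the IPv4 block reserved for multicast, so the first
# octet of any group address falls in the range 224-239.
assert socket.inet_aton("224.1.1.1")[0] == 224
```

Calling `join_multicast_group("224.1.1.1", 5007)` on a multicast-capable host would then let `sock.recvfrom()` receive the group's datagrams; joining requires a suitable network interface, so it is not executed here.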

PIM can be implemented in any of the following variations:
  1. PIM-SM (sparse mode)
  2. PIM-DM (dense mode)
  3. PIM-SSM (source-specific multicast)
  4. Bidirectional PIM (bidir)

- Since about 2006, sparse mode has been the most commonly used variant. 
- The last two variations, SSM and bidir, are simpler and more scalable, and they are also gaining popularity. 
- An IP multicast operation does not require an active source to know about the group's receivers. 
- The construction of the IP multicast tree is driven by the receivers. 
- The network nodes closest to a receiver are responsible for initiating this construction.
- This is what allows multicast to scale to large receiver populations. 
- A multicast router does not need to know about every multicast tree reachable in the network. 
- Rather, it only requires knowledge of its own downstream receivers. 
- This is how multicast-addressed services scale up. 


Thursday, October 25, 2012

What is Perl Testing?


Various testing methodologies have become a cornerstone of modern development processes, and Perl testing is one such methodology. 

"Perl testing is testing that centers on the creation of automated test suites".

The creation of automated test suites for Perl projects is assisted by the roughly 400 testing and quality modules now available on the CPAN.
Now you may be wondering: why automated test suites in Perl? 
The answer is that with an automated test suite, developers as well as project managers gain confidence that the code can indeed carry out its specification. 

What is Perl Testing?

- The Perl development ethos has viewed software testing as a central and critical part of development for years. 
- Gradually, a testing protocol by the name of TAP, the 'test anything protocol', was created for Perl in 1987. 
- The TAP protocol is now available for many languages. 
- Hundreds of the test modules on CPAN make use of the TAP protocol. 
With the help of this protocol, the following kinds of testing have been made possible:
  1. Testing of database queries
  2. Testing of objects
  3. Testing of web sites and so on.
- Around 250,000 tests have been developed for the core Perl language, plus a similar number of tests for its associated libraries. 
- One more advantage of an automated test suite is that additions are made to the code base as changes in functional requirements arrive. 
- While making such additions, refactoring is required to avoid duplication. 
- If there is enough code coverage, issues are automatically highlighted by the test suite, and it becomes fairly easy to spot the knock-on effects of code changes. 
- The job of code coverage is to determine how much of the code has been exercised by the execution of the test suite. 
- This metric can be obtained by the developer, and the branches and sections of the code that are not being tested can also be reported. 
- Testers can combine the most frequently used testing modules, which serves as a good starting point. 
- For cases in which specific functionality must be tested, one only needs to add the specific testing modules to the test suite.
- Perl has always recognized testing as a part of its culture. 
- With the TAP protocol, communication between a test harness and any number of unit tests has been made possible. 
- TAP producers can report test results to the testing harness in a language-agnostic way. 
- Earlier, TAP parsers and producers were available only for Perl, but they are now available for a variety of platforms. 
- The Test Anything web site takes up responsibility for the following:
  1. Development of TAP
  2. Standardization of TAP
  3. Writing of test consumers
  4. Writing of test producers
  5. Evangelization of the protocol and so on.
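TAP itself is plain text: a plan line such as "1..N" followed by one ok/not-ok line per test, which is why producers and consumers can be written in any language. A minimal, illustrative producer sketch in Python (the function name is ours, not part of any TAP library):

```python
# A minimal TAP (Test Anything Protocol) producer: it emits a plan line
# followed by one "ok"/"not ok" line per test, numbered in order.

def run_tap(tests):
    """tests: list of (description, zero-argument callable returning bool)."""
    lines = [f"1..{len(tests)}"]
    for i, (desc, check) in enumerate(tests, start=1):
        status = "ok" if check() else "not ok"
        lines.append(f"{status} {i} - {desc}")
    return "\n".join(lines)

report = run_tap([
    ("addition works", lambda: 1 + 1 == 2),
    ("string upper", lambda: "tap".upper() == "TAP"),
])
print(report)
# 1..2
# ok 1 - addition works
# ok 2 - string upper
```

A test harness (a TAP consumer) parses exactly this stream to count passes and failures, regardless of the language that produced it.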
- In many other testing methodologies, writing and verifying tests can seem a daunting task, but it is pretty easy with the Perl test facilities. 
- Even so, it is not a given that every large Perl project has an automated test suite. 


Saturday, July 21, 2012

What is meant by DNS? What does it contain?


DNS, or the domain name system, is a well-known hierarchical, distributed naming system used for:
  1. Computers
  2. Services
  3. Any resource connected to the internet or to a private network, and so on

What does DNS contain?


- DNS associates various pieces of information with the domain names assigned to each participating entity. 
- The domain name system, also known as the domain name service, takes up the responsibility of resolving queries for these domain names into the corresponding IP addresses. 
- The basic purpose of this whole process is locating computer services and devices on the internet.
- The domain name system has become an essential component of internet functionality because of the worldwide, distributed keyword-based redirection service it provides. 
- To put it simply, it acts as a phone book for the internet. 
- It serves as a phone book in the sense that it translates human-friendly computer host names into their corresponding IP addresses. 
For example,
the domain name www.abc.com might translate into the IP addresses 192.0.34.11 (IPv4) and 2630:0:2c0:201::10 (IPv6).
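Any stub resolver performs this translation step; here is a small Python sketch using the standard socket library (localhost is used so the example does not depend on an external DNS server):

```python
import socket

def resolve(host: str) -> list[str]:
    """Resolve a host name to its addresses, as a DNS stub resolver does.

    getaddrinfo returns (family, type, proto, canonname, sockaddr)
    tuples; the address string is the first element of sockaddr."""
    infos = socket.getaddrinfo(host, None)
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally (hosts file), so no network is needed;
# a real name such as www.abc.com would go out to DNS servers.
print(resolve("localhost"))
```

The same call transparently returns both IPv4 and IPv6 addresses when a name has records of both families.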

- Though DNS serves all the purposes of an ideal phone book for the internet, it differs from a phone book in one respect: DNS can be updated frequently, and those updates distributed, whereas a phone book cannot do either.
- Because of this, the location of a particular service on the network can be changed without affecting the end users, who keep on using the same host name. 
- Users benefit further when they recite meaningful e-mail addresses and URLs (uniform resource locators) without having to know how the computers actually locate the services. 
- With the domain name system, each domain in the network is assigned an appropriate domain name, and this domain name is mapped to the corresponding IP addresses through the designation of authoritative name servers for each domain. 
- These authoritative name servers are responsible for their particular domains, and they in turn can delegate sub-domains to other authoritative name servers.
- This mechanism has done much to make the domain name system fault-tolerant and distributed. 
- It has also eliminated the need for a single central register to be continually updated and consulted. 
- One more feature of the domain name system is that responsibility for updating and maintaining the master record of domains is distributed among many domain name registrars.
- These domain name registrars compete for the business of domain owners and end users. 
- The facility of moving a domain from one registrar to another is also provided in the domain name system.
- The DNS specification also defines the technical functionality of the underlying database service.
- The DNS protocol is a detailed specification of the data structures and communication exchanges used in the domain name system, and it forms a very important part of the internet protocol suite. 


Friday, July 20, 2012

Explain how the data is secured in HTTPS?


HTTPS, or HTTP Secure, can be thought of as an extended version of regular HTTP. It is the communication protocol most widely used, next to regular HTTP itself, for establishing a secure communication path between the user and the server over a computer network. 
HTTPS is far more widely deployed on the internet than on intranets. If we look deeper, we find that it is not actually a protocol in itself, as it may seem from the outside. 
It is regular hyper text transfer protocol (HTTP) simply layered over the SSL/TLS protocol. SSL/TLS thus lends its security capabilities to standard HTTP communications when HTTP is layered upon it. 

In this article we discuss how data is secured in HTTPS. As mentioned above, it is widely deployed on the internet because it provides a convenient means to authenticate the web site, as well as the associated web server with which the connection is being established.

How data is secured in HTTPS


- Such authentication matters because it protects against man-in-the-middle attacks, which rely on eavesdropping on our communications with the server. 
- Moreover, HTTPS provides bidirectional encryption of the communications, i.e., the data exchanged between clients and servers. 
- This bidirectional encryption protects against the tampering and eavesdropping that would otherwise allow the contents of the communications between clients and servers to be forged, and that is what makes it so necessary. 
- HTTPS comes with a reasonable guarantee that you are communicating only with the web site you intended to communicate with, and with no one else.  
- Furthermore, HTTP Secure ensures that the contents of the communication between the user and the site cannot be read or forged by any third party. 
- In HTTPS, the entire HTTP exchange is layered on top of TLS or SSL, enabling encryption of all the HTTP communications content.
- This communications content includes:
  1. The request URL, which states the particular web page that was requested
  2. Query parameters
  3. Headers
  4. Cookies, which contain identity information about the user, and so on. 
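The guarantees above come from the TLS layer underneath HTTP. As a small Python sketch, the standard library's default client-side TLS context already enforces both certificate verification and host-name matching, which is exactly what stops a man-in-the-middle from impersonating the server:

```python
import ssl

# A default client-side TLS context, as used under the hood by HTTPS
# clients. Its defaults give HTTPS its guarantees: the server's
# certificate must chain to a trusted CA, and the certificate's host
# name must match the site we intended to reach.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True: host name is verified
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate required

# Wrapping a TCP socket with ctx.wrap_socket(sock, server_hostname=...)
# would then encrypt everything above TCP: the URL path, query
# parameters, headers, and cookies listed above.
```

Only the destination address and port remain visible on the wire, a limitation the next section discusses.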

Negative Points of HTTPS


Though HTTPS has many advantages, its minus points cannot go unseen.
- HTTPS cannot conceal the fact that a communication took place, or with whom.
- This is because the addresses of the host web sites and the port numbers form a necessary part of the underlying TCP/IP protocols. 
- In practical terms, it means that even with a correctly configured web server, eavesdroppers can still infer the identity of the server as well as the amount and duration of the communication.
- In the early years, HTTPS was commonly used for money transactions over the World Wide Web and for other sensitive transactions such as e-mail.
- In recent years it has become known for the following:
  1. Authenticating web pages,
  2. Providing security to accounts,
  3. Maintaining the privacy of user communications, web browsing, and identity.
- HTTPS has also come to the rescue of Wi-Fi, which, being un-encrypted, is highly prone to attacks. The importance of HTTPS is felt even more when connections are made over Tor or another anonymity network.       


Tuesday, June 12, 2012

What is meant by CORBA architecture?


Software, unlike hardware, does not wear out, but it has to be modified as users' needs change and technology advances. As software systems and applications are modified, their complexity increases proportionally, which in turn leads to an increased rate of errors. 
Some developers suggested that, to reduce this complexity and cut down on maintenance costs and effort, development could be based upon small, simple components. 
Initially this proved very helpful as a means to tackle the software crisis, and in later years it developed into what is now called "component based software development". 

Following this methodology, large software systems and applications are built from small, simple components belonging to pre-existing software systems and applications. Over the years, this process has proved to be an effective approach for enhancing the maintainability and flexibility of the software systems built with it. The software system can be assembled quickly, and within quite a low budget.

Component based development is known to consist of four activities, namely:
  1. Component qualification
  2. Component adaptation
  3. Assembling components
  4. System evolution
We are going to discuss the CORBA architecture in this article; it forms an important part of the third activity, i.e., assembling the components. 

What does CORBA stand for?


- The assembly of components is facilitated by a well-defined infrastructure which provides the binding between separate components. 
- CORBA, which stands for "common object request broker architecture", is an important component technology developed by the OMG (Object Management Group). 
- In CORBA, the "ORB", or "object request broker", is an object-oriented and more advanced version of the older "RPC", or "remote procedure call", technology. 
- With remote procedure calls or object request brokers, client applications are able to call methods (passing requests and receiving responses) on objects accessed across an amalgam of several different networks.

What is CORBA meant for?


- To put it simply, CORBA is an effective standard mechanism by which different operations can be invoked on an object. 
- CORBA falls under the category of distributed middleware technology.
- It is meant to connect remote objects and let them inter-operate across different operating systems, networks, machines, and programming languages.
- This is done by means of the standard IIOP protocol. 
- CORBA has made it easy to write software components in multiple programming languages that need to run together and support multiple platforms. 
- With CORBA, all the components work together like a single set of integrated applications.  
- CORBA normalizes the method call semantics between application objects residing either in the same address space or in remote address spaces. 
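As a toy, in-process analogy (not real CORBA code; real CORBA generates stubs from IDL and ships requests over IIOP), the idea of uniform call semantics through a broker can be sketched like this in Python:

```python
# A toy illustration of how an ORB-style proxy keeps call semantics
# uniform: the client invokes methods on a local stub, and the stub
# forwards the invocation through a dispatch function that stands in
# for the broker. In real CORBA the servant could live in another
# process, on another machine, or be written in another language.

class Stub:
    def __init__(self, dispatch):
        self._dispatch = dispatch  # stands in for the ORB/IIOP transport

    def __getattr__(self, name):
        # Any attribute access becomes a forwarded method invocation.
        def remote_call(*args):
            return self._dispatch(name, args)
        return remote_call

class CalculatorServant:
    def add(self, a, b):
        return a + b

servant = CalculatorServant()
# The dispatch function plays the role of the object request broker.
calc = Stub(lambda method, args: getattr(servant, method)(*args))
print(calc.add(2, 3))  # 5 -- invoked through the stub, as if local
```

The client code never sees where the servant lives; swapping the dispatch function for a networked one would change the transport without changing the call site, which is the point of the broker.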

More about CORBA...


- The first version, CORBA 1.0, was released in 1991. 
- CORBA uses an IDL (interface definition language) to specify the interfaces that objects present to the outer world. 
- CORBA then specifies a mapping from IDL to a specific implementation language such as Java or C++. 
- Standard mappings exist for languages such as C, C++, Ruby, Smalltalk, COBOL, and Python, and non-standard mappings exist for some other languages such as Visual Basic, Tcl, Erlang, and Perl.
- In practice, the software application initializes the ORB and accesses an object adapter. 
- This object adapter maintains things like:
  1. Reference counting
  2. Object policies
  3. Instantiation policies
  4. Object lifetime policies
- The IDL-to-Java mapping is one widely used instance of the CORBA architecture.  


Wednesday, May 30, 2012

Explain the concepts of URL manipulation?


Today, in this internet-savvy world, almost everybody is familiar with what a URL, or uniform resource locator, is. 
If you look at a URL, you can see that it is nothing but a string of characters. These characters make up a reference that points to a resource on the internet. A uniform resource locator is a specific type of uniform resource identifier (URI).
URLs came into existence in 1994, along with the introduction of the World Wide Web, through Sir Tim Berners-Lee and contributions from the internet engineering task force. 
The format of a typical URL consists of a domain name followed by a file path, with forward slashes used to separate the folder and file names. The server name is preceded by a double slash. 

Components of URL


Let us now list the components of a typical URL in the order in which they appear:
  1. The scheme name, which usually names a protocol
  2. A colon following the scheme
  3. Two slashes
  4. The domain name (if any, depending on the scheme)
  5. A port number
  6. The path, which may point to a CGI (common gateway interface) script
  7. The query string
  8. A fragment identifier (optional)
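These components can be pulled apart with a standard URL parser; a Python sketch using a made-up example URL:

```python
from urllib.parse import urlsplit

# Splitting a URL into the components listed above.
url = "https://www.example.com:8080/cgi-bin/search?q=networking#results"
parts = urlsplit(url)

print(parts.scheme)    # "https"            -- the scheme name
print(parts.hostname)  # "www.example.com"  -- the domain name
print(parts.port)      # 8080               -- the port number
print(parts.path)      # "/cgi-bin/search"  -- the path (here a CGI script)
print(parts.query)     # "q=networking"     -- the query string
print(parts.fragment)  # "results"          -- the fragment identifier
```

The colon and double slashes are the delimiters the parser consumes between the scheme and the domain name.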

Categories of URL


- URLs fall under two categories, namely relative URLs and absolute URLs. 
- Relative URLs are used whenever a reference contained in one resource refers to another resource. 
- These relative URLs are resolved against an absolute URL. 
- A URL locates a resource by its primary access mechanism. 
- There are various issues related to URLs, such as URL normalization, URL manipulation, etc. 

What is meant by URL Manipulation?


- URL manipulation is just another name for URL rewriting.
- As the term suggests, it is all about altering the parameters of a URL.
- URL manipulation is used for good purposes as well as bad ones. 
- It is a technique usually employed by web server administrators for convenience, and often used by attackers for nefarious purposes. 
- The original URLs of resources can be quite complicated and complex. 
- Therefore, one purpose of this technique is to make it easy for the user to access a web resource by providing a simple URL. 
- With URL manipulation, the user does not have to cut, copy, or paste a long and arcane string of characters. 
- The technique is also employed because remembering complex URLs is difficult; they are quite lengthy, which makes it tedious for users to remember, store, and use them. 
- Therefore, using URL manipulation, they are rewritten into simple, short URLs that are comparatively easy for users to remember. 
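The benign kind of rewriting described above often amounts to editing a URL's query parameters; a small Python sketch using the standard library (the URL and parameter names are made-up examples):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def set_query_param(url: str, key: str, value: str) -> str:
    """Rewrite one query parameter of a URL, leaving the rest intact."""
    parts = urlsplit(url)
    # Decode the query string into key/value pairs, update one of
    # them, and re-encode the result back into the URL.
    params = dict(parse_qsl(parts.query))
    params[key] = value
    return urlunsplit(parts._replace(query=urlencode(params)))

print(set_query_param("https://example.com/view?page=1", "page", "2"))
# https://example.com/view?page=2
```

A server-side rewrite rule does the reverse of this: it maps a short, memorable URL onto the longer internal one before handling the request.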

Wrong Use of URL Manipulation
- A nefarious use of URL manipulation is to alter the URL of a legitimate site or web resource, without the knowledge or permission of the site owner or administrator, so as to redirect users to an illegitimate web site or web resource. 
- Such illegitimate sites may then install malicious code on the hard drive of the user's system.
- This may also have the intended purpose of increasing the traffic on the attacker's illegitimate web site or application.
- There is a similar-sounding term, URL poisoning. The two terms may sound alike, but they do not mean the same thing. 

What is URL Poisoning?


- URL poisoning is a technique employed to track the activities of a user on the web. 
- It involves adding an identification number to the URL in the web browser when the user visits a particular web site. 
- The URL carrying this ID number is then used to track the user's visits across the site's pages.

