Monday, September 30, 2013
Mistakes in network security are very common, and the same ones are repeated again and again. These problems cannot be solved without changing our working methods. In this article we discuss some common security problems faced by a network.
Ø Using weak and non-complex passwords for accessing the network:
- Brute forcing is an old-school attack to which many system and network administrators are still exposed.
- The well-known CAPTCHA technology has been deployed to mitigate this weakness in password-protected network access.
- In a common CAPTCHA, the user is required to type in the digits or letters displayed on the screen as a distorted image.
- This technology is designed to prevent the network from being accessed by unwanted internet bots.
- However, it is not as safe as it looks.
- It merely gives network admins a false sense of security against brute forcing.
- Complex passwords are the real solution to this problem.
- A complex password combines more than seven characters with special characters and numbers.
- Apart from requiring complex passwords, a password expiration system has to be implemented.
- This system reminds users to change their passwords periodically.
- Care should also be taken regarding the reuse of passwords.
- Cycling through old passwords should not be allowed.
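The password rules above can be expressed as a short check. The sketch below uses hypothetical helper names, and the exact policy (more than seven characters, mixing letters with digits and special characters, no reuse of old passwords) is the one described in the text:

```python
import string

def is_complex(password: str) -> bool:
    """Check the policy described above: more than seven characters,
    mixing letters with digits and special characters."""
    if len(password) <= 7:
        return False
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = any(c in string.punctuation for c in password)
    return has_letter and has_digit and has_special

def may_reuse(new_password: str, history: list) -> bool:
    """Disallow cycling: a new password must not appear in the user's history."""
    return new_password not in history
```

A password expiration system would then call `may_reuse` against the stored history whenever a user picks a new password.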
Ø Using outdated server applications or software:
- Companies release patches from time to time to ensure that the system does not become vulnerable to the various threats.
- Hackers keep devising new exploits and threats that can harm the network if the patches are not applied properly.
- To keep the network protected against new threats, the software and applications have to be updated regularly.
Ø Web cookies:
- Even though viruses and malware cannot be introduced into the network through cookies, third-party cookies can track users and compile records of their browsing histories.
- Cookies that are not encrypted pose a major threat because they leave the system vulnerable to cross-site scripting (XSS) attacks, putting your privacy at risk.
- Such open cookies can expose log-in data, which hackers can use to intrude into your systems.
- The solution to this problem is to use encrypted cookies along with an encoded expiration time.
- Admins might also ask users to re-log-in before accessing important network directories.
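One common way to approximate the "encrypted cookie with an encoded expiration time" idea is to sign the cookie payload with a server-side secret, so tampering and expiry can both be detected. A minimal sketch in Python, assuming an HMAC-signed (not fully encrypted) cookie and a hypothetical `SECRET` kept on the server:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # assumption: generated and kept private on the server

def make_cookie(user: str, ttl: int = 3600) -> str:
    """Encode the user and an expiry timestamp, then sign them."""
    expires = str(int(time.time()) + ttl)
    payload = f"{user}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_cookie(cookie: str):
    """Return the user name if the signature is valid and the cookie
    has not expired; otherwise return None."""
    try:
        b64, sig = cookie.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except Exception:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered cookie
    user, expires = payload.decode().split("|")
    if time.time() > int(expires):
        return None  # expired cookie
    return user
```

For truly confidential cookie contents, the payload would additionally be encrypted rather than only signed.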
Ø Plain hashes:
- Hashing is a technique used for indexing and retrieval in databases, and also for storing credentials.
- In many systems, plain (unsalted) hashes are still used.
- A salt is extra random data added to the hash input; it makes building a look-up table that could assist brute-force or dictionary attacks extremely difficult, practically impossible.
- But this works only when a large salt is used.
- With salting, an attacker usually cannot use a pre-computed look-up table to exploit the network.
- This makes the network security system even harder to break.
- So even if an attacker manages to break into your system, he won't be able to recover the information from the database.
- The encryption key should also be kept hidden.
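The salting idea can be sketched with Python's standard library: a fresh random salt per password plus a slow key-derivation function (PBKDF2 here, as one common choice) makes pre-computed look-up tables useless, since the same password hashes differently for every salt:

```python
import hashlib
import secrets

def hash_password(password: str, salt=None):
    """Derive a salted hash; a fresh 16-byte random salt per password
    defeats precomputed (rainbow) look-up tables."""
    if salt is None:
        salt = secrets.token_bytes(16)  # a large, random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the derivation with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

import hmac  # used for the constant-time comparison above
```

The database then stores only the salt and the digest, never the plain password.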
Ø Shared web hosting:
- This service is used by websites that reside together on the same server.
- Each site is given its own partition.
- This is economical for most systems.
- But if an attacker breaches one website's system here, he may be able to get into the other websites' security systems too.
Friday, September 27, 2013
With the arrival of new technologies, applications and services in the field of networking, competition is rising rapidly. Each of these technologies, services and applications is developed with the aim of delivering QoS (quality of service) that is at least as good as, or better than, that of the legacy equipment. Network operators and service providers trade on trusted brands, and maintaining those brands is of critical importance to their business. The biggest challenge is to put the technology to work in such a way that all customer expectations for availability, reliability and quality are met while, at the same time, network operators keep the flexibility to adopt new techniques quickly.
What is Quality of Service?
- Quality of service is defined by certain parameters which play a key role in the acceptance of new technologies.
- ETSI is the organization working on several QoS specifications.
- It has been actively participating in organizing interoperability events concerning speech quality.
- The importance of QoS parameters has been growing with the increasing interconnectivity of networks and the interaction between many service providers and network operators delivering communication services.
- Quality of service grants the ability to specify parameters on multiple queues in order to raise the performance and throughput of wireless traffic such as VoIP (voice over IP) and streaming media, including audio and video of different types.
- The same is done for ordinary IP traffic over the access points.
- Configuring quality of service on these access points involves setting many parameters on the queues that already exist for the various types of wireless traffic.
- The minimum and maximum wait times for transmission are also specified.
- This is done through the contention windows.
- The flow of traffic from the access point to the client station is governed by the AP EDCA (enhanced distributed channel access) parameters.
- The traffic flow from the client station to the access point is controlled by the station EDCA parameters.
Below we mention some parameters:
Ø QoS preset: The listed options are WFA defaults, optimized for voice, and custom.
Ø Queue: Different queues are defined for the different types of data transmission between the AP and the client stations:
- Voice (data 0): The highest-priority queue, with minimum delay. Time-sensitive data such as streaming media and VoIP is automatically put in this queue.
- Video (data 1): A high-priority queue with minimum delay. Time-sensitive video data is put into this queue automatically.
- Best effort (data 2): A medium-priority queue with medium delay and throughput. This queue holds most traditional IP data.
- Background (data 3): The lowest-priority queue, with high throughput. Bulky data that requires high throughput but is not time sensitive, such as FTP data, is queued up here.
Ø AIFS (arbitration inter-frame space): This limits the waiting time for data frames. The wait is measured in slots, with valid values in the range 1 to 255.
Ø Minimum contention window (cwMin): This QoS parameter is supplied as input to the algorithm that determines the initial random back-off wait time for retransmission.
Ø Maximum burst
Ø Wi-Fi multimedia (WMM)
Ø TXOP limit
Ø Variation in delay
Ø Cell error ratio
Ø Cell loss ratio
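As an illustration of how AIFS and the contention window interact, the sketch below computes a back-off wait in slots for each queue. The per-queue numbers here are hypothetical; real defaults depend on the access point's firmware and the WMM profile in use:

```python
import random

# Hypothetical per-queue EDCA parameters: AIFS in slots, plus
# contention-window bounds (cw_min doubles per retry up to cw_max).
EDCA = {
    "voice":       {"aifs": 1, "cw_min": 3,  "cw_max": 7},
    "video":       {"aifs": 1, "cw_min": 7,  "cw_max": 15},
    "best_effort": {"aifs": 3, "cw_min": 15, "cw_max": 63},
    "background":  {"aifs": 7, "cw_min": 15, "cw_max": 1023},
}

def backoff_slots(queue: str, retries: int = 0) -> int:
    """AIFS plus a random back-off drawn from the contention window;
    the window doubles on each retry, capped at cw_max."""
    p = EDCA[queue]
    cw = min((p["cw_min"] + 1) * (2 ** retries) - 1, p["cw_max"])
    return p["aifs"] + random.randint(0, cw)
```

Smaller AIFS and contention windows are what give the voice and video queues their shorter average wait, and hence their higher effective priority.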
Thursday, September 26, 2013
Multiplexing is carried out at the transport layer: several conversations are multiplexed onto one connection, physical link or virtual circuit. Suppose, for example, that a host has only one network address available; it then has to be shared by all transport connections originating at that host. Two main strategies are followed for multiplexing:
Ø Upward multiplexing and
Ø Downward multiplexing
- In upward multiplexing, different transport connections are multiplexed onto one network connection.
- The transport layer groups these transport connections by destination.
- It then maps the groups onto the minimum possible number of network connections.
- Upward multiplexing is quite useful where network connections are very expensive.
- In downward multiplexing, the transport layer opens multiple network connections and distributes the traffic among them.
- It is used when connections with high bandwidth are required.
- For downward multiplexing, however, the subnet's data links must be able to handle this capacity well.
- In either case, it is not guaranteed that the segments will be delivered in order.
- Therefore, another technique is adopted: the segments are numbered sequentially.
- TCP in fact numbers every octet sequentially.
- Each segment is then numbered by the sequence number of the first octet it contains.
- Segments may get damaged in transit, and some may even fail to arrive at the destination.
- Such a failure produces no acknowledgement to the transmitter.
- However, successful receipt of a segment is acknowledged by the receiver.
- Sometimes cumulative acknowledgements are used.
- If no ACK arrives before the retransmission timer expires, the segment is retransmitted.
- Retransmission also happens when the ACK itself is lost.
- The receiver must therefore be able to recognize duplicates.
- If a duplicate arrives before the connection is closed, the receiver assumes the ACK was lost and simply acknowledges again.
- If a duplicate arrives after the connection has closed, the situation is handled differently: the sender and receiver must again learn of each other's existence.
- They negotiate the parameters, and transport entity resources are allocated based on mutual agreement.
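The octet-based numbering described above can be sketched briefly: each segment carries the sequence number of its first octet, and a cumulative ACK names the next octet the receiver expects. Function names here are illustrative only:

```python
def segment(data: bytes, mss: int, isn: int = 0):
    """Split a byte stream into segments; each carries the sequence
    number of its first octet (TCP-style numbering)."""
    return [(isn + i, data[i:i + mss]) for i in range(0, len(data), mss)]

def cumulative_ack(received: dict, isn: int = 0) -> int:
    """A cumulative ACK names the next octet expected: scan the
    contiguously received segments starting from the initial sequence number."""
    ack = isn
    while ack in received:
        ack += len(received[ack])
    return ack
```

A gap in the received segments freezes the cumulative ACK at the missing octet, which is exactly what tells the sender where retransmission must resume.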
Connection release is of two types:
Ø Asymmetric release:
This is the approach used in telephone systems. However, it does not work well for networks that use packet switching.
Ø Symmetric release:
- This is certainly better than the previous one.
- Here, each direction is released independently of the other.
- A host can continue receiving data after it has sent its disconnection TPDU.
- Symmetric release has its own problem, however, related to levels of indirection and lost or spurious messages.
- There is no proper solution to this problem over an unreliable communication medium.
- Note that this limitation has nothing to do with the protocol itself.
- Putting a reliable protocol over an unreliable medium can guarantee eventual delivery of a message.
- However, no protocol can guarantee the time limit within which the message will be delivered.
- Error conditions might prolong the delivery period.
- Restarting connections can lose all the state information, leaving a connection half-open.
- Since no protocol can be designed to deal with this problem perfectly, one has to accept the risks associated with releasing connections.
Wednesday, September 25, 2013
- Multiplexing, or muxing, is a very important process in computer networks and telecommunications.
- Using this process, a number of digital data streams or analog message signals are combined into one signal and transported over a common medium.
- Multiplexing is used wherever a very expensive resource has to be shared.
- The most common example is using one wire to carry several telephone calls.
- Multiplexing dates back to the 1870s and the beginnings of telegraphy.
- It is now used extensively throughout the field of communications.
- In telephony, telephone carrier multiplexing was developed by George Owen Squier.
- The communication channel over which the multiplexed signal is transmitted may be a physical transmission medium.
- Multiplexing divides the capacity of the high-level communication channel into a number of low-level logical channels, one for each message signal or data stream.
- Demultiplexing is the reverse process.
- It is used to extract the original signals on the receiving side.
- A multiplexer (MUX) is the device that carries out multiplexing, and a demultiplexer (DEMUX) is the device that performs demultiplexing.
- IMUX, or inverse multiplexing, is a process whose aim is exactly the opposite of multiplexing.
- It breaks a single data stream down into several streams and transfers them simultaneously over several communication channels.
- The original stream is recreated later.
Types of Multiplexing
Many different multiplexing technologies are available today, each with its own significance:
Ø SDM or space-division multiplexing:
This technique uses a separate point-to-point wire for each communication channel; examples include an analogue stereo audio cable, a multi-pair telephone cable, a switched star network and a mesh network. Wired SDM, however, is not usually regarded as multiplexing. In wireless SDM, multiple antennas form a phased array, as in MIMO (multiple-input, multiple-output), SIMO (single-input, multiple-output) and MISO (multiple-input, single-output) systems.
Ø FDM or frequency-division multiplexing:
This is an analog process in which signals are sent in different frequency ranges over a shared medium, as in TV and radio broadcasting through the earth's atmosphere. In cable TV, one cable runs to each house, yet many signals can be sent over it to other subscribers as well. To access the desired signal, users tune to its particular frequency. WDM, or wavelength-division multiplexing, is a variant of FDM.
Ø TDM or time-division multiplexing:
Unlike FDM, TDM is a digital technology, though it can occasionally be used as an analog technique as well. The process involves putting bytes from each input stream into a sequence, one after another, ordered so that the receiver can separate them correctly. If this is done quickly enough, the receiver will not detect that the circuit time was also serving another logical communication path.
Ø CDM or code-division multiplexing:
In this technique, several channels share the same frequency spectrum at the same time, and the bandwidth of the spectrum is high compared with the bit rate or symbol rate. It is implemented in one of two forms: direct-sequence spread spectrum and frequency hopping.
Some other types of multiplexing techniques which are less prominent are:
Ø Polarization-division multiplexing: Used in optical and radio communications.
Ø Orbital angular momentum multiplexing
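The round-robin slot interleaving of TDM can be sketched in a few lines of Python. This toy mux assumes equal-length input streams and a fixed slot size:

```python
def tdm_mux(streams, slot: int = 1) -> bytes:
    """Interleave fixed-size time slots from each input stream in
    round-robin order (streams are assumed equal length here)."""
    out = bytearray()
    for i in range(0, len(streams[0]), slot):
        for s in streams:
            out += s[i:i + slot]
    return bytes(out)

def tdm_demux(signal: bytes, n: int, slot: int = 1):
    """Reverse the interleaving: every n-th slot belongs to one stream."""
    frames = [signal[i:i + slot] for i in range(0, len(signal), slot)]
    return [b"".join(frames[k::n]) for k in range(n)]
```

Because the receiver knows the slot size and the number of streams, it can split the combined signal back into the original logical channels, which is exactly the demultiplexing step described above.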
Monday, September 23, 2013
- QoS, or quality of service, is a parameter that refers to a number of aspects of computer networks, telephony and so on.
- This parameter allows traffic to be transported according to specific requirements.
- Technology has advanced so much that computer networks can now double as telephone networks for carrying audio conversations.
- The technology even supports applications with strict service demands.
- The ITU defines quality of service in the field of telephony.
It covers requirements concerning all aspects of a connection, such as the following:
Ø Service response time
Ø Signal-to-noise ratio
Ø Cross-talk
Ø Frequency response
Ø Loudness levels etc.
- The GoS (grade of service) requirement is a subset of QoS and covers those aspects of a connection that relate to its coverage and capacity.
- Examples include outage probability, maximum blocking probability and so on.
- In packet-switched telecommunication networks and computer networking, resource reservation mechanisms come under the concept of traffic engineering.
- QoS can be defined as the ability to give different priorities to different applications, data flows and users.
- QoS guarantees are important when the capacity of the network is insufficient.
- Examples are voice over IP, IPTV and so on.
- All these services are sensitive to delays, have fixed bit rates and have limited capacities.
- A protocol or network supporting QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes.
- Best-effort services, however, do not support quality of service.
- Over-provisioning the network offers an alternative to complex QoS control mechanisms for providing high-quality communication.
- This works when capacity is over-provisioned so far beyond the expected peak traffic load that congestion never occurs.
- Once network congestion is eliminated, QoS mechanisms are no longer required.
- QoS is sometimes also taken to mean the achieved level of service quality, i.e., the GoS.
- Examples are low bit error probability, low latency, high bit rate and so on.
- QoS can also be defined as a metric reflecting the quality of the service as experienced by the user.
- It is this cumulative effect that determines whether the service is acceptable.
Certain types of the network traffic require a defined QoS such as the following:
Ø Streaming media such as IPTV (internet protocol television), audio over Ethernet, audio over IP etc.
Ø Voice over IP
Ø Video conferencing
Ø Storage applications such as iSCSI and FCoE
Ø Safety-critical applications
Ø Circuit emulation services
Ø Network operations support systems
Ø Industrial control systems
Ø Online games
- All the above services are examples of inelastic services: a certain level of latency and bandwidth is required for them to operate properly.
- Elastic services, by contrast, can work with any level of bandwidth and latency.
- An example of an elastic service is a bulk file transfer application based on TCP.
- A number of factors affect quality of service in packet-switched networks.
- These factors can be broadly classified into two categories: technical factors and human factors.
The human factors include the following:
Ø Grade of service and so on.
- Circuit-switched networks, such as ATM (asynchronous transfer mode) networks or GSM voice transmission, have QoS built into their core protocols.
Saturday, September 21, 2013
In computer networking, the purpose of the fourth layer, the transport layer, is to provide end-to-end communication services to the running applications. These services are provided within a layered architectural framework of protocols and components. The transport layer offers convenient services such as the following:
Ø Connection – oriented data stream support
Ø Flow control
Ø Multiplexing and so on.
- Both the OSI (open systems interconnection) model and the TCP/IP model include a transport layer.
- The foundation of the internet is the TCP/IP model, whereas for general networking the OSI model is followed.
- However, the transport layer is defined differently in the two models. Here we discuss the transport layer of the TCP/IP model, since it is what provides the convenient API (application programming interface) to internet hosts.
- This is in contrast to the definition of the transport layer in the OSI model.
- TCP (transmission control protocol) is the most widely used transport protocol, and the internet protocol suite, TCP/IP, is named after it.
- It is a connection-oriented transport protocol and is therefore quite complex.
- This is also because it incorporates reliable data stream and transmission services into its stateful design.
- TCP is not alone: other protocols in the same category include SCTP (stream control transmission protocol) and DCCP (datagram congestion control protocol).
Now let us see what all services are provided by the transport layer to its upper layers:
Ø Connection-oriented communication: It is much easier for an application to interpret the connection as a data stream than to cope with the connectionless models that underlie it, such as the internet protocol (IP) and UDP datagrams.
Ø Byte orientation: Processing a simple stream of bytes is easier than processing messages in the underlying communication system's format. Thanks to this simplification, applications can define their own message formats on top of the byte stream.
Ø Same-order delivery: The underlying network generally does not guarantee that data packets arrive in the order in which they were sent, yet this is one of the desired features of the transport layer. It is provided through segment numbering, with the data then passed to the receiver in order. Head-of-line blocking is a consequence of implementing this.
Ø Reliability: Some data packets may be lost during transport because of errors or problems such as network congestion. Using an error detection mechanism such as a CRC (cyclic redundancy check), the transport protocol can check the data for corruption and verify correct reception by sending either an ACK or a NACK signal to the sending host. Schemes such as ARQ (automatic repeat request) are used to retransmit corrupted or lost data.
Ø Flow control: The rate at which data is transmitted between two nodes is managed to prevent a fast sender from transmitting more data than the receiver's buffer can hold at a time, which would otherwise cause a buffer overrun.
Ø Congestion avoidance: Congestion control limits traffic entering the network so as to avoid congestive collapse. Automatic repeat requests alone, for example, can keep a network in a state of congestive collapse.
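Several of these services, reliability via a checksum plus acknowledgements and retransmission via ARQ, can be illustrated together with a toy stop-and-wait simulation. The framing and function names below are invented for the sketch, with CRC32 standing in for the transport checksum:

```python
import random
import zlib

def send(payload: bytes, seq: int) -> bytes:
    """Frame = sequence byte + payload + CRC32, so corruption is detectable."""
    body = bytes([seq]) + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def receive(frame: bytes, expected_seq: int):
    """Return (ack, payload). A bad CRC yields no ACK; a duplicate
    sequence number is re-ACKed but its payload is discarded."""
    body, crc = frame[:-4], frame[-4:]
    if zlib.crc32(body) != int.from_bytes(crc, "big"):
        return None, None                      # corrupted: no ACK sent
    seq, payload = body[0], body[1:]
    if seq != expected_seq:
        return seq, None                       # duplicate: re-ACK only
    return seq, payload

def transfer(chunks, channel):
    """Stop-and-wait ARQ: retransmit each chunk until its ACK comes back."""
    delivered, seq = [], 0
    for chunk in chunks:
        while True:
            frame = channel(send(chunk, seq))  # the channel may corrupt frames
            ack, payload = receive(frame, seq)
            if ack == seq:
                if payload is not None:
                    delivered.append(payload)
                break                          # ACK received: next chunk
        seq ^= 1                               # alternate the 0/1 sequence bit
    return delivered
```

Running `transfer` over a channel that randomly corrupts frames still delivers every chunk, at the cost of the retransmissions that the congestion-avoidance bullet warns about.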
Friday, September 20, 2013
A number of problems are encountered because of the size of data packets. The data link layer has no ability to handle these problems, and so bridges do not help here either.
The Ethernet also experiences a number of problems because of the following:
Ø The different ways in which the maximum packet size is defined.
Ø The maximum packet size that can be handled by a router.
Ø The maximum length of the slots used for transmission.
Ø Errors due to packet length.
Data packets can be fragmented in two ways:
- Transparent and
- Non-transparent
Either way can be chosen on a network-by-network basis; in other words, no end-to-end agreement exists for deciding which process is to be used.
Ø Transparent fragmentation:
- In transparent fragmentation, a router splits an over-sized packet into smaller fragments.
- These fragments are sent to the next router, which does just the opposite: it reassembles the fragments into the original packet.
- The next network never learns that any fragmentation has taken place.
- Transparency is thus maintained between the small-packet network and the subsequent networks.
- ATM networks, for example, perform transparent fragmentation by means of special hardware.
- There are some issues with this type of fragmentation.
- It burdens the performance of the network, since all fragments of a packet have to pass through the same gateway.
- Fragmentation and reassembly may also have to be repeated for every small-packet network crossed in series.
- Whenever an over-sized packet reaches a router, it is broken up into small fragments.
- These fragments are carried to the exit router of that network.
- The exit router reassembles them and forwards the packet onward.
- Subsequent networks remain unaware of the fragmentation.
- A single packet may therefore be fragmented many times before the destination is finally reached.
- This consumes a lot of time, because fragmentation and reassembly are carried out repeatedly.
- Sometimes it even becomes a cause of corruption of the packet's integrity.
Ø Non-transparent fragmentation:
- Here, too, a router splits the packet into fragments.
- The difference is that the fragments are not reassembled until they reach their destination.
- They remain split until then.
- Since reassembly happens only at the destination host, the fragments can be routed independently of each other.
- This type of fragmentation has its own problems: every fragment has to carry a header all the way to the destination.
- All fragments must also be numbered so that the data stream can be reconstructed without trouble.
Whichever type of fragmentation is used, one thing must be ensured: we must later be able to re-form the original packets from the fragments. This calls for some kind of labeling of the fragments.
Fragmentation is also called segmentation. The IP layer injects a packet into the data link layer, but the data link layer is not responsible for reliable delivery of packets. Each layer imposes some maximum value on packet size for its own reasons. When a large packet has to travel through a network whose MTU (maximum transmission unit) is small, fragmentation is essential.
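The offset-based labeling that fragmentation needs can be sketched as follows. This is a simplified, IP-style scheme with illustrative field names: each fragment records the packet id, its byte offset, and a more-fragments flag, which is enough to reassemble the payload even if fragments arrive out of order:

```python
def fragment(payload: bytes, mtu: int, packet_id: int = 1):
    """Split a payload into fragments no larger than the MTU; each fragment
    records the packet id, its byte offset, and a more-fragments flag."""
    frags = []
    for off in range(0, len(payload), mtu):
        chunk = payload[off:off + mtu]
        more = off + mtu < len(payload)  # True unless this is the last piece
        frags.append({"id": packet_id, "offset": off, "mf": more, "data": chunk})
    return frags

def reassemble(frags):
    """Order fragments by offset (they may arrive out of order) and
    check that the final fragment has the more-fragments flag cleared."""
    frags = sorted(frags, key=lambda f: f["offset"])
    assert not frags[-1]["mf"], "last fragment missing"
    return b"".join(f["data"] for f in frags)
```

In non-transparent fragmentation, `reassemble` would run only at the destination host; in transparent fragmentation, the exit router of each small-packet network would run it.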