

Wednesday, March 31, 2010

WHOIS Protocol - Specifications and characteristics

- WHOIS is a TCP-based transaction-oriented query/response protocol that is widely used to provide information services to Internet users.
- Whois is a draft standard protocol. Its status is elective.
- The protocol delivers its content in a human-readable format.
- WHOIS lacks many of the protocol design attributes, for example internationalization and strong security, that would be expected from any recently-designed IETF protocol.
- The Whois program is commonly used in the UNIX environment to connect to a Whois server. The purpose of the server is to provide directory type services.
- The original Whois server was set up so that the Network Information Center could maintain a contact list for networks connected to the Internet. However, many sites now use Whois to provide local directory services.

The use of the data in the WHOIS system has evolved into a variety of uses, including:
- Supporting the security and stability of the Internet by providing contact points for network operators and administrators.
- Determining the registration status of domain names.
- Assisting law enforcement authorities in investigations for enforcing national and international laws.
- Assisting in combating abusive uses of information and communication technology.
- Facilitating inquiries and subsequent steps to conduct trademark clearances and to help counter intellectual property infringement, misuse and theft in accordance with applicable national laws and international treaties.
- Contributing to user confidence in the Internet as a reliable and efficient means of information and communication.
- Assisting businesses, other organizations and users in combating fraud.

Protocol Specification


A WHOIS server listens on TCP port 43 for requests from WHOIS clients. The WHOIS client makes a text request to the WHOIS server, then the WHOIS server replies with text content.
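As a sketch of how simple that exchange is, here is a minimal WHOIS client in Python (the server name whois.iana.org is only an example; any WHOIS server speaks the same protocol on port 43):

```python
import socket

def build_query(name: str) -> bytes:
    # A WHOIS request is simply the query string followed by CRLF.
    return name.encode("ascii") + b"\r\n"

def whois(name: str, server: str = "whois.iana.org") -> str:
    # Connect to TCP port 43, send the query, read until the server closes.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(build_query(name))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    # WHOIS declares no character set, so latin-1 is used here because it
    # can decode any byte sequence without failing.
    return b"".join(chunks).decode("latin-1")
```

Calling whois("example.com") returns whatever text the chosen server sends back.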

Internationalization


The WHOIS protocol has not been internationalized; it has no mechanism for indicating the character set in use.


Tuesday, March 30, 2010

Network News Transfer Protocol - NNTP

- NNTP specifies a protocol for the distribution, inquiry, retrieval, and posting of news articles using a reliable stream-based transmission of news among the ARPA-Internet community.
- NNTP is designed so that news articles are stored in a central database allowing a subscriber to select only those items he wishes to read. Indexing, cross-referencing, and expiration of aged messages are also provided.
- The Network News Transfer Protocol (NNTP) established the technical foundation for the widely used Newsgroups.
- The NNTP protocol is the delivery mechanism for the USENET newsgroup service.
- NNTP is also used by clients who need to read news articles on USENET servers.
- NNTP uses an interactive command and response mechanism that lets hosts determine which articles are to be transmitted.
- Server-to-server exchanges : In the server-to-server exchange, one server either requests the latest articles from another server (pull) or allows the other server to push new articles to it.
- User-to-server connections : The user first connects to a newsgroup server (usually located at an ISP, or Internet service provider), then downloads a list of available newsgroups. The user can then subscribe to a newsgroup and begin reading articles available in that group or post new articles.

Protocol Structure - NNTP


NNTP uses commands and responses for communication.
- Article - Display the header, a blank line, then the body (text) of the specified article.
- Head : Identical to the ARTICLE command except that only the header lines of the article are returned.
- Status : Similar to the ARTICLE command except that no text is returned.
- Group : The required parameter ggg is the name of the newsgroup to be selected.
- Body : Identical to the ARTICLE command except that only the text body of the article is returned.
- List : Returns a list of valid newsgroups and associated information.
- NewGroups : A list of newsgroups created since the given date and time will be listed in the same format as the LIST command.
- NewNews : A list of message-ids of articles posted or received in the specified newsgroup since "date" will be listed.
- Next : The internally maintained "current article pointer" is advanced to the next article in the current newsgroup.
- Post : If posting is allowed, response code 340 is returned to indicate that the article to be posted should be sent.
- Quit : The server process acknowledges the QUIT command and then closes the connection to the client.
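For a hand-rolled client (NNTP servers listen on TCP port 119), every reply starts with a three-digit status code; a small illustrative parser:

```python
def parse_response(line: bytes) -> tuple[int, str]:
    # An NNTP reply line starts with a three-digit status code, e.g.
    # b"340 send article to be posted" after a successful POST command.
    code, _, text = line.rstrip(b"\r\n").partition(b" ")
    return int(code), text.decode("ascii", "replace")
```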


Monday, March 29, 2010

POP - Post Office Protocol

Post Office Protocol (POP) is an application-layer Internet standard protocol used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection. The Post Office Protocol (POP) allows you to fetch email that is waiting in a mail server mailbox. POP defines a number of operations for how to access and store email on your server.
It works in conjunction with the SMTP (Simple Mail Transfer Protocol), which provides the message transport services required to move mail from one system to another.

Purpose of POP


If somebody sends you an email, it usually cannot be delivered directly to your computer. The message is stored in a place where you can pick it up easily. Your ISP (Internet Service Provider) does this job: it receives the message for you and keeps it until you download it. POP, the Post Office Protocol, is what allows you to retrieve mail from your ISP. That is also about all the Post Office Protocol is good for.

What does POP allow you to do?


Things that can be done via the POP include:
- Retrieve mail from an ISP and delete it on the server.
- Retrieve mail from an ISP but not delete it on the server.
- Ask whether new mail has arrived but not retrieve it.
- Peek at a few lines of a message to see whether it is worth retrieving.

POP3 is an open Internet standard. The common POP3 commands and responses are :
- getwelcome(): Gets the greeting from the server.
- user(username): Login with a username. If valid username, server will respond with request for password.
- pass_ (password): Send password. If valid, server response will be two numbers, message count and mailbox size.
- stat(): Get the mailbox status. Response is two numbers, message count and mailbox size.
- list([message]): Get list of messages. An option "message" gets information on a specific message.
- retr(message): Get message number "message".
- dele(message): Delete message number "message".
- rset(): Remove all deleted message markings.
- noop(): No operation. Do nothing. Really. Needed in unusual programming situations.
- quit(): Quit. Commits all changes, unlocks the mailbox, and ends the server connection.
- top(message, lines): Gets just the first "lines" number of lines of message number "message". Useful on low bandwidth lines to get just the first part of long messages.
- uidl([message]): Gets a unique id list -- a message digest including unique ids. The option gets the unique id for the specific message "message".
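The names above are, in fact, the methods of Python's standard poplib module. A sketch of a session follows (host and credentials are placeholders, and a reachable POP3 server on port 110 is needed to actually run fetch_headers), together with a helper showing what the raw +OK reply to STAT looks like on the wire:

```python
import poplib

def parse_stat(line: bytes) -> tuple[int, int]:
    # On the wire, the reply to STAT looks like b"+OK 3 4120":
    # the message count, then the mailbox size in octets.
    _ok, count, size = line.split()[:3]
    return int(count), int(size)

def fetch_headers(host: str, user: str, password: str):
    # Placeholder credentials; requires a reachable POP3 server.
    box = poplib.POP3(host)              # connects to port 110 by default
    box.user(user)
    box.pass_(password)
    count, _size = box.stat()            # poplib returns the two numbers directly
    for n in range(1, count + 1):
        _resp, lines, _octets = box.top(n, 0)   # headers only, saves bandwidth
        yield b"\r\n".join(lines)
    box.quit()
```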


Sunday, March 28, 2010

Dynamic Trunking Protocol (DTP)

The Dynamic Trunking Protocol (DTP) is a proprietary networking protocol developed by Cisco Systems for the purpose of negotiating trunking on a link between two VLAN-aware switches, and for negotiating the type of trunking encapsulation to be used. It works on the Layer 2 of the OSI model. If a port can become a trunk, it may also have the ability to trunk automatically, and in some cases even negotiate what type of trunking to use on the port. DTP provides this ability to negotiate the trunking method with the other device.
There are a couple of other potential issues that arise when you start trunking.
- The first issue is that both ends of a trunk link had better agree that they are trunking, or they will interpret trunk frames as normal frames. To resolve this, Cisco came up with a protocol that lets switches communicate their intentions. The first version of it was Dynamic ISL (DISL), which worked only with ISL; the newer version, the Dynamic Trunking Protocol (DTP), works with 802.1q as well. (VTP, the VLAN Trunking Protocol, is a separate protocol used to distribute VLAN configuration between switches.)
- The second issue is creating VLANs and keeping them consistent across switches.

Switch port modes


- auto : causes the port to be passively willing to convert to trunking. The port will not trunk unless the neighbor is set to on or desirable. This is the default mode.
- on : forces the link into permanent trunking, even if the neighbor doesn't agree.
- off : forces the link to permanently not trunk, even if the neighbor doesn't agree.
- desirable : causes the port to actively attempt to become a trunk, subject to neighbor agreement.
- nonegotiate : forces the port to trunk permanently without sending DTP frames.
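For comparison, on IOS-based switches the equivalent modes are set per interface; a brief sketch (the interface name is only illustrative):

```
interface FastEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode dynamic desirable
```

Using switchport mode trunk together with switchport nonegotiate gives the permanent, non-negotiating trunk described above.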

Protocol Structure of DTP


On a Catalyst set-based switch, the syntax for setting up a link as a trunk is:

set trunk mod_num/port_num [on | desirable | auto | nonegotiate] [isl | dot1q | negotiate] [vlan_range]
Use this command to set the specified port or ports to trunking.


Some of the challenges involved in the process of using test automation tools

Description of automated decision making problems

In the past decade or so, test automation has jumped up to being a solution touted by a number of experts for improving quality. Even institutions that provide training on testing concepts charge the most for courses on automated testing tools and how to become an expert in using them. However, the decision to go in for automated testing needs to be a thoroughly considered one, with the costs and benefits calculated (in one small company, I realized that the decision to go in for automated testing was made simply because the company had hired 2 people who had done some amount of automated testing earlier). After all, it is very easy to mock up an Excel file and a PowerPoint presentation showing the benefits of introducing automated testing, but as I said earlier, this should not be a decision taken in haste.

Here are a few points that you should be aware of before you go in for automated testing tools

- Resources: If you are going in for automated testing, consider the fact that you will need more people on your testing team (and some of them will have to have experience with automated testing). The typical work for a testing team is normally challenging enough, without the additional time required to automate the test cases.
- Time required: Automation of test cases takes time. It takes time to learn how to use a testing tool, it takes time to create a proper automation framework, and only then can you start writing the automation scripts that cover all your test cases (or at least the set of test cases considered proper for automation).
- Skill set differences. For a team comprising black-box testers, as well as a manager of such people, bringing in people expert in automation testing means a different skill set. This can cause various morale issues, as well as a lack of understanding of the specialized needs of the testing team.
- When the application needs to change frequently. During the course of the product development process, the product can change, including the UI. I have seen cases in the past where the UI changes, and the product automation no longer works. In such cases, the script needs to be re-worked, which needs more effort.
- Maintainability of such scripts. Typically, over a period of time, automation scripts need to be maintained, and as people move on in the company, maintaining the same scripts (and ensuring that new people are able to understand the entire structure of the test automation framework) can take increasing periods of effort and time.
- Overlap between test automation areas and manual testing areas. This can be hard to figure out for many products where the product functionality cannot be broken down into cleanly separable areas. In such cases, duplication of testing effort can happen.
- Expensive. From my experience, the cost of test automation is not a one-time effort. We had bought a fairly expensive tool, but then realized that we still had to pay around 40% of the cost annually for the annual maintenance contract (AMC) and for the ability to get regular updates.
- Platform compatibility. This was something new that we learnt. We had a product that was only available on Windows; after a couple of cycles, we also moved to the Mac and to Linux because of the potential of getting more sales. And then we realized that our testing tool did not support Linux, so we either needed to get a new tool for Linux (which meant porting all our test scripts), or we needed to fall back to manual testing (in which case we did not get the benefits of automated testing on Linux).


Saturday, March 27, 2010

CLNP (Connectionless Network Protocol)

CLNP is a datagram network protocol. It provides fundamentally the same underlying service to a transport layer as IP. CLNP provides essentially the same maximum datagram size, and for those circumstances where datagrams may need to traverse a network whose maximum packet size is smaller than the size of the datagram, CLNP provides mechanisms for fragmentation (data unit identification, fragment/total length and offset). Like IP, a checksum computed on the CLNP header provides a verification that the information used in processing the CLNP datagram has been transmitted correctly, and a lifetime control mechanism ("Time to Live") imposes a limit on the amount of time a datagram is allowed to remain in the internet system.

CLNP (Connectionless Network Protocol) may be used between network-entities in end systems or in Network Layer relay systems (or both). CLNP is intended for use in the Subnetwork Independent Convergence Protocol (SNICP) role, which operates to construct the OSI Network Service over a defined set of underlying services, performing functions necessary to support the uniform appearance of the OSI Connectionless-mode Network Service over a homogeneous or heterogeneous set of interconnected subnetworks.

CLNP Protocol Structure


- NLP ID - Network Layer Protocol Identifier : The value of this field is set to binary 1000 0001 to identify this Network Layer protocol as ISO 8473. The value of this field is set to binary 0000 0000 to identify the Inactive Network Layer protocol subset.
- Length ID : Length Indicator is the length in octets of the header.
- Version : Version/Protocol Id Extension identifies the standard Version of ISO 8473.
- Lifetime : PDU Lifetime representing the remaining lifetime of the PDU, in units of 500 milliseconds.
- Flags : There are three flags : segmentation permitted, more segments, error report.
- Type : The Type code field identifies the type of the protocol data unit, which could be data PDU or Error Report PDU.
- Segment Length : The Segment Length field specifies the entire length, in octets, of the Derived PDU, including both header and data (if present).
- Checksum : The checksum is computed on the entire PDU header.
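To make the layout concrete, a sketch in Python of decoding the nine-octet fixed part of the header (the field order follows the list above; any sample values used with it are invented):

```python
import struct

def parse_clnp_fixed_header(pdu: bytes) -> dict:
    # Fixed part of the ISO 8473 header: 9 octets. Flags and type
    # share the fifth octet (3 flag bits, then a 5-bit type code).
    nlpid, hlen, version, lifetime, flags_type, seg_len, checksum = \
        struct.unpack("!BBBBBHH", pdu[:9])
    return {
        "nlpid": nlpid,                      # 0x81 identifies ISO 8473
        "header_length": hlen,               # in octets
        "version": version,
        "lifetime_ms": lifetime * 500,       # lifetime is in 500 ms units
        "seg_permitted": bool(flags_type & 0x80),
        "more_segments": bool(flags_type & 0x40),
        "error_report": bool(flags_type & 0x20),
        "type": flags_type & 0x1F,           # 0b11100 = Data PDU
        "segment_length": seg_len,
        "checksum": checksum,
    }
```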


Friday, March 26, 2010

Routing Information Protocol (RIP) Cont...

RIP Message Format


RIP updates are placed as UDP payload inside an IP datagram. The format is as follows:
- Command : It indicates whether the packet is a request or a response. The request asks that a router send all or part of its routing table. The response can be an unsolicited regular routing update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey information from large routing tables.
- Version number : It specifies the RIP version used. This field can signal different potentially incompatible versions.
- Zero : This field is not actually used by RFC 1058 RIP; it was added solely to provide backward compatibility with prestandard varieties of RIP. Its name comes from its defaulted value: zero.
- Address family identifier (AFI) : It specifies the address family used. RIP is designed to carry routing information for several different protocols. Each entry has an address-family identifier to indicate the type of address being specified. The AFI for IP is 2.
- Address : It specifies the IP address for the entry.
- Metric : It indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

RIPv2 header explanation


- Command : Indicates whether the packet is a request or a response.
- Version : Version of RIP.
- Unused : It has a value set to zero.
- Address family identifier (AFI) : It specifies the address family used.
- Route tag : It provides a method for distinguishing between internal routes (learned by RIP) and external routes (learned from other protocols).
- IP address : It specifies the IP address for the entry.
- Subnet mask : It contains the subnet mask for the entry. If this field is zero, no subnet mask has been specified for the entry.
- Next hop : It indicates the IP address of the next hop to which packets for the entry should be forwarded.
- Metric : It indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.
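To make the layout concrete, a sketch of decoding a single 20-octet RIPv2 route entry with Python's struct module (entries follow the four-octet header of command, version and the unused field):

```python
import socket
import struct

def parse_rip2_entry(entry: bytes) -> dict:
    # One RIPv2 route entry: AFI(2), route tag(2), IP address(4),
    # subnet mask(4), next hop(4), metric(4), all in network byte order.
    afi, tag, addr, mask, nxt, metric = struct.unpack("!HH4s4s4sI", entry[:20])
    return {
        "afi": afi,                            # 2 for IP
        "route_tag": tag,
        "address": socket.inet_ntoa(addr),
        "subnet_mask": socket.inet_ntoa(mask),
        "next_hop": socket.inet_ntoa(nxt),
        "metric": metric,                      # 1..15 valid, 16 unreachable
    }
```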


Thursday, March 25, 2010

Some cases in which there is low or negligible benefit of going in for automation of test cases

Automation of test cases can lead to major benefits for teams that implement it in an effective way, and with a proper strategy. However, this is not an absolute, and there are many situations when it may not be worth the effort. One needs to do an analysis of the costs and benefits of going in for automation (being sure to include long-term benefits as well: for a product with regular long release cycles, automation may be worth it even if it only pays off in the long term). Some of the situations in which it may not be beneficial to go in for a test automation strategy are:
- Calculate the effort required for writing the test cases for automation: Just because you have test cases for manual testing does not mean that you can go in for automated testing. You would need to convert those test cases into automation scripts, and depending on the size of the product, this can be a sizable effort. The effort required (converted into the number of people required) is a prime input into the calculation of whether test automation should be done.
- How likely is it that there will be a need for automation in the future. Automation has much more value if there is a constant ongoing effort for testing. If there is not likely to be much more need of testing of the product, then the benefit that you can get by automating the test cases is significantly reduced.
- Diversity of cases. This is related to the overall effort required for automation. If there is little chance of load testing or of testing with multiple input parameters, then some of the benefits of going in for automation of testing are reduced.
- When the test software is expensive. Say you have a small product or project that sells for a few hundred dollars or less, and is also not sold in appreciable numbers; then it does not make sense to go in for the commercially available automation software packages, which can be fairly expensive.
- When the test cases are likely to change during the cycle. Consider an automation tool that is UI based: if the UI of the target application keeps getting modified, then the cost of modifying the automation test scripts to track the UI changes can increase the costs involved in an automation strategy.
- Getting people to use these automation testing tools is expensive. Using these automation testing tools can take some level of expertise, and it can take some amount of experience to master using such tools (even though many of them claim that they are simple to use). People who are skilled in using such tools can be expensive to hire, and retain.


Routing Information Protocol (RIP)

Characteristics of RIP


- RIP (Routing Information Protocol) is a standard for exchange of routing information among gateways and hosts.
- RIP is most useful as an "interior gateway protocol".
- All RIP routing protocols are based on a distance vector algorithm called the Bellman-Ford algorithm, after Bellman's development of the equation used as the basis of dynamic programming, and Ford's early work in the area.
- RIP is considered an effective solution for small homogeneous networks.
- For larger, more complicated networks, RIP's transmission of the entire routing table every 30 seconds may put a heavy amount of extra traffic in the network.
- RIP sends routing-update messages at regular intervals and when the network topology changes. When a router receives a routing update that includes changes to an entry, it updates its routing table to reflect the new route.
- Each router periodically sends a list of distance vectors to each of its neighbours.
- The metric must be a positive integer. This metric measures the cost to get to the destination. In RIP, this cost is the number of hops.
- Each hop is assigned a hop count value, which is typically 1. When a router receives a routing update, it adds 1 to the metric value before entering the network in its routing table.
- RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of hops allowed in a path from the source to a destination. The maximum number of hops in a path is 15.
- RIP implements the split horizon and holddown mechanisms to prevent incorrect routing information from being propagated.
- RIP uses numerous timers to regulate its performance. These include a routing-update timer, a route-timeout timer, and a route-flush timer.
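The update rule described above (add one hop, cap at 16, always believe the router currently used as the next hop) can be sketched in a few lines of Python; the destinations and neighbor names here are invented:

```python
INFINITY = 16  # RIP's "unreachable" metric

def apply_update(table: dict, neighbor: str, advertised: dict) -> dict:
    # table maps destination -> (metric, next hop).
    # advertised maps destination -> metric as received from the neighbor.
    for dest, metric in advertised.items():
        cost = min(metric + 1, INFINITY)    # add one hop, cap at infinity
        current = table.get(dest)
        # Accept if the route is new, strictly better, or comes from the
        # router we are already using as the next hop for this destination.
        if current is None or cost < current[0] or current[1] == neighbor:
            table[dest] = (cost, neighbor)
    return table
```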

Limitations of the protocol


- The protocol is limited to networks whose longest path involves 15 hops.
- The protocol depends upon "counting to infinity" to resolve certain unusual situations.
- This protocol uses fixed "metrics" to compare alternative routes.
- It is not appropriate for situations where routes need to be chosen based on real-time parameters such as measured delay, reliability, or load.


Wednesday, March 24, 2010

Scenarios in which to use automated testing - when to use automated versus manual testing

In a previous post (What is Automated Testing), I talked about what automated testing is and what its benefits are. In this post, I will explain some of the scenarios in which the use of automated testing is more likely (and beneficial). This is also part of the discussion about where to use manual testing versus where to use automated testing.
- There is a tremendous benefit to using automated software where the work is repetitive and follows a certain pre-defined set of cases. Consider the case where a team is working on a set of features for a new version of the product. The team also needs to ensure that earlier released features continue to work and are not impacted by changes in code in other parts of the application. This can be done by automating the test cases for these features, so that the testing happens automatically every time a new build is released.
- When a large number of test cases have to be carried out in a fixed time frame, then these can be done through automation. Automation will typically execute a large number of test cases in a much shorter time frame than manual testing.
- Allows load testing. When you need to simulate a large number of users placing load on the system, automation can help a great deal. For applications that seek to evaluate their performance under stress conditions, automation is necessary.
- Testing with a large number of different inputs. Suppose there is a software that takes inputs from the user, and produces an output. In such cases, the testing for the input cases would need to consider a range of different inputs, including acceptable values and a range of unacceptable values. These are done far better through automation testing, since doing this manually involves a lot of effort.
- Automated testing can be tweaked to do a number of tests that would be difficult for people otherwise. For example, during the course of a normal product development cycle, thousands of files can be touched, and it is impossible to test all the changes manually on a regular basis, especially for stuff such as security errors (and there can be many types of checks that the development team would want to do).
And there can be more benefits, would love to hear more from people who use automated testing ..


SOAP (Simple Object Access Protocol) Protocol

SOAP stands for Simple Object Access Protocol. It is a communication protocol for exchanging messages between applications over the Internet. SOAP is a platform-independent and language-independent protocol, based on XML, and is simple and extensible. Because SOAP messages typically travel over HTTP, SOAP allows you to get around firewalls. SOAP is a W3C recommendation.

SOAP is a protocol specification for invoking methods on servers, services, components and objects. A SOAP message may need to be transmitted together with attachments of various sorts, ranging from facsimile images of legal documents to engineering drawings. Such data are often in some binary format. SOAP consists of three parts:
- The SOAP envelope construct defines an overall framework for expressing what is in a message, who should deal with it, and whether it is optional or mandatory.
- The SOAP encoding rules defines a serialization mechanism that can be used to exchange instances of application-defined datatypes.
- The SOAP RPC representation defines a convention that can be used to represent remote procedure calls and responses.

Where is SOAP used ?


One of the most important uses of SOAP is to help enable XML Web Services. A Web Service is an application provided as a service on the web. Web Services are functional software components that can be accessed over the Internet. They combine the best of component-based development with Internet standards that support communication over the net.

Syntax Rules


A SOAP message :
- must be encoded using XML.
- must use the SOAP Envelope namespace.
- must use the SOAP Encoding namespace.
- must not contain a DTD reference.
- must not contain XML Processing Instructions.
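A sketch of what those rules yield in practice, using only the Python standard library (the GetPrice body is an invented example; the namespace URI is the SOAP 1.1 one):

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 namespace

def build_envelope(body_xml: str) -> str:
    # A minimal SOAP 1.1 message: an Envelope wrapping a Body.
    # No DTD reference, no processing instructions, as the rules require.
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_ENV}">'
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    )

def is_soap_envelope(document: str) -> bool:
    # The root element must be Envelope in the SOAP Envelope namespace.
    return ET.fromstring(document).tag == f"{{{SOAP_ENV}}}Envelope"
```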

Message Format


XML was chosen as the standard message format because of its widespread use by major corporations and open source development efforts. The lengthy syntax of XML can be both a benefit and a drawback. While it promotes readability for humans, facilitates error detection, and avoids interoperability problems such as byte-order (Endianness), it can retard processing speed and be cumbersome.


Tuesday, March 23, 2010

Network Virtual Terminal (NVT)

Telnet is designed for terminal-to-terminal communication and distributed computer processing. Each host sets up a Network Virtual Terminal (NVT), and a host at one end assumes that an NVT has been set up at the other end. The NVT defines a set of rules for how information is formatted and sent, such as character set, line termination, and how information about the Telnet session itself is sent. There is a mechanism to negotiate options so that the hosts can operate a more elaborate interface at each end than the NVT, using different fonts etc. The User Host is the one that initiates a conversation, whilst the Server Host is the one providing services.

The Network Virtual Terminal (NVT) is a bi-directional character device. The NVT has a printer and a keyboard. The printer responds to incoming data and the keyboard produces outgoing data which is sent over the TELNET connection and, if echoes are desired, to the NVT's printer as well. Any code conversion and timing considerations are local problems and do not affect the NVT.

Brief NVT description


- NVT commands are inserted into the data stream before it is sent over the TCP/IP connection.
- Every NVT command is prefixed by the character "0xFF".
- There are some basic commands with a two-byte interpretation only (EOF, ABORT, BRK, AYT, NOP, EC), and others with a defined start ( = 0xFF 0xFA) and a defined end ( = 0xFF 0xF0).
- The TCP/IP device separates out NVT commands and processes them without delay, while the data itself is stored to the output stream.
- NVT commands will not appear in the serial port data if the device is a serial to TCP/IP converter.
- If you are sending the character "0xFF" (255), the PC will simply double it, because in NVT "0xFF 0xFF" means send the character "0xFF" to the output.
- NVT uses a negotiation process. It is a way of testing whether terminals on the opposite side use ECHO or not, or whether there are specific terminals, etc.
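The 0xFF doubling rule is simple enough to sketch directly:

```python
IAC = 0xFF  # "Interpret As Command": the prefix byte of every NVT command

def escape_iac(data: bytes) -> bytes:
    # A literal 0xFF data byte is doubled so it is not read as a command.
    return data.replace(b"\xff", b"\xff\xff")

def unescape_iac(data: bytes) -> bytes:
    # The receiver collapses doubled 0xFF bytes back into data.
    return data.replace(b"\xff\xff", b"\xff")
```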


Monday, March 22, 2010

The TELNET (Terminal Network) Protocol

TELNET (TErminaL NETwork) is a network protocol used on the Internet or local area networks to provide a bidirectional interactive communications facility.
- Telnet offers users the capability of running programs remotely and facilitates remote administration.
- Telnet is available for practically all operating systems and eases integration in heterogeneous networking environments.
- The Telnet protocol runs over a TCP connection, sending data as 8-bit bytes in ASCII format, interleaved with Telnet control sequences.

Communication is established using TCP/IP and is based on a Network Virtual Terminal (NVT). On the client, the Telnet program is responsible for translating incoming NVT codes to codes understood by the client's display device as well as for translating client-generated keyboard codes into outgoing NVT codes.

Commands


The Telnet protocol uses various commands to control the client-server connection. These commands are transmitted within the data stream. The commands are distinguished from the data by setting the most significant bit to 1.

Telnet Options


Options give the client and server a common view of the connection. They can be negotiated at any time during the connection by the use of commands. They are described in separate RFCs.
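A client that wants to stay at the plain NVT can simply decline every option it is offered: a DO is answered with WONT, a WILL with DONT. A sketch (the byte values are the standard Telnet command codes):

```python
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

def refuse_all(stream: bytes) -> bytes:
    # Scan for IAC DO <opt> and IAC WILL <opt> sequences and build the
    # refusals a minimal client would send back.
    replies = bytearray()
    i = 0
    while i < len(stream) - 2:
        if stream[i] == IAC and stream[i + 1] in (DO, WILL):
            option = stream[i + 2]
            answer = WONT if stream[i + 1] == DO else DONT
            replies += bytes([IAC, answer, option])
            i += 3
        else:
            i += 1
    return bytes(replies)
```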

Use of TELNET


The use of Telnet for remote logins should be discontinued under all normal circumstances, for the following reasons :
- Telnet, by default, does not encrypt any data sent over the connection (including passwords).
- Telnet has no authentication that would ensure communication is carried out between the two desired hosts and not intercepted in the middle.

Telnet is popular in various application areas:


- Enterprise networks to access host applications, e.g., on IBM Mainframes.
- Administration of network elements, e.g., in commissioning, integration and maintenance of core network elements in mobile communication networks, and many industrial control systems.
- MUD games played over the Internet.
- Internet game clubs.
- Embedded systems.
- Mobile data collection applications.


Sunday, March 21, 2010

FTP - File Transfer Protocol

- File Transfer Protocol (FTP), a standard Internet protocol, is the simplest way to exchange files between computers on the Internet.
- FTP is an application protocol that uses the Internet's TCP/IP protocols.
- FTP is commonly used to transfer Web page files from their creator to the computer that acts as their server for everyone on the Internet.
- FTP is also commonly used to download programs and other files to your computer from other servers.
- Web browsers can also make FTP requests to download programs you select from a Web page.
- FTP can also be used to update (delete, rename, move, and copy) files at a server.
- FTP can be run in active mode or passive mode, which controls how the second (data) connection is opened.
- In active mode, the client sends the server the IP address and port number that the client will use for the data connection, and the server opens the connection.
- Passive mode was devised for use where the client is behind a firewall and unable to accept incoming TCP connections.
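The active-mode exchange described above can be illustrated with the PORT command from RFC 959: the client encodes its IP address and listening port as six comma-separated decimal numbers, the port split into its high and low bytes. The helper names below are illustrative.

```python
def encode_port_cmd(ip: str, port: int) -> str:
    """Build the PORT command an active-mode client sends to the server."""
    h = ip.split(".")
    # four address bytes, then high byte and low byte of the port
    return "PORT %s,%s,%s,%s,%d,%d" % (*h, port // 256, port % 256)

def decode_port_cmd(cmd: str):
    """Recover (ip, port) from a PORT command, as the server would."""
    nums = cmd.split(" ", 1)[1].split(",")
    ip = ".".join(nums[:4])
    port = int(nums[4]) * 256 + int(nums[5])
    return ip, port
```

For example, a client listening on 192.168.0.5 port 49252 would send "PORT 192,168,0,5,192,100", since 49252 = 192*256 + 100.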

The objectives of FTP are :
- to promote sharing of files (computer programs and/or data),
- to encourage indirect or implicit (via programs) use of remote computers,
- to shield a user from variations in file storage systems among hosts, and
- to transfer data reliably and efficiently.

Anonymous FTP


Thousands of hosts on the Internet run FTP servers that permit guests to log in. Such servers usually contain data and software of interest to the general public. They are often called anonymous FTP servers because the guest login name is anonymous. To log in to an anonymous FTP server, enter the name anonymous when prompted for a username or userid. When prompted for a password, enter your full e-mail address, unless the on-screen instructions specify an alternative guest password.


Saturday, March 20, 2010

UDP - User Datagram Protocol

The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer protocol. UDP is often used in videoconferencing applications or computer games that are tuned for real-time performance.
- UDP network traffic is organized in the form of datagrams. A datagram comprises one message unit. The first eight bytes of a datagram contain header information and the remaining bytes contain message data.
- UDP can be used in networks where TCP is traditionally implemented.
- It does not guarantee reliability or the correct sequencing of data.
- UDP makes use of a simple communication model without implicit transmission checks for guaranteeing reliability, sequencing, or datagram integrity.
- UDP considers that error checks and corrections should be carried out in the communicating application, and not at the network layer.
- This makes the protocol faster and more efficient, because it does not have the overhead of checking whether the data has reached the destination every time it is sent.
- UDP is a stateless protocol. UDP is used for packet broadcast or multicasting, whereby the data is sent to all the clients in the network.

The UDP header consists of four fields, each 2 bytes in length :

- Source Port : The source port identifies the sending port and should be understood to be the port to reply to if required. If not used, its value should be zero.

- Destination Port : UDP packets from a client use this as a service access point (SAP) to indicate the service required from the remote server.

- UDP length : The number of bytes comprising the combined UDP header information and payload data.

- UDP Checksum : A checksum to verify that the end to end data has not been corrupted by routers or bridges in the network or by the processing in an end system.
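The four header fields above can be packed and unpacked in a few lines with Python's struct module ("!" selects network byte order, "H" an unsigned 16-bit field). This is a sketch with illustrative function names; the checksum is left at zero rather than actually computed over the pseudo-header.

```python
import struct

def build_udp(src_port, dst_port, payload: bytes, checksum=0) -> bytes:
    length = 8 + len(payload)          # header is always 8 bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload

def parse_udp(datagram: bytes):
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src": src, "dst": dst, "length": length,
            "checksum": checksum, "payload": datagram[8:]}
```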


Friday, March 19, 2010

Benefits of automated testing processes and systems, including scenarios where automated testing is done

Before we start with details about the benefits of automated testing, it is worth explaining what automated testing is, and then moving on from there.
What is automated testing ?
Automated testing is the process whereby manual testing that is in place is automated (with the assumption that these manual testing systems exist, or are far enough along that they can then be automated). In more detail, automation can be explained as the use of strategies and tools (along with processes) that reduce or eliminate human effort and intervention; one of the main benefits is to ensure that repetitive tasks that are necessary but boring and routine can be handled through automation.
What are some of the benefits of automation ?
- Regular and boring tasks can be removed from the necessity of humans doing it. One perfect example is the use of a series of testing during the process of software development whenever a new build or a fix is made. Such testing is required to ensure that the build is safe to use (internally it means that a number of tests that were required to be done by a person can now be done by the automation system). One side benefit is that such automation can be scheduled; so if a build comes in at 6 AM, there is no need for somebody to be there to carry out the series of tests. The automation software can handle doing the testing at that time.
- The testing is reliable. Given that the testing steps are being carried out by software, this testing will always follow the same pattern, without any variation.
- Load testing. Setting up automation is the first part of setting up load testing, since being able to run a series of tests for load testing depends on being able to run the tests through an automation framework.
- Scriptable or programmable. Automation tools allow you to customize the automation testing process, thus giving you a huge degree of flexibility to control what you can do.
- Reusability. Once you create an automation framework, you can use it again and again. If the user interface changes, a small amount of re-work would be enough to run the automation tests.
- Speed. Automation tests will run pretty fast, much faster than a human would be able to do the testing
- Comprehensive testing abilities. Organizations that start using automation testing frameworks eventually move onto building complex automation frameworks that cover a large portion of their functionality
- Improves the human testing abilities. Releasing human testers from testing mundane and repetitive tasks can help them do more value-added testing, adding to product quality.


RARP : Reverse Address Resolution Protocol

- RARP (Reverse Address Resolution Protocol) is a protocol by which a physical machine in a local area network can request to learn its IP address from a gateway server's Address Resolution Protocol (ARP) table or cache.
- A Reverse Address Resolution Protocol (RARP) is used by diskless computers to determine their IP address using the network. The RARP message format is very similar to the ARP format.
- When a new machine is set up, its RARP client program requests from the RARP server on the router to be sent its IP address.
- The RARP server will return the IP address to the machine which can store it for future use assuming that the entry has been put in the router table.
- RARP is available for Ethernet, Fiber Distributed-Data Interface, and Token Ring LANs.
- The 'operation' field in the RARP packet is used to differentiate between a RARP request and a RARP reply packet.
- Since a RARP request packet is a broadcast packet, it is received by all the hosts in the network. But only a RARP server processes a RARP request packet; all the other hosts discard it.
- The RARP reply packet is not broadcast, it is sent directly to the host, which sent the RARP request.

When a RARP server receives a RARP request packet, it performs the following steps:
- The MAC address in the request packet is looked up in the configuration file and mapped to the corresponding IP address.
- If the mapping is not found, the packet is discarded.
- If the mapping is found, a RARP reply packet is generated with the MAC and IP address. This packet is sent to the host which originated the RARP request.

When a host receives a RARP reply packet, it gets its IP address from the packet and completes the booting process.
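The server-side steps above amount to a table lookup, sketched below. The operation codes 3 (request) and 4 (reply) come from RFC 903; the configuration entries and function names are invented for illustration.

```python
RARP_REQUEST, RARP_REPLY = 3, 4   # values of the 'operation' field

# Stand-in for the server's configuration file of MAC -> IP mappings.
CONFIG = {
    "00:11:22:33:44:55": "192.168.1.10",
}

def handle_request(mac: str):
    """Return a reply dict, or None (packet discarded) if no mapping exists."""
    ip = CONFIG.get(mac)
    if ip is None:
        return None               # mapping not found: discard the request
    return {"op": RARP_REPLY, "mac": mac, "ip": ip}
```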


Thursday, March 18, 2010

Serial Line Internet Protocol - SLIP protocol

The need for a data link layer protocol to let IP operate over serial links was identified very early in the development of TCP/IP. To solve the problem, its designers created a very simple protocol that frames IP datagrams for transmission across the serial line. This protocol is called the Serial Line Internet Protocol, or SLIP for short.
SLIP modifies a standard TCP/IP datagram by appending a special "SLIP END" character to it, which distinguishes datagram boundaries in the byte stream. SLIP requires a serial port configuration of 8 data bits, no parity, and either EIA hardware flow control or CLOCAL mode (3-wire null-modem) UART operation settings.

- The Serial Line Internet Protocol (SLIP) is a TCP/IP protocol used for communication between two machines that are previously configured for communication with each other.
- The dial-up connection to the server is typically on a slower serial line rather than on the parallel or multiplex lines.
- SLIP does not provide error detection; it relies on higher-layer protocols for this.
- A SLIP connection needs to have its IP address configuration set each time before it is established.
- The Serial Line Internet Protocol (SLIP) is a mostly obsolete encapsulation of the Internet Protocol designed to work over serial ports and modem connections.
- A version of SLIP with header compression is called CSLIP (Compressed SLIP).
- The Parallel Line Internet Protocol (PLIP) is very similar to SLIP, but works at higher speeds via a parallel port.
- SLIP is a STREAMS-based computer networking facility that provides for the transmission and reception of IP packets over serial lines.
- SLIP can be used to connect one host to another via a single, physical serial line connection between serial ports or over longer distances using a modem at each end of a telephone line.
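The framing described above can be sketched directly from RFC 1055, which defines the END (0xC0) and ESC (0xDB) bytes and the two escape substitutions that keep payload bytes from being mistaken for a frame boundary.

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(datagram: bytes) -> bytes:
    """Escape END/ESC bytes in the payload, then terminate with END."""
    out = bytearray()
    for b in datagram:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    """Undo the escaping; stop at the END byte that closes the frame."""
    out, i = bytearray(), 0
    while i < len(frame):
        b = frame[i]
        if b == END:
            break
        if b == ESC:
            i += 1
            out.append(END if frame[i] == ESC_END else ESC)
        else:
            out.append(b)
        i += 1
    return bytes(out)
```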


Wednesday, March 17, 2010

Sliding Window Protocols

These protocols come under the data link layer, which provides services to the network layer. They are bidirectional protocols: the sender deletes a frame only when it gets the acknowledgment for it.

The essence of all sliding window protocols is that at any instant of time, the
sender maintains a set of sequence numbers corresponding to frames it is permitted
to send. These frames are said to fall within the sending window. Similarly,
the receiver also maintains a receiving window corresponding to the set of frames
it is permitted to accept. The sender’s window and the receiver’s window need
not have the same lower and upper limits or even have the same size.


The sequence numbers within the sender's window represent frames that have been sent, or can be sent, but are as yet not acknowledged. When a new packet from the network layer comes in to be sent, it is given the next highest sequence number, and the upper edge of the window is advanced by one. When an acknowledgment comes in, the lower edge of the window is advanced by one.

Since frames currently within the sender’s window may ultimately be lost or
damaged in transit, the sender must keep all these frames in its memory for possible
retransmission. The receiving data link layer’s window corresponds to the frames it may accept. When a frame whose sequence number is equal to the lower edge of the window is received, it is passed to the network layer, an acknowledgment is generated, and the window is rotated by one.

Types of sliding window protocols


- One-Bit sliding window protocols.
- Go Back N sliding window protocols.
- Selective Repeat sliding window.


Tuesday, March 16, 2010

Concept of Piggybacking

The data link layer provides service to the Network Layer above it:
* The network layer is interested in getting messages to the corresponding network layer module on an adjacent machine.
* The remote Network Layer peer should receive the identical message generated by the sender (e.g., if the data link layer adds control information, the header information must be removed before the message is passed to the Network Layer).
* The Network Layer wants to be sure that all messages it sends, will be delivered correctly (e.g., none lost, no corruption). Note that arbitrary errors may result in the loss of both data and control frames.
* The Network Layer wants messages to be delivered to the remote peer in the exact same order as they are sent.

Interleaving data and control frames on the same circuit is an improvement over having two separate physical circuits, yet another improvement is possible. When a data frame arrives, instead of immediately sending a separate control frame, the receiver restrains itself and waits until the network layer passes it the next packet. The acknowledgment is attached to the outgoing data frame. In effect, the acknowledgment gets a free ride on the next outgoing data frame.

The technique of temporarily delaying outgoing acknowledgments so that they can be hooked onto the next outgoing data frame is known as piggybacking.
Advantage : Better use of available channel bandwidth.
Disadvantage : If the data link layer waits longer than the sender’s timeout period, the frame will be retransmitted, defeating the whole purpose of having acknowledgments.
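As a rough sketch, a piggybacking station simply remembers the last frame it received and copies that sequence number into the acknowledgment field of its next outgoing data frame. The field and class names are illustrative, and the timeout fallback the disadvantage refers to (sending a bare ACK when no data shows up in time) is omitted.

```python
class Station:
    def __init__(self):
        self.last_received = None    # seq number awaiting acknowledgment

    def on_frame(self, seq, payload):
        # Restrain from sending a separate control frame: just remember it.
        self.last_received = seq

    def send(self, seq, payload):
        # The ack rides for free on the outgoing data frame.
        frame = {"seq": seq, "ack": self.last_received, "data": payload}
        self.last_received = None
        return frame
```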


Monday, March 15, 2010

Why do we do software testing ? Some of the advantages of software testing ..

Why do we have the concept of software testing ? What are some of the advantages of setting up dedicated teams for testing purposes ? Many years back, there was little emphasis on software testing, but the focus has since changed, with much more attention on software testing now. In many companies, there are specialized testing teams that focus on how to test any product or software and also develop more competencies in this subject. It is also a familiar subject, one that a lot of software testers are asked about in interviews: to identify the benefits that software testing brings to the table (including from the business perspective):
- During the process of software development, many bugs are introduced into the software; this is a natural process, and code being developed without any bugs is only an ideal scenario. Developers, with their review processes, unit-level testing, and similar practices, cannot find the level of bugs that show up during full integration (unless they take on the role of testing)
- There is a cost to finding defects that rises with how late the bug is found. If a bug is found in the design phase, it is very cheap to fix; if found during the development / coding phase, the cost is higher; if found during customer acceptance testing, higher still; and if found after release, the cost can be very high. The introduction of a software testing phase helps reduce the cost of these bugs
- Having a software testing team skilled in industry processes can be a big factor in improving the credibility of the team and can result in a perceived enhanced value by customers
- Enhanced testing processes such as root cause analysis help ensure a feedback mechanism and improvements in the processes followed by the team
- Setting up testing systems modeled on customer settings gives a huge boost to the expectation that the overall quality level of the system will be high, and will meet the customer needs
- Having a software testing team involved in the overall development phase ensures that there are more people to take a different look at the features and evaluate them from a customer point of view


Concept of Bit stuffing

Bit stuffing is the insertion of one or more bits into a transmission unit as a way to provide signaling information to a receiver. The receiver knows how to detect and remove or disregard the stuffed bits.

Bit stuffing is required by many network and communications protocols for the following reasons:
- To prevent data from being interpreted as control information. For example, many frame-based protocols, such as X.25, signal the beginning and end of a frame with a flag containing six consecutive 1 bits. Therefore, if the actual data being transmitted has six 1 bits in a row, a zero is inserted after the first five so that the data is not interpreted as a frame delimiter. Of course, on the receiving end, the stuffed bits must be discarded.
- For protocols that require a fixed-size frame, bits are sometimes inserted to make the frame size equal to this set size.
- For protocols that require a continuous stream of data, zero bits are sometimes inserted to ensure that the stream is not broken.

Bit stuffing in Data Link layer

Each frame begins and ends with a special bit pattern, 01111110, called a flag byte. Whenever the sender's data link layer encounters five consecutive ones in the data, it automatically stuffs a 0 bit into the outgoing bit stream. When the receiver sees five consecutive 1 bits followed by a 0 bit, it automatically destuffs the 0 bit. Bit stuffing is completely transparent to the network layer.
With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern. If the receiver loses track, all it has to do is scan the input for flag sequences.
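The stuffing and destuffing rules can be sketched on a string of '0'/'1' characters (a convenience representation for illustration, not how real hardware operates):

```python
def stuff(bits: str) -> str:
    """After five consecutive ones, insert a zero."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")          # the stuffed bit
            run = 0
    return "".join(out)

def destuff(bits: str) -> str:
    """Drop the zero that follows any five consecutive ones."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                     # this is the stuffed zero: discard it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)
```

For example, stuffing "11111101" yields "111110101": a zero is inserted after the first five ones, and destuffing removes it again.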


Sunday, March 14, 2010

Framing in Data Link Layer

The data link layer ensures that the bits transmitted by the physical layer are received error free. The approach used is to break the bit stream up into discrete frames and compute a checksum for each frame. When a frame arrives at the destination, the checksum is recomputed. If this checksum differs from the one carried in the frame, the data link layer knows that an error has occurred and takes steps to deal with it.
There are three different types of framing, each of which provides a way for the sender to tell the receiver where the block of data begins and ends:
- Byte-oriented framing : Computer data is normally stored as alphanumeric characters that are encoded with a combination of 8 bits (1 byte). This type of framing differentiates one byte from another.
- Bit-oriented framing : This type of framing allows the sender to transmit a long string of bits at one time.
- Clock-based framing : In a clock-based system, a series of repetitive pulses are used to maintain a constant bit rate and keep the digital bits aligned in the data stream.

The following methods are commonly used for marking frame boundaries :
- Character Count : This method uses a field in the header to specify the number of characters in the frame. When the data link layer at the destination sees the character count, it knows how many characters follow, and hence where the end of the frame is. The disadvantage of this method is that the count can be garbled by a transmission error.

- Character stuffing : This method gets around the problem of resynchronization after an error. Each frame starts with the ASCII character sequence DLE (Data Link Escape) STX (Start of Text) and ends with the sequence DLE ETX (End of Text). If the destination loses track of frame boundaries, all it has to do is look for DLE STX or DLE ETX. A problem occurs when binary data such as object programs or floating point numbers are transmitted. The solution is to have the sender's data link layer insert an ASCII DLE character just before each accidental DLE character in the data. The data link layer on the receiving end removes the DLE before the data is given to the network layer. This is called character stuffing. A disadvantage of this framing method is that it is closely tied to 8-bit characters.
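The character count method can be sketched as follows. Following the usual textbook convention, the count here includes the length byte itself, and the sketch assumes frames of at most 255 bytes; a single corrupted count byte would desynchronize every frame that follows, which is exactly the disadvantage noted above.

```python
def frame(messages):
    """Prefix each message with a one-byte count (count includes itself)."""
    out = bytearray()
    for m in messages:
        out.append(len(m) + 1)
        out += m
    return bytes(out)

def deframe(stream: bytes):
    """Walk the stream, using each count to find the next frame boundary."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count
    return frames
```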


Saturday, March 13, 2010

Data Link Layer - Layer 2 of OSI model

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking.
At this layer, data packets are encoded and decoded into bits. It furnishes transmission protocol knowledge and management, and handles physical layer errors, flow control and frame synchronization.

The data link layer performs various functions depending upon the hardware protocol used; its primary functions include:

- Communication with the Network layer above.
- Communication with the Physical layer below.
- Segmentation of upper layer datagrams (also called packets) into frames in sizes that can be handled by the communications hardware.
- The data link layer organizes the pattern of data bits into frames before transmission. The frame formatting issues such as stop and start bits, bit order, parity and other functions are handled here.
- It provides error checking by adding a CRC to the frame, and flow control.
- The data link layer is also responsible for logical link control, media access control, hardware addressing, error detection and handling and defining physical layer standards.
- The data link layer is divided into two sublayers: the media access control (MAC) layer and the logical link control (LLC) layer. The former controls how computers on the network gain access to the data and obtain permission to transmit it; the latter controls packet synchronization, flow control and error checking.
- The data link layer is where most LAN (local area network) and wireless LAN technologies are defined. Technologies and protocols used with this layer are Ethernet, Token Ring, FDDI, ATM, SLIP, PPP, HDLC, and ADCCP.
- The data link layer is often implemented in software as a driver for a network interface card (NIC). Because the data link and physical layers are so closely related, many types of hardware are also associated with the data link layer.
- Data link layer processing is faster than network layer processing because less analysis of the packet is required.
- The Data Link layer also manages physical addressing schemes such as MAC addresses for Ethernet networks, controlling the access of the various network devices to the physical medium.


Friday, March 12, 2010

Physical Layer - Layer 1 of OSI model

- The physical layer is level one in the seven-level OSI model. On the sending device it is the last layer to receive and process the data, while at the destination it is the first layer to receive the data.
- It performs services requested by the data link layer.
- The Physical Layer defines the Mechanical, Electrical, Procedural and Functional specifications for activating, maintaining and deactivating the physical link between communication network systems.
- The basic objective of this layer is to transform the data into the form needed to carry it through the transmission media over the network. The transmission media, either bounded or unbounded, carry the data in the form of electromagnetic or radio waves.
- The Physical Layer is responsible for bit-level transmission between network nodes.
The main functions of physical layer are :

- Definition of Hardware Specifications: The details of operation of cables, connectors, wireless radio transceivers, network interface cards and other hardware devices are generally a function of the physical layer. Devices used in the Physical Layer are
* Network Interface Cards (NIC)
* Transceivers
* Repeaters
* Hubs
* Multi Station Access Units (MAU’s)
- Encoding and Signaling: The physical layer is responsible for various encoding and signaling functions that transform the data from bits that reside within a computer or other device into signals that can be sent over the network.
- Data Transmission and Reception: After encoding the data appropriately, the physical layer actually transmits the data, and of course, receives it.
- Topology and Physical Network Design: The physical layer is also considered the domain of many hardware-related network design issues, such as LAN and WAN topology. There are four possible kinds of topologies:
* Bus
* Star
* Ring
* Mesh
In general, then, physical layer technologies are ones that are at the very lowest level and deal with the actual ones and zeroes that are sent over the network.


Thursday, March 11, 2010

How to support a reliable communication in transport layer ?

At the Transport layer, each particular set of pieces flowing between a source application and a destination application is known as a conversation. To identify each segment of data, the Transport layer adds a header containing binary data to the piece. This header contains fields of bits. It is the values in these fields that enable different Transport layer protocols to perform different functions.

Reliability means ensuring that each piece of data that the source sends arrives at the destination. At the Transport layer the three basic operations of reliability are:
- tracking transmitted data.
- acknowledging received data.
- retransmitting any unacknowledged data.

This requires the Transport layer processes at the source to keep track of all the data pieces of each conversation and to retransmit any data that was not acknowledged by the destination. The Transport layer of the receiving host must also track the data as it is received and acknowledge its receipt. These reliability processes place additional overhead on network resources due to the acknowledgment, tracking, and retransmission. To support these reliability operations, more control data is exchanged between the sending and receiving hosts. This control information is contained in the Layer 4 header.
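The three basic operations of reliability can be sketched at the sending side; the class and method names are illustrative, and timer handling is omitted.

```python
class ReliableSender:
    def __init__(self):
        self.unacked = {}                # seq -> segment, held until acked

    def transmit(self, seq, segment):
        self.unacked[seq] = segment      # tracking transmitted data
        return segment

    def on_ack(self, seq):
        self.unacked.pop(seq, None)      # acknowledging received data

    def retransmit_pending(self):
        # retransmitting any unacknowledged data (e.g. on a timeout)
        return list(self.unacked.values())
```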

Determining the Need for Reliability
Applications, such as databases, web pages, and e-mail, require that all of the sent data arrive at the destination in its original condition, in order for the data to be useful. Any missing data could cause a corrupt communication that is either incomplete or unreadable. Therefore, these applications are designed to use a Transport layer protocol that implements reliability.


Wednesday, March 10, 2010

The Transport Layer - Layer 4 of OSI model

The Transport Layer of the OSI model is responsible for delivering messages between networked hosts. The Transport Layer should be responsible for fragmentation and reassembly.

- This layer converts the data received from the upper layers into segments and prepares them for transport.
- The Transport layer is responsible for end-to-end (source-to-destination) delivery of entire messages.
- It allows data to be transferred reliably and uses sequencing to make sure that the order of packets is maintained.
- It also provides services such as error checking and flow control.
- In the case of IP, packets arriving out of order must be reordered and lost packets must be retransmitted.
- The size and complexity of a transport protocol depends on the type of service it can get from the network layer.
- The transport layer can accept relatively large messages, but there are strict message size limits imposed by the network (or lower) layer.
- Two transport protocols, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), sit at the transport layer.
- TCP establishes connections between two hosts on the network through 'sockets', which are determined by the IP address and port number. It keeps track of the packet delivery order and the packets that must be resent.
- UDP provides a low overhead transmission service, but with less error checking.
- The Transport layer protocols are either connectionless or connection-oriented.
- Connection-oriented means that a connection (a virtual link) must be established before any actual data can be exchanged. e.g. TCP.
- In Connectionless, the sender does not establish a connection before it sends data, it just sends it without guaranteeing delivery. e.g. UDP.

Data Segmentation


Data segmentation is the process by which the transport layer uniquely handles all data passed to and from different upper-level applications. For example, if a user is browsing the web and checking email at the same time, each program would be passing data and waiting for a reply on a unique port number. The Transport layer ensures that data is passed to the correct application.
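The port-based delivery described above can be sketched as a dictionary keyed by port number; the class name and port numbers are illustrative.

```python
class Demultiplexer:
    def __init__(self):
        self.apps = {}               # port -> list of delivered payloads

    def register(self, port):
        """An application binds a port, like a browser or mail client."""
        self.apps[port] = []

    def deliver(self, dst_port, payload):
        """Hand the payload to whichever application owns the port."""
        if dst_port in self.apps:
            self.apps[dst_port].append(payload)
            return True
        return False                 # no listener on this port: dropped
```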


Tuesday, March 9, 2010

Open System Interconnection Reference Model - OSI Model

The OSI Reference Model is based on a proposal developed by the International Organization for Standardization (ISO). The model is known as the ISO OSI (Open Systems Interconnection) Reference Model because it deals with connecting open systems – that is, systems that are open for communication with other systems. It was developed in 1984. It defines a networking framework for implementing protocols in seven layers.
Layers in the OSI model are ordered from lowest level to highest. The stack contains seven layers in two groups. From top to bottom, the layers are stacked this way:
* Application
* Presentation
* Session
* Transport
* Network
* Data Link
* Physical
The upper layers of OSI model are application, presentation and session layer. The software in these layers performs application-specific functions like data formatting, encryption, and connection management.
The lower layers are the transport, network, data link and physical layers. These layers provide more primitive network-specific functions like routing, addressing, and flow control.

OSI model

Benefits of OSI model


- Helps users understand how hardware and software elements function together.
- OSI is independent of country.
- OSI is independent of the operating system.
- Makes troubleshooting easier by separating networks into manageable pieces.
- Helps users understand new technologies as they are developed.


Monday, March 8, 2010

Program Threats

There are many ways a program can fail and many ways to turn the underlying faults into security failures. When a user writes a program, there is a possibility that it can be misused by some other user and lead to an unexpected behavior. The two most common methods by which such behavior may occur are Trojan horses and trap doors.

Trojan Horse


One of the most serious threats to computer security is the Trojan horse attack. A Trojan horse is a malicious program that breaks security while masquerading as something harmless, such as a screen saver or game. The Trojan-horse problem is exacerbated by long search paths. The search path lists the set of directories to search when an ambiguous program name is given; the path is searched for a file of that name, and the file is executed.
One can end such a program by ending the interactive session with a key sequence such as the Control-Alt-Delete combination in Windows 95/98/NT systems.

Trap Door


A trap door is basically a hole that a programmer, legally or illegally, writes into a program's code to bypass normal security procedures for a specific user. The designer of a program might leave a hole in the software that only the designer is capable of using.
A clever trap door can be included in a compiler. The compiler could generate standard object code as well as a trap door, regardless of the source code being compiled.


Sunday, March 7, 2010

Different types of computer virus

A computer virus is a parasitic program written intentionally to enter a computer without the user's permission or knowledge. The word parasitic is used because a virus attaches itself to files or boot sectors and replicates itself, thus continuing to spread.
Many people are confused about different types of computer virus. The term computer virus is often used broadly to cover several types of malicious programs, including viruses, worms and Trojan horses. Each of them shares some similarities and some subtle differences.

- Computer Viruses : Computer viruses are parasitic programs that can replicate and spread to other computers. A computer virus needs a host program to run, so it often attaches itself to executable files. The virus code runs once you open the infected executable file. Computer viruses are spread by sharing infected files or email attachments.

- Computer worms : They can also replicate themselves, but unlike computer viruses, worms are self-contained. They can run and spread without being part of a host program. Worms spread at enormous speed in the network.

- Trojan horses : Trojan horses are hidden codes embedded within a legitimate program. Trojan horses run without your knowledge; they can damage your files or create a security leak in your system, allowing unauthorized users to access your computer. Unlike viruses and worms, they usually do not replicate themselves.


Saturday, March 6, 2010

How does an anti-virus work ?

Anti-virus software is a computer program that scans files to identify and eliminate computer viruses and other malicious software (malware).

Approaches used by anti-virus software are :



- Virus dictionary : The software maintains a large dictionary of known viruses, allowing it to scan files and flag any that match known viral code. As new viruses and malicious threats are discovered, they are added to the dictionary, which records the details of each virus. The anti-virus program uses this dictionary as a guide to identify suspicious or threatening software and files. To stay up to date with new viruses, the software must regularly download updates to its dictionary. The dictionary approach has proved quite effective, but hackers and virus writers have found a way around it by developing polymorphic viruses.

- Suspicious behavior : It monitors the behavior of all programs. If one program tries to write data to an executable program, for example, the anti-virus software can flag this suspicious behavior, alert a user and ask what to do. The suspicious behavior approach is more effective in stopping new viruses since it doesn't rely on a dictionary, which may not be regularly updated, for reference. This approach could be annoying as it can give lots of false positives.
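The dictionary approach described above can be illustrated with a minimal sketch. The signature names and byte patterns below are invented for illustration; real products store far more detail per signature and use optimized multi-pattern matching:

```python
# Toy dictionary-based scanner: flag a file whose bytes contain any
# known signature. Signatures here are invented placeholders.
SIGNATURE_DICTIONARY = {
    "example.virus.a": b"XVIRUS-PATTERN-A",
    "example.worm.b":  b"XWORM-PATTERN-B",
}

def scan_bytes(data):
    """Return the names of all dictionary signatures found in the data."""
    return [name for name, signature in SIGNATURE_DICTIONARY.items()
            if signature in data]
```

Updating the dictionary is then just a matter of adding new entries, which is why vendors can ship signature updates far more often than engine updates.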

Anti-virus software, combined with user carefulness, is the best form of protection available today.


Friday, March 5, 2010

Antivirus Software - Heuristic Analysis

Heuristic analysis is a method employed by many antivirus programs to detect previously unknown computer viruses, as well as new variants of viruses already in the wild. It is an expert-based analysis that determines the susceptibility of a system toward a particular threat or risk using various decision rules or weighting methods.

The common heuristic/behavioral scanning techniques :
- File Emulation : It allows the file to run in a controlled virtual system (or “sandbox”) to see what it does.
- File Analysis : The software takes an in-depth look at the file and tries to determine its intent, destination, and purpose. For example, if the file contains instructions to delete certain files, it may be flagged as a virus.
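The weighting idea behind heuristic file analysis can be sketched as a simple scoring scheme. The traits, weights, and threshold below are invented for illustration; real engines use far richer rule sets:

```python
# Toy heuristic scoring: each suspicious trait observed in a file adds
# a weight, and the file is flagged once the total reaches a threshold.
SUSPICIOUS_TRAITS = {
    "writes to another executable": 40,
    "deletes system files": 50,
    "opens raw network socket": 20,
}

def heuristic_score(observed):
    """Sum the weights of every suspicious trait observed in the file."""
    return sum(SUSPICIOUS_TRAITS.get(trait, 0) for trait in observed)

def is_flagged(observed, threshold=50):
    """Flag the file when its heuristic score reaches the threshold."""
    return heuristic_score(observed) >= threshold
```

Setting the threshold is the core trade-off: a low threshold catches more unknown malware but produces more false positives, which is exactly the weakness discussed next.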

Heuristic analysis is fairly weak in terms of accuracy and the number of false positives it produces. This sort of scanning and analysis can also take time, which may slow down system performance.
False positives are when the anti-virus software determines a file is malicious (and quarantines or deletes it) when in reality it is perfectly fine and/or desired.

Extensive use of heuristic analysis is also made in anti-spam solutions, to highlight those characteristics of an e-mail message that are spam-like.


Thursday, March 4, 2010

Antivirus Software - Signature based detection

Antivirus software is a computer program that detects, prevents, and takes action to disarm or remove malicious software programs, such as viruses and worms. Computer viruses are software programs that are deliberately designed to interfere with computer operation, record, corrupt, or delete data, or spread themselves to other computers and throughout the Internet.

There are several methods which antivirus software can use to identify malware :

Signature Based Detection


It is the most common method anti-virus software uses to identify malware. The method is limited by the fact that it can only detect known threats; emerging threats can be caught only by generic, or extremely broad, signatures.
Advantages :
- The signatures are easy to develop and understand if you know what network behavior you're trying to identify.
- The events generated by a signature-based IDS can very precisely inform you about what caused the alert.
- Signature based rules are based on Pattern matching, and with modern day systems pattern-matching can be performed very quickly.
- If your network only carries DNS, HTTP and SMTP traffic, all other signatures can be removed from the policy files.

Disadvantages :
- Signature-based IDS can only detect known attacks: a signature must be created for every attack, and zero-day attacks cannot be detected.
- Signature-based IDS systems are also prone to false positives, since they are commonly based on regular expressions and string matching.
- Because they rely on pattern matching, signatures usually do not work well against attacks with self-modifying behavior.
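The regular-expression matching mentioned above can be sketched in a few lines. The signature names and patterns here are invented examples, not real IDS rules:

```python
import re

# Toy signature matching: each rule is a regular expression applied to
# payload text, as in a simple signature-based IDS.
SIGNATURES = {
    "directory-traversal": re.compile(r"\.\./"),
    "fake-exploit-string": re.compile(r"EXPLOIT-[0-9]+"),
}

def match_signatures(payload):
    """Return the names of all signatures whose pattern matches the payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]
```

This also makes the listed weaknesses concrete: a payload that does not literally contain the pattern (a new attack, or a self-modifying one) sails through unmatched.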


Wednesday, March 3, 2010

Auto-sensing

- Autosensing is a feature of so-called "10/100" Ethernet hubs, switches, and NICs.
- Compatible Ethernet speeds are selected using low-level signaling techniques that probe the capability of the link.
- Autosensing was developed to make the migration from traditional Ethernet to Fast Ethernet products easier.

When first connected, 10/100 devices automatically exchange information with each other to agree on a common speed setting. The devices run at 100 Mbps if the network supports it, otherwise they drop down to 10 Mbps to ensure a "lowest common denominator" of performance. Many hubs and switches are capable of autosensing on a port-by-port basis; in this case, some computers on the network may be communicating at 10 Mbps and others at 100 Mbps. 10/100 products often incorporate two LEDs of different colors to indicate the speed setting that is currently active.

Auto-sensing is an active method of determining link mode. Each interface is expected to transmit specific information in a specific format. If an interface that is expecting to use auto-sensing does not receive this information from the other side, it assumes the other side cannot detect or change its mode.
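The "lowest common denominator" speed selection described above can be sketched as picking the fastest speed both sides advertise. This is an illustrative model only, not the actual 10/100 signaling exchange:

```python
def negotiate_speed(local_speeds, peer_speeds):
    """Return the fastest speed (in Mbps) both devices support, or None."""
    common = set(local_speeds) & set(peer_speeds)
    return max(common) if common else None

# A 10/100 device talking to another 10/100 device runs at 100 Mbps;
# against a 10-only device it drops down to 10 Mbps.
negotiate_speed({10, 100}, {10, 100})
negotiate_speed({10, 100}, {10})
```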


Tuesday, March 2, 2010

Ethernet Hubs

A hub connects multiple devices together. Ethernet hubs are most commonly used to network computers. Ethernet hubs are available in different types, depending on the network speed they support. The number of ports an Ethernet hub supports also varies. Older Ethernet hubs were relatively large and sometimes noisy, as they contained built-in fans for cooling the unit. Newer devices are much smaller, designed for mobility, and noiseless.

Working of an Ethernet Hub


The main purpose of an Ethernet hub is to retransmit the packets of data it receives from one computer out through all the ports connected to it, so that they reach the other computers. Ethernet uses a protocol called CSMA/CD, which stands for Carrier Sense Multiple Access with Collision Detection.

- Carrier Sense - When a device connected to an Ethernet network wants to send data it first checks to make sure it has a carrier on which to send its data.
- Multiple Access - This means that all machines on the network are free to use the network whenever they like so long as no one else is transmitting.
- Collision Detection - A means of ensuring that when two machines start to transmit data simultaneously, the resultant corrupted data is discarded and re-transmissions are generated at differing time intervals.
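The three steps above can be sketched as a toy decision function. The retry delay uses truncated binary exponential backoff, which is what classic Ethernet uses to generate the "differing time intervals"; the function names and return values are invented for illustration:

```python
import random

def backoff_slots(collisions, max_exponent=10):
    """After the k-th collision, wait a random 0..2^k - 1 slot times."""
    k = min(collisions, max_exponent)
    return random.randrange(2 ** k)

def csma_cd_step(carrier_busy, collision_detected, collisions):
    """Decide the next action for one transmission attempt."""
    if carrier_busy:                       # Carrier Sense: wait for an idle medium
        return ("wait", 0)
    if collision_detected:                 # Collision Detection: discard and retry later
        return ("backoff", backoff_slots(collisions))
    return ("sent", 0)                     # Multiple Access: the medium was free
```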


Monday, March 1, 2010

Hubs and their types

Hubs have become an integral part of many network and business systems. A hub, sometimes referred to as a concentrator or repeater hub, is a networking component that acts as a convergence point of a network, allowing the transfer of data packets.

Characteristics of Hubs :
- A hub is a small plastic box that takes its power from an ordinary wall outlet.
- Multiple computers are joined through a hub.
- On this network segment, all computers can communicate directly with each other.
- A hub includes a series of ports that each accept a network cable.
- Hubs are layer 1 devices, while switches and routers are layer 2 and layer 3 devices respectively.
- Hubs do not read any of the data passing through them and are not aware of their source or destination. Essentially, a hub simply receives incoming packets, possibly amplifies the electrical signal, and broadcasts these packets out to all devices on the network - including the one that originally sent the packet.
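The broadcast behaviour in the last point can be sketched in one line: a hub simply repeats each frame out of every port, with no notion of addresses. The function name and port representation are invented for illustration:

```python
def hub_repeat(ports, frame):
    """A hub repeats the frame out of every port, including the sender's."""
    return [(port, frame) for port in ports]
```

This is exactly why a switch (which reads destination addresses and forwards to one port) scales better than a hub on busy networks.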

Types of Hubs :
- Passive Hubs : They do not amplify the electrical signal of incoming packets before broadcasting them out to the network. They contribute little to performance and offer no help in troubleshooting, but most passive hubs are available at low cost.
- Active Hubs : They amplify incoming signals. An active hub is sometimes referred to as a multiport repeater. An active hub takes a larger role in Ethernet communications with the help of a technology called store & forward. If the received data is weak but readable, the active hub restores the signal before rebroadcasting it. An active hub can also provide information on devices on the network that are not yet fully functional.
- Intelligent Hubs : An intelligent hub is typically stackable: it is built so that multiple units can be placed one on top of the other to conserve space. It can manage the network from one central location. With the help of an intelligent hub, one can easily identify and diagnose problems and even apply remedial solutions.

