Sunday, December 27, 2009
Introduction to Hashing
Hashing is a method of storing data in an array so that storing, searching, inserting and deleting records are all fast. For this, every record needs a unique key. The basic idea is not to search for the correct position of a record with comparisons, but to compute the position within the array. The function that returns the position is called the 'hash function' and the array is called a 'hash table'.
Main idea: use an array of size m and use the key k as the address into the array.
A hash function h is used to map keys to the range [0..m-1].
Key technical issues:
- What is a good h? A good function avoids (but does not eliminate) collisions, and is quick to compute.
- How do we resolve collisions? Retrieval time is a function of collisions.
- What if we run out of space in the table?
- Can we rearrange keys upon an insertion?
A hash function is any well-defined procedure or mathematical function that converts a large, possibly variable-sized amount of data into a small datum, usually a single integer that may serve as an index to an array. The values returned by a hash function are called hash values, hash codes, hash sums, or simply hashes. A hash function may map two or more keys to the same hash value. Hash functions are related to (and often confused with) checksums, check digits, fingerprints, randomization functions, error-correcting codes, and cryptographic hash functions. Although these concepts overlap to some extent, each has its own uses and requirements and is designed and optimized differently.
A hash table or hash map is a data structure that uses a hash function to efficiently map certain identifiers or keys (e.g., person names) to associated values (e.g., their telephone numbers). In general, a hash function may map several different keys to the same index. Therefore, each slot of a hash table is associated (implicitly or explicitly) with a set of records, rather than a single record. For this reason, each slot of a hash table is often called a bucket, and hash values are also called bucket indices.
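As a rough illustration of these ideas, here is a minimal C sketch of a hash table with string keys, assuming separate chaining to resolve collisions; the table size M and the simple multiplicative hash function are arbitrary choices made for illustration, not taken from any particular standard.

#include <stdlib.h>
#include <string.h>

#define M 101                      /* number of buckets (hash table size) */

struct entry {                     /* one record stored in a bucket */
    char *key;
    int value;
    struct entry *next;            /* chaining resolves collisions */
};

static struct entry *table[M];     /* the hash table itself */

/* hash function: maps an arbitrary string key to a bucket index in [0, M-1] */
static unsigned hash(const char *key)
{
    unsigned h = 5381;
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % M;
}

/* insert: overwrite an existing key, otherwise prepend to the bucket's chain */
void put(const char *key, int value)
{
    unsigned i = hash(key);
    for (struct entry *e = table[i]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { e->value = value; return; }
    struct entry *e = malloc(sizeof *e);
    e->key = strdup(key);          /* keep a private copy of the key (POSIX strdup) */
    e->value = value;
    e->next = table[i];
    table[i] = e;
}

/* lookup: returns 1 and fills *value on a hit, 0 on a miss */
int get(const char *key, int *value)
{
    for (struct entry *e = table[hash(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { *value = e->value; return 1; }
    return 0;
}

Note that put() and get() compute the bucket index directly from the key instead of comparing against every stored record, which is the whole point of hashing.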
Posted by Sunflower at 12/27/2009 08:56:00 PM 0 comments
Labels: Array, Data, Data structure, Hash function, Hash table, Hashing
Thursday, December 24, 2009
RS422 Standard
RS-422 is a telecommunications standard for binary serial communications between devices. It is the protocol or specifications that must be followed to allow two devices that implement this standard to speak to each other. RS-422 is an updated version of the original serial protocol known as RS-232.
This standard was introduced in 1975 to offer improvements over the older RS-232 standard. It provides a balanced line with optional termination. The standard uses a voltage differential of 2 V minimum to 5 V maximum to represent binary 0's and 1's. The specification allows data rates up to 10 Mbaud at a maximum cable length of 40 feet.
The maximum cable length that can be driven will depend on the baud rate, the driver/receiver IC's, the cable type, and the amount of electrical noise in the surrounding environment. RS-422 can be used for point-to-point communication or for multi-drop one-master/many-slave systems.
The RS-422 standard only defines the characteristic requirements for the balanced line drivers and receivers. It does not specify one specific connector, signal names or operations. RS-422 interfaces are typically used when the data rate or distance criteria cannot be met with RS-232. The RS-422 standard allows for operation of up to 10 receivers from a single transmitter. The standard does not define operations of multiple tri-stated transmitters on a link.
RS-422 is a balanced four-wire system. The signal sent from the DTE device is transmitted to the DCE device through two wires, and the signal sent from the DCE device to the DTE device is transmitted through the other two wires. The signals on each pair of wires are the mirror opposite of each other, i.e., a "1" datum is transmitted as a plus 2 volt reference on one wire and a minus 2 volt reference on the other wire. To send a "0" datum, a minus 2 volt reference is transmitted on one wire and a plus 2 volt reference on the other wire; that is, the opposite of what was done to transmit a "1" datum.
RS-422-A defines the interface between Data Terminal Equipment (DTE) and Data Communication Equipment (DCE), or any point-to-point interconnection of signals between digital equipment. It employs the electrical characteristics of balanced-voltage digital interface circuits.
Posted by Sunflower at 12/24/2009 12:09:00 AM 0 comments
Labels: Communication, Data, Protocol, RS422, Serial, Standards, Telecommunications
Wednesday, December 23, 2009
RS485 Standard
RS-485 is a telecommunications standard for binary serial communications between devices. It is the protocol or specification that must be followed to allow devices that implement this standard to speak to each other. An RS-485 compliant network is a multi-point communications network. The RS-485 standard specifies up to 32 drivers and 32 receivers on a single (2-wire) bus. RS-485 drivers are now even able to withstand bus contention problems and bus fault conditions. An RS-485 network can be constructed as either a balanced 2-wire system or a 4-wire system. If an RS-485 network is constructed as a 2-wire system, all of the nodes have equal ranking. An RS-485 network constructed as a 4-wire system has one node designated as the master and the remaining nodes designated as slaves. The maximum cable length can be as much as 4000 feet because of the differential voltage transmission system used. The typical use for RS-485 is a single PC connected to several addressable devices that share the same cable.
RS485 meets the requirements for a truly multi-point communications network, and the standard specifies up to 32 drivers and 32 receivers on a single (2-wire) bus. With the introduction of "automatic" repeaters and high-impedance drivers / receivers this "limitation" can be extended to hundreds (or even thousands) of nodes on a network. RS485 extends the common mode range for both drivers and receivers in the "tri-state" mode and with power off. Also, RS485 drivers are able to withstand "data collisions" (bus contention) problems and bus fault conditions.
SPECIFICATIONS RS485
- Mode of operation: differential
- Total number of drivers and receivers on one line: 32 drivers, 32 receivers
- Maximum cable length: 4000 ft
- Maximum data rate: 10 Mb/s
- Maximum driver output voltage: -7 V to +12 V
- Driver output signal level (loaded, min.): ±1.5 V
- Driver output signal level (unloaded, max.): ±6 V
- Driver load impedance: 54 Ω
- Max. driver current in high-Z state (power on): ±100 µA
- Max. driver current in high-Z state (power off): ±100 µA
- Slew rate (max.): N/A
- Receiver input voltage range: -7 V to +12 V
- Receiver input sensitivity: ±200 mV
- Receiver input resistance: ≥12 kΩ
Posted by Sunflower at 12/23/2009 11:53:00 PM 0 comments
Labels: Communication, Network, RS485, Serial, Standards, Telecommunications
Tuesday, December 22, 2009
RS232 Standard
RS232 is an asynchronous serial communication protocol widely used in computers and digital systems. It is called asynchronous because there is no separate synchronizing clock signal, as there is in synchronous serial protocols like SPI and I2C.
In RS232 there are two data lines, RX and TX. TX is the wire on which data is sent out to the other device. RX is the line on which the other device sends back the data it needs to transmit.
Voltage levels in RS232 are inverted with respect to logic levels: a logic 1 (mark) is typically around -12 V and a logic 0 (space) around +12 V.
RS-232 Specifications :
- Cabling : single-ended
- Number of devices : 1 transmitter, 1 receiver
- Communication mode : full duplex
- Distance (max) : 50 feet at 19.2 kbps
- Data rate (max) : 1 Mbps
- Signaling : unbalanced
- Mark (data 1) : -5 V (min) to -15 V (max)
- Space (data 0) : +5 V (min) to +15 V (max)
- Input level (min) : ±3 V
- Output current : 500 mA
- Impedance : 5 kΩ (internal)
- Bus architecture : point-to-point
RS232 Data Transmission :
Transmission
1. When there is no transmission, the TX line sits HIGH (stop condition).
2. When the device needs to send data, it pulls the TX line LOW for 104 µs (this is the start bit, which is always 0).
3. Then it sends each data bit, each with a duration of 104 µs.
4. Finally it sets the TX line HIGH for at least 104 µs (this is the stop bit, which is always 1).
Reception :
1. The receiving device waits for the start bit, i.e. for the RX line to go LOW.
2. When it sees the start bit it waits for half a bit time, i.e. 104/2 = 52 µs, so that it is in the middle of the start bit, and reads the line again to make sure it is a valid start bit and not a spike.
3. Then it waits 104 µs, so that it is in the middle of the first data bit, and reads the value of the RX line.
4. In the same way it reads all 8 bits.
5. Now the receiver has the data.
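The 104 µs figure corresponds to 9600 baud (1/9600 s ≈ 104 µs). As a hedged C sketch of the receive sequence above, here is a bit-banged receiver for one 8N1 character; read_rx_line() and delay_us() are hypothetical helpers that a real target would have to provide.

#define BAUD   9600
#define BIT_US (1000000 / BAUD)          /* one bit time: about 104 us at 9600 baud */

extern int  read_rx_line(void);          /* assumed: returns current RX level, 0 or 1 */
extern void delay_us(unsigned us);       /* assumed: busy-wait for the given microseconds */

/* Receive one 8N1 character by sampling in the middle of each bit. */
int uart_receive_byte(void)
{
    while (read_rx_line() != 0)          /* 1. wait for the start bit (line goes LOW)   */
        ;
    delay_us(BIT_US / 2);                /* 2. move to the middle of the start bit      */
    if (read_rx_line() != 0)
        return -1;                       /*    not a valid start bit, just a spike      */

    unsigned char byte = 0;
    for (int i = 0; i < 8; i++) {        /* 3./4. sample the 8 data bits, LSB first     */
        delay_us(BIT_US);
        byte |= (unsigned char)(read_rx_line() << i);
    }
    delay_us(BIT_US);                    /* stop bit; could be checked to be HIGH       */
    return byte;                         /* 5. the receiver now has the data            */
}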
Limitations of RS232 Standard :
* The large voltage swings and requirement for positive and negative supplies increases power consumption of the interface and complicates power supply design. The voltage swing requirement also limits the upper speed of a compatible interface.
* Single-ended signaling referred to a common signal ground limits the noise immunity and transmission distance.
* Multi-drop connection among more than two devices is not defined. While multi-drop "work-arounds" have been devised, they have limitations in speed and compatibility.
* Asymmetrical definitions of the two ends of the link make the assignment of the role of a newly developed device problematic; the designer must decide on either a DTE-like or DCE-like interface and which connector pin assignments to use.
* The handshaking and control lines of the interface are intended for the setup and takedown of a dial-up communication circuit; in particular, the use of handshake lines for flow control is not reliably implemented in many devices.
* No method is specified for sending power to a device. While a small amount of current can be extracted from the DTR and RTS lines, this is only suitable for low power devices such as mice.
* The 25-way connector recommended in the standard is large compared to current practice.
Posted by Sunflower at 12/22/2009 11:01:00 PM 0 comments
Labels: Asynchronous, Communication, Data, Data transmission, Functional specifications, Receiver, RS232, Serial, Transmitter, Voltage
Data Communication Modes
Today computers are available in many offices and homes, and with the advancement of data communication facilities there is a need to share data and programs among them. Communication between computers has increased, and this has extended the power of the computer beyond the computer room. A user sitting at one place can now communicate with computers at remote sites through a communication channel.
In data communication four basic terms are frequently used. They are :
* Data: A collection of facts in raw forms that become information after processing.
* Signals: Electric or electromagnetic encoding of data.
* Signaling: Propagation of signals across a communication medium.
* Transmission: Communication of data achieved by the processing of signals.
There are three ways for transmitting data from one point to another :
- Simplex : Data only flows in one direction. A good example of simplex communication is a radio station and your car radio. Simplex is not often used in computer communications because there is no way to verify when or if data is received. However, simplex communication is a very efficient way to distribute vast amounts of information to a large number of receivers.
- Half Duplex : In this mode, devices allow both transmission and receiving, but not at the same time. Essentially only one device can transmit at a time while all other half-duplex devices receive. RS-485 works in half-duplex mode.
- Full Duplex : In this mode, devices can transmit and receive data at the same time. RS232 and RS422 are examples of Full Duplex communications. There are separate transmit and receive signal lines that allow data to flow in both directions simultaneously.
Posted by Sunflower at 12/22/2009 10:36:00 PM 0 comments
Labels: Communication, Data, Data communication, Full Duplex, Half Duplex, Signals, Simplex, transmission
Monday, December 21, 2009
Difference between buffer and cache ?
A buffer is a region of memory used to temporarily hold output or input data. Buffers can be implemented in either hardware or software, but the vast majority of buffers are implemented in software. Buffers are used when there is a difference between the rate at which data is received and the rate at which it can be processed.
The terms "buffer" and "cache" are not mutually exclusive and the functions are frequently combined; however, there is a difference in intent. A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices; addressable memory is therefore used as an intermediate stage.
Additionally such a buffer may be feasible when a large block of data is assembled or disassembled (as required by a storage device), or when data may be delivered in a different order than that in which it is produced. Also a whole buffer of data is usually transferred sequentially (for example to hard disk), so buffering itself sometimes increases transfer performance. These benefits are present even if the buffered data are written to the buffer once and read from the buffer once.
A cache also increases transfer performance. A part of the increase similarly comes from the possibility that multiple small transfers will combine into one large block. But the main performance-gain occurs because there is a good chance that the same datum will be read from cache multiple times, or that written data will soon be read. A cache's sole purpose is to reduce accesses to the underlying slower storage. Cache is also usually an abstraction layer that is designed to be invisible from the perspective of neighboring layers.
Posted by Sunflower at 12/21/2009 04:53:00 PM 0 comments
Labels: Buffers, Cache, Data, Input, Memory, Output
Page/Disk Cache and Web Cache
PAGE CACHE :
Page cache, or disk cache, is a transparent buffer of disk-backed pages kept in main memory (RAM) by the operating system for quicker access. All memory that is not directly allocated to applications is usually utilized for page cache. Since non-dirty pages in the page cache have identical copies in secondary storage (hard disk), discarding and re-using their space is much quicker than paging out application memory, and is often preferred. The page cache also aids in writing to a disk. Pages that have been modified in memory for writing to disk are marked "dirty" and have to be flushed to disk before they can be freed. When a file write occurs, the page backing the particular block is looked up. If it is already found in the cache, the write is done to that page in memory. If the write falls perfectly on page-size boundaries, the page is not even read from disk, but allocated and immediately marked dirty. Otherwise, the page(s) are fetched from disk and the requested modifications are done.
WEB CACHE :
Web caching is the caching of web documents (e.g., HTML pages, images) to reduce bandwidth usage, server load, and perceived lag. A web cache stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met.
With a local cache in operation, user web object requests go via the local cache which then retains a copy of the said web object. This results in all subsequent requests for the same object being fulfilled from the local cache instead of from the site of origin. This process of web caching minimizes the amount of times identical web objects are transferred from remote websites by retaining copies of requested URLs in a cache. A web cache can be installed utilizing both software and hardware, and can run on various different platforms.
With a local cache in operation, subsequent requests for previously cached URLs result in the cached copy of the object being returned to the user; creating little or no extra network traffic, improving efficiency and reducing waiting time.
Posted by Sunflower at 12/21/2009 04:22:00 PM 0 comments
Labels: Caching, Disk Cache, Page Cache, Web cache, Web documents
Thursday, December 17, 2009
CPU Caching
The cache on your CPU has become a very important part of today's computing. The cache is a very high speed and very expensive piece of memory, which is used to speed up the memory retrieval process. Without cache memory, every time the CPU requested data it would send a request to main memory, which would then be sent back across the memory bus to the CPU. This is a slow process in computing terms. The idea of the cache is that this extremely fast memory stores data that is frequently accessed and, if possible, the data around it.
CPUs, however, use a two-level cache system. The level 1 cache is the fastest and smallest memory; the level 2 cache is larger and slightly slower, but still smaller and faster than main memory. The main problem with having too much cache memory is that the CPU always checks the cache before main system memory, and a larger cache takes longer to search, so a miss costs extra time.
Read cache is used to store copies of data and instructions that are retrieved from main memory or mass storage. If the central processing unit (CPU) needs to access the same data or instructions again, it can use the copy in the read cache. This is much faster than going back to main memory or mass storage again. Write cache is a temporary store for data that needs to be written to main memory or mass storage. The CPU can move the data into the cache very quickly, and then continue executing instructions. The data is subsequently moved to its permanent location by the cache controller, a process that takes more time because main memory and mass storage devices are much slower to access than cache memory.
Posted by Sunflower at 12/17/2009 08:13:00 PM 0 comments
Labels: Cache, Caching Memory, CPU, Data, Read, Store
Introduction to Caching
Caching is a well-known concept: when programs continually access the same set of instructions, a massive performance benefit can be realized by storing those instructions in RAM. This prevents the program from having to access the disk thousands or even millions of times during execution, because they can be quickly retrieved from RAM.
A cache is made up of a pool of entries. Each entry has a datum (a nugget of data) - a copy of the same datum in some backing store. Each entry also has a tag, which specifies the identity of the datum in the backing store of which the entry is a copy.
When the cache client (a CPU, web browser, operating system) needs to access a datum presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired datum, the datum in the entry is used instead. This situation is known as a cache hit. The alternative situation, when the cache is consulted and found not to contain a datum with the desired tag, has become known as a cache miss. The previously uncached datum fetched from the backing store during miss handling is usually copied into the cache, ready for the next access.
When a system writes a datum to the cache, it must at some point write that datum to the backing store as well. The timing of this write is controlled by what is known as the write policy.
- In a write-through cache, every write to the cache causes a synchronous write to the backing store.
- In a write-back (or write-behind) cache, writes are not immediately mirrored to the store. Instead, the cache tracks which of its locations have been written over and marks these locations as dirty. The data in these locations is written back to the backing store when those data are evicted from the cache, an effect referred to as a lazy write.
- No-write allocation is a cache policy which caches only processor reads, thus avoiding the need for write-back or write-through when the old value of the datum was absent from the cache prior to the write.
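A compact C sketch of the entry/tag/hit/miss vocabulary above, using a direct-mapped cache in front of a hypothetical backing_store array; the sizes and names are illustrative assumptions, not any particular system's API. The write path shows write-through, with the write-back alternative noted in a comment.

#include <stdbool.h>
#include <stdint.h>

#define CACHE_LINES 64

extern uint32_t backing_store[];          /* assumed slow storage, indexed by address   */

struct line {
    bool     valid;                       /* does this entry hold a datum at all?       */
    bool     dirty;                       /* write-back only: modified since fetched?   */
    uint32_t tag;                         /* identifies which datum this entry copies   */
    uint32_t datum;
};

static struct line cache[CACHE_LINES];

uint32_t cache_read(uint32_t addr)
{
    struct line *l = &cache[addr % CACHE_LINES];
    if (l->valid && l->tag == addr)       /* cache hit: use the cached copy             */
        return l->datum;
    /* cache miss: fetch from the backing store and keep a copy for the next access */
    l->valid = true;
    l->dirty = false;
    l->tag   = addr;
    l->datum = backing_store[addr];
    return l->datum;
}

void cache_write(uint32_t addr, uint32_t value)
{
    struct line *l = &cache[addr % CACHE_LINES];
    l->valid = true;
    l->tag   = addr;
    l->datum = value;
    backing_store[addr] = value;          /* write-through: mirror to the store at once */
    /* write-back instead: set l->dirty = true here and flush to backing_store
       only when this line is evicted (the "lazy write" described above).        */
}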
Posted by Sunflower at 12/17/2009 04:09:00 PM 0 comments
Labels: Access, Cache, Caching. Memory, CPU, Datum, Entry, RAM
Parallel port
A parallel port is a type of interface found on computers (personal and otherwise) for connecting various peripherals. It is also known as a printer port or Centronics port. The IEEE 1284 standard defines the bi-directional version of the port. Parallel ports can be used to connect a host of popular computer peripherals:
* Printers
* Scanners
* CD burners
* External hard drives
* Iomega Zip removable drives
* Network adapters
* Tape backup drives
Parallel ports were originally developed by IBM as a way to connect a printer to your PC. When a PC sends data to a printer or other device using a parallel port, it sends 8 bits of data (1 byte) at a time. These 8 bits are transmitted parallel to each other, as opposed to the same eight bits being transmitted serially (all in a single row) through a serial port. The standard parallel port is capable of sending 50 to 100 kilobytes of data per second.
Pins (parallel connection)
Pin number Name
1 _STR - Strobe
2-9 Data Bits D0-D7
10 ACK - Acknowledgement
11 Busy
12 Paper Out
13 Online Signal
14 Auto feed
15 Error
16 Reset
17 Offline Signal
18-25 Ground
Posted by Sunflower at 12/17/2009 03:31:00 PM 0 comments
Labels: 25-pin, Parallel Connector, Parallel ports
Tuesday, December 15, 2009
9 Pin Serial Port Connector
Connector may be reversed depending on which side is viewed. All pins are numbered.
Pin No. Function
1 DCD (Data Carrier Detect)
2 RX (Receive Data)
3 TX (Transmit Data)
4 DTR (Data Terminal Ready)
5 GND (Signal Ground)
6 DSR (Data Set Ready)
7 RTS (Request To Send)
8 CTS (Clear To Send)
9 RI (Ring Indicator)
Voltage sent over the pins can be in one of two states, On or Off. On (binary value "1") means that the pin is transmitting a signal between -3 and -25 volts, while Off (binary value "0") means that it is transmitting a signal between +3 and +25 volts.
An important aspect of serial communications is the concept of flow control. This is the ability of one device to tell another device to stop sending data for a while. The commands Request to Send (RTS), Clear To Send (CTS), Data Terminal Ready (DTR) and Data Set Ready (DSR) are used to enable flow control.
Posted by Sunflower at 12/15/2009 03:06:00 PM 0 comments
Labels: 9 Pin, Connector, Pins, Port, Serial, Serial data, Serial Ports
Introduction to Serial Ports
Serial ports are a type of computer interface that complies with the RS-232 standard. They are typically 9-pin connectors that relay information, incoming or outgoing, one byte at a time. Each byte is broken up into a series of eight bits and sent one bit at a time, hence the term serial port. Serial ports are one of the oldest types of interface standards.
In traditional computers, serial ports were configured as follows:
Serial Port    Interrupt    I/O Address
COM 1          IRQ 4        0x3F8
COM 2          IRQ 3        0x2F8
COM 3          IRQ 4        0x3E8
COM 4          IRQ 3        0x2E8
Devices configured to use serial ports COM 1 and COM 3 could not be active at the same time, as they shared interrupt IRQ 4. The same was true of COM 2 and COM 4 port devices. The serial port is much more than just a connector. It converts the data from parallel to serial and changes the electrical representation of the data.
Serial flow is a stream of bits over a single wire (such as on the transmit or receive pin of the serial connector). For the serial port to create such a flow, it must convert data from parallel (inside the computer) to serial on the transmit pin (and conversely).
The advantage is that a serial port needs only one wire to transmit the 8 bits (while a parallel port needs 8). The disadvantage is that it takes 8 times longer to transmit the data than it would if there were 8 wires. Serial ports lower cable costs and make cables smaller. Serial ports, also called communication (COM) ports, are bi-directional. Bi-directional communication allows each device to receive data as well as transmit it.
Serial ports rely on a special controller chip, the Universal Asynchronous Receiver/Transmitter (UART), to function properly. The UART chip takes the parallel output of the computer's system bus and transforms it into serial form for transmission through the serial port. In order to function faster, most UART chips have a built-in buffer of anywhere from 16 to 64 bytes. This buffer allows the chip to cache data coming in from the system bus while it is processing data going out to the serial port.
Posted by Sunflower at 12/15/2009 02:31:00 PM 0 comments
Labels: Devices, Parallel ports, Ports, Receiver, Serial data, Serial Ports, Transmit, UART
Monday, December 14, 2009
Hamming Distance (HD)
Hamming distance (Hamming metric): In the theory of block codes intended for error detection or error correction, the Hamming distance d(u, v) between two words u and v, of the same length, is equal to the number of symbol places in which the words differ from one another. If u and v are of finite length n then their Hamming distance is finite, since d(u, v) ≤ n.
It can be called a distance since it is non-negative, nil-reflexive, symmetric, and triangular:
0 ≤ d(u, v)
d(u, v) = 0 iff u = v
d(u, v) = d(v, u)
d(u, w) ≤ d(u, v) + d(v, w)
The Hamming distance is important in the theory of error-correcting codes and error-detecting codes: if, in a block code, the codewords are at a minimum Hamming distance d from one another, then
(a) if d is even, the code can detect d – 1 symbols in error and correct (d/2) – 1 symbols in error;
(b) if d is odd, the code can detect d – 1 symbols in error and correct (d – 1)/2 symbols in error.
How to Calculate Hamming Distance ?
- Ensure the two strings are of equal length. The Hamming distance can only be calculated between two strings of equal length.
String 1: "1001 0010 1101"
String 2: "1010 0010 0010"
- Compare the first bit in each string. If they are the same, record a "0" for that bit. If they are different, record a "1" for that bit. In this case, the first bit of both strings is "1," so record a "0" for the first bit.
- Compare each bit in succession and record either "1" or "0" as appropriate.
String 1: "1001 0010 1101"
String 2: "1010 0010 0010"
Record: "0011 0000 1111"
- Add all the ones and zeros in the record together to obtain the Hamming distance.
Hamming distance = 0+0+1+1+0+0+0+0+1+1+1+1 = 6
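For binary words held in machine integers, the same procedure reduces to XOR-ing the two words (which yields exactly the "record" above) and counting the 1 bits, as in this small C sketch:

/* Hamming distance between two equal-length binary words held in integers. */
unsigned hamming_distance(unsigned u, unsigned v)
{
    unsigned x = u ^ v;        /* bits set exactly where u and v differ (the "record") */
    unsigned d = 0;
    while (x) {
        d += x & 1u;           /* count the differing positions */
        x >>= 1;
    }
    return d;
}

/* Example from the text: 1001 0010 1101 (0x92D) vs 1010 0010 0010 (0xA22). */
/* hamming_distance(0x92D, 0xA22) returns 6.                                */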
Posted by Sunflower at 12/14/2009 07:01:00 PM 0 comments
Labels: Binary Data, Calculate, Definition, Error correction, Error Detection, Errors, Hamming Distance
Error Detection Methods Cont...
- Cyclic Redundancy Check (CRC) :
This error detection method computes the remainder of the polynomial division of the message by a generator polynomial. The remainder, which is usually 16 or 32 bits, is then appended to the message. When the receiver computes a new remainder over the extended message, a non-zero value indicates an error. Depending on the generator polynomial's size the process can fail in several ways; however, it is very difficult to determine how effective a given CRC will be at detecting errors. The probability that a random word passes the check (i.e., that an error is not detectable) is a function of the code rate: 2^-(n - k), where n is the number of transmitted bits formed from the k original bits of data, and (n - k) is the number of redundant bits, r.
Use of the CRC technique for error correction normally requires the ability to send retransmission requests back to the data source.
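As an illustrative sketch (not part of any particular protocol), here is a bit-at-a-time C implementation of such a division using the common reflected CRC-16 generator polynomial x^16 + x^15 + x^2 + 1 (0xA001); real links must agree on the polynomial, the initial value and the bit order.

#include <stdint.h>
#include <stddef.h>

/* Compute a 16-bit CRC over a message, one bit at a time (reflected form). */
uint16_t crc16(const uint8_t *msg, size_t len)
{
    uint16_t crc = 0x0000;                 /* initial remainder */
    for (size_t i = 0; i < len; i++) {
        crc ^= msg[i];                     /* bring in the next message byte */
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 1)                   /* divide by the generator polynomial */
                crc = (crc >> 1) ^ 0xA001;
            else
                crc >>= 1;
        }
    }
    return crc;                            /* appended to the message by the sender */
}

/* The receiver recomputes the remainder over the message plus the appended  */
/* CRC; a non-zero result indicates that an error has been detected.         */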
- Hamming distance based checks :
If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger (n+d+1)-bit word so that the minimum Hamming distance between valid mappings is d+1. This way, if one receives an (n+d+1)-bit word that doesn't match any word in the mapping (with a Hamming distance x <= d+1 from any word in the mapping), it can successfully be detected as an erroneous word. Even more, d or fewer errors will never transform a valid word into another, because the Hamming distance between valid words is at least d+1, and such errors only lead to invalid words that are detected correctly. Given a stream of m*n bits, we can detect x <= d bit errors successfully using the above method on every n-bit word. In fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors.
Posted by Sunflower at 12/14/2009 04:33:00 PM 0 comments
Labels: CRC, Cyclic Redundancy Check, Error correction, Error Detection, Errors, Hamming Distance, HC
Error Detection and Correction
Error detection and correction are techniques to ensure that data is transmitted without errors, even across unreliable media or networks. Error detection is the ability to detect the presence of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the additional ability to reconstruct the original, error-free data.
Because of the extremely low bit-error rates in data transmissions, most error detection methods and algorithms are designed to address the detection or correction of a single bit error.
Error Detection Methods :
Errors introduced into valid data by communications faults, noise or other failures, especially into compressed data where redundancy has been removed as much as possible, can be detected and/or corrected by introducing redundancy into the data stream.
- Parity Schemes
A parity bit is an error detection mechanism that can only detect an odd number of errors. The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity; odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate the error, and even correct it if the error affects a single bit.
There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six and so on) are flipped, the parity bit appears to be correct, even though the data is corrupt.
- Checksum
A checksum of a message is an arithmetic sum of message code words of a certain word length, for example byte values, and their carry value. The sum is negated by means of ones-complement, and stored or transferred as an extra code word extending the message. On the receiver side, a new checksum may be calculated from the extended message. If the new checksum is not 0, an error has been detected.
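A short C sketch of both mechanisms described above, assuming 8-bit message words; the checksum follows the ones-complement scheme just described.

#include <stdint.h>
#include <stddef.h>

/* Even parity bit for one byte: returns 1 if the number of 1 bits is odd, */
/* so that data plus parity always contains an even number of 1 bits.      */
uint8_t even_parity(uint8_t b)
{
    uint8_t p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (b >> i) & 1u;
    return p;
}

/* Ones-complement checksum of a message; the sender appends the result.   */
uint8_t checksum(const uint8_t *msg, size_t len)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += msg[i];
        sum = (sum & 0xFF) + (sum >> 8);   /* fold the carry back in (end-around) */
    }
    return (uint8_t)~sum;                  /* negate by means of ones-complement  */
}

/* On the receiver side, checksum() computed over the message with the      */
/* appended byte included yields 0 when no error has been detected.         */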
Posted by Sunflower at 12/14/2009 03:47:00 PM 0 comments
Labels: Checksum, Error correction, Error Detection, Errors, Methods, Parity
Monday, December 7, 2009
Overview of Interrupt Service Routine (ISR)
An interrupt service routine (ISR) is a software routine that hardware invokes in response to an interrupt. ISRs examine an interrupt and determine how to handle it. ISRs handle the interrupt, and then return a logical interrupt value. If no further handling is required because the device is disabled or data is buffered, the ISR notifies the kernel with a SYSINTR_NOP return value. An ISR must execute very quickly to avoid slowing down the operation of the device and the operation of all lower-priority ISRs. When an ISR notifies the kernel of a specific logical interrupt value, the kernel examines an internal table to map the logical interrupt value to an event handle.
Although an ISR might move data from a CPU register or a hardware port into a memory buffer, in general it relies on a dedicated interrupt thread, called the interrupt service thread (IST), to do most of the required processing.
The system supports two different types of ISRs:
- The driver can register an InterruptService routine to handle line-based or message-based interrupts. (This is the only type available prior to Windows Vista.) The system passes a driver-supplied context value.
- The driver can register an InterruptMessageService routine to handle message-based interrupts. The system passes both a driver-supplied context value and the message ID of the interrupt message.
Posted by Sunflower at 12/07/2009 02:18:00 PM 0 comments
Labels: Interrupt Service Routine, Interrupts, ISR, Software Routine
Overview Of Interrupt Handling
An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the Interrupt Handler completes its task.
An interrupt handler is a low-level counterpart of event handlers. These handlers are initiated by either hardware interrupts or interrupt instructions in software, and are used for servicing hardware devices and transitions between protected modes of operation such as system calls.
When an interrupt is processed, a specific sequence of events takes place. You should write the interrupt service routine (ISR) and interrupt service thread (IST) for your device driver with the following sequence of events in mind :
- When an interrupt occurs, the microprocessor jumps to the kernel exception handler.
- The exception handler disables all interrupts of an equal and lower priority at the microprocessor, and then calls the appropriate ISR for the physical interrupt request (IRQ).
- The ISR returns a logical interrupt, in the form of an interrupt identifier, to the interrupt handler and typically masks the board-level device interrupt.
- The interrupt handler re-enables all interrupts at the microprocessor, with the exception of the current interrupt, which is left masked at the board, and then signals the appropriate IST event.
- The IST is scheduled, services the hardware, and then finishes processing the interrupt.
- The IST calls the InterruptDone function, which in turn calls the OEMInterruptDone function in the OAL. OEMInterruptDone re-enables the current interrupt.
Posted by Sunflower at 12/07/2009 01:46:00 PM 0 comments
Labels: Interrupt handler, Interrupt Service Routine, Interrupt Service Thread, Interrupts, ISR, IST
Edge Triggered Interrupts
An edge-triggered interrupt is a class of interrupts that are signalled by a level transition on the interrupt line, either a falling edge (1 to 0) or a rising edge (0 to 1). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its quiescent state. If the pulse is too short to be detected by polled I/O then special hardware may be required to detect the edge. This type of interrupt is useful for a fleeting signal that doesn't last long enough for the processor to recognize it using polled I/O or for when the signal can last a long time, but the significant event is when that signal first goes active.
Edge-triggered interrupt modules can be acknowledged immediately, no matter how the interrupt source behaves. The type of the interrupt source does not matter. It can be a pulse, a firmware-clear signal, or some external signal that eventually is cleared somehow. Edge-triggered interrupts keep firmware’s code complexity down, reduce the number of conditions firmware needs to be aware of, and provide more flexibility when interrupts are acknowledged. This keeps development time down and quality up.
Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to one particular state.Devices signal an interrupt by briefly driving the line to its non-default state, and let the line float (do not actively drive it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices.
Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, and interrupts will continue to be received from the high-priority devices that are being serviced. If there is a device that the CPU does not know how to service, it may cause a spurious interrupt, or even periodic spurious interrupts, but it does not interfere with the interrupt signaling of the other devices.
Posted by Sunflower at 12/07/2009 01:29:00 PM 0 comments
Labels: Devices, Edge Triggered Interrupts, Interrupts, Signal, Source
Level Triggered Interrupts
Level-triggered Interrupt : It is the class of interrupts where the presence of an unserviced interrupt is indicated by a high level (1), or low level (0), of the interrupt request line. A device wishing to signal an interrupt drives the line to its active level, and then holds it at that level until serviced. It ceases asserting the line when the CPU commands it to or otherwise handles the condition that caused it to signal the interrupt.
Level-triggered interrupts force firmware engineers to take into account what is generating the interrupt source. If the interrupt source is just a pulse from a state machine, then the device drivers do not need to do any additional work. If the interrupt source is asserted when a counter equals zero, the device driver must first write a non-zero value to the counter before it can acknowledge the interrupt. If the interrupt source is a signal from a different block with its own device driver or an external device under its own firmware control the device driver has no control over when the interrupt source is cleared. Its only choice is to disable that interrupt so that it can exit the interrupt handler.
There are also serious problems with sharing level-triggered interrupts. As long as any device on the line has an outstanding request for service the line remains asserted, so it is not possible to detect a change in the status of any other device. Deferring servicing a low-priority device is not an option, because this would prevent detection of service requests from higher-priority devices. If there is a device on the line that the CPU does not know how to service, then any interrupt from that device permanently blocks all interrupts from the other devices.
Posted by Sunflower at 12/07/2009 12:59:00 PM 0 comments
Labels: Interrupts, Level Triggered Interrupts, Problems, Signal, Source
Friday, December 4, 2009
Overview of Interrupts
An interrupt is an unexpected hardware initiated subroutine call or jump that temporarily suspends the running of the current program.
Interrupts occur when a peripheral device asserts an interrupt input pin of the microprocessor. Provided the interrupt is permitted, it will be acknowledged by the processor at the end of the current memory cycle. The processor then services the interrupt by branching to a special service routine written to handle that particular interrupt. Upon servicing the device, the processor is instructed to continue with what it was doing previously by use of the "return from interrupt" instruction.
Interrupts in general can be divided into two kinds: maskable and non-maskable.
A maskable interrupt is an interrupt whose trigger event is not always important, so the programmer can decide that the event should not cause the program to jump. A non-maskable interrupt (like the reset button) is so important that it should never be ignored; the processor will always jump to this interrupt when it happens.
The function that is called, or the particular assembly code that is executed, when the interrupt happens is called the Interrupt Service Routine (ISR). Two other terms of note: the interrupt flag (IFG) is the bit that is set when the trigger event occurs (leaving the interrupt resets this flag to its normal state), and the interrupt enable (IE) is the control bit that tells the processor whether a particular maskable interrupt should be ignored.
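A hedged C sketch of the IFG/IE idea for a hypothetical memory-mapped timer peripheral; the register names, addresses and bit positions are invented for illustration and will differ on any real microcontroller.

#include <stdint.h>

/* Hypothetical memory-mapped registers of some peripheral (addresses invented). */
#define TIMER_IFG (*(volatile uint8_t *)0x4000)   /* interrupt flag: set by hardware on the event */
#define TIMER_IE  (*(volatile uint8_t *)0x4001)   /* interrupt enable: set by software            */

volatile unsigned tick_count;                     /* shared with main-line code */

void timer_init(void)
{
    TIMER_IE |= 0x01;              /* tell the processor this maskable interrupt matters */
}

/* Interrupt Service Routine: invoked by hardware when the trigger event occurs. */
void timer_isr(void)
{
    TIMER_IFG &= (uint8_t)~0x01;   /* clear the flag back to its normal state before leaving */
    tick_count++;                  /* keep the ISR short; defer real work to main-line code  */
}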
Advantages of Interrupts :
Interrupts are used to ensure adequate service response times by the processor. Sometimes, with software polling routines, service times cannot be guaranteed, and data may be lost. The use of interrupts guarantees that the processor will service the request within a specified time period, reducing the likelihood of lost data.
Software Interrupt :
The Software Interrupt (SWI) is an instruction that can be placed anywhere within a program. It forces the microprocessor to act as if an interrupt has occurred. The vector for the 6800 is located at addresses $FFFA and $FFFB. The SWI is often used by monitor programs to set breakpoints, which stop the program at a particular location so that the contents of memory and registers can be examined.
Interrupt Latency :
The time interval from when the interrupt is first asserted to the time the CPU recognizes it. This will depend much upon whether interrupts are disabled, prioritized and what the processor is currently executing.
Interrupt Response Time :
The time interval between the CPU recognizing the interrupt to the time when the first instruction of the interrupt service routine is executed. This is determined by the processor architecture and clock speed.
Posted by Sunflower at 12/04/2009 11:27:00 PM 0 comments
Labels: Interrupt Service Routine, Interrupts, ISR, Maskable, Non-Maskable, Subroutine, Types
Concept of Buffers
In computing, a buffer is a region of memory used to temporarily hold data while it is being moved from one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device (such as a keyboard) or just before it is sent to an output device (such as a printer). However, a buffer may be used when moving data between processes within a computer.
Buffering is used to improve several other areas of computer performance as well. Most hard disks use a buffer to enable more efficient access to the data on the disk. Video cards send images to a buffer before they are displayed on the screen (known as a screen buffer). Computer programs use buffers to store data while they are running. If it were not for buffers, computers would run a lot less efficiently and we would be waiting around a lot more.
Buffers are another way that receivers can ensure that they do not miss any data sent to them. Buffers can also be useful on the transmit side, where they can enable applications to work more efficiently by storing data to be sent as the link is available.
The buffers may be in hardware, software, or both. When the hardware buffers aren't large enough, a PC may also use software buffers, which are programmable in size and may be as large as system memory permits. The port's software driver transfers data between the software and hardware buffers.
In micro controllers, the buffers tend to be much smaller, and some chips have no hardware buffers at all. The smaller the buffers, the more important it is to use other techniques to ensure that no data is missed.
Posted by Sunflower at 12/04/2009 11:12:00 PM 0 comments
Labels: Buffers, Data, Devices, Hardware, Software, Storage
Wednesday, December 2, 2009
Handshaking Mechanism
Handshaking is an automated process of negotiation that dynamically sets parameters of a communications channel established between two entities before normal communication over the channel begins. It follows the physical establishment of the channel and precedes normal information transfer.
It is usually a process that takes place when a computer is about to communicate with a foreign device to establish rules for communication. When a computer communicates with another device like a modem, printer, or network server, it needs to handshake with it to establish a connection.
With handshaking signals, a transmitter can indicate when it has data to send, and a receiver can indicate when it is ready to receive data. The exact protocols that signals follow may vary, though many RS-232 and RS-485 links follow standard or conventional protocols.
In hardware handshaking, the receiver brings a line high when it is ready to receive data, and the transmitter waits for this signal before sending data. The receiver may bring the line low any time and the transmitter must detect this, stop sending, and wait for the line to return high before finishing the transmission.
Other links accomplish the same thing with software handshaking, by having the receiver send one code to indicate that it is ready to receive, and another to signal the transmitter to stop sending.
Posted by Sunflower at 12/02/2009 04:06:00 PM 0 comments
Labels: Handshaking, Hardware, Protocols, Receiver, Rules, Software, Transmitter
Data Formats
The data bits in a serial transmission may represent anything, including commands, sensor readings, status information, error codes, or text messages. The information may be encoded as binary or text data.
- Binary Data : The receiver interprets a received byte as a binary number with a value from 0 to 255. The bits are numbered 0 through 7, with each bit representing the bit's value (0 or 1) multiplied by a power of 2. A byte of 1111 1111 translates to 255 or FFh, and 0001 0001 translates to 17 or 11h. In asynchronous mode, bit 0, the least-significant bit, arrives first. Binary data works fine for many links, but some links need to send messages or files containing text.
- Text Data : To send text, the program uses a code that assigns a numeric value to each text character. There are several coding conventions :
* ASCII : It consists of 128 codes and requires only seven data bits. An eighth bit, if used, may be 0 or a parity bit.
* ANSI : It consists of 256 codes with the higher codes representing special and accented characters.
Other formats use 16 bits per character, which allows 65,536 different characters.
One can also use text to transfer binary data by expressing the data in ASCII Hex format. Each byte is represented by a pair of ASCII codes that represent the byte's two hexadecimal characters. This format can represent any value using only the ASCII codes 30h through 39h (from 0 through 9) and 41h to 46h (for A through F).
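A small C sketch of the ASCII Hex idea, converting each binary byte into the two ASCII characters described above:

#include <stddef.h>
#include <stdint.h>

/* Encode binary data as ASCII Hex: each byte becomes two characters          */
/* drawn from '0'-'9' (30h-39h) and 'A'-'F' (41h-46h).                        */
void to_ascii_hex(const uint8_t *data, size_t len, char *out)
{
    static const char digits[] = "0123456789ABCDEF";
    for (size_t i = 0; i < len; i++) {
        out[2 * i]     = digits[data[i] >> 4];    /* high nibble */
        out[2 * i + 1] = digits[data[i] & 0x0F];  /* low nibble  */
    }
    out[2 * len] = '\0';                          /* out must hold 2*len + 1 chars */
}

/* Example: the binary byte 0xFF is sent as the two ASCII characters "FF". */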
Posted by Sunflower at 12/02/2009 03:29:00 PM 0 comments
Labels: ANSI, ASCII, Binary Data, Data format, Text format, transmission
Friday, November 27, 2009
Serial Data Transfer
In a serial link, the transmitter, or driver, sends bits one at a time, in sequence. One signal required by all serial links is a clock, or timing reference, to control the flow of data. The transmitter and receiver use a clock to decide when to send and read each bit. There are two types of serial-data formats :
- Synchronous Format : Data transfer method in which a continuous stream of data signals is accompanied by timing signals (generated by an electronic clock) to ensure that the transmitter and the receiver are in step (synchronized) with one another. The data is sent in blocks (called frames or packets) spaced by fixed time intervals. After the synchronization characters are received by the remote device, they are decoded and used to synchronize the connection. After the connection is correctly synchronized, data transmission may begin. The following is a list of characteristics specific to synchronous communication:
* There are no gaps between characters being transmitted.
* Timing is supplied by modems or other devices at each end of the connection.
* Special syn characters precede the data being transmitted.
* The syn characters are used between blocks of data for timing purposes.
- Asynchronous Format : The term asynchronous is used to describe the process where transmitted data is encoded with start and stop bits, specifying the beginning and end of each character. Asynchronous, or character-framed, transmission is used to transmit seven or eight-bit data, usually in ASCII character format. Each character has a specific start and end sequence, usually one start bit and one or two end (stop) bits.
When gaps appear between character transmissions, the asynchronous line is said to be in a mark state. A mark is a binary 1 (or negative voltage) that is sent during periods of inactivity on the line. When the mark state is interrupted by a positive voltage (a binary 0), the receiving system knows that data characters are going to follow. The following is a list of characteristics specific to asynchronous communication:
* Each character is preceded by a start bit and followed by one or more stop bits.
* Gaps or spaces between characters may exist.
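To make the start/stop framing concrete, here is a rough C sketch of transmitting one 8N1 character, assuming hypothetical set_tx_line() and delay_us() helpers and a 9600 baud bit time of roughly 104 µs:

extern void set_tx_line(int level);   /* assumed: drive the TX line to 0 or 1 */
extern void delay_us(unsigned us);    /* assumed: busy-wait for the given microseconds */

#define BIT_US 104                     /* one bit time at 9600 baud (1 s / 9600) */

/* Transmit one character, framed with one start bit and one stop bit (8N1). */
void uart_send_byte(unsigned char byte)
{
    set_tx_line(0);                    /* start bit: interrupt the mark (idle) state */
    delay_us(BIT_US);
    for (int i = 0; i < 8; i++) {      /* data bits, least-significant bit first */
        set_tx_line((byte >> i) & 1);
        delay_us(BIT_US);
    }
    set_tx_line(1);                    /* stop bit: return the line to the mark state */
    delay_us(BIT_US);
}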
Posted by Sunflower at 11/27/2009 03:23:00 PM 0 comments
Labels: Asynchronous, Format, Serial data, Synchronous, Transfer of data
Thursday, November 26, 2009
Metrics for Source Code
Halstead assigned quantitative laws to the development of computer software, using a set of primitive measures that may be derived after code is generated or estimated once design is complete. The measures are :
n1 = the number of distinct operators
n2 = the number of distinct operands
N1 = the total number of operator occurrences
N2 = the total number of operand occurrences
Length: N = n1 log2(n1) + n2 log2(n2)
Volume: V = N log2(n1 + n2)
SUBROUTINE SORT (X,N)
DIMENSION X(N)
IF (N.LT.2) RETURN
DO 20 I=2,N
DO 10 J=1,I
IF (X(I).GE.X(J)) GO TO 10
SAVE = X(I)
X(I) = X(J)
X(J) = SAVE
10 CONTINUE
20 CONTINUE
RETURN
END
OPERATOR                 COUNT
1  END OF STATEMENT      7
2  ARRAY SUBSCRIPT       6
3  =                     5
4  IF( )                 2
5  DO                    2
6  ,                     2
7  END OF PROGRAM        1
8  .LT.                  1
9  .GE.                  1
10 GO TO 10              1
n1 = 10    N1 = 28
n2 = 7     N2 = 22
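Plugging these counts into the two formulas above gives the length and volume directly; a quick C check is shown below (the exact figures are only as meaningful as the counting rules used).

#include <math.h>
#include <stdio.h>

int main(void)
{
    double n1 = 10, n2 = 7;                      /* distinct operators and operands from the table */
    double N = n1 * log2(n1) + n2 * log2(n2);    /* Halstead length estimate */
    double V = N * log2(n1 + n2);                /* Halstead volume          */
    printf("Length N = %.1f\n", N);              /* prints roughly 52.9 */
    printf("Volume V = %.1f\n", V);              /* prints roughly 216.1 */
    return 0;
}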
Posted by Sunflower at 11/26/2009 02:58:00 PM 0 comments
Labels: Metrics, software engineering, Software Metrics, Source Code metrics
Component Level Design Metrics
Component level design metrics for software components focus on internal characteristics of a software component and include measures of the three Cs - module cohesion, coupling, and complexity.
- Cohesion Metrics :It defines a collection of metrics that provide an indication of the cohesiveness of a module.The metrics are defined in terms of five concepts :
* Data slice - data values within the module that affect the module location at which a backward trace began.
* Data tokens - Variables defined for a module
* Glue Tokens - The set of tokens lying on multiple data slices
* Superglue tokens - The set of tokens on all slices
* Stickiness - the stickiness of a glue token is proportional to the number of data slices that it binds
* Strong Functional Cohesion
SFC(i) = SG(i)/tokens(i)
- Coupling Metrics : Module coupling provides an indication of the connectedness of a module to other modules, global data, and the outside environment.
* Data and control flow coupling
di = number of input data parameters
ci = number of input control parameters
d0 = number of output data parameters
c0 = number of output control parameters
* Global coupling
gd = number of global variables used as data
gc = number of global variables used as control
* Environmental coupling
w = number of modules called (fan-out)
r = number of modules calling the module under consideration (fan-in)
* Module Coupling:
mc = 1/ (di + 2*ci + d0 + 2*c0 + gd + 2*gc + w + r)
mc = 1/(1 + 0 + 1 + 0 + 0 + 0 + 1 + 0) = .33 (Low Coupling)
mc = 1/(5 + 2*5 + 5 + 2*5 + 10 + 0 + 3 + 4) = .02 (High Coupling)
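The module-coupling formula transcribes directly into C, so the two examples above can be reproduced; the parameter names mirror the list of counts given for this metric.

/* Module coupling from the counts listed above; a larger mc means lower coupling. */
double module_coupling(int di, int ci, int d0, int c0,
                       int gd, int gc, int w, int r)
{
    return 1.0 / (di + 2*ci + d0 + 2*c0 + gd + 2*gc + w + r);
}

/* module_coupling(1, 0, 1, 0, 0, 0, 1, 0)  gives 0.33  (low coupling)  */
/* module_coupling(5, 5, 5, 5, 10, 0, 3, 4) gives 0.02  (high coupling) */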
Posted by Sunflower at 11/26/2009 02:32:00 PM 0 comments
Labels: Cohesion, Component Level Design Metrics, Coupling, Metrics, software engineering, Software Metrics
Class Oriented Metrics - MOOD Metrics Suite
The MOOD metrics set refers to the basic structural mechanisms of the OO paradigm: encapsulation (MHF and AHF), inheritance (MIF and AIF), polymorphism (PF) and message passing (CF); the metrics are expressed as quotients. The set includes the following metrics:
- Method Hiding Factor (MHF)
MHF is defined as the ratio of the sum of the invisibilities of all methods defined in all classes to the total number of methods defined in the system under consideration. The invisibility of a method is the percentage of the total classes from which this method is not visible. A low MHF indicates an insufficiently abstracted implementation: a large proportion of methods are unprotected and the probability of errors is high. A high MHF indicates very little functionality; it may also indicate that the design includes a high proportion of specialized methods that are not available for reuse. An acceptable MHF range of 8% to 25% has been suggested, but we neither endorse nor criticize this view.
(Note: inherited methods are not considered.)
- Attribute Hiding Factor (AHF)
AHF is defined as the ratio of the sum of the invisibilities of all attributes defined in all classes to the total number of attributes defined in the system under consideration.
- Method Inheritance Factor (MIF)
MIF is defined as the ratio of the sum of the inherited methods in all classes of the system under consideration to the total number of available methods (locally defined plus inherited) for all classes.
- Attribute Inheritance Factor (AIF)
AIF is defined as the ratio of the sum of inherited attributes in all classes of the system under consideration to the total number of available attributes
(locally defined plus inherited) for all classes.
- Polymorphism Factor (PF)
It measures the degree of method overriding in the class inheritance tree. It equals the number of actual method overrides divided by the maximum number of possible method overrides.
PF = overrides / sum for each class(new methods * descendants)
PF varies between 0% and 100%. As mentioned above, when PF=100%, all methods are overridden in all derived classes. A PF value of 0% may indicate one of the following cases:
* project uses no classes or inheritance.
* project uses no polymorphism.
* full class hierarchies have not been analyzed.
- Coupling Factor (CF)
CF is defined as the ratio of the actual number of couplings not imputable to inheritance to the maximum possible number of couplings in the system.
CF = Actual couplings/Maximum possible couplings
Posted by Sunflower at 11/26/2009 02:13:00 PM 0 comments
Labels: Class oriented metrics, MOOD, Object Oriented, Types
Tuesday, November 24, 2009
Metrics for Object-Oriented Design - CK Metrics Suite Cont...
- Response For Class (RFC)
The RFC is defined as the total number of methods that can be executed in response to a message to a class. This count includes all the methods available in the whole class hierarchy. If a class is capable of producing a vast number of outcomes in response to a message, it makes testing more difficult for all the possible outcomes.
- Number of Children (NOC)
It is defined as the number of immediate subclasses.
* The greater the number of children, the greater the reuse, since inheritance is a form of reuse.
* The greater the number of children, the greater is the likelihood of improper abstraction of the parent class. If a class has a large number of children, it may be a case of misuse of sub-classing.
* The number of children gives an idea of the potential influence a class has on the design. If a class has a large number of children, it may require more testing of the methods in that class.
- Coupling between object classes (CBO)
It is defined as the count of the classes to which this class is coupled. Coupling is defined as : Two classes are coupled when methods declared in one class use methods or instance variables of the other class.
* Excessive coupling between object classes is detrimental to modular design and prevents reuse. The more independent a class is, the easier it is to reuse it in another application.
* In order to improve modularity and promote encapsulation, inter-object class couples should be kept to a minimum. The larger the number of couples, the higher the sensitivity to changes in other parts of the design, and therefore maintenance is more difficult.
* A measure of coupling is useful to determine how complex the testing of various parts of a design are likely to be. The higher the inter-object class coupling, the more rigorous the testing needs to be.
- Lack of Cohesion in Methods (LCOM)
It is defined as the number of different methods within a class that reference a given instance variable.
* Cohesiveness of methods within a class is desirable, since it promotes encapsulation.
* Lack of cohesion implies classes should probably be split into two or more subclasses.
* Any measure of disparateness of methods helps identify flaws in the design of classes.
* Low cohesion increases complexity, thereby increasing the likelihood of errors during the development process.
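As an illustration of the cohesion idea, here is a minimal Python sketch using the pairwise form of LCOM from Chidamber and Kemerer (method pairs that share no instance variables counted against pairs that share at least one). The class, method, and attribute names are hypothetical; this formulation differs slightly in wording from the simplified definition above but captures the same intuition.

# Minimal, hypothetical sketch of pairwise LCOM: P = pairs of methods that
# share no instance variables, Q = pairs that share at least one.
# LCOM = P - Q if positive, else 0.
from itertools import combinations

# Instance variables referenced by each method of a hypothetical class.
uses = {
    "deposit":   {"balance"},
    "withdraw":  {"balance"},
    "set_owner": {"owner"},
}

p = q = 0
for m1, m2 in combinations(uses, 2):
    if uses[m1] & uses[m2]:
        q += 1          # the pair shares an attribute -> cohesive
    else:
        p += 1          # the pair shares nothing -> lacks cohesion

lcom = max(p - q, 0)
print(f"P={p}, Q={q}, LCOM={lcom}")   # P=2, Q=1, LCOM=1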
Posted by Sunflower at 11/24/2009 07:23:00 PM 0 comments
Labels: Class oriented metrics, Object Oriented, Software Metrics, Types
Metrics for Object-Oriented Design - CK Metrics Suite
In the Object Oriented software development process, the system is viewed as a collection of objects. The functionality of the application is achieved by interaction among these objects in terms of messages. Whenever one object depends on another object for certain functionality, there is a relationship between those two classes. In order to achieve a perfect "separation of concerns", objects should rely on the interfaces and contracts offered by another object without relying on any underlying implementation details. OO design metrics can be a very helpful measuring technique to evaluate design stability. Also, even given a correct abstraction of layers and appropriate relationships between the classes, there is still a chance that the coding process might introduce a few more vulnerabilities. At this stage too, OO metrics can help identify whether any of the code needs further attention to make it more maintainable.
There are different kinds of Object Oriented Metrics :
CLASS ORIENTED METRICS - THE CK METRICS SUITE
Chidamber and Kemerer's metrics suite for OO design is the most thoroughly researched and widely cited work in OO metrics. They have defined six metrics for OO design.
- Weighted Methods per Class (WMC)
It is defined as the sum of the complexities of all methods of a class.
* The number of methods and the complexity of methods involved is a predictor of how much time and effort is required to develop and maintain the class.
* The larger the number of methods in a class, the greater the potential impact on children, since children will inherit all the methods defined in the class.
* Classes with large numbers of methods are likely to be more application specific, limiting the possibility of reuse.
- Depth of Inheritance Tree (DIT)
It is defined as the maximum length of the path from a class node to the root of the inheritance tree.
* The deeper a class is in the hierarchy, the greater the number of methods it is likely to inherit, making it more complex to predict its behavior.
* Deeper trees constitute greater design complexity, since more methods and classes are involved.
* The deeper a particular class is in the hierarchy, the greater the potential reuse of inherited methods.
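A minimal sketch of how WMC (here with every method weighted 1, a common simplification) and DIT could be counted directly from Python classes; the class names are hypothetical and chosen only to illustrate the counts.

# Hypothetical classes used only to illustrate the two counts.
import inspect

class Account:
    def deposit(self): ...
    def withdraw(self): ...

class SavingsAccount(Account):
    def add_interest(self): ...

def wmc(cls):
    # number of methods defined directly in the class, each with weight 1
    return sum(1 for _, m in vars(cls).items() if inspect.isfunction(m))

def dit(cls):
    # length of the longest inheritance path up to the implicit `object` root
    return max((dit(base) + 1 for base in cls.__bases__), default=0)

print(wmc(Account), wmc(SavingsAccount))   # 2 1
print(dit(SavingsAccount) - 1)             # 1 (excluding the implicit `object` root)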
Posted by Sunflower at 11/24/2009 12:50:00 PM 0 comments
Labels: Class oriented metrics, Object Oriented, Software Metrics, Types
Introduction to Software metrics
Software metrics are an integral part of the state of the practice in software engineering. More and more customers are specifying software and/or quality metrics reporting as part of their contractual requirements. Software metrics provide a quantitative way to assess the quality of internal product attributes, thereby enabling a software engineer to assess quality before the product is built.
Good metrics should facilitate the development of models that are capable of predicting process or product parameters, not just describing them. Thus, ideal metrics should be :
- simple, precisely definable — so that it is clear how the metric can be evaluated.
- objective, to the greatest extent possible.
- easily obtainable.
- valid — the metric should measure what it is intended to measure.
- robust—relatively insensitive to (intuitively) insignificant changes in the process or product.
Software metrics may be broadly classified as :
- Product and Process Metrics : Product metrics are measures of the software product at any stage of its development, from requirements to installed system. Product metrics may measure the complexity of the software design, the size of the final program or the number of pages of documentation produced. Process metrics, on the other hand, are measures of the software development process, such as overall development time, type of methodology used, or the average level of experience of the programming staff.
- Objective and Subjective Metrics : Objective metrics should always result in identical values for a given metric, as measured by two or more qualified observers.
For subjective metrics, even qualified observers may measure different values for a given metric, since their subjective judgment is involved in arriving at the measured value.
- Primitive and Computed Metrics : Primitive metrics are those that can be directly observed, such as the program size (in LOC), number of defects observed in unit testing, or total development time for the project. Computed metrics are those that cannot be directly observed but are computed in some manner from other metrics.
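For instance, defect density is a computed metric derived from two primitive ones; a minimal sketch with made-up numbers:

# Hypothetical primitive metrics observed directly from a project,
# and a computed metric derived from them.
size_kloc = 12.5          # primitive: program size in thousands of LOC
defects_found = 30        # primitive: defects observed in unit testing

defect_density = defects_found / size_kloc   # computed metric
print(f"{defect_density:.1f} defects per KLOC")   # 2.4 defects per KLOC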
Posted by Sunflower at 11/24/2009 12:13:00 PM 0 comments
Labels: Program Evaluate, Quality, Software, software engineering, Software Metrics
Monday, November 23, 2009
Control Structure Testing : Loop testing
Loops are the basis of most algorithms implemented in software. However, we often give them too little attention when conducting testing. Loop testing is a white box testing approach that concentrates on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.
- Simple Loops, where n is the maximum number of allowable passes through the loop.
o Skip loop entirely.
o Only one pass through loop.
o Two passes through loop.
o m passes through the loop, where m < n (see the sketch after this list).
- Nested Loops
o Start with inner loop. Set all other loops to minimum values.
o Conduct simple loop testing on inner loop.
o Work outwards.
o Continue until all loops tested.
- Concatenated Loops
o If independent loops, use simple loop testing.
o If dependent, treat as nested loops.
- Unstructured loops
o Don't test - redesign.
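As a concrete illustration of the simple-loop cases, here is a minimal Python sketch; the function and test values are hypothetical and only show the different pass counts being exercised.

# Hypothetical loop under test: sum at most the first n elements of values,
# so the loop body runs min(n, len(values)) times.
def sum_first(values, n):
    total = 0
    for i, v in enumerate(values):
        if i >= n:
            break
        total += v
    return total

data = [1, 2, 3, 4, 5]          # n = 5 is the maximum number of passes
assert sum_first(data, 0) == 0          # skip the loop entirely
assert sum_first(data, 1) == 1          # exactly one pass
assert sum_first(data, 2) == 3          # two passes
assert sum_first(data, 3) == 6          # m passes, with m < n
assert sum_first(data, 5) == 15         # all n passes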
Posted by Sunflower at 11/23/2009 03:22:00 PM 0 comments
Labels: Control Structure Testing, Loop testing, Loops, Testing, Types
Control Structure Testing : Data Flow Testing
Data Flow Testing is a technique which is used effectively alongside Control Flow Testing. It is another type of white-box testing which looks at how data moves within a program. Data flow occurs when variables are declared and then accessed and changed as the program progresses.
A "definition-clear path" is a path which involves no changes of the variable being tested. For a statement with S as its statement number,
DEF(S) = {X| statement S contains a definition of X}
USE(S) = {X| statement S contains a use of X}
If statement S is an if or loop statement, its DEF set is left empty and its USE set is based on the condition of statement S. The definition of a variable X at statement S is live at statement S' if there exists a path from statement S to S' which does not contain any other definition of X.
A definition-use chain (or DU chain) of variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and in USE(S'), and the definition of X in statement S is live at statement S'.
One basic data flow testing strategy is that each DU chain be covered at least once. Data flow testing strategies are helpful for choosing test paths of a program including nested if and loop statements.
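To make the DEF/USE notation concrete, here is a minimal sketch built around a hypothetical three-statement fragment and the DU chains it gives rise to.

#   S1:  x = a + 1        DEF(S1) = {x},  USE(S1) = {a}
#   S2:  if x > 0:        DEF(S2) = {},   USE(S2) = {x}
#   S3:      y = x * 2    DEF(S3) = {y},  USE(S3) = {x}
#
# The definition of x at S1 is live at S2 and S3, because the path
# S1 -> S2 -> S3 contains no other definition of x. Two DU chains for x
# are therefore [x, S1, S2] and [x, S1, S3]; covering every DU chain at
# least once requires test paths that reach both uses.
def fragment(a):
    x = a + 1            # S1
    if x > 0:            # S2
        y = x * 2        # S3
        return y
    return 0

assert fragment(1) == 4   # covers both DU chains [x,S1,S2] and [x,S1,S3]
assert fragment(-5) == 0  # covers [x,S1,S2] only (S3 is not reached)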
Posted by Sunflower at 11/23/2009 02:59:00 PM 0 comments
Labels: Control Structure Testing, Data Flow Testing, Testing, Types
Control Structure Testing : Condition testing
Condition testing is a test case design approach that exercises the logical conditions contained in a program module. Errors in conditions can be due to:
* Boolean operator error
* Boolean variable error
* Boolean parenthesis error
* Relational operator error
* Arithmetic expression error
The condition testing method concentrates on testing each condition in a program. The purpose of condition testing is to detect not only errors in the conditions of a program but also other errors in the program. A number of condition testing approaches have been identified.
Domain testing requires three or four tests to be produced for a relational expression. For a relational expression of the form
E1 < relational-operator > E2
Three tests are required that make the value of E1 greater than, equal to, and less than E2, respectively.
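A minimal sketch of those three domain tests, using a hypothetical function whose condition contains the relational expression:

# Hypothetical condition under test: value < limit.
def is_under_limit(value, limit):
    return value < limit      # the relational expression under test

assert is_under_limit(4, 5) is True     # E1 < E2
assert is_under_limit(5, 5) is False    # E1 == E2 (boundary; catches <= written for <)
assert is_under_limit(6, 5) is False    # E1 > E2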
Posted by Sunflower at 11/23/2009 02:39:00 PM 0 comments
Labels: Boolean, Condition testing, Control Structure Testing, Testing, Types
Control Structure Testing : Branch Testing
Although basis path testing is simple and highly effective, it is not enough in itself. Next we consider variations on control structure testing that broaden testing coverage and improve the quality of white box testing. Control structure testing is a group of white-box testing methods.
* 1.0 Branch Testing
* 1.1 Condition Testing
* 1.2 Data Flow Testing
* 1.3 Loop Testing
Branch Testing : Branch testing, also called decision testing, is a structural or white box technique because it is conducted with reference to the code. A decision is an executable statement that may transfer control to another statement. For every decision, each branch needs to be executed at least once. Branch testing ignores the implicit paths that result from compound conditionals; it treats a compound conditional as a single statement.
- This example has two branches to be executed:
IF ( a equals b) THEN
statement 1
ELSE
statement 2
END IF
- This example also has just two branches to be executed, despite the compound conditional:
IF ( a equals b AND c less than d ) THEN
statement 1
ELSE
statement 2
END IF
- This example has three branches to be executed:
IF ( a equals b) THEN
statement 1
ELSE
IF ( c equals d) THEN
statement 2
ELSE
statement 3
END IF
END IF
- Obvious decision statements are if, for, while, switch.
- Subtle decisions are return (boolean expression), ternary expressions, try-catch.
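The pseudocode above can be mirrored in a short, runnable sketch. The function below is hypothetical; it simply shows that the compound conditional contributes only two branches, while the ternary expression hides a further, easy-to-miss decision.

# Hypothetical code illustrating branch counting.
def classify(a, b, c, d):
    if a == b and c < d:          # one decision, two branches (true/false)
        result = "first"
    else:
        result = "second"
    return result.upper() if len(result) > 5 else result   # subtle ternary decision

# Two tests cover both branches of the compound if/else (and both ternary outcomes) ...
assert classify(1, 1, 2, 3) == "first"
assert classify(1, 2, 2, 3) == "SECOND"
# ... but they do not distinguish which operand of the AND caused the false branch.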
Posted by Sunflower at 11/23/2009 02:08:00 PM 0 comments
Labels: Branch Testing, Control Structure Testing, Testing, Types
Friday, November 20, 2009
Web Engineering Process
Choosing a process model is based on the attributes of the software to be developed.
If immediacy and continuous evolution are the primary attributes of a WebApp, a web engineering team might choose an agile process model. If a WebApp is to be developed over a longer time period, an incremental process model might be chosen.
Defining a Framework :
Any one of the agile process models can be applied successfully as a WebE process. Before we define a process framework for WebE, three things must be kept in mind :
- WebApps are often delivered incrementally.
- Changes will occur frequently.
- Timelines are short.
WebE Process Framework :
- Customer Communication : Within the WebE process, customer communication is characterized by :
* Business analysis : It defines the business/organizational context for the WebApp
* Formulation : It is a requirements gathering activity involving all stakeholders. The intent is to describe the problem that the WebApp is to solve.
- Planning : The “plan” consists of a task definition and a timeline schedule for the time period (usually measured in weeks) projected for the development of the WebApp increment.
- Modeling :
Analysis model — establishes a basis for design.
* Content Analysis
* Interaction Analysis
* Functional Analysis
* Configuration Analysis
Design model — represents the key WebApp elements.
* Content design
* Aesthetic design
* Architectural design
* Interface design
* Navigation design
* Component design
- Construction : WebE tools and technology are applied to construct the WebApp that has been modeled.
- Deployment : The WebApp is configured for its operational environment, delivered to end-users, and then an evaluation period commences.
Posted by Sunflower at 11/20/2009 11:59:00 AM 0 comments
Labels: Framework, Process, Web Applications, Web based systems, Web Engineering, WebApp, WebE
Thursday, November 19, 2009
WebApp Engineering Layers
The development of Web based systems and applications incorporates specialized process models, software engineering methods adapted to the characteristics of WebApp development, and a set of important enabling technologies. Process, methods, and technologies provide a layered approach to WebE that is conceptually identical to the software engineering layers :
- Process
WebE process models embrace the agile development philosophy, which defines an approach that :
* Embraces change.
* Encourages the creativity and independence of development staff and strong interaction with WebApp stakeholders.
* Builds systems using small development teams.
* Emphasizes evolutionary or incremental development using short development cycles.
- Methods
The WebE methods landscape encompasses a set of technical tasks that enable a Web engineer to understand, characterize, and then build a high-quality WebApp.
* Communication methods : It defines the approach used to facilitate communication between Web engineers and all other Web stakeholders.
* Requirements analysis methods : These methods provide a basis for understanding the content to be delivered by WebApp, the function to be provided for the end user, and modes of interaction that each class of user will require.
* Design methods : It encompasses a series of design techniques that address WebApp content, application and information architecture, interface design, and navigation structure.
* Testing methods : It incorporates formal technical reviews of both the content and design model and a wide array of testing techniques.
- Tools and Technology
These technologies encompass a wide array of content description and modeling languages, browsers, multimedia tools, site authoring tools, database connectivity tools, servers and server utilities, and site management and analysis tools.
Posted by Sunflower at 11/19/2009 04:14:00 PM 0 comments
Labels: Layers, Methods, Process, Technology, Tools, Web Applications, Web based systems, WebApp, WebE
Attributes of Web Based Systems
Web-based systems and applications deliver a complex array of content and functionality to a broad population of end-users. The following attributes are encountered in the vast majority of WebApps.
- Network Intensiveness.
- Concurrency.
- Unpredictable load.
- Performance.
- Availability.
- Data driven.
- Content sensitive.
- Continuous evolution.
- Immediacy.
- Security.
- Aesthetics.
The following application categories are most commonly encountered in WebE work :
- Informational : read-only content is provided with simple navigation and links.
- Download.
- Customizable.
- Interaction.
- User input.
- Transaction-oriented.
- Service-oriented.
- Portal.
- Database access.
- Data warehousing.
Posted by Sunflower at 11/19/2009 03:34:00 PM 0 comments
Labels: Attributes, Categories, Web based systems, Web Engineering, WebApp
Wednesday, November 18, 2009
What are Client Server Applications ? A more detailed description
Client server applications are very well suited to the current web world, and it is easy to map modern web applications onto a client server model. For example, consider the Yahoo mail that you access through your favorite browser: it is a client server architecture at multiple levels. Your browser or mobile runs client software that connects to a program running at the server, which renders the web pages; the web server itself then acts as a client to a database running at the server farm, which supplies the actual data that is returned to the server program and, from there, to the client running in the browser.
In terms of technology, the client server architecture is also referred to as a 2 tier architecture, the 2 tiers being the client and the server. This is the most basic form of the client server architecture, and it can be expanded to use multiple levels, as in the Yahoo mail example described above. The client and server software are normally considered part of the same system, but they can be modified separately and the network keeps on working.
The client server model allows the setting up of such networks that can span across multiple locations, with the client software residing in one location, and the server application in another location. The advantage and simplicity of the client server architecture has made it the prominent architecture behind most modern business and non-business applications. In a mark of how important this model is, even the mainframes of the past have now started using a client server model.
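As a minimal, self-contained sketch of the request/response split described above (standard-library Python only, with a hypothetical address and port; a real system would of course run the two halves on different machines):

import socket
import threading

HOST, PORT = "127.0.0.1", 5050   # hypothetical address for the example

# Bind and listen first so the client below cannot connect too early.
listener = socket.create_server((HOST, PORT))

def serve_one():
    conn, _ = listener.accept()            # wait for one client
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())   # serve the request
    listener.close()

threading.Thread(target=serve_one, daemon=True).start()

# The client makes a service request and waits for the reply.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"hello server")
    print(cli.recv(1024).decode())   # prints: echo: hello server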
Posted by Ashish Agarwal at 11/18/2009 09:31:00 PM 0 comments
Labels: Architecture, Client, Client Server, Design, Server
Client Server Applications: The definition
What is Client Server Architecture ? What are some of the defining characteristics of Client Server Applications, especially since there has been a buzz for the past many years about web applications, or a mixed breed (using applications such as Adobe AIR)?
Client Server applications are literally defined by the names used: one software application, at the client end (or the user end), makes a service request to another software application that sits at the server (typically a machine with a much higher configuration). However, the separation between the client and the server is logical, since both of them could exist on the same machine. There is a process to separate the workload of the application between the server application and the client application. Client server applications are one of the central concepts behind network computing. Initially, the term was used to differentiate from the mainframe model or the Unix model, where the entire work was done at the server and the client was typically dumb, with no capability to do any processing.
Posted by Ashish Agarwal at 11/18/2009 12:15:00 AM 0 comments
Labels: Application, Client Server, Definition
Monday, November 16, 2009
Introduction to Web Engineering
The emergence of Web-based systems and applications is arguably the single most significant event in the history of computing. As WebApps grow in importance, a disciplined WebE approach adapted from software engineering principles, concepts, process, and methods has begun to evolve.
WebApps are different from other categories of computer software. They are network intensive, content driven, and continuously evolving. The immediacy that drives their development, the overriding need for security in their operation, and the demand for aesthetic as well as functional content delivery are additional differentiating factors. Like other types of software, WebApps can be assessed using a variety of quality criteria that include usability, functionality, reliability, efficiency, maintainability, security, availability, scalability, and time to market.
WebE can be described in three layers - process, methods, and tools/technology. The WebE process adopts the agile development philosophy that emphasizes a "lean" engineering approach that leads to the incremental delivery of the system to be built. The generic process framework - communication, planning, modeling, construction, and deployment - is applicable to WebE. These framework activities are refined into a set of WebE tasks that are adapted to the needs of each project. A set of umbrella activities similar to those applied during software engineering work - SQA, SCM, project management - apply to all WebE projects.
Posted by Sunflower at 11/16/2009 08:31:00 PM 0 comments
Labels: software engineering, Web Applications, Web Engineering, WebApp
Thursday, November 12, 2009
Requirements Management
Requirements management involves communication between the project team members and stakeholders, and adjustment to requirements changes throughout the course of the project. To prevent one class of requirements from overriding another, constant communication among members of the development team is critical. For example, in software development for internal applications, the business has such strong needs that it may ignore user requirements, or believe that in creating use cases, the user requirements are being taken care of. The purpose of requirements management is to assure the organization documents, verifies and meets the needs and expectations of its customers and internal or external stakeholders.
Requirements traceability is concerned with documenting the life of a requirement. It should be possible to trace back to the origin of each requirement and every change made to the requirement should therefore be documented in order to achieve traceability. Even the use of the requirement after the implemented features have been deployed and used should be traceable.
The purpose of the Requirements Traceability Matrix is to help ensure the object of the requirements conforms to the requirements by associating each requirement with the object via the traceability matrix.
A traceability matrix is used to verify that all stated and derived requirements are allocated to system components and other deliverables.
- Features Traceability Table : It shows how requirements relate to important customer observable system/product features.
- Source Traceability Table : It identifies the source of each requirement.
- Dependency Traceability Table : It indicates how requirements are related to one another.
- Subsystem Traceability Table : It categorizes requirements by the subsystem that they govern.
- Interface Traceability Table : It shows how requirements relate to both internal and external system interfaces.
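A traceability matrix can be kept as simple tabular data; the sketch below uses hypothetical requirement IDs and artifact names and performs the allocation check mentioned above.

# Hypothetical requirements and the deliverables they trace to.
requirements = ["REQ-001", "REQ-002", "REQ-003"]

trace = {
    "REQ-001": ["design/login.md", "test/test_login.py"],
    "REQ-002": ["design/report.md"],
    # REQ-003 intentionally unallocated, to show the gap being detected
}

unallocated = [r for r in requirements if not trace.get(r)]
print("Unallocated requirements:", unallocated)   # ['REQ-003']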
Posted by Sunflower at 11/12/2009 03:18:00 PM 0 comments
Labels: matrix, requirements Management, Traceability table
Requirements Engineering Tasks
Requirements Engineering :
* Provides a solid approach for addressing challenges in software project.
* Must be adapted to the needs of the process, the project, the product, and the people doing the work.
* Begins during the communication activity and continues into the modeling activity.
* Helps software engineers to better understand the problem they will work to solve.
Requirements Engineering Tasks :
- Inception
* A task that defines the scope and nature of the problem to be solved.
* Software engineers ask context-free questions.
* Intent is to establish a basic understanding of the problem, the people who want a solution, nature of the solution that is desired and the effectiveness of preliminary communication and collaboration between the customer and the developer.
- Elicitation
Ask the customer, the users and others about the objectives of the system, what is to be accomplished, how the system or product fits into the needs of the business and, finally, how the system or product is to be used on a day-to-day basis.
Why Elicitation is Difficult?
* Problems of scope.
* Problems of understanding.
* Problems of volatility.
- Elaboration
* Basic requirements (obtained from the customer during inception and elicitation) are refined and modified.
* Focuses on developing a refined technical model of software functions, features and constraints.
* Driven by the creation and refinement of user scenarios.
* End-result: analysis model that defines the informational, functional and behavioral domain of the problem.
- Negotiation
* There’s no winner and loser in an effective negotiation.
* Customers usually ask for more than can be achieved.
* Different stakeholders may propose conflicting requirements.
* The requirements engineer must reconcile these conflicts through a process of negotiation.
- Specification
* It can be a written document, a set of graphical models, a formal mathematical model, a collection of usage scenarios, a prototype, or any combination of these.
* It is the final work product produced by the requirements engineer.
* It serves as the foundation for subsequent software engineering activities.
* It describes the function and performance of a computer-based system and the constraints that will govern its development.
- Validation
* Work products produced are assessed for quality in this step.
* A task which examines the specification to ensure that all software requirements have been stated unambiguously.
* That inconsistencies, omissions and errors have been detected and corrected.
* That work products conform to the standards established for the process, project and the product.
Posted by Sunflower at 11/12/2009 02:52:00 PM 0 comments
Labels: Elaboration, Elicitation, Inception, Negotiation, Requirements Engineering, software engineering, Specification, Tasks, Validation
Wednesday, November 11, 2009
Types Of Requirements
Requirements are categorized in several ways. The following are common categorizations of requirements that relate to technical management.
- Customer Requirements
Statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering, with special emphasis on the operator as the key customer.
* Operational distribution or deployment: Where will the system be used?
* Mission profile or scenario: How will the system accomplish its mission objective?
* Performance and related parameters: What are the critical system parameters to accomplish the mission?
* Utilization environments: How are the various system components to be used?
* Effectiveness requirements: How effective or efficient must the system be in performing its mission?
* Operational life cycle: How long will the system be in use by the user?
* Environment: In what environments will the system be expected to operate effectively?
- Functional Requirements
Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. The results of functional requirements analysis will be used as the top-level functions for functional analysis.
- Non-functional Requirements
Non-functional requirements are requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviors.
- Performance Requirements
The extent to which a mission or function must be executed; generally measured in terms of quantity, quality, coverage, timeliness or readiness. During requirements analysis, performance (how well does it have to be done) requirements will be interactively developed across all identified functions based on system life cycle factors; and characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements.
- Design Requirements
The “build to,” “code to,” and “buy to” requirements for products and “how to execute” requirements for processes expressed in technical data packages and technical manuals.
- Derived Requirements
Requirements that are implied or transformed from a higher-level requirement. For example, a requirement for long range or high speed may result in a design requirement for low weight.
- Allocated Requirements
A requirement that is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: A 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items.
Posted by Sunflower at 11/11/2009 07:08:00 PM 0 comments
Labels: Requirements, software engineering, Types
Overview of Requirements Engineering
Software requirements engineering is the process of determining what is to be produced in a software system. In developing a complex software system, the requirements engineering process has the widely recognized goal of determining the needs for, and the intended external behavior, of a system design. This process is regarded as one of the most important parts of building a software system.
Requirements engineering is an important aspect of any software project, and is a general term used to encompass all the activities related to requirements. The five specific steps in software requirements engineering are:
* Requirements inception
* Requirements elicitation
* Requirements analysis
* Requirements specification
* Requirements validation
Although they seem to be separate tasks, these processes cannot be strictly separated and performed sequentially. All of them are performed repeatedly, because the needs are often impossible to realize fully until after a system is built. Even when requirements are stated initially, it is likely they will change at least once during development, and it is very likely they will change immediately after development.
Posted by Sunflower at 11/11/2009 06:10:00 PM 0 comments
Labels: Requirements Engineering, Software, software engineering, Steps
Tuesday, October 27, 2009
Overview to System Simulation Tools
System simulation tools provide the software engineer with the ability to predict the behavior of a real-time system prior to the time that it is built. In addition, these tools enable the software engineer to develop mock-ups of the real-time system, allowing the customer to gain insight into the function, operation, and response prior to actual implementation.
Tools in this category allow a team to define the elements of a computer-based system and then execute a variety of simulations to better understand the operating characteristics and overall performance of the system. Two broad categories of system simulation tools exist :
- General purpose tools that can model virtually any computer-based system.
- Special purpose tools that are designed to address a specific application domain.
REPRESENTATIVE TOOLS :
- CSIM : Developed by Lockheed Martin Advanced Technology Labs, it is a general purpose discrete-event simulator for block diagram-oriented systems.
- Simics : Developed by Virtutech, it is a system simulation platform that can model and analyze both hardware and software-based systems.
- SLX : Developed by Wolverine Software, it provides general purpose building blocks for modeling the performance of a wide variety of systems.
Posted by Sunflower at 10/27/2009 07:50:00 PM 0 comments
Labels: software engineering, System Simulation, Systems, Tools
Introduction to System Simulation
Systems simulation is a set of techniques for using computers to imitate, or simulate, the operations of various kinds of real-world facilities or processes. The computer is used to generate a numerical model of reality for the purpose of describing complex interactions among the components of a system. The complexity of the system arises from the stochastic (probabilistic) nature of the events, from the rules for the interactions of the elements, and from the difficulty of perceiving the behavior of the system as a whole with the passing of time.
When to use simulations?
Simulation suits systems that change with time, such as a gas station where cars come and go (called dynamic systems), and that involve randomness. Modeling complex dynamic systems theoretically requires too many simplifications, and the emerging models may therefore not be valid. Simulation does not require that many simplifying assumptions, which often makes it the only practical tool even in the absence of randomness.
System terminology:
- State: A variable characterizing an attribute in the system.
- Event: An occurrence at a point in time which may change the state of the system.
- Entity: An object that passes through the system.
- Queue: It is a task list.
- Creating: Creating is causing an arrival of a new entity to the system at some point in time.
- Scheduling: Scheduling is the act of assigning a new future event to an existing entity.
- Random variable: A random variable is a quantity that is uncertain.
- Random variate: A random variate is an artificially generated random variable.
- Distribution: A distribution is the mathematical law which governs the probabilistic features of a random variable.
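The terminology above maps naturally onto a small discrete-event simulation. The following minimal Python sketch of a single-server queue uses hypothetical arrival and service rates; events are scheduled on a future-event list, and the state is the queue length and server status.

import heapq
import random

random.seed(1)
events = []                                   # future event list
heapq.heappush(events, (0.0, "arrival"))      # creating: the first entity arrives
queue_len, busy, served, now = 0, False, 0, 0.0

def schedule(time, kind):
    heapq.heappush(events, (time, kind))      # scheduling a future event

while events and now < 100.0:                 # simulate 100 time units
    now, kind = heapq.heappop(events)         # next event in time order
    if kind == "arrival":
        schedule(now + random.expovariate(0.5), "arrival")    # next arrival (random variate)
        if busy:
            queue_len += 1                    # state change: entity joins the queue
        else:
            busy = True                       # idle server starts serving immediately
            schedule(now + random.expovariate(1.0), "departure")
    else:                                     # departure event
        served += 1
        if queue_len > 0:
            queue_len -= 1                    # next waiting entity enters service
            schedule(now + random.expovariate(1.0), "departure")
        else:
            busy = False                      # server becomes idle

print(f"Entities served in 100 time units: {served}")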
Posted by Sunflower at 10/27/2009 07:37:00 PM 0 comments
Labels: Process, Simulation, software engineering, System Simulation, Systems
Sunday, October 25, 2009
Introduction to System Modeling
A model is a simplified representation of a system at some particular point in time or space, intended to promote understanding of the real system. A system is understood to be an entity which maintains its existence through the interaction of its parts. Whether a model is a good model or not depends on the extent to which it promotes understanding. Since all models are simplifications of reality, there is always a trade-off as to what level of detail is included in the model. If too little detail is included, one runs the risk of missing relevant interactions, and the resultant model does not promote understanding. If too much detail is included, the model may become overly complicated and actually preclude the development of understanding.
System modeling shows how the system should be working. To construct a system model, the system engineer should consider the following factors :
- Assumptions : They reduce the number of possible permutations and variations, thus enabling a model to reflect the problem in a reasonable manner.
- Simplifications : They enable the model to be created in a timely manner.
- Limitations : They help to bound the system.
- Constraints : They guide the manner in which the model is created and the approach taken when the model is implemented.
- Preferences : They indicate the preferred architecture for all data, functions, and technology. The preferred solution sometimes comes into conflict with other restraining factors; yet, customer satisfaction is often predicated on the degree to which the preferred approach is realized.
Posted by Sunflower at 10/25/2009 11:31:00 AM 0 comments
Labels: Factors, Models, System Modeling, Systems
Tuesday, October 20, 2009
Planning Practices - Type of Software Engineering Practice
The planning activity encompasses a set of management and technical practices that enable the software team to define a road map as it travels toward its strategic goals and tactical objectives. There are many different planning philosophies. Regardless of the rigor with which planning is conducted, the following principles always apply :
- Understand the scope of the project.
- Involve the customer in the planning activity.
- Recognize that planning is iterative.
- Estimate based on what you know.
- Consider risk as you define the plan.
- Be realistic.
- Adjust granularity as you define the plan.
- Define how you intend to ensure quality.
- Describe how you intend to accommodate change.
- Track the plan frequently and make adjustments as required.
What questions must be asked and answered to develop a realistic project plan ?
- Why is the system being developed ?
- What will be done ?
- When will it be accomplished ?
- Who is responsible for a function ?
- Where are they organizationally located ?
- How will the job be done technically and managerially ?
- How much of each resource is needed ?
Posted by Sunflower at 10/20/2009 09:54:00 PM 0 comments
Labels: Planning Practices, software engineering, Software Process
Communication Practices - Type of Software Engineering Practice
Before customer requirements can be analyzed, modeled, or specified they must be gathered through a communication activity. Effective communication is among the most challenging activities that confront a software engineer. However, many of the principles apply equally to all forms of communication that occur within a software project.
Software engineers communicate with many stakeholders, but customers and end users have the most significant impact on the technical work that follows. In some cases the customer and the end user are one and the same, but for many projects, the customer and the end user are different people, working for different managers in different business organizations.
- Listen : Try to focus on the speaker's words, rather than formulating your response to those words.
- Prepare before you communicate.
- Someone should facilitate the activity.
- Face-to-face communication is best.
- Take notes and document decisions.
- Strive for collaboration.
- Stay focused, modularize your discussion.
- If something is unclear, draw a picture.
- Once you agree to something, move on.
- If you can't agree to something, move on.
- If a feature or function is unclear and cannot be clarified at the moment, move on.
- Negotiation is not a contest or a game. It works best when both parties win.
Posted by Sunflower at 10/20/2009 09:10:00 PM 0 comments
Labels: Communication Practices, Practices, software engineering
Introduction to Software Engineering Practice Cont...
Construction incorporates a coding and testing cycle in which source code for a component is generated and tested to uncover errors. Integration combines individual components and involves a series of tests that focus on overall function and local interfacing issues.
Coding principles define generic actions that should occur before code is written, while it is being created, and after it has been completed. Although there are many testing principles, only one is dominant : testing is a process of executing a program with the intent of finding an error.
During evolutionary software development, deployment happens for each software increment that is presented to the customer. Key principles for delivery consider managing customer expectations and providing the customer with appropriate support information for the software. Support demands advance preparations. Feedback allows the customer to suggest changes that have business value and provide the developer with input for the next iterative software engineering cycle.
Posted by Sunflower at 10/20/2009 08:48:00 PM 0 comments
Labels: introduction, software engineering, Software Engineering Practice
Introduction to Software Engineering Practice
Software engineering practice encompasses concepts, principles, methods, and tools that software engineers apply throughout the software process. Every software engineering project is different, yet a set of generic principles and tasks apply to each process framework activity regardless of the project or the product.
A set of technical and management essentials are necessary if good software engineering practice is to be conducted. Technical essentials include the need to understand requirements and prototype areas of uncertainty, and the need to explicitly define software architecture and plan component integration. Management essentials include the need to define priorities and define a realistic schedule that reflects them, the need to actively manage risk, and the need to define appropriate project control measures for quality and change.
Customer communication principles focus on the need to reduce noise and improve bandwidth as the conversation between developer and customer progresses. Both parties must collaborate for the best communication to occur.
Planning principles all focus on guidelines for constructing the best map for the journey to a completed system or product. The plan may be designed solely for a single software increment, or it may be defined for the entire project. Regardless, it must address what will be done, who will do it, and when the work will be completed.
Modeling encompasses both analysis and design, describing representations of the software that progressively become more detailed. The intent of the models is to solidify understanding of the work to be done and to provide technical guidance to those who will implement the software.
Posted by Sunflower at 10/20/2009 07:54:00 PM 0 comments
Labels: introduction, software engineering, Software Engineering Practice
Thursday, October 15, 2009
Introduction to Feature Driven Development (FDD) - Type of Agile Software Development
Feature Driven Development (FDD) was originally developed and articulated by Jeff De Luca, with contributions by M.A. Rajashima, Lim Bak Wee, Paul Szego, Jon Kern and Stephen Palmer. FDD is a model-driven, short-iteration process. It begins with establishing an overall model shape. Then it continues with a series of two-week "design by feature, build by feature" iterations. The features are small, "useful in the eyes of the client" results. FDD designs the rest of the development process around feature delivery using the following eight practices:
1. Domain Object Modeling
2. Developing by Feature
3. Component/Class Ownership
4. Feature Teams
5. Inspections
6. Configuration Management
7. Regular Builds
8. Visibility of progress and results
Feature Driven Development asserts that:
- A system for building systems is necessary in order to scale to larger projects.
- A simple, but well-defined process will work best.
- Process steps should be logical and their worth immediately obvious to each team member.
- "Process pride" can keep the real work from happening.
- Good processes move to the background so team members can focus on results.
- Short, iterative, feature-driven life cycles are best.
FDD recommends specific programmer practices such as "Regular Builds" and "Component/Class Ownership". FDD's proponents claim that it scales more straightforwardly than other approaches, and is better suited to larger teams. Unlike other Agile approaches, FDD describes specific, very short phases of work which are to be accomplished separately per feature. These include Domain Walkthrough, Design, Design Inspection, Code, Code Inspection, and Promote to Build.
Posted by Sunflower at 10/15/2009 08:31:00 PM 0 comments
Labels: Agile Methodology, Agile Software Development, FDD, Feature Driven Development, Process, software engineering
Quick Overview of Crystal Methods - Type of Agile Software Development
The Crystal methodology is one of the most lightweight, adaptable approaches to software development. Crystal actually comprises a family of methodologies (Crystal Clear, Crystal Yellow, Crystal Orange, etc.) whose unique characteristics are driven by several factors such as team size, system criticality, and project priorities. The Crystal family addresses the realization that each project may require a slightly tailored set of policies, practices, and processes in order to meet the project's unique characteristics.
The use of the word "crystal" refers to the various facets of a gemstone — each a different face on an underlying core. The underlying core represents values and principles, while each facet represents a specific set of elements such as techniques, roles, tools, and standards. Cockburn also differentiates between methodology, techniques, and policies. A methodology is a set of elements (practices, tools); techniques are skill areas such as developing use cases; and policies dictate organizational "musts".
Posted by Sunflower at 10/15/2009 08:23:00 PM 0 comments
Labels: Agile Methodology, Agile Software Development, Crystal method, Process, software engineering
Introduction To Scrum - Type of Agile Software Development
Scrum is an agile method for project management developed by Ken Schwaber. Its goal is to dramatically improve productivity in teams previously paralyzed by heavier, process-laden methodologies. It is intended for use in managing software development projects and as a wrapper to other software development methodologies such as Extreme Programming. Scrum is a lightweight management framework with broad applicability for managing and controlling iterative and incremental projects of all types. With Scrum, projects progress via a series of iterations called sprints. Each sprint is typically 2-4 weeks long. Scrum is ideally suited for projects with rapidly changing or highly emergent requirements.
A scrum team is typically made up of between five and nine people, but Scrum projects can easily scale into the hundreds. The team does not include any of the traditional software engineering roles such as programmer, designer, tester, or architect.
- The product owner is the project’s key stakeholder and represents users, customers and others in the process.
- The ScrumMaster is responsible for making sure the team is as productive as possible.
- The product backlog is a prioritized features list containing every desired feature or change to the product.
- At the start of each sprint, a sprint planning meeting is held during which the product owner prioritizes the product backlog, and the scrum team selects the work they can complete during the coming sprint. That work is then moved from the product backlog to the sprint backlog, which is the list of tasks needed to complete the product backlog items the team has committed to complete in the sprint.
- Each day during the sprint, a brief meeting called the daily scrum is conducted. This meeting helps set the context for each day’s work and helps the team stay on track.
- At the end of each sprint, the team demonstrates the completed functionality at a sprint review meeting, during which, the team shows what they accomplished during the sprint.
Scrum enables the creation of self-organizing teams by encouraging verbal communication across all team members and across all disciplines that are involved in the project. A key principle of scrum is its recognition that fundamentally empirical challenges cannot be addressed successfully in a traditional "process control" manner.
Posted by Sunflower at 10/15/2009 05:51:00 PM 0 comments
Labels: Agile Methodology, Agile Software Development, Cleanroom Software engineering, Process, Scrum