

Showing posts with label Standards. Show all posts

Saturday, October 12, 2013

What is WiMax technology?

Worldwide Interoperability for Microwave Access, or WiMAX, is a standard developed for wireless communications that is designed to deliver data rates of 30-40 Mbps. A 2011 update to the technology raised this to around 1 Gbps for fixed stations. 
- The WiMAX Forum is responsible for naming the technology WiMAX. 
- This forum was formed in 2001 to promote the interoperability and conformity of the standard. 
- The forum defines WiMAX as a standards-based technology enabling last-mile wireless broadband delivery as an alternative to DSL and cable. 
- Interoperable implementations of IEEE 802.16 are referred to as WiMAX. 
- The WiMAX Forum has ratified this family of standards. 
- By virtue of the certification provided by this forum, vendors are able to sell fixed and mobile products as WiMAX Certified. 
- This is done to ensure a level of interoperability with other products certified for the same profile. 
- 'Fixed WiMAX' is the name given to the original IEEE 802.16 standard.
- WiMAX is sometimes referred to as 'Wi-Fi on steroids'. 

It has a number of applications, such as:
Ø  Broadband connections
Ø  Cellular backhaul
Ø  Hotspots and so on.

- This technology shares some similarity with Wi-Fi; however, it is capable of transmitting data over much greater distances.
It is because of its range and bandwidth that WiMAX is suitable for the following applications:
Ø  Providing telecommunications services such as IPTV and VoIP.
Ø  Providing mobile broadband connectivity that is portable across cities and countries and can be accessed via different kinds of devices.
Ø  Providing an alternative to DSL and cable in the form of wireless last-mile broadband access.
Ø  Acting as a source of internet connectivity.
Ø  Metering and smart grids.

- This technology can be used at home to provide internet access. 
- This has also increased competition in the market. 
- WiMAX is also economically feasible. 
- Mobile WiMAX has been used as a replacement for cellular phone technologies such as CDMA and GSM. 
- The technology has also been used as an overlay to increase capacity. 
Fixed WiMAX is now used as a wireless backhaul technology for 2G, 3G and 4G networks in almost all nations, developed and developing alike. 
- In some parts of North America, this backhaul is provided through a number of copper wireline connections. 
- On the other hand, remote cellular operations are backhauled via satellite. 
- In other cases, microwave links are used. 
- The bandwidth requirements of WiMAX demand more substantial backhaul than legacy cellular applications. 
- In some cases, operators have aggregated sites using wireless technology. 
- The traffic is then introduced to the fiber networks as convenient. 
Technologies that provide triple-play services are directly compatible with WiMAX. 
- These services might include multicasting and quality of service. 
- WiMAX has been widely used to assist communications. 
- The Intel Corporation has donated WiMAX hardware to assist the FCC (Federal Communications Commission), FEMA and other agencies.
- Subscriber stations, or SS, are the devices used to connect to a WiMAX network. 
- These devices might be portable, such as the following:
      > Handsets and smart phones
      > PC peripherals such as USB dongles, PC cards and so on. 
      > Embedded devices in notebooks.



Monday, October 7, 2013

What is Wifi technology? How does it work?

- Wifi has emerged as a very popular technology. 
- This technology enables electronic devices to exchange information and share an internet connection without using any cables or wires. 
- It is a wireless technology. 
- This technology works with the help of the radio waves. 
- Wi-Fi is defined by the Wi-Fi Alliance as any WLAN (wireless local area network) product based on the IEEE 802.11 standards. 
Most WLANs are based on these standards, which is why the name Wi-Fi has become synonymous with the term WLAN. 
- The Wi-Fi Certified trademark may be used only by those products which have completed the Wi-Fi Alliance interoperability certification. 
- A number of devices now use wifi such as the PCs, smart phones, video game consoles, digital cameras, digital audio players, tablet computers and so on. 
- All these devices can connect to the network and access internet by means of a wireless network access point. 
- Such an access point is more commonly known as a ‘hotspot’. 
- The range of an access point is up to about 20 m indoors. 
- Outdoors, the range is much greater.  
- An access point can be installed in a single room or in an area of many square miles. 
- This can be achieved by using a number of overlapping access points. 
However, Wi-Fi is less secure than wired connections such as Ethernet.
- This is because an intruder does not need a physical connection. 
- Web pages using SSL are protected, but intruders can easily access non-encrypted files sent over the network. 
- It is because of this that various encryption technologies have been adopted for Wi-Fi. 
- The earlier WEP encryption was weak and so was easy to break.
- Later came higher-quality protocols such as WPA and WPA2. 
- WPS, or Wi-Fi Protected Setup, was an optional feature added in 2007. 
- This option had a very serious flaw: it allowed an attacker to recover the router's password.
- The certification and test plan have since been updated by the Wi-Fi Alliance to ensure that all newly certified devices resist this attack.
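The severity of that flaw comes down to simple arithmetic: WPS confirms the two halves of its 8-digit PIN separately, and the last digit is a checksum, so an attacker needs far fewer guesses than a naive brute force would suggest. A rough sketch of the numbers (these are the commonly cited figures, not taken from this post):

```python
# Rough arithmetic behind the WPS PIN flaw (commonly cited figures).
full_space = 10 ** 8       # naive view: 8 independent decimal digits
first_half = 10 ** 4       # first 4 digits are confirmed on their own
second_half = 10 ** 3      # last 4 digits, but digit 8 is a checksum
worst_case = first_half + second_half

print(f"naive search space : {full_space:,}")
print(f"actual worst case  : {worst_case:,}")
print(f"reduction factor   : {full_space // worst_case:,}x")
```

This is why an online brute-force attack against WPS was practical, while guessing a full WPA2 passphrase was not.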
- For connecting to a wifi LAN, a wireless network interface controller has to be incorporated in to the computer system.
- This combination of the interface controller and the computer is often called a station. 
- The same radio frequency communication channel is shared by all the stations.
- Also, all the stations receive any transmission on this channel. 
- Also, the sender is not informed whether the data reached the recipient, which is why Wi-Fi is termed a 'best-effort delivery mechanism'. 
- For transmitting the data packets, a carrier wave is used. 
- These data packets are commonly known as the ‘Ethernet frames’. 
Each station regularly tunes in to the radio frequency channel for picking up the transmissions that are available. 
- A device that is wifi enabled can connect to the network if it lies in the range of the wireless network. 
- One condition is that the network should have been configured for permitting such a connection. 
- To provide coverage in a large area, multiple hotspots are required. 
- For example, the wireless mesh networks in London. 
- Through wifi, services can be provided in independent businesses, private homes, public spaces, high street chains and so on. 
- These hotspots have been set up either commercially or free of charge. 
- Free hotspots are provided at hotels, restaurants and airports. 
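The shared-channel, best-effort behavior described above can be sketched in a few lines of Python. This is a toy model, not real 802.11: every frame placed on the channel reaches every other attached station, and the sender gets no delivery confirmation.

```python
# Toy model of Wi-Fi stations sharing one radio channel (not real 802.11):
# every transmission is seen by all other stations, and no acknowledgement
# is returned to the sender (best-effort delivery).

class Channel:
    def __init__(self):
        self.stations = []

    def attach(self, station):
        self.stations.append(station)

    def transmit(self, sender, frame):
        # All attached stations share the frequency, so all of them
        # receive the frame; the sender learns nothing about delivery.
        for station in self.stations:
            if station is not sender:
                station.receive(frame)

class Station:
    def __init__(self, name, channel):
        self.name = name
        self.inbox = []
        self.channel = channel
        channel.attach(self)

    def send(self, payload):
        self.channel.transmit(self, {"src": self.name, "data": payload})

    def receive(self, frame):
        self.inbox.append(frame)

ch = Channel()
a, b, c = Station("A", ch), Station("B", ch), Station("C", ch)
a.send("hello")
print([s.name for s in (b, c) if s.inbox])   # both B and C heard the frame
```

Real stations filter frames by destination address, but the radio medium itself is exactly this kind of broadcast.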


Saturday, June 29, 2013

What are the reasons for using layered protocols?

Layered protocols are typically used in the field of networking technology. There are two main reasons for using the layered protocols and these are:
  1. Specialization and
  2. Abstraction
- A protocol creates a neutral standard that rival companies can use to create compatible programs. 
- Many protocols are required in the field; they must be organized properly and directed to the specialists who can work on them. 
- A software house can create a network program using layered protocols if it knows the guidelines of just one layer. 
- The services of the lower-level protocols can be provided by other companies. 
This helps them specialize. 
- In abstraction, it is assumed that another protocol will provide the lower services. 
- A conceptual framework is provided by the layered protocol architecture that divides the complex task of information exchange into much simpler tasks between the hosts. 
- The responsibility for each of the protocols is narrowly defined. 
- Each protocol provides an interface to the protocol in the layer above it. 
- As a result, it hides the details of the lower protocol layers that underlie it. 
- The advantage of using layered protocols is that the same application, i.e. the user-level program, can be used over a number of diverse communication networks.
- For example, when you are connected to a dial up line or internet via LAN you can use the same browser. 
- For simplifying the networking designs, one of the most common techniques used is the protocol layering. 
- The networking designs are divided in to various functional layers and the protocols are assigned for carrying out the tasks of each layer. 
- It is quite common to keep data delivery and connection management in separate layers.  
Therefore, we have one protocol performing the data delivery tasks and a second one performing connection management. 
- The second one is layered up on the first one. 
- Since the connection management protocol is not concerned with the data delivery, it is also quite simple. 
- The OSI seven-layer model and the DoD model are among the most important layered protocols ever designed. 
- A fusion of both the models is represented by the modern internet. 
- Simple protocols are produced by the protocol layering with some well defined tasks. 
- These protocols can then be put together and used as a new whole. 
- As required by particular applications, individual protocols can be either replaced or removed. 
- Networking is a field involving programmers, electricians, mathematicians, designers and so on. 
- People from these various fields have little in common; it is because of layering that people with such varying skills can assume that the others are carrying out their duties. 
- This is what we call abstraction. 
- Via abstraction, an application programmer can follow the protocols at one level assuming that the network exists, and the electricians likewise make their assumptions and do their work. 
- One layer can provide services to the succeeding layer and can get services in return too. 
- Abstraction is thus the fundamental foundation for layering. 
- A stack has been used to represent networking protocols since the start of network engineering. 
- Without the stack, networking would be unmanageable as well as overwhelming. 
The stack represents the layers of specialization for the first protocols derived from TCP/IP.
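The idea of one protocol layered on another, each hiding its details behind a small interface, can be sketched as follows. The layer names and header formats here are invented for illustration; real stacks work the same way in spirit.

```python
# Toy sketch of protocol layering: each layer only knows its own header
# format and delegates the rest to the layer below (abstraction).
# Layer names and header strings are illustrative, not from a real stack.

class DeliveryLayer:
    """Bottom layer: pretends to move bytes between hosts."""
    def send(self, payload):
        return b"DLV|" + payload            # add this layer's header

    def receive(self, frame):
        assert frame.startswith(b"DLV|")
        return frame[len(b"DLV|"):]         # strip this layer's header

class ConnectionLayer:
    """Upper layer: manages a 'connection', delegates delivery downward."""
    def __init__(self, lower):
        self.lower = lower

    def send(self, conn_id, data):
        return self.lower.send(f"CONN{conn_id}|".encode() + data)

    def receive(self, frame):
        payload = self.lower.receive(frame)
        header, _, data = payload.partition(b"|")
        return header.decode(), data

stack = ConnectionLayer(DeliveryLayer())
wire = stack.send(7, b"hello")
print(wire)                    # b'DLV|CONN7|hello'
print(stack.receive(wire))     # ('CONN7', b'hello')
```

Note that the connection layer never looks at the delivery header, and vice versa: either layer could be swapped for a different implementation without touching the other, which is precisely the point of layering.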



Wednesday, June 26, 2013

What are advantages and disadvantages of having international standards for network protocols?

Without rules governing the exchange of data and the establishment of connections between networks, there would be chaos in the industry. These rules are called line protocols and are crucial for inter-networking and communication across networks. 
The mode and the speed of communication are controlled by what are called communications software packages. These network protocols are defined by international standards. However, having international standards for the protocols has both advantages and disadvantages. 

- A number of standard network protocols exist for carrying out functions such as routing, packetizing, addressing and so on. 
- All these protocols lay out a standard definition of how routing and addressing are to be done. 
- They also define specifications for the structure of the packets transferred between different hosts. 
- Some commonly used protocols are:
Ø  X.25
Ø  IPX/ SPX
Ø  TCP/ IP
Ø  OSI

- The OSI stands for open systems interconnection. 
- Early networking systems had a big problem: there was a lack of consistency between the protocols employed by different types of computers. 
- As a consequence of this problem, the international standards came in to the picture. 
- Thus, international standards were established for the various data transmission protocols. 
- For example, OSI is a set of standard protocols developed by the ISO (International Organization for Standardization).
- In this model, the functionalities of the network protocols have been divided in to seven layers of the communication rules or protocols. 
- The purpose of this model is to identify the functions being offered by the system. 
- The following three layers appear both in the host systems and in other units such as processors and control units:
  1. Physical layer
  2. Data link layer
  3. Network layer
- The remaining layers are found in the host systems only. 

Advantages of having International Standards
Ø  If all systems follow the same standard, it becomes easy for everyone to connect to everyone else. In other words, international standards provide easy interconnectivity.
Ø  If any standard is widely used, it gains economies of scale. For example, VLSI chips etc.
Ø  With all the systems using the same standard, the installation and the maintenance of the connections become quite easy.
Ø  Software designed by the developers from all over the world, won’t have any problem in interfacing with the host system and the other software. They will work well with a wide range of operating systems and hardware since both are using the same standard.


Disadvantages of having International Standards
Ø  Poor standards may result from premature standardization.
Ø Once the standard is adopted internationally, it’ll be difficult to make changes to it. It will be difficult to introduce new and better techniques in to it.
Ø  If a problem occurs, it has to be seen as an international problem.
Ø The manufacturers and companies will be bound to follow the same international standards and so they won’t be able to develop something better of their own.
Ø  Large multinational companies won't be able to pull everyone into using their proprietary protocols, and therefore cannot reap huge profits.


The TCP/IP protocol was developed to make communication between dissimilar systems easy. It is supported by a number of hardware and software vendors, on machines ranging from mainframes to microcomputers, and is used by many corporations, universities and government agencies. 


Friday, April 5, 2013

What are different types of operating system?


Developing an operating system is one of the most complicated activities, and a favorite of many computing hobbyists. A hobby OS's code is not directly derived from existing OSes, and entirely new concepts may be included in its development; alternatively, it may start by modeling an existing one. Whatever the case may be, the hobbyist is his own active developer. Application software may be developed specifically for an OS or hardware; therefore, when the application has to be ported to some OS that implements the required functionality differently, the application may need to be changed, adapted or maintained. 


Types of Operating System

There are many types of operating systems, which we shall discuss in this article.

1. Real time operating system
- It is a multi-tasking OS aimed at the execution of real-time applications. 
- These operating systems use scheduling algorithms written exclusively for them. 
- This is done so as to achieve deterministic behavior. 
- Their main objective is to give a quick and predictable response to events. 
- The design implemented is event-driven, time-sharing, or sometimes both. 
- An event-driven system switches among the different tasks according to the priorities assigned to them. 
- On the other hand, systems following the time-sharing methodology switch between tasks based upon clock interrupts.
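The priority-based switching of an event-driven system can be sketched with a priority queue: the scheduler always runs the most urgent pending task first. Task names and priority values here are invented for illustration.

```python
# Minimal sketch of event-driven, priority-based task selection.
# Task names and priorities are illustrative only.
import heapq

class EventDrivenScheduler:
    def __init__(self):
        self._queue = []          # min-heap: lower number = more urgent

    def post(self, priority, name):
        heapq.heappush(self._queue, (priority, name))

    def run(self):
        order = []
        while self._queue:
            _, name = heapq.heappop(self._queue)   # most urgent task first
            order.append(name)
        return order

sched = EventDrivenScheduler()
sched.post(3, "log-flush")
sched.post(1, "sensor-interrupt")   # most urgent event
sched.post(2, "ui-refresh")
order = sched.run()
print(order)   # ['sensor-interrupt', 'ui-refresh', 'log-flush']
```

A real RTOS additionally pre-empts a running task the moment a higher-priority event arrives; this sketch only shows the selection order.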

2. Multi–user and single user operating systems: 
- In multi-user operating systems, the same computer system can be accessed by multiple users at the same time. 
- Internet servers and time-sharing systems can be classified as multi-user systems, since they use the time-sharing principle to allow multiple users to access the system. 
- Other operating systems allow only one user, who may execute a number of programs simultaneously; these are called single-user operating systems.

3. Multi-tasking and single-tasking operating systems: 
- Operating systems that allow multiple programs to be executed simultaneously (as perceived on human time scales) are termed multi-tasking OSes.
- In a single-tasking OS, only one program can be run at a time. 
- Multi-tasking can be done in the following two ways:
Ø  Pre-emptive multi-tasking: The CPU time is sliced, and a slot is given to each of the programs to be executed. This kind of multi-tasking is supported by operating systems such as Linux, AmigaOS and Solaris.
Ø  Co-operative multi-tasking: Systems following this rely on each process to give time to the other processes in a pre-defined manner. This multi-tasking type was used by 16-bit versions of MS Windows and by Mac OS releases preceding OS X.
There are OSes, namely Win9x and Windows NT, that supported both of these.
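The co-operative variant depends on every task yielding control voluntarily; a task that never yields starves the rest. A minimal sketch, using Python generators as stand-ins for processes:

```python
# Toy illustration of co-operative multi-tasking: each task voluntarily
# yields the CPU, and a simple round-robin loop interleaves them.
# Task names and step counts are illustrative only.

log = []

def task(name, steps):
    for i in range(steps):
        log.append(f"{name}{i}")   # pretend to do one unit of work
        yield                      # voluntarily give up the CPU

def cooperative_run(tasks):
    tasks = list(tasks)
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)              # run until the task's next yield
            tasks.append(current)      # it yielded politely: re-queue it
        except StopIteration:
            pass                       # task finished: drop it

cooperative_run([task("A", 2), task("B", 3)])
print(log)   # ['A0', 'B0', 'A1', 'B1', 'B2'] -- interleaved execution
```

Under pre-emptive multi-tasking the scheduler would interrupt tasks on a timer instead of waiting for the `yield`, so a misbehaving task could not monopolize the CPU.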

4. Distributed operating system: 
- This kind of OS manages a group of computers that are independent of each other and makes them appear to be a single system.
- The development of networked computers that could link to and communicate with one another gave rise to this kind of OS.
- These computers in turn paved the way for distributed computing. 
- Distributed systems carry out computations on more than one computer. 
- Computers working in cooperation with one another together make up a distributed system.

5. Embedded Operating System: 
It is used in embedded computer systems such as in PDAs.


Thursday, April 4, 2013

What is an Operating System?


- A collection of small and large software components that helps in the management of the computer's hardware resources is called an operating system. 
- As the term suggests it operates or drives the system. 
- The basic common services required by the computer programs are offered by this OS only. 
- Without an OS, the application programs would fail to function. 
- Operating systems are of many types. 
- One such type is the time sharing OS that schedules the tasks to be done so that the processor time, printing, mass storage and so on resources could be utilized efficiently. 
- It is an intermediate thing between the hardware and the user. 
- It is through the OS that you are able to actually communicate with the computer hardware. 
- Functions such as memory allocation and basic input output operations are dependent totally on the OS. 
- Even though the hardware directly executes the application code, the code frequently calls into the OS, or the OS itself interrupts in between. 
- Any device containing a computer has an OS: video game consoles, mobile phones, web servers, supercomputers and so on.
- Some popular OS are:
Ø  Android
Ø  BSD
Ø  Linux
Ø  iOS
Ø  Microsoft windows
Ø  Windows phone
Ø  Mac OS X
Ø  IBM z/ OS
All of these OSes are related to UNIX, save Windows and z/OS.

- Types of Operating systems are:
  1. Real time OS
  2. Multi – user OS
  3. Multi – tasking OS
  4. Single  - tasking OS
  5. Distributed OS
  6. Embedded OS
- It was in the 1950s that basic operating system features such as parallel processing, interrupts and run-time libraries came into existence.
- Assembly language was used for writing the original UNIX OS. 
- There are many sub–categories in the Unix like family of the operating systems:
  1. System V
  2. BSD
  3. Linux and so on.
- A number of computer architectures are supported by these Unix-like systems. 
- They come into heavy use in the following fields:
  1. Servers in business
  2. Workstations in academic and engineering environments
- A few UNIX variants, such as BSD and Linux, are available for free and are quite popular. 
- The holder of the Unix trademark is the Open Group, which has certified four OSes as Unix so far. 
- Two of the original System V Unix descendants are IBM's AIX and HP's HP-UX, and they run only on hardware provided by their manufacturers. 
Opposite to these is Sun Microsystems' Solaris OS, which can be used on different hardware types (including SPARC and x86 servers and PCs). 
- The POSIX standard was established to provide for the interoperability of Unix. 
- The Berkeley Software Distribution (BSD) family is a Unix sub-group. 
- It includes the following:
  1. FreeBSD
  2. NetBSD
  3. OpenBSD
- The major use of all of these is in web servers. 
- Furthermore, they are also capable of functioning as PC OSes. 
- BSD has made a great contribution to the existence of the internet. 
- Most of the internet protocols were refined and implemented in BSD. 


Wednesday, March 6, 2013

What is meant by Conditional Access System?


- A conditional access system, or CAS, is a system developed as a means of protecting content by setting certain criteria that must be met before access to it is granted. 
- The concept of conditional access is most closely associated with digital television systems such as satellite television. 
- The standards for the conditional access system have been defined under the following specification documents of the digital video broadcasting standard (DVB):
  1. DVB – CA (conditional access)
  2. DVB – CSA (common scrambling algorithm)
  3. DVB – CI (common interface)
- All of these standards together define a method for obfuscating a digital television stream, providing access only to those who possess valid decryption smart cards. 
- All of these specifications are available on the standards page of the DVB web site.
- Conditional access is the result of a combination of scrambling and encryption technologies. 
- First, the data stream is scrambled with a 48-bit secret key usually called the control word. 
- The control word is generated in such a pattern that successive words cannot be predicted. 
- The DVB recommends that a physical process be used for this. 
- For a person to unscramble the data stream, he or she must know the current control word. 
- To protect the control word during the course of transmission, encryption is used. 
- The control word will be decrypted by the conditional access system only when there is authorization. 
- This authorization is given in the form of an EMM, or entitlement management message. 
- EMMs are specific to each subscriber. 
- Different ECMs (entitlement control messages) can also transmit the control word at once, allowing many conditional access systems to be used simultaneously. 
- DVB SimulCrypt allows multiplex operators to cooperate while at the same time saving bandwidth. 
- Channels such as those on the Hot Bird satellites and CNN International make use of several CASs in parallel. 
- The decryption data is read from the cards and updated if required through one of the following:
  1. CAM or conditional access module
  2. PC card – format card reader
  3. Built – in ISO/ IEC 7816 card reader
- Conditional access systems include two major types, namely analog systems and digital systems, categorized as shown below:
  1. Analog systems:
Ø  Nagravision
Ø  Videocrypt
Ø  Videocipher
Ø  Eurocrypt
  2. Digital systems:
Ø  Abel Quintic
Ø  ABV (alliance broadcast vision)
Ø  Accessgate

- These systems are a key component of all digital TV operations.
- Their purpose is to secure the operators' investments through the encryption of the signals. 
- The conditional access systems also ensure that customers pay in order to watch TV. 
- But TV operators usually have limited knowledge of conditional access systems. 
- Encryption is involved throughout the conditional access system. 
- Operators have analyzers for finding the cause of common signal problems. 
- But they lack tools that can debug the CASs. 
- Most conditional access system monitoring tools follow the ETSI TR 101 290 standard (a DVB-promoted standard), even though the CAS is not fully covered by this specification. 
- Such analyzing tools let operators know about conditional access system problems before the customers notice them. 
- But in practice it is not possible to take care of all the problems that may affect the subscriber. 
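The two-level scheme described in this post (scramble the stream with a control word, then protect the control word itself) can be sketched as follows. XOR stands in for both the DVB scrambling algorithm and the real encryption, and all key sizes and names are illustrative only.

```python
# Toy sketch of the two-level conditional access scheme: XOR stands in
# for both the DVB common scrambling algorithm and the real encryption.
import secrets

def xor_bytes(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. Scramble the stream with a 48-bit control word.
control_word = secrets.token_bytes(6)          # 48 bits
stream = b"live TV payload"
scrambled = xor_bytes(stream, control_word)

# 2. Protect the control word: encrypt it under the subscriber's key.
#    Authorization (the EMM) is modelled simply as possessing that key.
subscriber_key = secrets.token_bytes(6)
ecm = xor_bytes(control_word, subscriber_key)  # what goes over the air

# 3. An authorized receiver recovers the control word and descrambles.
recovered_cw = xor_bytes(ecm, subscriber_key)
clear = xor_bytes(scrambled, recovered_cw)
print(clear)   # b'live TV payload'

# An unauthorized receiver with a random key recovers a wrong control
# word, so descrambling yields garbage (with overwhelming probability).
wrong_cw = xor_bytes(ecm, secrets.token_bytes(6))
print(xor_bytes(scrambled, wrong_cw) != stream)
```

The real system also rotates the control word every few seconds, which is why the unpredictability of successive control words mentioned above matters so much.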

