
Tuesday, April 23, 2013

What are Throughput, Turnaround Time, Waiting Time and Response Time?


In this article, we discuss four important terms that we often come across while dealing with processes. These four factors are:
1. Throughput
2. Turnaround Time
3. Waiting Time
4. Response Time

What is Throughput?

- In communication networks such as packet radio and Ethernet, throughput refers to the rate of successful delivery of data over the channel.
- The data might be delivered via either a logical link or a physical link, depending on the type of communication being used.
- This throughput is measured in bits per second (bps) or in data packets per slot.
- Another term common in network performance is the aggregate throughput, or system throughput.
- This equals the sum of the data rates at which data is delivered to every terminal in the network.
- In computer systems, throughput means the number of tasks the CPU completes successfully in a specific period of time.
- Queuing theory is used for the mathematical analysis of throughput.
- Throughput is often used synonymously with digital bandwidth consumption.
- Another related term is the maximum throughput, which corresponds to the digital bandwidth capacity.
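To make the two senses of the term concrete, here is a minimal sketch in Python, using hypothetical numbers, of CPU throughput as tasks completed per unit time and of aggregate (system) throughput as the sum of per-terminal delivery rates:

```python
# Minimal sketch (hypothetical values): throughput as completed work per unit time.

def cpu_throughput(tasks_completed, elapsed_seconds):
    """Tasks successfully completed per second."""
    return tasks_completed / elapsed_seconds

def aggregate_throughput(per_terminal_bps):
    """System (aggregate) throughput: sum of the delivery rates to every terminal."""
    return sum(per_terminal_bps)

if __name__ == "__main__":
    print(cpu_throughput(tasks_completed=120, elapsed_seconds=60))   # 2.0 tasks per second
    print(aggregate_throughput([1_000_000, 2_500_000, 500_000]))     # 4,000,000 bps
```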

What is Turnaround Time?

- In computer systems, the total time taken from the submission of a task or thread for execution to its completion is referred to as the turnaround time.
- The turnaround time varies depending on the programming language used and on how the software was developed.
- It is the whole amount of time taken to deliver the desired output to the end user after the task is submitted.
- It is counted among the metrics used for evaluating the scheduling algorithms of operating systems.
- In batch systems, the turnaround time is higher because of the time spent forming the batches, executing them, and returning the output.

What is Waiting Time?

 
- Waiting time is the duration between the request for an action and the moment it starts executing.
- Waiting time depends upon the speed and make of the CPU and the architecture it uses.
- If the processor supports a pipelined architecture, the process is said to be waiting in the pipe.
- When the current task on the processor is completed, the waiting task is passed on to the CPU for execution.
- When the CPU starts executing this task, the waiting period is said to be over.
- The status of a task that is waiting is set to 'waiting'. From the waiting state it changes to the active (running) state, and the task then runs to completion.
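As a concrete illustration of turnaround and waiting time, here is a small sketch that assumes non-preemptive FCFS (first-come, first-served) scheduling and hypothetical burst times; other scheduling policies would give different numbers, but the definitions (turnaround = completion - arrival, waiting = turnaround - burst) stay the same:

```python
# Minimal sketch, assuming non-preemptive FCFS scheduling and hypothetical burst times.
# turnaround = completion time - arrival time; waiting = turnaround - CPU burst.

def fcfs_metrics(processes):
    """processes: list of (name, arrival_time, burst_time), already sorted by arrival."""
    clock = 0
    results = []
    for name, arrival, burst in processes:
        start = max(clock, arrival)          # CPU may sit idle until the process arrives
        completion = start + burst
        turnaround = completion - arrival
        waiting = turnaround - burst
        results.append((name, turnaround, waiting))
        clock = completion
    return results

if __name__ == "__main__":
    for name, tat, wt in fcfs_metrics([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]):
        print(f"{name}: turnaround={tat}, waiting={wt}")
```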

What is Response Time?

 
- The time taken by a computer system or functional unit to react or respond to the input supplied is called the response time.
- In data processing, the response time perceived by the user is the interval between the operator entering a request at a terminal and the instant at which the first character of the response appears.
- In data systems, the response time can be defined as the time from the receipt of the end of transmission (EOT) of a message inquiry to the start of the transmission of the response to that inquiry.
- Response time is an important concept in real-time systems, where it is the time that elapses from the dispatch of a request until its completion.
- However, response time should not be confused with the WCET (worst-case execution time).
- The WCET is the maximum time the execution of the task can take without any interference.
- Response time also differs from the deadline.
- The deadline is the time until which the output remains valid.


Wednesday, April 17, 2013

What are Real-time operating systems?


- An RTOS, or real-time operating system, is developed with the intention of serving application requests as they occur, in real time.
- This type of operating system is capable of processing data as and when it comes into the system.
- It does this without introducing buffering delays.
- The time requirements are on the order of tenths of a second or an even smaller scale.
- A key characteristic of a real-time operating system is that the amount of time it takes to accept and process a given task remains consistent.
- The variability is so small that it can be ignored entirely.

Real-time operating systems are of two types, as stated below:
  1. Soft real-time operating system: It produces more jitter.
  2. Hard real-time operating system: It produces less jitter when compared to the soft type.
- Real-time operating systems are driven by the goal of giving guaranteed hard or soft performance rather than just producing a high throughput.
- Another distinction between the two is that a soft real-time operating system can generally meet a deadline, whereas a hard real-time operating system meets deadlines deterministically.
- For scheduling purposes, some advanced algorithms are used by these operating systems.
- Flexibility in scheduling has many advantages to offer, such as wider orchestration of process priorities across the computer system.
- However, a typical real-time OS dedicates itself to only a small number of applications at a time.
- There are two key factors in any real-time OS, namely:
  1. Minimal interrupt latency and
  2. Minimal thread-switching latency.
- Two design philosophies are followed in designing real-time OSs:
  1. Time-sharing design: As per this design, tasks are switched based upon a clocked interrupt and on events, at regular intervals. This is also termed round-robin scheduling.
  2. Event-driven design: As per this design, switching occurs only when some other event demands higher priority. This is why it is also termed priority scheduling or preemptive priority.
- In the former design, tasks are switched more frequently than is strictly required, but this proves to be good at providing a smooth multitasking experience.
- This gives users the illusion that they are solely using the machine.
- Earlier CPU designs needed several cycles to switch tasks, and while switching the CPU could not perform any other work.
- This is why early operating systems avoided unnecessary switching in order to save CPU time.
- Typically, in any design there are 3 states of a task:
  1. Running or executing on CPU
  2. Ready to be executed
  3. Waiting or blocked for some event
- Most tasks are kept in the second and third states because the CPU can run only one task at a time.
- The number of tasks waiting in the ready queue may vary depending on the running applications and the type of scheduler being used by the CPU.
- On multitasking systems that are non-preemptive, a task may have to give up its CPU time voluntarily so that other tasks can be executed.
- This can lead to a situation called resource starvation, i.e., there are more tasks to be executed than there are resources to run them.
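The following toy sketch, with hypothetical task names and priorities, illustrates the event-driven (preemptive priority) idea: the dispatcher always picks the highest-priority task from the ready queue and marks it as running. A real RTOS kernel is of course far more involved than this:

```python
# Illustrative sketch only: a toy priority dispatcher, not a real RTOS kernel.
# Task names, priorities and states are hypothetical.
import heapq

READY, RUNNING, BLOCKED = "ready", "running", "blocked"

class Task:
    def __init__(self, name, priority):
        self.name, self.priority, self.state = name, priority, READY

def dispatch(ready_queue):
    """Pick the highest-priority READY task (lower number = higher priority)."""
    priority, _, task = heapq.heappop(ready_queue)
    task.state = RUNNING
    return task

if __name__ == "__main__":
    tasks = [Task("sensor_poll", 1), Task("logging", 5), Task("ui_refresh", 3)]
    queue = [(t.priority, i, t) for i, t in enumerate(tasks)]   # index breaks priority ties
    heapq.heapify(queue)
    running = dispatch(queue)
    print(running.name, running.state)   # sensor_poll running
```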


Thursday, March 21, 2013

What are principles of autonomic networking?


The complexity, dynamism, and heterogeneity of networks are ever on the rise. All these factors are making our network infrastructure insecure, brittle, and unmanageable. Today's world is so dependent on networking that its security and management cannot be put at risk. The networking response to this problem is called 'autonomic networking'.
The goal is to build network systems that are capable of managing themselves according to high-level guidance provided by humans. But meeting this goal calls for a number of scientific advances and newer technologies.

Principles of Autonomic Networking

A number of principles, paradigms and application designs need to be considered.

Compartmentalization: This is a structure with extensive flexibility. The makers of autonomic systems prefer it to a layering approach, and it is the first target of autonomic networking.

Function re-composition: An architectural design has been envisioned that would provide highly dynamic, autonomic, and flexible formation of large-scale networks. In such an architecture, functionality would be composed in an autonomic fashion.

Atomization: Functionality is broken down into smaller atomic units. These atomic units make maximum re-composition freedom possible.

Closed control loop: This is one of the fundamental concepts of control theory, and it is now also counted among the fundamental principles of autonomic networking. The loop controls and maintains the properties of the controlled system within desired bounds by constantly monitoring the target parameters.

The autonomic computing paradigm is inspired by the human autonomic nervous system. An autonomic system must therefore have a mechanism by virtue of which it can change its behavior according to changes in essential variables in the environment and bring itself back into a state of equilibrium.
In autonomic networking, survivability can be viewed in terms of the following:
  1. Ability to protect itself
  2. Ability to recover from the faults
  3. Ability to reconfigure itself as per the environment changes.
  4. Ability to carry out its operation at an optimal limit.
The following two factors affect the equilibrium state of an autonomic network:
  1. The internal environment: This includes factors such as CPU utilization, excessive memory consumption and so on.
  2. The external environment: This includes factors such as safety against external attacks etc.
There are 2 major requirements of an autonomic system:
  1. Sensor channels: These sensors are required for sensing the changes.
  2. Motor channels: These channels would help the system in reacting and overcoming the effects of the changes.
The changes sensed by the sensors are analyzed to determine whether the essential variables are within their viability limits. If a variable is detected to be outside these limits, the system plans what changes to introduce in order to bring it back within its limits, thus returning the system to its equilibrium state.
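Below is a minimal sketch of such a closed control loop, with a hypothetical essential variable (CPU utilization) and hypothetical corrective actions. It is only meant to illustrate the sense-analyze-plan-act cycle, not to stand in for a real autonomic manager:

```python
# Minimal sketch of a closed control loop: monitor a hypothetical essential variable
# (CPU utilization) and react via motor channels when it leaves its viability limits.

def control_loop(read_cpu_utilization, scale_out, scale_in, low=0.2, high=0.8, cycles=10):
    for _ in range(cycles):
        utilization = read_cpu_utilization()      # sensor channel: observe the environment
        if utilization > high:
            scale_out()                           # motor channel: act to restore equilibrium
        elif utilization < low:
            scale_in()
        # within [low, high] the system is considered to be in equilibrium

if __name__ == "__main__":
    import random
    control_loop(lambda: random.random(),
                 scale_out=lambda: print("adding capacity"),
                 scale_in=lambda: print("releasing capacity"))
```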


Saturday, December 22, 2012

What is Rational Synergy Tool?


Rational Synergy is a task-based configuration management tool released by IBM.

What does Rational Synergy Tool do?

- It enhances the development of software systems and applications.
- It speeds up and eases the development process by making real improvements in collaboration and communication among team members.
- Rational Synergy provides a unified change, configuration, and release management platform.
- It then brings distributed and global teams onto this platform.
- Thus, it helps accelerate the build management and release processes, which in turn amplifies the efficiency of the limited resources used in the development process.
- It also plays a great role in unifying distributed development teams.
- All these factors improve performance over the WAN (wide area network), which is important for distant sites and global workers.
- This enables team members in different parts of the world to work together on a central database provided by Rational Synergy over the WAN.
- This helps reduce the complex challenges of distributed development and replication.
- The Rational Synergy Tool provides a complete solution to assist the organizations in improving the quality of their distributed development processes. 
- The environment provided by the rational synergy tool includes support for both the distributed as well as the centralized modes. 
- This tool comes with a powerful engine which drives the collaboration development, component based development and software reuse. 
- It assists you in achieving the desired resilience targets and scalability factors by integrating with the IBM rational and other partners. 
- The IBM rational synergy tool helps you with the following tasks:
  1. Configuration management of the end to end tasks.
  2. Baselines management support
  3. Advanced release management support
  4. Works as a single repository solution by integrating with the IBM rational change for software configuration and change management.
  5. Provides advanced support for parallel variants and development.
  6. Reduces overhead and increases productivity for software developers.
  7. Supports component based development
  8. Provides advanced SCM needs
  9. Supports global and distributed development.
- The IBM Rational Synergy tool helps you gain full control over maintenance activities, document development, and software development.
- No matter what size your team is, Rational Synergy supports everything from small to large development teams.
- It also does not matter whether your team works in a distributed environment or a heterogeneous environment; Rational Synergy supports both.
- The maintenance of multiple versions of files in an archive is managed entirely by Rational Synergy.
- Other tools such as RCS, PVCS, and SCCS do control file versions, but they lack many benefits such as the following:
  1. Rule-based configuration updates
  2. Reproducible product builds
  3. Workflow management and so on.
- There is a lot of difference between plain version control tools and Rational Synergy.
- But users who have experience with these version control tools should not find it very difficult to make the transition to Rational Synergy.
- The rational synergy comes with 2 types of interfaces namely:
  1. Synergy classic: This interface comes with CM capabilities which are important for admins.
  2. Rational synergy: This interface has been developed exclusively for the users who are build managers and developers.
- Rational Synergy's command-line interface is available on both UNIX and Windows platforms.
- Rational synergy as a configuration management tool provides a unique and easy way for the creation of a baseline. 


Saturday, August 4, 2012

What factors are needed to prioritize themes?

The needs of users are considered before planning a project. Achieving the best combination of product features, schedule, and cost requires deliberate consideration of the cost and value of the user stories and themes.
We need to prioritize, and this responsibility is shared among the whole team. Individual user stories or features are aggregated into themes. Stories and themes are then prioritized relative to one another for the purpose of creating a release plan.

There are four primary factors to be considered when prioritizing:
1. The financial value of having the features.
2. The cost of developing new features.
3. The amount and significance of learning and new knowledge created by developing the features.
4. The amount of risk removed by developing features.

1. Determine the Value of a Theme
- Estimate the financial impact of the theme over a period of time.
- It can be difficult to estimate the financial return on a theme.
- It usually involves estimating the number of new sales, the average value of a sale, and so on.

2. Determine the Cost of Developing New Features
- The estimated cost of a feature is a huge determinant of its overall priority.
- The best way to reduce the cost of change is to implement a feature as late as possible.
- The best time to add a feature is when there is no more time left for it to change.
- Many themes seem worthwhile when viewed only in terms of the time they will take.
- It is important to keep in mind that time costs money.
- The best way to account for this while prioritizing is to do a rough conversion of story points or ideal days into money.

3. Learning New Knowledge
The knowledge that a team develops can be classified into two areas:
Product Knowledge
- It is the knowledge about what will be developed.
- It includes knowledge about features that are included and the features that are not included.
- Better knowledge of product will help the team to make better decisions.

Project Knowledge
- It is the knowledge about how product will be created.
- It includes knowledge about technologies, skills of developers, functioning of team together etc.
- The other side of acquiring knowledge is reducing uncertainty.

4. Risk
- A risk is anything that has not happened yet but might happen and would threaten or limit the success of the project.
- The types of risk involved in a project are: schedule risk, cost risk, and functionality risk.
- There is a natural tension between prioritizing the high-risk and the high-value features of a project.
- Each approach has its drawbacks, and the only solution is to give neither risk nor value total supremacy when prioritizing.

All these factors are combined by thinking first of the value and cost of the theme. Doing so will sort the themes into an initial order. Themes can then be moved forward or back in this order based on the other factors.
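One simple way to express this combination in code is sketched below. The formula (value divided by a rough monetary cost derived from story points) and all the numbers are illustrative assumptions, not a prescribed method; risk and knowledge considerations would then nudge the resulting order forward or back:

```python
# Hedged sketch: one simple way to combine value and cost into an initial theme order.
# Story points are converted to a rough monetary cost and themes are sorted by value/cost.

def prioritize(themes, cost_per_point):
    """themes: list of dicts with hypothetical keys 'name', 'value', 'points'."""
    for t in themes:
        t["cost"] = t["points"] * cost_per_point
        t["ratio"] = t["value"] / t["cost"]
    return sorted(themes, key=lambda t: t["ratio"], reverse=True)

if __name__ == "__main__":
    themes = [
        {"name": "reporting", "value": 60000, "points": 30},
        {"name": "sso login", "value": 25000, "points": 8},
    ]
    for t in prioritize(themes, cost_per_point=1500):
        print(t["name"], round(t["ratio"], 2))   # sso login first: better value per unit cost
```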


Sunday, July 22, 2012

What is the difference between authentication and authorization?


In this article, we take up two very important topics of the cyber world, namely authentication and authorization. We shall also discuss the difference between the two terms, which have a direct bearing on our security on the World Wide Web and other networks.

Concept of Authentication


"Authentication involves the act of the confirmation of the truth regarding all the attributes of some entity or datum under the question". 

The authentication process is also linked up with the confirmation of the identity regarding the following aspects:
  1. Confirmation of a person’s or software system’s or program’s identity.
  2. Tracing of the origins of some artifacts.
  3. Ensuring that the product actually contains what its labelling and packaging claim.
There are three types of authentication methods which we have discussed below:
  1. The first type: It involves accepting proof of identity given by a credible person who can provide evidence about the identity of the originator or about the object under assessment.
  2. The second type: It involves a comparison between the attributes of the object itself and what is known about objects of the same origin. Authentication of this type is quite vulnerable to forgery and calls for expert knowledge.
  3. The third type: It involves authentication on the basis of the external affirmations like documentation. 
The three factors that can be verified in authentication are:
  1. Ownership factors
  2. Knowledge factors
  3. Inherence factors

Concept of Authorization

- The process of authorization involves the act of the specification of the access rights to the resources.
- These are the resources that are involved with the computer security or information security in general.
- In particular, access control over these resources protects the security system and other sensitive information.
- To say it simply, authorization is the process of providing a definition for the access policy. 
- While the system is in operation, it makes use of the access control rules for making decisions regarding the rejection or approval of the access requests from the authenticated users or consumers. 
- Resources can be anything like:
  1. Individual files
  2. Data items
  3. Computer devices
  4. Computer programs
  5. Functionality of the computer applications and so on.
- Consumers may be either computer users or computer programs or other devices on the system. 
- The access control process that is performed during the authorization involves two main phases as mentioned below:
  1. Phase 1: This phase is known as the policy definition phase and involves authorization of the access.
  2. Phase 2: This phase is known as the policy enforcement phase and involves acceptation or rejection of the access requests.

Differences between Authentication and Authorization

  1. Verification of your identity: Verifying who you are is called authentication, whereas verifying what you are authorized to do is called authorization. This is the simplest difference between the two similar-sounding processes. Both processes are carried out whenever a connection attempt is made, and whether the attempt is allowed or rejected is decided based upon these two factors alone.
  2. The basic goal of the authentication process is to verify whether you are who you claim to be. The goal of authorization, on the other hand, is to set the access scope of the user who was authenticated in the previous process.
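The difference can be summed up in a few lines of code. The user store, permission table, and function names below are hypothetical, and the plain hashing is for illustration only; real systems use salted, slow password hashes and a proper policy engine:

```python
# Minimal sketch (hypothetical user store and permissions, not a production design):
# authentication answers "who are you?", authorization answers "what may you do?".
import hashlib, hmac

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}   # credential store (illustration only)
PERMISSIONS = {"alice": {"read_report"}}                   # access policy

def authenticate(username, password):
    """Verify the claimed identity against the stored credential."""
    stored = USERS.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return stored is not None and hmac.compare_digest(stored, supplied)

def authorize(username, action):
    """Decide whether the already-authenticated user may perform the action."""
    return action in PERMISSIONS.get(username, set())

if __name__ == "__main__":
    if authenticate("alice", "s3cret") and authorize("alice", "read_report"):
        print("request allowed")
```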


Saturday, June 16, 2012

Code Restructuring - An activity involved in software re-engineering process model

As we know, the "software re-engineering process model" is a model that has been used over the years to improve existing poor code that is no longer acceptable. The poor code is restructured to meet current software engineering standards. This model consists of 6 basic steps as mentioned below:
  1. Inventory analysis
  2. Documentation reconstruction
  3. Reverse engineering
  4. Code re- structuring
  5. Data re- structuring
  6. Forward Engineering
This article is dedicated to the discussion of the fourth stage of the software re-engineering process model, i.e., code restructuring.



What is meant by Code Restructuring?


- The code restructuring process involves analysis of the source code.
- The violations of the programming practices are noted down and later repaired. 
- The revised code is then reviewed and subjected to extensive testing. 
- First in this stage, the executable program files are identified in consultation with a person in the company who knows the files well enough to explain the logic behind their existence.
- The next step in this stage is the verification of the identified executables and the setting up of a baseline.
- This step requires a lot of investigative work, since the executables might have been affected by a lot of factors.
- The code is verified using the sample input data or test files.
- The baseline for the executable is formed by documented data files and source code files. 
- The developer who holds the responsibility of re-engineering the code now needs to familiarize himself with the existing code and figure out how he is going to re-engineer it.
- He then carries out several walk-throughs and code reviews under many different conditions with its users.
- At this stage, the developer is free to introduce comments into the code as per its restructuring needs.
- Other requirements can be stated in the project log. 
- It is very common for the developer to encounter bugs at this stage. 
- The developer needs to discuss with the client and figure out a way of dealing with the bugs. 
- Next follows the identification of the requirements of the process like:
  1. Commenting the code.
  2. Modularity of the code.
  3. Documentation of the code.
  4. Enhancement of the code.
  5. Removal of the unused or unnecessary code.
  6. Removal of the duplicate code.
  7. Removal of duplicate parameters.
  8. Porting of the code to another platform.
  9. Enhancement of the user interface.
  10. Defining the language standard of the code.
  11. Conversion of the code in to an alternative development language.
  12. Improvement and maintenance of the performance.
  13. Optimization of the code.
  14. Improvising the internal error handling capacity of the code.
  15. Inclusion of additional functionality.
  16. Fixing the existing bugs.
  17. Implementing alternate third party products.

More about Code Restructuring


- The code is restructured in identifiable phases i.e., the original code is re- engineered in the phases that have been identified. 
- The software tools here may be helpful in the investigation of the code coverage, unused variables, hot spots, coding standards and so on. 
- Apart from these, manual inspections can also help a great deal in identifying and removing duplicated and false code, collating parameter definitions, and optimizing array storage.
- On an overall basis, restructuring of the code greatly improves its performance. 
- Even simple, not-so-obvious issues, such as the way data is accessed, can significantly affect the performance of the code.
- Different languages have their own ways of storing arrays.
- Therefore, the way this data is accessed can hamper performance to a great extent without the developer even noticing; a small illustration follows at the end of this section.
- Lastly, the code is tested after the completion of the restructuring. 
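Here is the illustration promised above: a hedged sketch showing how the order of data access can matter. In Python the effect is modest; in languages such as C or Fortran, where arrays have a fixed row- or column-major layout, the difference can be dramatic. The sizes and timings below are machine-dependent and purely illustrative:

```python
# Illustration of the data-access point above: iterating an array in the order it is laid
# out in memory is typically faster than striding across it.
import time

N = 1000
matrix = [[0] * N for _ in range(N)]

start = time.perf_counter()
for i in range(N):                 # row by row: matches how the nested lists are stored
    for j in range(N):
        matrix[i][j] += 1
row_major = time.perf_counter() - start

start = time.perf_counter()
for j in range(N):                 # column by column: jumps between rows on every access
    for i in range(N):
        matrix[i][j] += 1
column_major = time.perf_counter() - start

print(f"row-major: {row_major:.3f}s, column-major: {column_major:.3f}s")
```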


Wednesday, June 13, 2012

What factors are considered during component qualification in component based development?


Component-based development is becoming a common practice for reusing validated software components from existing software systems or applications, as a measure to shorten development periods and enhance the quality of the software system or application.

Component-based development focuses on building new software systems by selecting software components to fit within the system architecture. Furthermore, component-based development has been known to accelerate the productivity of the development process and reduce project costs.

However, the success of component-based development depends on the suitability of the components chosen for the development of the new software. In this article we discuss the factors that should be considered during component qualification in component-based development.

Due to factors like increasing complexity and continuous change, software has to be modified continually to keep up. As a consequence, software systems and applications become more and more complex, which means more and more errors. Assembling pre-existing software components not only increases the maintainability of a system but also lends it more flexibility. Moreover, the system is assembled rapidly. It would not be wrong to call a component an independent entity with a complete functionality of its own.

Process of Component Qualification


- Component qualification is a process that involves determining the suitability of a component for being reused in the next software system or application to be built.
- This process of component qualification often proves to be of great help whenever there is competition among products in the market.
- There are numerous factors that should be considered while assessing the qualification of a software component.

Factors Considered during Component Qualification


- The most important factors are the functionality and services provided by the software component.
- The factors also include many other aspects, like those mentioned below:
  1. Standards that have been used to develop the software component.
  2. Quality aspects like usability, reusability and component reliability and so on.
- Components that have been developed under recognized standards and that provide functions common to many different software systems and applications should be qualified.
- Further, qualified components should exhibit a high degree of reusability.
- Components form high-level aggregations of smaller software pieces.
- Together, these components provide a black-box building approach that encapsulates implementation detail away from the development environment and is reusable through its interfaces.
- For a component to be qualified, it should have been developed in such a way that it is able to connect to other software components at the run time of the system.
- In other words, in order to be qualified, the component should exhibit the quality of independent deployment.
- This is necessary because this approach makes efficient use of resources.
- The component qualification is one of the development activities that are under taken by the component based software development. The other three activities are:
  1. Component adaptation
  2. Assembling components
  3. System evolution
- All the above mentioned three activities follow up after the component qualification. 
- The components that are required are defined by the system architecture and requirements.
- The characteristics of the component interfaces usually form the basic criteria for their qualification, even though the interface alone does not indicate the degree to which the component fits the requirements and architecture.
- Interface here means the services that are provided by the component and the means through which these are accessed by the consumers. 
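A rough sketch of the idea that the interface forms the basic qualification criterion is given below. The component and interface names are hypothetical, and a real qualification process would also weigh standards, reliability, and fit with the architecture rather than the interface alone:

```python
# Hedged sketch: judging a candidate component against the services its interface promises.
from abc import ABC, abstractmethod

class PaymentComponent(ABC):
    """Services a candidate component must expose to be considered for reuse."""
    @abstractmethod
    def charge(self, account_id: str, amount: float) -> bool: ...
    @abstractmethod
    def refund(self, account_id: str, amount: float) -> bool: ...

def qualifies(component) -> bool:
    """Very rough structural check: does the candidate provide the required services?"""
    return isinstance(component, PaymentComponent)

class VendorGateway(PaymentComponent):
    def charge(self, account_id, amount):
        return True        # stub implementation
    def refund(self, account_id, amount):
        return True        # stub implementation

print(qualifies(VendorGateway()))   # True
```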


Sunday, June 3, 2012

What is release planning and what is the need of release planning?


Release planning forms a very important part of the whole software development life cycle and from the term itself you can make out that it is related to the release of the software product.

What is a Release Plan?


- A release plan is drawn up during the release planning meeting. 
- The purpose of the release planning meeting is to lay out the overall project plan. 
- The release plan is further used to plan the iterations and schedules for the other processes. 
- For every individual iteration, a specific iteration plan is designed keeping in mind the specifications of the release plan. 
- It is important to maintain a balance between the technical aspect and the business aspect of a software project; otherwise development conflicts will arise and the developers will never be able to finish the software project on time.
- So, to get a better release plan, it is important that all the technical decisions are handled by the technical people and all the business decisions are taken up by the business people.

How to draw a proper release plan?


- To draw up a proper release plan, it is important that these two classes of stakeholders coordinate properly.
- In order to facilitate coordination between the two, a set of rules has been defined for release planning.
- With these rules, it is possible for every individual involved with the project to state his/her own decisions.
- This way, it becomes easy to plan a release schedule to which everyone can commit.
- Otherwise, the developers will find it difficult to negotiate with the business people.
- The essence of the release planning meeting lies in the proper estimation of all the user stories in terms of the ideal programming weeks. 

What is an ideal programming week?


- Now you must be wondering what an ideal programming week is.
- The ideal programming week is defined as how long one imagines the implementation of a particular user story would take if there were nothing else to be done.
- Here, by nothing else we do not mean a total absence of other activities!
- It only means the absence of dependencies and extra work, but the presence of tests.

Factors on which a release plan depends are:


- The importance level of a user story is decided by the customer.
- He/she also decides how much priority is to be given to each user story regarding its completion.
- There are two factors based up on which the release plan can be drawn:
  1. Scope or
  2. Time

Role of Project Velocity in Release Planning


- A measure called the "project velocity" helps with release planning.
- This measure proves to be a great aid in determining the number of user stories that can be implemented before the due date of the software project.
- Or, in terms of scope, the project velocity helps in determining the number of user stories that can be completed.
- When the release plan is created according to scope, the total estimate (in weeks) of the user stories is divided by the project velocity to obtain the total number of iterations to be carried out before the due date of the project.
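The arithmetic described above can be written down in a few lines; the story-point totals and velocity below are hypothetical numbers used only to show the calculation:

```python
# Minimal sketch of the release-planning arithmetic, with hypothetical numbers.
import math

def iterations_needed(total_story_points, velocity_per_iteration):
    """Planning by scope: divide the total estimate by the velocity to get the iteration count."""
    return math.ceil(total_story_points / velocity_per_iteration)

def points_deliverable(iterations_available, velocity_per_iteration):
    """Planning by time: how much scope fits before a fixed release date."""
    return iterations_available * velocity_per_iteration

print(iterations_needed(total_story_points=120, velocity_per_iteration=18))   # 7 iterations
print(points_deliverable(iterations_available=5, velocity_per_iteration=18))  # 90 points
```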

Philosophy Underlying Release Planning


The philosophy underlying release planning is that a software product can be quantified by the four variables mentioned below:
  1. Scope: It defines how much work is to be done.
  2. Resources: It states the number of the people available.
  3. Time: It is the time of the release of the software product and
  4. Quality: It defines how good the software is. 


Tuesday, April 17, 2012

Explain the concepts of XSS cross site scripting?

XSS, or cross-site scripting, is a familiar term in today's cyber world. Cross-site scripting falls under the category of computer security vulnerabilities that are common among web applications.

Purpose of XSS Cross Site Scripting



- This vulnerability allows malicious outside attackers to inject client-side scripts into web pages or applications that are later viewed by the people who visit the page.

- Another purpose may be to bypass access controls such as the same-origin policy.

- Cross-site scripting alone accounted for almost 80.5 percent of all the security vulnerabilities identified and documented by Symantec in the year 2007.

- The risk posed by a cross-site scripting flaw depends on the sensitivity of the data being processed by that particular web site or web page.

- Apart from this factor, the risk is also influenced by the security mitigation implemented by the owner of that web site.

Limitations of XSS Cross Site Scripting



- Cross-site scripting can also be employed by some people simply to create a petty nuisance.

- This security vulnerability is often misused by attackers to bypass the client-side security mechanisms that web browsers normally impose on the web content of a particular site.

- There are various ways through which an attacker can gain access to web pages in order to inject malicious scripts into them.

- Such methods can give the attacker unauthorized access to all the sensitive content of the page, user-activity information stored by the browser, session cookies, and so on.

About Cross Site Scripting



- Cross-site scripting is a type of code-injection attack and is somewhat similar to SQL injection attacks.

- Originally, cross-site scripting referred to loading the attacked, third-party web application from an unrelated attack site in a manner that executes JavaScript prepared by the attacker in the security context of the targeted domain.

- Eventually, cross-site scripting came to refer to other modes of code injection as well, including non-JavaScript vectors (such as VBScript, Flash, Java, ActiveX, HTML, SQL, and so on).

- Cross-site scripting vulnerabilities have been exploited since the 1990s.

- Many famous social networking sites such as MySpace, Orkut, Twitter, and Facebook have been victims of cross-site scripting in the past.

- With the growing sophistication of cross-site scripting techniques, they have now surpassed vulnerabilities like buffer overflows to become the most commonly reported security vulnerability.

- Even now, 68 percent of all web sites are considered vulnerable to cross-site scripting attacks.

Classifications of XSS flaws


There are no universally agreed criteria for classifying XSS flaws, but according to experts they fall into two categories:

1. Persistent XSS Flaws
It is also known as a stored XSS flaw and is the most destructive type. It occurs when the data provided by the attacker is stored by the server.

2. Non persistent XSS flaws
It is also known as a reflected XSS flaw and is the most common type. It occurs when data supplied by a web client is used by server-side scripts to generate the requested page without sanitizing the input.

Some other experts classify them as:
1. DOM-based XSS flaws: these affect client-side scripts.
2. Traditional XSS flaws: these occur as a result of flaws in server-side scripts.
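To make the reflected (non-persistent) case concrete, the sketch below shows an unsafe page that echoes attacker-controlled input and a safer version that escapes it first. Escaping with the standard library is only an illustration of the principle; real applications should rely on framework auto-escaping or a vetted sanitizer:

```python
# Hedged sketch of a reflected (non-persistent) XSS flaw and the standard mitigation:
# escape untrusted input before echoing it back into HTML.
import html

def vulnerable_page(query: str) -> str:
    # Unsafe: an attacker-supplied query such as "<script>...</script>" runs in the victim's browser.
    return f"<p>You searched for: {query}</p>"

def safer_page(query: str) -> str:
    # Escaping turns <, >, &, and quotes into harmless entities, so the payload is displayed, not executed.
    return f"<p>You searched for: {html.escape(query)}</p>"

payload = "<script>alert('xss')</script>"
print(vulnerable_page(payload))
print(safer_page(payload))
```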

