Tuesday, July 31, 2012

What are the different estimating techniques used by agile teams?

Expending more time and effort to arrive at an estimate does not necessarily increase the accuracy of the estimate. The amount of effort put into an estimate should be determined by the purpose of that estimate. Although it is well known that the best estimates are given by those who will do the work, on an agile team we do not know in advance who will do the work. Therefore, estimating should be a collaborative activity for the team.

Concepts on Estimating

- Agile teams acknowledge that we cannot eliminate uncertainty from estimates, but they embrace the idea that small efforts are rewarded with big gains.
- Agile teams can produce more reliable plans because they frequently deliver small increments of fully working, tested, integrated code.
- Agile teams do not rely on a single expert to estimate; estimates are best derived collectively by the team.
- Estimates should be on a predefined scale.
- Features that will be worked on in the near future and that need fairly reliable estimates should be made small enough that they can be estimated on a non-linear scale from 1 to 10, such as 1, 2, 3, 5, and 8 or 1, 2, 4, and 8.
- Larger features that will most likely not be implemented in the next few iterations can be left larger and estimated in units such as 13, 20, 40, and 100.
- Some teams choose to include 0 in their estimation scales.

Four common techniques for estimating are:

1. Expert Opinion
- In this approach, an expert is asked how long something will take or how big it will be.
- The expert relies on his/her intuition or gut feel and provides an estimate.
- This approach is less useful on agile projects than on traditional projects.
- On an agile project, estimates are made for user stories or other user-valued functionality. Delivering a story usually requires the skills of more than one person, which makes it difficult to find a suitable expert.
- A benefit of expert opinion is that it does not take very long.

2. Analogy
- In this approach, the estimator compares the story being estimated with one or more other stories.
- If a story is twice the size of another, it is given an estimate twice as large.
- You do not compare all stories against a single baseline; instead, each story is estimated against an assortment of those that have already been estimated.

3. Disaggregation
- It refers to breaking a story or feature into smaller, easier-to-estimate pieces.
- Be careful not to take this approach too far.

4. A fun and effective way of combining these is planning poker.
- In planning poker, each estimator is given a deck of cards with a valid estimate shown on each card.
- A feature is discussed, and each estimator selects the card that represents his or her estimate.
- All cards are shown at the same time.
- The estimates are discussed, and the process is repeated until the estimators converge on a single estimate.
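The mechanics of a round can be sketched in a few lines of Python. This is a minimal illustration only; the deck values and the rule that agreement means identical cards are assumptions, not part of any formal standard:

```python
# One planning poker round: each estimator's private guess is snapped
# to the nearest valid card, and all cards are revealed at once.

CARDS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(raw_estimate):
    """Snap a raw size guess to the nearest valid card in the deck."""
    return min(CARDS, key=lambda card: abs(card - raw_estimate))

def poker_round(raw_estimates):
    """Reveal all cards simultaneously; the round converges only when
    every estimator shows the same card."""
    chosen = [nearest_card(e) for e in raw_estimates]
    return chosen, len(set(chosen)) == 1

# Three estimators privately think the story is about 6, 5, and 8 points:
cards, agreed = poker_round([6, 5, 8])   # cards are [5, 5, 8]; no agreement yet
```

After the outliers explain their reasoning, the team re-estimates and the round is repeated until `poker_round` reports agreement.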

Monday, July 30, 2012

How do agile teams estimate the size of a project?

Agile teams separate estimates of size from estimates of duration. There are two measures of size:
1. Story points
2. Ideal time

Estimating Size with Story Points

- Story points are a relative measure of the size of a user story.
- A point value is assigned to each item when we estimate story points.
- Relative values are more important than raw values.
- A user story estimated at ten story points is twice as big, complex, or risky as a story estimated at five story points.
- A ten-point story is similarly half as big, complex, or risky as a twenty-point story.
- What matters most are the relative values assigned to different stories.
- Velocity is a measure of a team's rate of progress per iteration.

- At the end of each iteration, a team can look at the stories they have completed and calculate their velocity by summing the story point estimates for each completed story.

- Story points are purely an estimate of the size of the work to be performed.
- The duration of a project is not estimated as much as it is derived by taking the total number of story points and dividing it by the velocity of the team.
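This derivation can be shown with a short sketch; the completed-story list and backlog size below are hypothetical numbers invented for illustration:

```python
import math

# Story point estimates of the stories fully completed last iteration:
completed = [3, 5, 2, 8]
velocity = sum(completed)          # 18 points per iteration

# Duration is derived, not estimated:
backlog = 90                       # story points remaining in the release
iterations_left = math.ceil(backlog / velocity)   # 5 more iterations
```

Note that only fully completed stories count toward velocity; partially done work contributes nothing to the sum.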

There are two approaches to start with:
1. First Approach: Select a story that you think is one of the smallest stories and say that it is estimated at one story point.
2. Second Approach: Select a story that seems medium-sized and give it a number somewhere in the middle of the range you expect to use.

Estimating Size in Ideal Time

Ideal time and elapsed time are different. The reason for the difference, of course, is all the interruptions that may occur during any project.
- The amount of time a user story will take to develop can be more easily estimated in ideal days than in elapsed days.
- Estimating in elapsed days requires us to consider all the interruptions that might occur while working on the story.
- If we instead estimate in ideal days, we consider only the amount of time the story will take.
- In this way, ideal days are an estimate of size, although less strictly so than story points.
- When estimating in ideal days, it is best to associate a single estimate with each user story.
- Rather than estimating that a user story will take 4 programmer days, 2 tester days, and 3 product owner days, it is better to sum them and say the story as a whole will take nine ideal days.
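The single-estimate rule from the last bullet amounts to a simple sum; the roles and numbers are the hypothetical ones from the example above:

```python
# Collapse per-role ideal-day figures into one estimate for the story.
role_estimates = {"programmer": 4, "tester": 2, "product_owner": 3}
story_ideal_days = sum(role_estimates.values())   # nine ideal days in total
```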

Sunday, July 29, 2012

How do agile teams work?

Agile teams work together as a team but include roles filled by specific individuals.
- First is the product owner, who is responsible for the product vision and for prioritizing features the team will work on.
- Next is the customer, who is the person paying for the project or purchasing the software once it is available.
- Users, developers, and managers are other roles on an agile project.

How does an agile team work?

- Agile teams work in short, time-boxed iterations that deliver a working product by the end of each iteration.
- The features developed in these iterations are selected based on the priority to the business.
- This ensures that the most important features are developed first.
- User stories are a common way for agile teams to express user needs.
- Agile teams understand that a plan can rapidly become out of date. Because of this, they adapt their plans as appropriate.

What kinds of planning do agile teams use?

Agile teams use three levels of planning:
- Release planning: The release plan looks ahead for the duration of the release - typically, three to six months.
- Iteration planning: The iteration plan looks ahead only the duration of one iteration - typically, two to four weeks.
- Daily planning: A daily plan is the result of commitments team members make to each other in a daily stand-up meeting.

During release planning, the whole team identifies a way of meeting the conditions of satisfaction for the release, which includes scope, schedule, and resources. To achieve this, the product owner may need to relax one or more of her conditions of satisfaction.
A similar process occurs during iteration planning, when the conditions of satisfaction are the new features that will be implemented and the high level test cases that demonstrate the features were implemented correctly.

Saturday, July 28, 2012

What is virtual user script? Why do you need to parametrize fields in your virtual user script?

Nowadays, the concept of the virtual user is quite common in the field of load testing. 

What is a Virtual User?

- The virtual user is one of the most useful tools ever invented in the field of software engineering.
- Virtual users can be used in a number of ways for testing the load, stress, or capacity of any software system or application.
- The easiest way to define a virtual user is as a virtualized representation of a real-world user, designed specifically to simulate the same interactions and behaviors with the software system, application, or web site under test that a real-world user would perform.

For example, suppose that at its peak your web site received 100 users in a particular hour. It is then quite easy to simulate the same behavior using 100 virtual users running scripts that invoke the same interaction and navigation as your real users. Virtual users are also available today that can play back recorded scripts.

In this article we discuss what a virtual user script is and why the fields in a virtual user script need to be parameterized.

What is a Virtual User Script & Need to Parametrize fields?

Almost all virtual user scripts are automated.
- For each automated script, a particular entry point is marked by a script statement in the virtual user.
- From this point on, the actions of the automated script closely mimic those of a real-world user.
- At any point, these virtual user scripts can be made to click buttons or windows, type words, move the mouse around, and so on.
- Beyond scripts, certain functions are well supported by the virtual user, and these functions act as extensions to the virtual user scripts.
- These functions are given the general name "tasks".
- These tasks consist of a procedure along with a list of parameters and, in some cases, an optional return value.
- Information about the environment of the target computer is also collected by the virtual user with the help of a statement called the match statement.
A specific environment element is searched for by the match statement with the help of descriptor traits such as the following:
  1. Static text
  2. Edit text
  3. Pictures
  4. Icons
  5. User items and so on.
- One more kind of statement, the collect statement, is also used.
- It collects all elements of a certain type into a list.
- Afterwards, the virtual user interacts with the environment of the software system or application using keywords such as:
  1. Select
  2. Drag
  3. Type
  4. Close
  5. Click and so on.
- Common operating system objects such as windows, buttons, bars, menus, and scroll bars are accessed by the virtual users.
- The virtual user software package comes with a log file feature with which one can write out information from within a script.
- This feature also provides the scripter with information about the current run-time state of the virtual user scripts.
- These virtual user scripts can also be debugged whenever required by logging with the printh statement.
- Another fact about the virtual user is that there is no type checking, so it is a very good idea to log the parameters that serve as input to every task.
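The need for parameterization can be illustrated with a short sketch. This is plain Python standing in for a virtual user scripting language, and the request template and data table are hypothetical. A recorded script with hard-coded values makes every virtual user send identical data, which lets caches and unique-key constraints mask the real load; substituting per-user parameters avoids that:

```python
import string

# A recorded request whose fields have been turned into placeholders:
TEMPLATE = "POST /login user=${username}&pass=${password}"

# Each virtual user draws its own row from a data table:
DATA_TABLE = [
    {"username": "vu001", "password": "secret1"},
    {"username": "vu002", "password": "secret2"},
]

def build_request(template, row):
    """Fill the recorded template with one virtual user's parameters."""
    return string.Template(template).substitute(row)

requests = [build_request(TEMPLATE, row) for row in DATA_TABLE]
# Every virtual user now submits distinct, realistic values.
```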

Friday, July 27, 2012

What are the causes for the failure of traditional planning approach?

Traditional planning approaches do not always lead to satisfactory results.

Causes for Planning Failure

Cause #1: Planning is done by activity and not feature
- Traditional approaches to planning focus on the completion of activities rather than on the delivery of features.
- Activity-based plans generally lead to projects that overrun their schedules.
- As schedules slip, quality is often reduced in an attempt to catch up.

Cause #2: Activities do not finish early
Cause #3: Lateness is passed down the schedule
- Because traditional approaches are activity based, their main focus is on the dependencies between activities.
- According to the traditional approach, testing will start late if anything goes worse than planned.
- Testing will start early only if everything goes better than planned.

Testing can start late for any of the following reasons:
1. User interface coding finishes late.
2. Middle tier coding takes longer than planned and finishes late.
3. Middle tier coding starts late because adding tables to the database finishes late.
4. A tester is not available.

Cause #4: Activities are not independent
- Activities are independent if duration of one activity does not influence the duration of another activity.
- For independent activities, late finish on one activity can be offset by an early finish on another.

Cause #5: Delay caused by multitasking
- Multitasking exacts a horrible toll on productivity.
- It becomes an issue once a project starts to have some activities that finish late.
- Dependencies between activities become critical.
- For a traditionally planned project, multitasking becomes a problem for two reasons:
1. Work is assigned in advance and it is impossible to allocate work efficiently in advance.
2. It focuses on achieving high level of utilization of all individuals rather than on maintaining sufficient slack.

Cause #6: Features are not developed by priority
Cause #7: Ignoring Uncertainty
- In the traditional approach, we fail to acknowledge uncertainty.
- We ignore uncertainty about the product, assuming that the initial requirements analysis will lead to a complete specification of it.
- We also ignore uncertainty about how we will build the product.
- The best way to deal with uncertainty is to iterate.

Given these problems with traditional approaches to planning, it is no surprise that many projects are disappointing. Activity-based planning diverts our attention from features, and the resulting problems increase the likelihood of delivering late against the schedule.

What are different characteristics of agile planning?

Two critical areas in software engineering are:
1. Estimating
2. Planning
Both are difficult and error-prone, yet neither activity can be avoided.

The purpose of planning is to find an optimal answer to the overall product development question of what to build. The answer incorporates features, resources, and schedule. It is supported by a planning process that reduces risk and uncertainty, supports reliable decision making, establishes trust, and conveys information.

Planning is an ongoing, iterative process: an effort to find an optimal solution to the overall product development question of "what should we build?"

A good planning process supports:
1. Reducing Risk
2. Reducing Uncertainty
3. Supporting better decision making
4. Establishing trust
5. Conveying Information

A good plan is one that is sufficiently reliable that it can be used as the basis for making decisions about the product and project.

What is Agile Planning?

- It is focused more on the planning than on the creation of a plan.
- It encourages change and results in plans that are easily changed.
- It shifts the focus from the plan to planning.
- It balances the effort and investment in planning with the knowledge that we will revise the plan throughout the project.
- As user needs are discovered, our plans are affected, and hence changes become necessary.
- Plans are needed that are easily changed; therefore, planning becomes more important than the plan.
- Changing the plan does not necessarily mean changing the date.
- The plan can be changed without changing the date by:
            1. dropping a feature,
            2. reducing the scope of a feature, or
            3. adding more people to the project.
- Agile planning is spread more or less evenly across the duration of a project.

Thursday, July 26, 2012

How can data caching have a negative effect on load testing results?

From a performance point of view, retrieving data from a repository is quite a heavy task. It becomes much more difficult when the data repository lies far from the application server. Retrieving data is also expensive when a specific piece of data is accessed over and over again.
Caching is a technique that was developed to reduce the workload and the time consumed in retrieving data.
In this article, we discuss the negative effects that simple data caching can have on load testing.

Rules for Caching Concepts

Some rules have been laid down for the caching concepts which have been mentioned below:

1. Data caching is useful only if used for a short period of time; it does not work when used throughout the life cycle of the software system or application.
2. Only data that is not likely to change often should be cached.
3. Certain data repositories have the capability of raising notification events in case the data is modified outside the application.

If the above rules are not followed properly, data caching is sure to have a negative impact on load testing.
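Rules 1 and 2 together amount to giving every cached entry a short time to live (TTL). Here is a minimal sketch of that idea; the `fetch` callback is a stand-in for a real repository call:

```python
import time

class TTLCache:
    """Serve cached values only while they are fresh; after the TTL
    expires, go back to the repository for the data."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}                      # key -> (value, expiry time)

    def get(self, key, fetch):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]                  # still fresh: serve the cached copy
        value = fetch(key)                   # expired or missing: hit the repository
        self.store[key] = (value, now + self.ttl)
        return value
```

With a TTL of a few seconds, stale reads are bounded; caching a value for the application's whole lifetime, in violation of rule 1, could serve arbitrarily old data.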

How does data caching produce a negative impact on load testing?

- Data caching has pitfalls that we observe only in situations where cached data can expire and the software system or application ends up using inconsistent data.
- Using the caching technique is quite simple, but any fault in it can distort load testing.
- Load testing involves putting demands on the software system or application in order to measure its response.
- The outcomes of load testing help measure the difference between the responses of the software system or application under normal and peak load conditions.
- Load testing is usually used to measure the maximum capacity at which the software system or application can operate comfortably.
- Data caching produces quick responses from the software system or application for items such as cookies.
- Though data caching responds faster than the usual memory transactions, it has a negative impact on load testing results: you will not get the original results, but altered ones.

What you will see is a misleading picture of the performance of the software system or application.

What is the purpose of caching?

- Caching is done with the purpose of storing certain data so that it can be served faster in subsequent stages.
- Unless the cache is cleared by the testing tool after every iteration of the virtual user, the caching mechanism produces artificially fast page load times.
- Such artificial timings will alter your load testing results and invalidate them.
- In caching, all recently visited web pages are stored.
- When we carry out load testing, our aim is always to check the software system or application under load.
- If the caching option is left enabled, the software system or application will retrieve data from the locally saved copy, giving a false measure of performance.
- So, the caching option should always be disabled while you carry out load testing.
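The effect of forgetting to clear the cache between iterations can be simulated with a small sketch; the page list is hypothetical, and the count of real server hits stands in for measured response times:

```python
def simulate(pages, iterations, clear_each_iteration):
    """Count real server hits per iteration for one virtual user."""
    cache, hits_per_iteration = {}, []
    for _ in range(iterations):
        if clear_each_iteration:
            cache.clear()            # force every request to reach the server
        hits = 0
        for url in pages:
            if url not in cache:
                cache[url] = True    # repeat visits are now served locally
                hits += 1
        hits_per_iteration.append(hits)
    return hits_per_iteration

# Caching left enabled: later iterations never touch the server.
# simulate(["/a", "/b"], 3, clear_each_iteration=False) -> [2, 0, 0]
# Cache cleared each iteration: every iteration exercises real load.
# simulate(["/a", "/b"], 3, clear_each_iteration=True)  -> [2, 2, 2]
```

The first run reports almost no load after the first iteration, exactly the kind of artificially fast result the bullets above warn about.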

Wednesday, July 25, 2012

What is the value of a testing group? How much interaction with users should testers have, and why?

Software testing is quite tedious work and therefore cannot be performed by a single person alone unless the project is very small. A software testing team may consist of any number of software testers. Until around 1980, "software tester" was a general term; only after that did software testing evolve into a distinct profession.
Software testing calls for various roles, as mentioned below:
  1. Manager
  2. Test lead
  3. Test designer
  4. Tester
  5. Automation developer
  6. Test administrator
All of the above-mentioned roles constitute a complete software testing team. In this article we discuss the value of a software testing team, and also how much interaction testers should have with users and why.

Value of a Testing Group/Team

- A testing team should be independently empowered; otherwise it won't be effective.
- Managing a testing process without a separate test plan and test strategy is quite a tedious job.
- One of the typical features of an independent testing team is that it does not perform white box testing.
- The value of a testing team is realized by the whole development process, and by the people involved in it, only when defects are logged.
- A good testing team always makes sure that the software testing life cycle of the software system or application flows through the following stages:
  1. Planning
  2. Specifications
  3. Execution and
  4. Completion
- A typical testing team does not concern itself with the quality of the code, i.e., whether it is poorly or well written, as long as the required functionality is addressed by it.

Interaction of testing team with users

- For a testing team, interacting with the users is quite a difficult job.
- Generally, when detailed documentation of the software system or application is available, less interaction with the users is required.
- On the other hand, when documentation is lacking, a great deal of user-tester interaction is required.
- However, irrespective of the availability of documentation, user interaction is always recommended, since the actual usability, needs, and functionality of the software system or application can be known only through the users.
- Most testing teams encourage user-tester interaction, and thus it is becoming quite common these days.
- One more reason for the rise in user-tester interaction is that more and more testing teams are carrying out UAT, or user acceptance testing, which involves vigorous interaction between testers and users.
- Also, it is important for the better development of a software system or application that the testers' thinking works in sync with the users'.
- Testers work in many different situations, some of which cause problems.
- Good testers usually have a habit of thinking in the user's terms, asking questions like:
         1. What goals might they have?
         2. What tasks might they want to perform?
         3. How might they perform those tasks? And so on.

- When good testers look at a feature, the user's reaction is the first thing that comes to their mind.
- A good tester understands quite well how the software system or application is used by the user, how the program will be affected by whatever the user does, and what effect it will have on the user's habits.

Tuesday, July 24, 2012

What are virtual users? For what purpose are virtual users created?

For software developers it is quite frustrating to see their software systems and applications crash soon after being installed.
What impression will such a failure of the software system or application leave on the user or customer?
Customers and users will naturally think that the software product was not put through sufficient testing before it was released to them.
It has become a standard in the world of software engineering that any organization developing a software system or application must follow a defined standard procedure for testing that software product, in order to ensure its reliability as well as its quality, before it is shipped to users or customers. This whole testing process is part of quality assurance.

For every software product there might be a hundred ways in which it may be used. Therefore, the software system or application must be checked in all of these possible ways to verify whether or not it works correctly.
There are a number of ways in which a software testing process can be made more effective than it actually is.
- One such way is the use of an automated testing tool, with which suites of tests as well as specific tests can be set up and run by the computer.
- Hours of drudgery are reduced by a huge margin, along with time and money.
- The saved energy and time can be refocused on tasks that call for more human interaction.
- This whole process makes the software product much more reliable.
- It is always a good idea to verify the quality of the software artifact before it is shipped out.

One such automated testing tool is the virtual user, which can help in this regard. In this article we talk about the virtual user and the purpose for which this tool was created.

What is a Virtual User and Purpose of Virtual User?

- Virtual user, abbreviated VU, is a kind of tool that helps a computer emulate a human user, i.e., it performs actions like typing words and commands, clicking the mouse, and so on.
- The computer on which the virtual user is installed acts as a host and takes control of the other computer, just as a human tester would.
- One of the targeted computer systems is asked to act as an agent and receive instructions from the host computer.
- The environment of a virtual user is constituted by an application whose job is to compile and run the scripts.
- The scripts of the virtual user cannot be edited in common editors; they have to be edited in editors like MPW or BBEdit.
- Through the virtual user, all the computers are linked over a network.
- The minimum requirements of the virtual user are:
  1. The virtual user software package and
  2. Two systems, one as host and the other as target.
- Today many firms have launched their own virtual user software packages.
- Though the virtual user sounds like a very good automation tool, it has several drawbacks:

  • It cannot tell you when something looks right on the test screen, how a particular icon looks, or what position it is at.
  • The biggest drawback is that the virtual user has no intelligence of its own; if a crash occurs on the machine, the virtual user will keep trying to run the script regardless of the crash.

Monday, July 23, 2012

What is the difference between the graphical user interface testing and usability testing?

There are different types of testing, and graphical user interface testing and usability testing are two of them. In this article we take up a discussion of these two types of testing and the differences between them.
Though graphical user interface testing and usability testing sound quite similar to one another, they are quite different from each other. First we shall discuss the two types individually, and later we shall see the differences between them.

Graphical User Interface Testing

- Graphical user interface (GUI) testing is all about checking the graphical user interface of a developed software system or application, as a measure to ensure that it holds up to the specifications mentioned in its documentation.
- GUI testing is carried out with the help of several varying test cases.
- The generation of good and effective test cases depends a lot on the test designers' certainty about whether or not the test suite they design covers the overall functionality of the software system or application in question.
- It also depends on the extent to which the designed test suite exercises the graphical user interface of the software system or application.

Usability Testing

- Usability testing is user-centered interaction design applied to the evaluation of the software system or application.
- Usability testing is known for giving direct input regarding the usage of a particular software system or application by real-world users.
- Usability testing stands in contrast with the other available usability inspection methods, since those methods evaluate a graphical user interface in quite different ways.
- The primary focus of the usability testing is on the capacity of the human designed software system or application to fulfill its intended purpose. 
- The outcome of the usability testing is actually a measure of the ease with which the software system or application can be used. 
- Usability testing falls under the category of the black box testing. 
- The basic goal of the usability testing is to measure the following four aspects mentioned below:
  1. Efficiency
  2. Accuracy
  3. Recall and
  4. Emotional response
- Usability testing is carried out with the aim of observing users using the software system or application, in order to catch errors and spot areas where improvement can be made.

Differences between Graphical User Interface Testing & Usability Testing

Difference #1:
The purpose of graphical user interface testing is to verify whether the look and feel of a particular software system or application differs across operating systems. On the other hand, the purpose of usability testing is to verify that it is convenient for users to use the software system or application.

Difference #2:
Graphical user interface testing involves confirming whether or not the software system or application adheres to its design requirements, checking aspects such as:
         a)   Colors
         b)   Fonts
         c)   Control placements and so on.
On the other hand, usability testing goes much deeper into the above-mentioned aspects, asking, for example, whether or not the controls have been arranged in a logical sequence.

Difference #3:
In graphical user interface testing, the tests are executed to check the following:
        a)  whether or not the standards are in place,
        b)  all the screen validations, namely navigation conditions, validation conditions, aesthetic conditions, and so on.
Usability testing involves asking questions like:
       a)   Is navigation intuitive enough?
       b)   Does the GUI make sense to the user? Etc.

Sunday, July 22, 2012

What is the difference between authentication and authorization?

In this article, we take up two very important topics of the cyber world, namely authentication and authorization. We shall also discuss the difference between these two terms, which have a direct link to our security on the World Wide Web and other networks.

Concept of Authentication

"Authentication is the act of confirming the truth of the attributes of some entity or datum in question."

The authentication process is also linked with confirming identity in the following respects:
  1. Confirmation of a person's, software system's, or program's identity.
  2. Tracing the origins of an artifact.
  3. Ensuring that the product actually contains what its labeling and packaging claim.
There are three types of authentication methods, discussed below:
  1. The first type: It involves accepting proof of identity given by a credible person who can provide evidence of the identity of the originator and of the object under assessment.
  2. The second type: It involves a comparison between the attributes of the object itself and what is known about objects of the same origin. Authentication of this type is quite vulnerable to forgery and calls for expert knowledge.
  3. The third type: It involves authentication on the basis of external affirmations, such as documentation.
The three factors that need to be verified in authentication are:
  1. Ownership factors
  2. Knowledge factors
  3. Inherence factors

Concept of Authorization

- The process of authorization involves the act of specifying access rights to resources.
- These are the resources involved in computer security, or information security in general.
- In particular, these rights control access to the security system and other protected information.
- To put it simply, authorization is the process of defining the access policy.
- While the system is in operation, it uses the access control rules to decide whether to approve or reject access requests from authenticated users or consumers.
- Resources can be anything like:
  1. Individual files
  2. Data items
  3. Computer devices
  4. Computer programs
  5. Functionality of the computer applications and so on.
- Consumers may be computer users, computer programs, or other devices on the system. 
- The access control process that is performed during the authorization involves two main phases as mentioned below:
  1. Phase 1: This phase is known as the policy definition phase, in which access is authorized.
  2. Phase 2: This phase is known as the policy enforcement phase, in which access requests are accepted or rejected.
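The two phases above can be sketched in a few lines of Python. The user names, resource names, and actions below are made-up illustrations:

```python
# Phase 1: policy definition -- map each authenticated user to the actions
# permitted on each resource (illustrative entries only).
POLICY = {
    ("alice", "report.txt"): {"read", "write"},
    ("bob",   "report.txt"): {"read"},
}

# Phase 2: policy enforcement -- approve or reject each access request
# from an already-authenticated consumer.
def is_authorized(user, resource, action):
    return action in POLICY.get((user, resource), set())

print(is_authorized("alice", "report.txt", "write"))  # True
print(is_authorized("bob", "report.txt", "write"))    # False
```

Note that the enforcement step assumes authentication has already established who the user is; authorization only decides what that user may do.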

Differences between Authentication and Authorization

  1. Verification of identity: Verifying who you are is called authentication, whereas verifying what you are authorized to do is called authorization. This is the simplest difference between the two similar-sounding processes. Both are carried out whenever a connection attempt is made, and whether the attempt is allowed or rejected is decided based on these two factors.
  2. The basic goal of authentication is to verify whether you are who you claim to be. The goal of authorization, on the other hand, is to set the access scope of the user who was authenticated in the previous process. 

Saturday, July 21, 2012

What is meant by DNS? What does it contain?

DNS, or domain name system, is a well-known distributed system, hierarchical in nature, that is used for naming:
  1. Computers
  2. Services
  3. Resources connected to a private network or the internet, and so on.

What does DNS contain?

- DNS associates various pieces of information with the domain names assigned to each of the participating entities. 
- A domain name system, also known as a domain name service, takes on the responsibility of resolving queries for these domain names into the corresponding IP addresses. 
- The basic purpose of this whole process is locating devices and computer services on the World Wide Web.
- The domain name system has become an essential part of the functionality of the internet because of the worldwide, distributed, keyword-based redirection service it provides. 
- To put it simply, it acts as a phone book for the internet. 
- It serves as a phone book in that it translates human-friendly computer host names into their corresponding IP addresses. 
For example, the domain name www.abc.com might translate into the IP address 192.0.34.11 (IPv4) or 2630:0:2c0:201::10 (IPv6).
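The hierarchical, right-to-left nature of name resolution can be sketched with a toy in-memory lookup in Python. The domain and address are the made-up values from the example above; a real resolver queries remote authoritative name servers rather than a local dictionary:

```python
# Each zone's authoritative server knows only its own part of the tree;
# resolution walks the hierarchy from the root toward the queried name.
ROOT = {"com": {"abc": {"www": "192.0.34.11"}}}

def resolve(name):
    node = ROOT
    # Domain names are read right to left: www.abc.com -> com, abc, www.
    for label in reversed(name.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None
        node = node[label]
    return node

print(resolve("www.abc.com"))  # 192.0.34.11
print(resolve("www.xyz.com"))  # None
```

Delegation works the same way in the real system: the "com" servers do not know www.abc.com's address, only which servers are authoritative for abc.com.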

- Though DNS serves all the purposes of an ideal phone book for the internet, it differs from a phone book in one respect: DNS can be frequently updated, and those updates can be distributed, whereas a phone book cannot be kept current this way.
- Thanks to this, the location of a particular service on a network can easily be changed without affecting the end users, who keep using the same host name. 
- Users reap this advantage further when they recite meaningful e-mail addresses and URLs (uniform resource locators) without even knowing how the computers actually locate the services. 
- With the help of the domain name system, each and every domain in the network is assigned an appropriate domain name, and this domain name is mapped to the corresponding IP addresses through the designation of authoritative name servers for each domain. 
- These authoritative name servers hold responsibility for their particular domains and can in turn assign sub-domains to their own authoritative name servers.
- Such a mechanism has helped a lot in making the domain name system fault tolerant and distributed. 
- It has also eliminated the need for a single central register to be continually updated and consulted. 
- An additional feature of the domain name system is that responsibility for updating and maintaining the master record of domains is distributed among many domain name registrars.
- These domain name registrars compete for the business of domain owners and end users. 
- The facility of moving domains from one registrar to another is well provided for in the domain name system.
- The domain name system also specifies the technical functionality of the database service, in the form of the DNS protocol.
- The DNS protocol is a detailed specification of the communication exchanges and data structures used in the domain name system, which in turn forms a very important part of the whole internet protocol suite. 

Friday, July 20, 2012

Explain how the data is secured in HTTPS?

HTTP Secure, or HTTPS, can be thought of as an extended version of regular HTTP. It is the most widely used communication protocol after regular HTTP when it comes to having a secure communication path between the user and the server over a computer network. 
HTTPS finds much wider deployment over the internet than over intranets. Looked at closely, it is not actually a protocol in itself, as it may seem from the outside. 
It is actually the regular hyper text transfer protocol (HTTP) simply layered over the SSL/TLS protocol. SSL/TLS thus lends its security capabilities to standard HTTP communications when HTTP is layered on top of it. 

In this article we discuss how data is secured in HTTPS. As mentioned above, it is widely deployed in internet services because it provides a convenient means to authenticate the web site as well as the associated web server with which the connection is being established.

How data is secured in HTTPS

- Such authentication is very important, as it provides protection against man-in-the-middle attacks, which usually occur through eavesdropping on our communications with the server. 
- Moreover, HTTPS provides bidirectional encryption of the communications, or data, exchanged between clients and servers. 
- This bidirectional encryption, by virtue of which HTTPS protects against the tampering and eavesdropping that could otherwise forge the contents of communications between clients and servers, makes it all the more necessary. 
- HTTPS comes with a reasonable guarantee that you are communicating only with the web site you intended to communicate with and with no one else.
- Furthermore, HTTP Secure ensures that the contents of the communication between users and servers cannot be hampered or forged by any third party. 
In HTTPS, the entire HTTP exchange is layered on top of TLS or SSL, thus enabling total encryption of the HTTP communication content.
- This communication content includes:
  1. The request URL, which states the particular web page that was requested.
  2. Query parameters
  3. Headers
  4. Cookies containing identity information about the user, and so on. 
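A minimal Python sketch shows the client-side settings behind these guarantees; `ssl.create_default_context()` from the standard library enables certificate verification and hostname checking by default:

```python
import ssl

# A default client context turns on both certificate verification and
# hostname checking -- the properties that defeat man-in-the-middle attacks.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a TCP socket with this context before speaking HTTP would encrypt
# the request URL path, query parameters, headers, and cookies; only the
# server's address and port remain visible at the TCP/IP layer.
```

Disabling either setting (as some ad-hoc scripts do) removes exactly the protection against eavesdropping and forgery that HTTPS exists to provide.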

Negative Points of HTTPS

Though HTTPS has many advantages, its minus points cannot go unseen.
- HTTPS cannot protect against disclosure of communication metadata.
- This is because the addresses of the host web sites and the port numbers form a necessary part of the TCP/IP protocols that underlie HTTPS.
- Practically, this means eavesdroppers can still infer the identity of the server, as well as the amount and duration of the communication, even on a correctly configured web server.
- In its early years, HTTPS was commonly used for money transactions over the World Wide Web and other sensitive transactions such as e-mail.
- In recent years it has become known for the following:
  1. Authenticating web pages,
  2. Providing security to accounts,
  3. Maintaining the privacy of user communications, web browsing, and identity.
HTTPS has also come to the rescue of Wi-Fi, since unencrypted Wi-Fi is highly prone to attacks. The importance of HTTPS is even more evident when connections are made over Tor or another anonymity network.

Thursday, July 19, 2012

What is UML and how to use it for software testing?

Unified Modelling Language, or UML, is considered the software engineering world's only standardized general-purpose modelling language in the field of object-oriented software engineering. 
This standard was created by the OMG (the Object Management Group) and has been managed by the same organization ever since. It was formally adopted by the OMG in 1997 and has since become one of the best standards for modelling software-intensive systems. 
Visual models of object-oriented software-intensive systems are created using the unified modelling language through a set of predefined graphic notation techniques. The unified modelling language:
  1. Specifies,
  2. Visualizes,
  3. Modifies,
  4. Constructs, and
  5. Documents the artifacts of the object-oriented software-intensive systems in question. 
In a way, we can say that the following elements of a software system or application can be visualized in a standard way using the unified modelling language:
  1. Activities
  2. Architecture blue prints
  3. Data base schemas
  4. Logical components
  5. Business processes
  6. Reusable software system components
  7. Statements of the programming language and so on.
Some techniques from the following other modelling standards are incorporated in to the unified modelling language:
  1. Entity relationship diagrams from data modelling
  2. Work flows from business modelling
  3. Object modelling and
  4. Component modelling
The best thing about the unified modelling language is that it works no matter what platform or process is being followed throughout the SDLC, or software development life cycle. The software engineering world has witnessed the synthesis of notations obtained from the following techniques and methodologies:
  1. The booch method
  2. OOSE or object oriented software engineering and
  3. OMT or object modelling technique etc.
The notations were synthesized by fusing them together to produce a single, common, widely usable modelling language, and so we have UML today.  
Software testing does not consist of just one phase; rather, there are many, such as:
  1. Unit testing
  2. Function testing
  3. Regression testing
  4. System testing
  5. Solution testing and the list goes on. 
A different class of UML diagrams is used for each kind of testing:

  1. Unit testing: The types of UML diagrams used are class and state diagrams. For unit testing, code coverage criteria are used, and the fault model is used for checking the following:
a)   correctness and invariants,
b)   pre/post conditions.
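As a sketch of this fault model, the invariants and pre/post conditions read off a class or state diagram can be turned directly into assertions in unit-test code. The `Counter` class and its invariant below are hypothetical illustrations, not from any real project:

```python
# Hypothetical class under test: a counter whose state-diagram invariant
# is that value never drops below zero.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        assert self.value >= 0  # invariant holds after every transition

    def decrement(self):
        assert self.value > 0, "precondition: decrement only from value > 0"
        self.value -= 1
        assert self.value >= 0  # postcondition / invariant

# A unit test exercises the transitions allowed by the state diagram.
c = Counter()
c.increment()
c.decrement()
print(c.value)  # 0
```

Calling `decrement()` on a fresh `Counter` would violate the precondition and fail, which is exactly the class of fault this testing level targets.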

  2. Functional testing: The types of UML diagrams used are interaction and class diagrams. For functional testing, functional coverage criteria are used, and the fault model is used for checking the following:
a)   Functional behavior
b)   API behavior
c)   Integration issues etc.

  3. System testing: For system testing, operational-scenario coverage criteria are used, and the fault model is used for checking the following:
a)   Work load
b)   Contention
c)   Synchronization
d)   Recovery etc.
       Here many types of UML diagrams are used, such as:
a)   Use cases
b)   Activity diagrams and
c)   Interaction diagrams

  4. Regression testing: For regression testing, functional coverage criteria are used, and the fault model is used to check the following:
a)   Unexpected behavior from new or changed functions.
      Two kinds of UML diagrams are used here, namely:
    a)   Interaction diagrams and
    b)   Class diagrams
 We can say that the need for UML diagrams here is the same as for functional testing.

  5. Solution testing: For solution testing, inter-communication coverage criteria are used, and the fault model helps in detecting interoperation problems. The UML diagrams used are:
a)   Use case diagrams and
b)   Deployment diagrams. 
