


Thursday, July 26, 2012

How can data caching have a negative effect on load testing results?


From a performance point of view, retrieving data from a repository is an expensive task. It becomes more expensive still when the repository lies far from the application server, or when a specific piece of data is accessed over and over again.
Caching is a technique developed to reduce both the workload and the time consumed in retrieving data.
In this article, we discuss the negative effects that simple data caching can have on load testing.

Rules for Caching Concepts


Some rules have been laid down for caching, which are mentioned below:

1. Data caching is useful only when data is cached for a short period of time; it does not work well when data is kept cached throughout the life cycle of the software system or application.
2. Only data that is not likely to change often should be cached.
3. Certain data repositories can raise notification events when the data is modified outside the application, so the cache can be refreshed.
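Rule 1 above, caching for a short period rather than for the whole application lifetime, can be sketched as a time-to-live (TTL) cache. The class and key names below are hypothetical, chosen only for illustration:

```python
import time

class TTLCache:
    """Minimal sketch of rule 1: entries expire after ttl_seconds,
    so stale data is never served for the life of the application."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self._store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            del self._store[key]  # expired: caller must fetch fresh data
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.put("user:42", {"name": "Alice"})
print(cache.get("user:42"))   # fresh entry, served from the cache
time.sleep(0.06)
print(cache.get("user:42"))   # entry has expired, so None is returned
```

Rule 2 then becomes a matter of choosing a TTL appropriate to how often the underlying data actually changes.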

If the above rules are not followed properly, data caching is sure to have a negative impact on load testing.

How does data caching produce a negative impact on load testing?


- Data caching has pitfalls that only come to our attention in situations where cached data can expire, leaving the software system or application working with inconsistent data.
- The caching technique itself is quite simple, but any fault in it can distort load testing.
- Load testing involves putting demands on the software system or application in order to measure its response.
- The outcomes of load testing help measure the difference between the responses of the software system or application under normal and peak load conditions.
- Load testing is usually used to measure the maximum capacity at which the software system or application can operate comfortably.
- Data caching produces quick responses from the software system or application, for example when obtaining cookies.
- Although a cache responds faster than the usual repository round trip, it has a negative impact on load testing results: instead of the true results, you get altered ones.
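The distortion described above can be shown with a small timing sketch. The repository call is simulated with a sleep, and all names are hypothetical:

```python
import time

def fetch_from_repository(key):
    """Stand-in for a slow data-store round trip (simulated latency)."""
    time.sleep(0.02)
    return f"value-for-{key}"

cache = {}

def fetch_cached(key):
    """Serve from the in-memory cache, falling back to the repository."""
    if key not in cache:
        cache[key] = fetch_from_repository(key)
    return cache[key]

def timed(fn, key):
    start = time.perf_counter()
    fn(key)
    return time.perf_counter() - start

cold = timed(fetch_cached, "report")  # first call pays the repository cost
warm = timed(fetch_cached, "report")  # repeat call is served from the cache
print(f"cold: {cold:.4f}s  warm: {warm:.4f}s")
# The warm timing no longer reflects the repository at all; averaged into
# a load-test result, it understates the true response time.
```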

What you will see, in other words, is a false picture of the performance of the software system or application.

What is the purpose of caching?


- Caching is done with the purpose of storing certain data so that it can be served faster on subsequent requests.
- Data caching affects load testing results because, unless the testing tool clears the cache after every iteration of the virtual user, the caching mechanism starts reporting artificially fast page load times.
- Such artificial timings will alter your load testing results and invalidate them.
- A cache typically stores all the recently visited web pages.
- When we carry out load testing, our aim is always to check the software system or application under load.
- If the caching option is left enabled, the software system or application will retrieve data from the locally saved copy, giving a false measure of performance.
- So the caching option should always be disabled while you carry out load testing.
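The clear-the-cache-every-iteration advice above can be sketched as a simulated virtual user. The server round trip is faked with a sleep, and the class and URL names are hypothetical:

```python
import time

class VirtualUser:
    """Sketch of one load-test virtual user whose client keeps a page cache."""

    def __init__(self):
        self.page_cache = {}

    def request(self, url):
        if url in self.page_cache:           # cache hit: artificially instant
            return self.page_cache[url], 0.0
        start = time.perf_counter()
        time.sleep(0.01)                     # simulated server round trip
        body = f"<html>{url}</html>"
        elapsed = time.perf_counter() - start
        self.page_cache[url] = body
        return body, elapsed

    def clear_cache(self):
        self.page_cache.clear()              # call after every iteration

user = VirtualUser()
timings = []
for iteration in range(3):
    _, elapsed = user.request("/home")
    timings.append(elapsed)
    user.clear_cache()                       # keeps every timing realistic

assert all(t > 0 for t in timings)           # no artificial zero-time hits
```

Drop the `clear_cache()` call and every iteration after the first reports a near-zero time, which is exactly the invalid result the bullets above warn about.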


Thursday, August 28, 2008

Weakness of Function Point Analysis

Function Point Analysis is seen as a very important and useful technique for requirements estimation, among numerous other benefits (see the previous post for more details). However, even such a well-known method has its detractors, with a number of people and studies pointing out issues with the technique. Here are some of these issues / weaknesses / problems:

- FPA is seen as not being fully suited to object-oriented work, with the objection that function points, the core of the technique, cannot be reasonably counted from object-oriented requirements specifications. The problem is that several constructs of object-oriented specifications can be interpreted in various ways in the spirit of FPA, depending on the context.
- Function point counts are affected by project size; ideally they should not be, since function points measure each function individually, but this does not work out in actual practice.
- Function point counting techniques have been found hard to apply to systems with very complex internal processing, or to massively distributed systems.
- It is difficult to derive logical files from physical files.
- The validity of the weights that Albrecht, the founder of FPA, set up in the initial technique, as well as the consistency of their application, has been challenged.
- Different companies calculate function points slightly differently (depending on the process and the people who do the actual counts), making inter-company comparisons questionable and negating one of the core benefits of having standardised function counts.
- There is a conflict in using FPA for project sizing with another standard measure of counting: the number of lines of code. Lines of code are the traditional way of gauging application size and are claimed to be still relevant because they measure what software developers actually do, that is, write lines of code. At best, the two measures can be used together.
- Doing FPA means converting the available information back into essentially the same form as a requirements specification, so you should expect the same types of errors; this conversion is touted as a big error-prone area.
- Function points, like many other software metrics, have been criticized as adding little value relative to the cost and complexity of the effort, which are major factors in decision making.
- Computing function points has some inherent baseline error, because much of the variance in software cost estimates is not considered (business changes, scope changes, unplanned resource constraints or reprioritizations, etc.).
- Function points don't solve the problems of team variation, programming tool variation, type of application, etc.
- FPA was originally designed to be applied to business information systems applications, so the data dimension was emphasized; as a result, FPA is inadequate for many engineering and embedded systems.
- Another problem, this one with the technical process of FPA, comes up when assessing the size of a system in unadjusted function points (UFPs): classifying every system component as simple, average or complex is not sufficient for all needs.
- Counting function points requires the presence of a skilled counter, yet many companies get this work done by people without the desired skill level (this happens with other techniques as well, but correct function point counts are critical to the whole approach).
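For readers unfamiliar with the mechanics being criticised above, the core calculation can be sketched as follows, using the standard IFPUG weights for the five component types; the example component counts and GSC ratings are made up for illustration:

```python
# Standard IFPUG weights per component type, by assessed complexity.
WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},   # external inputs
    "EO":  {"simple": 4, "average": 5,  "complex": 7},   # external outputs
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},   # external inquiries
    "ILF": {"simple": 7, "average": 10, "complex": 15},  # internal logical files
    "EIF": {"simple": 5, "average": 7,  "complex": 10},  # external interface files
}

def unadjusted_fp(counts):
    """counts: {component type: {complexity: number of components}}."""
    return sum(
        n * WEIGHTS[ctype][cx]
        for ctype, per_cx in counts.items()
        for cx, n in per_cx.items()
    )

def adjusted_fp(ufp, gsc_ratings):
    """gsc_ratings: the 14 general system characteristics, each rated 0-5."""
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor
    return ufp * vaf

counts = {
    "EI":  {"simple": 4, "average": 2},
    "EO":  {"average": 3},
    "ILF": {"complex": 1},
}
ufp = unadjusted_fp(counts)               # 4*3 + 2*4 + 3*5 + 1*15 = 50
print(ufp, adjusted_fp(ufp, [3] * 14))    # 50 and 50 * 1.07 = 53.5
```

The three-level simple/average/complex classification and the fixed weights are exactly the points several of the bullets above take issue with: two quite different "complex" files both contribute the same 15 points.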
In spite of these problems, FPA is a very useful tool, and probably a very good fit for doing estimation.

