


Thursday, May 14, 2015

Ways to improve meetings held for conflict / issue resolution

What I write below may seem like common sense, the kind of conclusions you would reach if you sat back and thought about how to run a meeting. It builds on my previous post (Reducing the number of meetings held), which argued for an active effort to cut down the number of meetings, since meetings eat into the time available to the development and testing teams. When the number of meetings is high, the blocks of uninterrupted time that teams need when they are in the middle of something keep getting broken up, and this can have an incredibly bad effect on productivity.
During the tension and stress of an ongoing software development cycle, teams need to resolve issues fast. As a result, when some issue needs resolution, teams forget the basics of how to set up a meeting and what preliminary steps should happen before it starts. The net result is that even though the conflict is resolved, or at least progress is made, team members end up spending a large amount of individual time in these meetings; and we know the impact this has on team productivity.
So what is to be done when some conflict or issue needs to be resolved? It may seem logical to quickly call a meeting to get a solution, but that approach should be reserved for issues of a show-stopper nature (for example, a defect that is blocking the launch of the application, or something equally serious). In most other cases, an hour or so of preparation before the meeting will make the meeting far more productive. What are some of the steps that can be taken before the meeting? Depending on the step, multiple people may need to be consulted (team leads, the project manager, the development manager, the quality manager, etc.):
- Identify the people who are most competent in the area of the issue and who would be able to contribute the most to resolving it.
- Check whether those people are available or are tied up in some other critical issue. If they are so heavily involved that pulling them out would do more harm than good, identify somebody else who can provide a solution or inputs.
- Prepare a brief write-up about the issue, focusing on what the problem is and on any elements of a possible solution, and send it out at least an hour or more before the meeting.
- When setting up the meeting, make it clear who is required and who is optional. This gives team members a say in whether they need to attend. In most cases, members on the optional list will evaluate whether they can contribute and decide accordingly. This saves time for those who decide not to come, and those who do come are better informed, which makes for a quicker meeting and hence further savings in time.


Ensuring the minimum number of meetings necessary ...

In software companies, if there is one constant, it is meetings. Meetings are the lifeblood of companies, and sometimes it feels that the number of meetings held is excessive. In my experience, I have had developers and testers come to me complaining that they are invited to so many meetings that the time left for actual work is suffering. Worse, when a person has got into the core of their work - whether it is the design of a new feature, the solution of a difficult defect, or the execution of a difficult test case - a break in between is guaranteed to rankle. And there are numerous meetings to which team members (developers / testers / others) could be called, each of which would seem necessary.
Some of these meetings could be:
- Feature discussion meetings (numerous)
- Schedule planning meetings
- Task definition and estimation meetings
- Issue resolution meetings
- Daily / Periodic status meetings
- Add your own set of meetings here
You get the idea. It can be pretty frustrating. I have seen weeks with at least 4-5 meetings per day, and even though each of these meetings may look important, together they cause other problems. At some point, if the people invited start feeling that the subject of a meeting is not important to them, you will start getting resistance to holding these meetings at all.

So what is the solution? There is no magic bullet, no quick fix I can give that will reduce the number of meetings you need to hold. It requires careful planning and discussion between the respective managers (such as the project / program manager, the development manager, and the testing manager). Some of the questions that need to be asked before a meeting is set up are the following (there will be other questions beyond this list; it is just an indicator of the thinking that needs to happen):
1. Evaluate the invitee list. Is each person really necessary for the meeting? Can some people be marked optional, so that they can determine for themselves whether to attend?
2. If a minor issue could be settled in a meeting but could equally be settled through a quick phone call or a quick hallway discussion, prefer the latter. For many issues, one person really needs to make the call and the others just need to be informed; in such cases, having a meeting is really not necessary.
3. Does the issue really need to be discussed urgently? That is, if the meeting were pushed out by a couple of days or a week, would that still be fine? If so, an alternative is to postpone the meeting and see whether the issue can be resolved through phone calls or an email discussion.
4. Some meetings happen only because nobody is willing to take a quick call, or because there is no real solution on the table. Such meetings usually fail, or another meeting has to be held. It is better to prepare a proposed solution or a note that clarifies the issues and send that out first; in many cases, it will be accepted by the people who have the authority to accept it.


Tuesday, December 3, 2013

What is Orthogonal Array testing? - An explanation

There are a number of black box testing techniques; the one discussed in this post is orthogonal array testing. This technique provides a systematic as well as statistical strategy for testing software. It is useful when the number of inputs to the system is relatively small, yet too large for every possible input combination to be covered as in exhaustive testing. The technique has proved quite helpful in discovering errors that arise from faulty logic in software systems. Orthogonal arrays can be applied in various testing types such as the following:
- User interface or UI testing
- System testing
- Regression testing
- Configuration testing
- Performance testing and so on.

The permutations of factor levels that make up a single treatment have to be chosen in an uncorrelated way, so that each treatment gives you a piece of information that is different from the others. The advantage of organizing testing in such a way is that a minimum number of experiments is required to gather the same information. Orthogonality is a property exhibited by orthogonal vectors. The properties exhibited by orthogonal vectors are mentioned below:
- The information conveyed by each vector is different from the information conveyed by the other vectors in the sequence. That is, as mentioned above, the information conveyed by each treatment is unique to it; this matters because otherwise there would be redundancy.
- It is easy to separate these signals out of a linear addition.
- All the vectors are statistically independent of each other, which means that there is no correlation between them.
- When the individual components are added linearly, the result is an arithmetic sum.

Suppose a system has 3 parameters, each of which can take 3 values. Testing all parameter combinations would require 27 test cases, which is quite time consuming. So we use an orthogonal array to select a subset of these combinations. As a result of using orthogonal array testing, the test coverage area is maximized while the number of test cases that have to be executed is minimized. The technique rests on the assumption that most defects are triggered by the interaction of at most a pair of input parameter values, so covering all the pairs is sufficient to catch such faults. The array is said to be orthogonal because every pairwise combination of values occurs exactly once (see the sketch after the list below). The results of the test cases are assessed in terms of:
- Single mode faults
- Double mode faults
- Multimode faults
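
To make this concrete, below is a minimal Python sketch of the technique. The parameter names and values are hypothetical examples; the array itself is the standard L9 orthogonal array, which covers all pairwise value combinations of a 3-parameter, 3-value system in 9 test cases instead of 27:

    from itertools import combinations

    # Standard L9 orthogonal array: 9 rows, 3 columns, levels 0..2.
    # Every pair of columns contains each of the 9 level pairs exactly once.
    L9 = [
        (0, 0, 0), (0, 1, 1), (0, 2, 2),
        (1, 0, 1), (1, 1, 2), (1, 2, 0),
        (2, 0, 2), (2, 1, 0), (2, 2, 1),
    ]

    # Hypothetical parameters; any three 3-valued factors would do.
    parameters = {
        "browser": ["Chrome", "Firefox", "Safari"],
        "os":      ["Windows", "Mac", "Linux"],
        "locale":  ["en", "fr", "de"],
    }
    names = list(parameters)

    # Build 9 test cases from the array instead of all 27 combinations.
    test_cases = [
        {name: parameters[name][level] for name, level in zip(names, row)}
        for row in L9
    ]
    for tc in test_cases:
        print(tc)

    # Sanity check: every pairwise combination of levels occurs exactly once.
    for c1, c2 in combinations(range(3), 2):
        pairs = [(row[c1], row[c2]) for row in L9]
        assert len(set(pairs)) == 9
    print("All cross-parameter value pairs covered in 9 tests instead of 27.")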

The major benefits of using this technique are:
- The testing cycle time is reduced.
- The analysis process gets simpler.
- Test cases are balanced, which means that defect isolation and performance assessment are straightforward.
- It saves on costs compared with testing all possible combinations. Coverage of every defect could only be guaranteed by testing all possible combinations, but our schedule and budget often do not permit this, so we are forced to select only a sample of combinations from the test domain. Orthogonal array testing is a means of generating samples that provide high coverage and validate the test domain effectively. This has made the technique particularly useful in integration testing and in testing configurable options. Software testers often face a dilemma when selecting test cases: the quality of the software cannot be tested directly, only defects can be detected, and exhaustive testing is difficult even for small systems.


Thursday, August 22, 2013

What is a spanning tree?

The spanning tree is an important concept in both mathematics and computer science. Mathematically, we define a spanning tree T of an undirected, connected graph G as a tree consisting of all the vertices and some (or all) of the edges of G.
- A spanning tree is a selection of edges from G forming a tree such that every vertex is spanned by it.
- This means that every vertex of the graph G is present in the spanning tree, but there are no loops or cycles.
- Also, every bridge of the given graph must be present in its spanning tree.
Equivalently, a spanning tree is a maximal set of edges of G that contains no cycle, or a minimal set of edges that connects all the vertices.
- In the field of graph theory, finding the MST, or minimum spanning tree, of a weighted graph is a common problem.
- A number of other optimization problems also require the use of minimum spanning trees and other types of spanning trees.

The other types of spanning trees include the following:
- The maximum spanning tree.
- An MST spanning at least k vertices.
- An MST having at most k edges per vertex, i.e., the degree-constrained spanning tree.
- The spanning tree with the largest number of leaves (this type bears a close relation to the "smallest connected dominating set").
- The spanning tree with the fewest leaves (this type bears a close relation to the "Hamiltonian path problem").
- The minimum diameter spanning tree.
- The minimum dilation spanning tree.

- One characteristic property of spanning trees is that they have no cycles.
- This means that adding just one edge to the tree creates a cycle.
- Such a cycle is called a fundamental cycle.
- A distinct fundamental cycle exists for each edge not in the spanning tree, so there is a one-to-one correspondence between the fundamental cycles and the edges absent from the tree.
- For a connected graph G with V vertices, any spanning tree has V-1 edges.
- Therefore, a connected graph with E edges has E-V+1 fundamental cycles (one for each edge not in the spanning tree).
- These fundamental cycles form a basis for the cycle space of the graph.
- The notion of the fundamental cut set is dual to that of the fundamental cycle.
- If we delete even one edge from the spanning tree, the vertices are partitioned into two disjoint sets.
- The fundamental cut set is defined as the set of edges that, if removed from the graph G, partitions the vertices into those same disjoint sets.
- For a given graph G there are V-1 fundamental cut sets, one corresponding to each spanning tree edge.
- The duality between cycles and cut sets can be established from the fact that the edges not in the spanning tree appear in the fundamental cycles, while the edges of the spanning tree appear in the fundamental cut sets.
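
To tie these properties together, here is a minimal Python sketch (the example graph is arbitrary): it builds a spanning tree with a breadth-first search and checks the V-1 edge count and the E-V+1 fundamental cycle count described above:

    from collections import deque

    def spanning_tree(vertices, edges):
        """Return the edges of a BFS spanning tree of a connected graph."""
        adjacency = {v: [] for v in vertices}
        for u, v in edges:
            adjacency[u].append(v)
            adjacency[v].append(u)
        start = next(iter(vertices))
        visited = {start}
        tree_edges = []
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in visited:      # taking this edge cannot close a cycle
                    visited.add(v)
                    tree_edges.append((u, v))
                    queue.append(v)
        return tree_edges

    # Arbitrary example: 5 vertices, 7 edges.
    V = {1, 2, 3, 4, 5}
    E = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5), (3, 5)]

    tree = spanning_tree(V, E)
    assert len(tree) == len(V) - 1        # a spanning tree has V - 1 edges
    # Each non-tree edge closes exactly one fundamental cycle: E - V + 1 of them.
    print("tree edges:", tree)
    print("fundamental cycles:", len(E) - len(V) + 1)   # 7 - 5 + 1 = 3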

What is a Spanning Forest?

- The sub-graph that generalizes the spanning tree concept is called a spanning forest.
- A spanning forest can be defined as a sub-graph that contains a spanning tree of each connected component of the graph G; equivalently, it is a maximal cycle-free sub-graph.
- The formula used to count the number of spanning trees of a complete graph is known as Cayley's formula: a complete graph on n labeled vertices has n^(n-2) spanning trees (for example, 4 vertices give 4^2 = 16 spanning trees).



Monday, August 19, 2013

What is meant by multi-destination routing?

- Many routing algorithms have been devised to aid routing under different conditions.
- Effective routing algorithms have been developed that are capable of routing messages from one source node to a number of receiving nodes, i.e., multiple destination nodes.
- These are termed multi-destination routing algorithms, and the process is therefore called multi-destination routing.
- This type of routing was developed to minimize the network cost, or NC.
- Network cost is defined as the sum of the weights of all the links that make up the routing path.
- Many heuristic algorithms are available for determining the minimum-NC path.
- The underlying problem falls into the category of NP-complete problems.
- Heuristics exist based on variations of the traveling salesman problem and the MST (minimum spanning tree).
- Both of these use global information.
- Another set of heuristics uses only shortest paths to reach the destinations (see the sketch after this list).
- The MST algorithm exhibits the best worst-case performance.
- However, one study found that the simpler heuristics are more effective in practice.
- The network cost (NC) is often compared with the destination cost (DC).
- Destination cost is the sum of the costs of all the shortest paths that lead to the destinations.
- A family of algorithms has been developed for trading off between these two costs, NC and DC.
- In a network that supports cooperative communication, the sender of the transmitted data cannot be treated as a single node.
- This calls for a re-examination of the traditional link concept.
- Any routing scheme that depends on that link concept then needs to be reconsidered.
- Also, the potential performance gain resulting from cooperative communication needs to be exploited.
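
As a hedged illustration of the shortest-path family of heuristics (the graph, weights, and node names below are arbitrary examples), the following Python sketch routes from one source to several destinations by taking the union of the individual shortest paths, then computes the resulting NC and DC:

    import heapq

    def dijkstra(graph, source):
        """Return (distance, predecessor) maps for nodes reachable from source."""
        dist = {source: 0}
        prev = {}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
        return dist, prev

    def multi_destination_route(graph, source, destinations):
        dist, prev = dijkstra(graph, source)
        links = set()                      # union of links used by all paths
        dc = 0                             # destination cost: sum of path costs
        for dest in destinations:
            dc += dist[dest]
            v = dest
            while v != source:             # walk the shortest path back to source
                links.add(frozenset((prev[v], v)))
                v = prev[v]
        weight = {frozenset((u, v)): w for u in graph for v, w in graph[u]}
        nc = sum(weight[link] for link in links)   # network cost: unique links
        return links, nc, dc

    # Example graph as adjacency lists: node -> [(neighbor, link weight), ...]
    graph = {
        "A": [("B", 1), ("C", 4)],
        "B": [("A", 1), ("C", 2), ("D", 5)],
        "C": [("A", 4), ("B", 2), ("D", 1)],
        "D": [("B", 5), ("C", 1)],
    }
    links, nc, dc = multi_destination_route(graph, "A", ["C", "D"])
    print("links used:", links, "NC:", nc, "DC:", dc)   # NC: 4, DC: 7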

- Routing often gets complicated in networks where the selection of paths is no longer the job of a single entity.
- Rather, a number of entities are involved in selecting the paths.
- Multiple entities can even select specific parts of a path.
- If each entity chooses its paths to optimize its own objectives, this can lead to inefficiency or serious complications in the network, since those objectives may conflict with the objectives of the other entities.
- This becomes clear from the following example: consider traffic moving in a system of roads.
- Here each driver selects a path that minimizes only his or her own travel time.
- With this kind of routing, the equilibrium routes end up longer than optimal for almost all drivers.
- This is often termed Braess's paradox.
- Another example is the routing of AGVs (automated guided vehicles) on a terminal.
- To prevent simultaneous use of the same part of the infrastructure, reservations are made.
- This is called context-aware routing.
- The internet is divided into a number of divisions called ASs, i.e., autonomous systems, such as ISPs.
- Each of these systems controls the routes within its own network at various levels.

The following steps are involved in multi-destination routing across autonomous systems:
1. The BGP protocol is used to select the AS-level paths.
2. The BGP protocol produces the sequence of autonomous systems through which the packet flow will take place.
3. The neighboring ASs offer each AS multiple paths from which it can choose; paths are selected based on the relationships between the neighboring systems.
4. Each selected AS-level path corresponds to multiple router-level paths.


Friday, August 16, 2013

What is meant by flow based routing?

- A routing algorithm that considers the flow in the network is known as flow-based routing.
- It takes into consideration the amount of traffic flowing in the network before making a decision about the outgoing link over which a packet should be sent.
- The key to a successful implementation of flow-based routing is the ability to characterize the nature of the traffic flow with respect to time.
- For any given line, if we know its capacity and average flow, we can compute the mean packet delay of that line using queueing theory.
- This is the basic idea behind the implementation of this algorithm.
- This idea reduces the size of the problem: only the minimum average delay for the subnet has to be calculated, and nothing else.
- Thus, flow-based routing considers both the load and the topology of the network, while other routing algorithms do not.
- In a few networks, the mean data flow between two nodes may be predictable as well as relatively stable.
- Under such conditions, the average traffic between the two points is known, and mathematical analysis of the flow is possible.
- This calculation can be used to optimize the routing protocol.
- The flow-weighted average of the line delays can be calculated in a straightforward way, which gives the mean packet delay of the entire subnet.
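
As a sketch of that calculation: the classic analysis models each line as an M/M/1 queue, where a line with capacity C bits/sec, mean packet length 1/mu bits, and a flow of lambda packets/sec has a mean delay of 1/(mu*C - lambda) seconds, and the subnet-wide mean delay is the flow-weighted average of the line delays. The line capacities and flows below are arbitrary examples:

    # Minimal sketch of the flow-based delay calculation, modeling each
    # line as an M/M/1 queue (capacities and flows are arbitrary examples).

    mean_packet_bits = 800.0       # mean packet length, i.e. 1/mu in bits

    # Per line: (capacity in bits/sec, flow in packets/sec)
    lines = [
        (20_000, 14.0),
        (20_000, 8.0),
        (50_000, 30.0),
    ]

    total_flow = sum(flow for _, flow in lines)

    mean_delay = 0.0
    for capacity, flow in lines:
        service_rate = capacity / mean_packet_bits   # mu*C, packets/sec
        line_delay = 1.0 / (service_rate - flow)     # M/M/1 mean delay, seconds
        mean_delay += (flow / total_flow) * line_delay

    print(f"mean packet delay over the subnet: {mean_delay * 1000:.2f} ms")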

The flow-based routing algorithm requires the following things in advance:
- The topology of the subnet
- The traffic matrix
- The capacity matrix
- A routing algorithm
- Information-flow-based routing algorithms are commonly used in wireless sensor networks.
- These days, a measure of information is used as a criterion for analyzing the performance of flow-based routing algorithms.
- One research effort has argued that since a sensor network is driven by the objective of estimating a 2D random field, the information flow must be maximized over the entire field and over the sensors' lifetime.
In response, two types of flow-based routing algorithms have been designed, namely:
  1. Maximum information routing (MIR) and
  2. Conditional maximum information routing (CMIR)
- Both of these algorithms have proved to be quite significant when compared with the existing algorithm, maximum residual energy path (MREP).

About MREP Algorithm

 
- MREP proves to be quite effective in conserving energy.
- The limited battery energy is treated as the most important resource.
- To maximize the network lifetime, energy consumption has to be balanced across the nodes, in proportion to their energy reserves.
- This is better than routing so as to minimize the absolute power consumed.

About MIR Algorithm

 
- The idea behind the MIR algorithm is that the nodes are not all equal.
- For example, two nodes that are very close together may not provide twice as much information as a single isolated node.
- Therefore, preference is given to the nodes that provide more information.
- To achieve this preference, a penalty is added to each node according to its contribution.
- Dijkstra's algorithm is used to compute the shortest path.
- This allows the data to be sent onward based on both the information at the origin and the power consumed.


About CMIR Algorithm

- This is a hybrid algorithm: it uses MIR to some extent and then uses the MREP algorithm for the rest of the cycle.
- This hybrid version outperforms the two standalone algorithms above.


Thursday, May 30, 2013

Preparing test cases so that a subset can be extracted for use as well ..

Do a good job during initial design and preparation, and it can be of big help later. You must have heard this many times, but it is true, and we recently saw an example of how relevant it actually is. Consider that you have a test plan and test cases for the testing of a feature. It is a large feature with many workflows and a number of combinations of inputs and outputs. These different input and output variables are what make testing the feature difficult and lengthy, since each variable has to be tested along with its possible combinations. Overall, any big software product can have many such features that need to be tested thoroughly, and possibly tested many times during the development of the product. So, if there is a well-organized set of test cases for the feature, testing it on a regular basis becomes much easier.
Now, let us consider the cases where the feature does not need complete testing. If complete testing of the feature takes more than 2 days, there will be times when you can spend no more than a couple of hours on it. If you have built automation for these test cases, the run would not take 2 days, more like the 2 hours the situation allows; but building automation also takes time and effort, and most teams cannot build automation for all of their test cases. So, assuming you do not have an automation run for the test cases, let's continue with this post.
As you get closer to your release dates, you cannot afford to spend the full testing cycle. You diminish the risk by controlling the changes that are made, and then do reduced testing of the features given the constraints on available time. Then there is the concept of dot releases or patches. These typically have far less time available for the entire project from start to end, and yet there needs to be a quick check of the application, including its features, before there can be a release. Another example is when the team releases the same application in multiple languages and operating systems. If the same application is released across Windows XP, Windows 7, Windows 8, and Mac OS, and in a number of languages (large software products are released in more than 25 languages each), it is not realistic to test each feature on all of these languages and operating systems in full detail. In fact, most testing teams do a lot of optimization of these testing strategies and try to do a minimum of testing on some of these platforms.
But how do you get there? When the testing team prepares its test cases, it needs to think of these situations. The tendency is to create test cases that flow from one to the next and are meant to be followed in sequence. But to handle the kinds of situations above, the test cases need to be structured in such a way that they can be broken up into pieces when testing must be done in a shorter period of time, while still leaving the team fairly confident that testing has been done to the extent required. It also requires this breakup information to be recorded in such a way that a tester who was not involved in the preparation of the test cases can later extract a subset for one of the special needs mentioned above (which can happen all the time; the original tester may no longer be part of the team, or even of the organization). A sketch of one way to tag and extract such subsets follows.
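
As one hedged illustration (the tags, time estimates, and cases below are all hypothetical), each test case can carry its own scope metadata so that a later tester can mechanically extract a smoke, patch, or platform subset:

    # Minimal sketch of structuring test cases so subsets can be extracted.
    # The tag names and cases are hypothetical examples; the point is that
    # each case is independent and carries its own scope metadata.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        case_id: str
        description: str
        minutes: int                   # rough execution estimate
        tags: set = field(default_factory=set)

    FEATURE_CASES = [
        TestCase("TC-01", "Default workflow, happy path", 15, {"smoke", "patch", "full"}),
        TestCase("TC-02", "All input combinations, grid A", 120, {"full"}),
        TestCase("TC-03", "Localized date/number formats", 45, {"full", "locale"}),
        TestCase("TC-04", "Save/export on each platform", 30, {"full", "platform"}),
        TestCase("TC-05", "Upgrade from previous version", 20, {"patch", "full"}),
    ]

    def select(cases, tag, budget_minutes=None):
        """Pick the cases carrying a tag, optionally trimmed to a time budget."""
        chosen = [c for c in cases if tag in c.tags]
        if budget_minutes is not None:
            trimmed, used = [], 0
            for c in chosen:
                if used + c.minutes <= budget_minutes:
                    trimmed.append(c)
                    used += c.minutes
            chosen = trimmed
        return chosen

    # A two-hour pass before a dot release:
    for case in select(FEATURE_CASES, "patch", budget_minutes=120):
        print(case.case_id, case.description)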


Monday, June 25, 2012

How is optimization of installation testing done?


It becomes very important to carry out installation testing towards the end of the release process. Subjecting a software system or application to installation testing means it can be released to the customers without any hesitation.
Installation testing is what lets you make sure that the software system or application will install successfully on the user's system in the first go itself. Without it, the user might need to install the software again and again, which in turn will frustrate the user, and you will lose customers as a consequence. Performing installation testing therefore becomes absolutely necessary.
In this article we discuss the need for optimizing installation testing and how to optimize it. By optimization of the installation testing we mean modifying it so that some aspects of it use fewer resources and work more efficiently.

What is the need to optimize?


- When you optimize a testing process, it runs more rapidly and uses much less memory.
- Although the process is named "optimization", it can rarely produce a truly optimal system.
- The amount of time taken by an installation test case can often be reduced by making it consume more memory (see the sketch below).
- Conversely, in cases where memory is limited, slower algorithms that use less memory can be chosen.
- While optimizing installation testing, keep in mind that there is no single design that fits all test cases.
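
As a generic illustration of that time-for-memory trade (the function below is a hypothetical stand-in for any expensive, repeatable check that many installation test cases share, such as hashing an installer payload):

    # Minimal sketch of trading memory for time with a cache.
    from functools import lru_cache
    import hashlib

    @lru_cache(maxsize=None)           # memory spent here buys repeat speed
    def payload_checksum(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # The first call pays the full cost; later test cases hit the cache.
    # payload_checksum("installer.bin")
    # payload_checksum("installer.bin")    # cached, near-instant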

At what levels is installation testing optimized?


The installation testing process, much like other processes, is optimized at the following levels:

Design Level:
- This is the highest level of the optimization process.
- Design is the aspect whose optimization can make the resources be utilized in a much better way.
- The implementation of the optimized design is made more efficient by using efficient algorithms, which in turn benefit from code of good quality.
- Naturally, the architecture of the test design has an overwhelming effect on its performance.
- In some cases the optimization process may also involve the following:
(a) More elaborate algorithms
(b) Special tricks
(c) Special cases
(d) Complex trade-offs
- A fully optimized installation testing process becomes difficult to comprehend and may also contain more errors than the un-optimized process.

Source code Level:
- This level involves rooting out bad quality code from the test cases, which also improves performance.
- When malfunctioning code is removed, the obvious slowdowns are avoided.
- However, it is possible that further optimization of a test case may decrease its maintainability.
- Today, though, there are tools available called "optimizing compilers" that take on some of this work.

Compile Level:
- At this level, the optimizing compiler takes over: the test code is optimized as far as the compiler is able to take it.

Assembly Level:
- This is the lowest level of the optimization process and involves taking advantage of the full repertoire of system instructions.
- This is the reason why many operating systems used on embedded systems are written in assembly code.
- Much of the test code is compiled from a high-level language to assembly language, and from there it is manually optimized.

Run time:
- Run-time optimizations are performed by assembly programmers and by just-in-time compilers, which can exceed the capability of static compilers.

Two types of optimization can be performed on the installation testing process, namely:
  1. Platform dependent optimization (uses specific properties of one platform) and
  2. Platform independent optimization (effective on all platforms)


Friday, June 22, 2012

How is optimization of smoke testing done?


Smoke testing, being one of the quick and dirty software testing methodologies, needs to be optimized. This article focuses on the need for optimizing smoke tests and on how to optimize them.

Let us first look at the scenario behind the need to optimize the smoke tests carried out on a software system or application.
- In most software systems or applications, the code can be executed either directly from the command prompt or via a sub-routine of a larger system. This is called the command prompt approach.
- The code is designed in such a way that it is self-aware as well as autonomous.
- By the code being self-aware, we mean that if anything goes wrong during execution, the code should explain it.
- Two types of problems are commonly encountered during testing:
  1. The code was compiled with too much optimization, or
  2. The path does not point to the data directory properly.

Steps for Optimization of Smoke tests


- In order to confirm that the code was compiled correctly, one should run the test suite properly at least once.
- The first step towards optimizing the smoke test is to run it and then examine the output.
- There are two possibilities: either the code will pass the test or it won't.
- If it does not pass, there are two places where your smoke test may have gone wrong (a minimal runner that separates the two cases is sketched below):
  1. Compiler bugs and errors, or
  2. The path has not been set properly.
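
As a hedged illustration, a minimal smoke runner might distinguish the two failure cases like this (the binary name, its --selftest flag, and the data directory are all hypothetical):

    import os
    import subprocess

    BINARY = "./build/app"       # hypothetical compiled binary under test
    DATA_DIR = "./testdata"      # hypothetical data directory the code needs

    def smoke():
        if not os.path.isdir(DATA_DIR):
            return "FAIL: data path not set properly"
        try:
            # "--selftest" is a hypothetical flag for this sketch.
            result = subprocess.run([BINARY, "--selftest"], timeout=60,
                                    capture_output=True, text=True)
        except FileNotFoundError:
            return "FAIL: binary missing - check the build setup"
        except subprocess.TimeoutExpired:
            return "FAIL: binary hung - suspect the compiled build"
        if result.returncode != 0:
            # A crash or nonzero exit points at the build itself, e.g. a
            # compiler bug or an over-aggressive optimization level.
            return "FAIL: binary exited with code %d" % result.returncode
        return "PASS"

    if __name__ == "__main__":
        print(smoke())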

Compiler Bugs and Errors


Let us take up the first possibility, i.e., compiler bugs and errors.
- It is possible that correct code was not produced by the compiler.
- In some cases of serious compiler bugs, there might even be a hardware exception; these kinds of errors are caused mainly by compiler bugs.
- In this case, the optimization level of the code should be reduced.
- After this, the code should be recompiled and observed again.
- Optimization is good for code, but if it is of an aggressive kind, it will definitely cause problems.

If the path is not set properly


- If the code is not able to trace its data files, it will definitely show an error; this happens because the path has not been set properly.
- In such cases, you need to check which path it was, fix it, recompile the whole code, and execute it once again.

When does a system or an application crash?


Don’t think that the software system or application crashes only when there has aggressive optimization of the code! 
- Crashes also happens with those programs in which there is no optimization of the code. 
But in such cases, only compiler errors can be blamed since it happens only if the compiler has not been set up properly on your system. 
- If no program executes, it means that your compiler is broken and you need to talk about this to your system administrator.

How to optimize smoke tests?


- A lot of help comes from MPGO (managed profile guided optimization).
- The best way to optimize any kind of testing is to maintain a balance between automated and manual testing.
- You run the MPGO tool with the necessary parameters for the test and then run the test; the test will then be optimized.
- It is actually the internal binaries that are optimized, either fully or partially.
- Partially optimized binaries are deployed only in automated smoke testing.


Monday, January 30, 2012

What are different aspects of iterative relaxation method?

This article explains relaxation in terms of iterative methods; it is all about iterative techniques for solving systems of equations. Relaxation methods are iterative methods defined in numerical mathematics.

They are extensively used for solving systems of equations, including the following types:

- Large sparse linear systems
Relaxation methods are used to solve the large sparse linear systems that arise as finite difference discretizations of differential equations.

- Linear equations
Relaxation methods are used for solving linear equations, for problems such as linear least squares problems.

- Systems of linear inequalities
Iterative or relaxation methods effectively solve systems of linear inequalities, such as those that arise in linear programming.

- Non-linear systems of equations
These days, iterative or relaxation methods have also been developed for solving non-linear systems of equations.

SIGNIFICANCE OF ITERATIVE RELAXATION METHODS

1.) Relaxation methods, or iterative methods, prove to be a very effective and important methodology for solving linear systems of equations, especially the ones used to model elliptic partial differential equations such as Poisson's equation and Laplace's equation along with its generalizations.

2.) These linear systems of equations are generally used to describe boundary value problems, in which the value of the solution function is specified on the boundary of a given domain.

The basic problem is to compute a solution within the boundaries. People often confuse iterative relaxation methods with relaxation methods for mathematical optimization.

The iterative relaxation techniques discussed here are not to be confused with relaxations in mathematical optimization, which approximate a difficult problem by a simpler one whose relaxed solution provides information that can be taken into account for the original problem.

The relaxation method for two-dimensional problems readily generalizes to other numbers of dimensions.

- The relaxation iterative methods converge under fairly general conditions.
- But these methods make slow progress as compared to competing methods.
- The study of iterative relaxation methods is an essential part of linear algebra, since the transformations underlying relaxation methods provide excellent preconditioners for newer methods.
- In some cases, multigrid methods can be used to accelerate them.
- It is a common problem in path-oriented testing to generate the test data required to make the program follow a given path.
- This problem can also be tackled using the iterative relaxation method (a sketch of the basic relaxation iteration follows).
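
As a minimal sketch of the basic relaxation idea (the system below is an arbitrary, diagonally dominant example, a condition under which this simple Jacobi iteration is known to converge):

    # Minimal sketch of Jacobi relaxation for a linear system A x = b.
    A = [[4.0, -1.0, 0.0],
         [-1.0, 4.0, -1.0],
         [0.0, -1.0, 4.0]]
    b = [15.0, 10.0, 10.0]

    x = [0.0, 0.0, 0.0]                 # initial guess
    for sweep in range(100):
        new_x = []
        for i in range(len(b)):
            # Solve row i for x[i], using the previous sweep's other values.
            s = sum(A[i][j] * x[j] for j in range(len(b)) if j != i)
            new_x.append((b[i] - s) / A[i][i])
        if max(abs(n - o) for n, o in zip(new_x, x)) < 1e-10:
            break                       # successive sweeps agree: converged
        x = new_x

    print(x)                            # approximate solution of A x = b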


Tuesday, January 24, 2012

What are different characteristics of Capability Maturity Model (CMM)?

Capability Maturity Model, or CMM as it is often abbreviated, is a development model created after a prolonged study of data collected from organizations all over the world.

Characteristics of Capability Maturity Model
1. The development of this model was funded by the USDD (United States Department of Defense).

2. The Capability Maturity Model was developed at the Software Engineering Institute, or SEI as it is popularly known.

3. The term "maturity" emphasizes process optimization and level of formality.

4. Processes are optimized, moving from ad-hoc practices to formally defined steps.

5. Nowadays this model is being used effectively for the management of result metrics.

6. The Capability Maturity Model has proved to be a great help in the active optimization of processes.

7. This model allows improvement in the development processes of an organization.

8. It is an effective and good approach towards improving any organization's development processes.

9. This model is firmly based upon the framework of process maturity, which was developed in 1989.

10. Initially it was used for objective assessment of the processes carried out by government contractors, to keep track of projects.

11. CMM is not only used in the field of software engineering, but is also applied to the organizational processes of a business.

12. It is used in other fields such as:
- Software development
- System engineering
- Software maintenance
- Project management
- System acquisition
- Risk management
- Information technology
- Human capital management and
- Services

Where is Capability Maturity Model used?

The Capability Maturity Model is being used extensively in various organizations: in commerce, in government offices, in software development organizations, and across industry.

What was the need for Capability Maturity Model?
- In the 20th century the use of computers became widespread.

- Computerized processes were thought to be a less costly, more effective, and more flexible way to carry out tasks.

- As more and more organizations started adopting computerized processing systems, the demand for software development rose.

- The projects attempting to meet this demand met with many failures along the way.

- Computers were a new technology at the time, so there was a lot of pressure on developers to deliver quality products within a stipulated period of time.

- The US military was in disarray because all their projects were running over budget and behind schedule.

- So, in order to find the reason behind all this, they funded a study at SEI, and active study started there.

- It was Watts Humphrey who actually came up with the idea of CMM.

- He based his approach on the evolution of software development practices.

- He concentrated on all the processes as a whole instead of concentrating on just one software development process.

- Since then the CMM has become popular among various organizations and is used as a powerful tool for improving the overall performance of a business.

Though CMM proved to be a very effective tool for business, it often caused problems in software development.

- CMM did not allow the use of multiple software development practices. It was superseded by CMMI.

- Even today, the Capability Maturity Model is used as a model capable of handling general processes in the public domain.

Some Important Facts About CMM
1. CMM still maintains its position as a model of process maturity.

2. CMM provides a place to start development.

3. It uses a common language, and the development is based upon a shared vision.

4. It effectively develops a framework for actions according to their priority.

5. For an organization, it defines the ways to improve.

6. It is used as an aid for better and more effective understanding.

CMM has 5 aspects:
- Maturity level
- Key process area
- Goal
- Features
- Practices.


Tuesday, June 28, 2011

What are Search Engine Positioning and Organic Search Engine Optimization?

Search Engine Positioning (SEP) includes:
- defining the business,
- setting the targets,
- translating the business definitions into search phrases,
- structuring the site around search phrases,
- designing pages to attract traffic,
- submitting the site to directories and search engines, and
- linking the site from other sites and monitoring the results.

Search engine positioning is built around the exact keywords, key phrases, or search terms that your prospective customers will typically type into the search box when looking for a particular product or service.

Organic search engine optimization is a process that improves the unpaid listings of a website in search engines. It involves optimizing the website in a natural way, both on page and off page. The benefits of organic SEO are:

- Organically optimized websites are clicked more often. People tend to click on sites shown as part of the regular results, not those shown as paid results (and search engines visibly differentiate between the two).
- Organically optimized search results last longer. A paid result, or one based on some sort of short-term strategy, will fail the next time the search engine logic is changed or the money runs out. If the site ranks because of its own worth, its search engine ranking will only keep increasing.
- Organically optimized search results build greater trust. Putting up real and relevant content ensures that your readers will bookmark your blog, share your blog, and trust you more.
- Paid listings are more expensive than organic SEO results. For terms where you want to rank high, paid clicks are likely to be very expensive, and trying to get more visitors through this route can be costly.


Monday, June 27, 2011

What are different search engine optimization techniques?

Search Engine Optimization increases your website's visibility in search results. SEO techniques are the tasks performed by a search engine optimization company when employed by a client who desires high search engine positions to attract targeted traffic, with the intention of increasing conversion rates and brand awareness.

- The search engine optimizer should have a long-term outlook, as SEO algorithms keep changing.
- Patience is very important in an SEO job, and having proper knowledge of the SEO company is also important.
- Build a good website with good and unique content.
- A site map should be included that shows the hierarchy of the site.
- The URLs should be SEO friendly.
- Select proper keywords and keyword density.
- The page title and meta tags of each page should be unique.
- A pay-per-click account can be opened.
- Write keeping the users in mind and targeting them.
- Keywords should be used as anchor text, which helps indicate what the linked page is about.
- The links should be built intelligently.
- Participate with other blogs.
- Involve yourself with social media sites and market intelligently.
- Links should be appropriate; avoid excessive linking.
- Develop relationships with other sites.


Saturday, June 11, 2011

What is Search Engine Optimization? How search engines work?

Search Engine Optimization is a web marketing technique. It is a normal tendency of users to look at the top websites, so it becomes really important that your website appears in the top list. This is where search engine optimization comes into the picture. It is a technique that places your website higher than other websites when a particular search is made. SEO ensures that web pages are accessible to search engines and are more focused.
Search engine optimization is text driven.

The benefits of search engine optimization include:
- SEO helps to put your website in the top rankings.
- SEO helps to build the company's image and reputation.
- SEO helps in getting more business opportunities.
- SEO saves both time and money.
- SEO improves your competitive edge.
- SEO enhances sales.
- SEO improves reach to the target audience.
- SEO improves the return on investment.

The life cycle of search engine optimization includes:
- Generating key phrases.
- Cost-per-click campaigns.
- Prioritizing the phrases and traffic.
- Feeding phrases into SEO.
- Competitor analysis and a ranking audit.
- Auditing ranked competitor sites.

Some important points to remember about search engine optimization are:
- You have to be patient. There are no shortcuts available that will put your website in the top list.
- To be found on the net, the most important thing to remember is to write good and fresh content.
- Write content that is grammatically and factually correct.
- Page titles should be simple yet descriptive and easy to find, because page titles link to your site from search engine listings.
- Use the h1 - h6 elements for headings.
- Use search engine friendly, human-readable URLs.
- Incoming links are very, very important for SEO.
- Use valid, semantic, lean, and accessible markup.
- Submitting a site to directories and search engines can be useful.

