Sunday, May 26, 2013

Where are artificial neural networks applied?


Artificial neural networks have been applied to problems in diverse fields such as engineering, finance, physics, medicine, and biology. 
- All these applications rest on the fact that neural networks can simulate some capabilities of the human brain. 
- They are particularly useful in classification and prediction problems. 
- These networks belong to the class of non-linear, data-driven, self-adaptive approaches. 
- They are a powerful tool when the underlying relationship in the data is not known. 
- They readily recognize and learn patterns and can correlate input sets with result values.
- Once trained, they can be used to predict outcomes for new data (a minimal training sketch follows the application list below). 
- They work even when the data is unclear, i.e., noisy and imprecise. 
- This is why they are an ideal tool for modeling agricultural data, which is often very complex. 
- Their adaptive nature is their most important feature.
- Because of this feature, models developed using ANNs are appealing when data is available but the problem itself is poorly understood.
- These networks are particularly useful in areas where statistical methods can also be employed. 
- They have uses in various fields:

    1. Classification Problems:
a)   Identification of underwater sonar targets
b)   Speech recognition
c)   Prediction of the secondary structure of proteins.
d)   Remote sensing
e)   Image classification
f)    Speech synthesis
g)   ECG/ EMG/ EEG classification
h)   Data mining
i)     Information retrieval
j)    Credit card application screening

  2. Time-series Applications:
a)   Prediction of stock market performance
b)   ARIMA time-series models
c)   Robot/machine control and manipulation
d)   Financial, engineering, and scientific time-series forecasting
e)   Inverse modeling of the vocal tract

  3. Statistical Applications:
a)   Discriminant analysis
b)   Logistic regression
c)   Bayesian analysis
d)   Multiple regression

  4. Optimization:
a)   Multiprocessor scheduling
b)   Task assignment
c)   VLSI routing

  5. Real-world Applications:
a)   Credit scoring
b)   Precision direct mailing

  6. Business Applications:
a)   Real estate appraisal
b)   Credit scoring: used to determine whether a loan should be approved based on the applicant's information (the applicant's details are the inputs; the approval decision is the output).

  7. Mining Applications:
a)   Geo-chemical modeling using neural pattern recognition technology.

  8. Medical Applications:
a)   Hospital length-of-stay prediction: the CRTS/QURI system used a neural network to predict the number of days a patient would have to stay in hospital. Its major benefits were cost savings and better patient care. The system required the following seven inputs:
- Diagnosis
- Complications and comorbidity
- Body systems involved
- Procedure codes and relationships
- General health indicators
- Patient demographics
- Admission category

  9. Management Applications: jury summoning prediction: a system was developed to predict the number of jurors actually required. Two inputs were supplied: the type of case and the judge number. The system is reported to have saved around 70 million.
  10. Marketing Applications: a neural network was developed to improve the direct-mailing response rate by selecting the individuals most likely to respond to a second mailing. Nine variables were given as input, and it saved around 35% of the total mailing cost.
  11. Energy cost prediction: a neural network was developed to predict the price of natural gas for the following month, achieving an accuracy of about 97%. 
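
The sketch below illustrates, in very rough terms, the classification use case described above: a small feed-forward network is trained on two noisy clusters of points and then used to predict the class of unseen inputs. It assumes scikit-learn is available; the data, layer sizes, and class labels are invented purely for illustration and are not taken from any of the systems listed above.

```python
# Hypothetical sketch: training a small neural network classifier on noisy data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Two noisy, imprecise classes -- the kind of data the post says ANNs tolerate well.
class_a = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
class_b = rng.normal(loc=3.0, scale=1.0, size=(100, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 100 + [1] * 100)

# One hidden layer of 8 units; training learns the input-to-output mapping.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
model.fit(X, y)

# Once trained, the network predicts outcomes for new, unseen inputs.
print(model.predict([[0.5, 0.2], [2.8, 3.1]]))   # expected: class 0, then class 1
```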


Friday, May 3, 2013

What is a Dispatcher?


A number of scheduler types are available to suit the different needs of different operating systems. There are three categories of schedulers:
  1. Long-term schedulers
  2. Medium-term schedulers
  3. Short-term schedulers
Apart from the schedulers, one more component is involved in the scheduling process: the dispatcher. 
- It is the dispatcher that gives a process control of the CPU. 
- The process that receives this control is selected by the short-term scheduler. 
- This whole process involves the following three steps:
  1. Switching the context
  2. Switching to user mode
  3. Jumping to the proper location in the user program from where it has to be restarted
- The dispatcher reads the saved program counter and, accordingly, fetches instructions and loads data into the registers. 
- Unlike other system components, the dispatcher needs to be very fast, since it is invoked on every switch that occurs. 
- Whenever a context switch occurs, the processor remains idle for a very small period of time. 
- Hence, unnecessary context switches should be avoided. 
- The dispatcher takes some time to stop one process and start running another. 
- This time is called the dispatch latency.

- Scheduling and dispatching are complex processes that are closely interrelated. 
- Both are essential to the operation of the operating system. 
- Modern processors offer architectural extensions that provide multiple banks of registers.
- These register banks can be swapped in hardware, so a certain number of tasks can retain their full register sets. 
- Whenever an interrupt triggers the dispatcher, it hands over the full register set of the process that was executing when the interrupt occurred. 
- The program counter is not included here. 
- Therefore, the dispatcher must be written carefully so that it stores the current register state as soon as it is triggered. 
- In other words, the dispatcher itself has no immediate context of its own. 
- This saves it from the same problem. 

The Dispatch Process

Below, we describe in simple terms what the process actually looks like (a toy sketch in code follows this list).
  1. The processor executes the program that currently holds the context. This program uses the stack base, flags, program counter, registers, and so on, with the possible exception of a reserved register native to the operating system. The executing program has no knowledge of the dispatcher.
  2. A timed interrupt fires for the dispatcher. The program counter jumps to the address listed in the interrupt vector, which marks the beginning of the dispatch subroutine. The dispatcher then deals with the stack, registers, etc. of the program that was interrupted.
  3. The dispatcher, like any other program, consists of sets of instructions that operate on the registers of the current program. These instructions have access to the state of the previously executing program, and the first few of them store that program's state.
  4. The dispatcher then determines which program should be given the CPU next. It clears out the details of the previously executing state and fills in the details of the next process to be executed.
  5. The dispatcher jumps to the address held in the program counter and establishes a full context on the processor.
- The dispatcher itself does not really require registers, since its only job is to write the current CPU state to a predetermined memory location. 
- It then loads another process into the CPU from another predetermined location. 
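
The toy sketch below mirrors the steps just described: save the outgoing context to a predetermined location, pick up the next process, restore its saved context, and resume at its program counter. The process structure, register names, and values are all invented for illustration; a real dispatcher does this in privileged kernel code, not in Python.

```python
# Hypothetical simulation of the dispatch steps described above.
from dataclasses import dataclass, field

@dataclass
class Process:
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

class Dispatcher:
    def __init__(self):
        self.saved_contexts = {}   # predetermined memory locations, keyed by pid

    def dispatch(self, current: Process, next_proc: Process) -> Process:
        # 1. Context switch: write the current CPU state to memory.
        self.saved_contexts[current.pid] = (current.program_counter,
                                            dict(current.registers))
        # 2. Restore the chosen process's saved state, if it has one.
        if next_proc.pid in self.saved_contexts:
            pc, regs = self.saved_contexts[next_proc.pid]
            next_proc.program_counter, next_proc.registers = pc, regs
        # 3. "Jump" to where the next process has to be restarted.
        return next_proc

p1 = Process(pid=1, program_counter=120, registers={"ax": 7})
p2 = Process(pid=2, program_counter=40)
running = Dispatcher().dispatch(p1, p2)
print(running.pid, running.program_counter)   # 2 40
```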


Friday, March 8, 2013

What are the benefits of agile process improvement?


Today's agile methodologies are the result of experience gained from real-life projects undertaken by leading software professionals. These professionals were well acquainted with the challenges and limitations that traditional development methodologies imposed on various projects. 

- Agile process improvement directly addresses the issues of traditional development methods, both in terms of process and the philosophy behind it. 
- It provides development teams with a simple framework that suits varying scenarios while focusing on fast delivery of business value. 
- With these benefits, organizations have been able to reduce the overall risk associated with project development. 
- Agile process improvement accelerates the delivery of initial business value. 
- This is achieved through a process of constant planning and feedback. 
- It ensures that business value is maximized throughout the development process. 
- With its iterative planning and feedback loop, teams can keep the software aligned with business needs as required. 
- Another major benefit is that the development process can adapt to ever-changing process and business requirements. 
- By measuring and evaluating status based on the amount of work completed and tested, more accurate visibility is obtained. 
- The final result is a software system that addresses customer and business requirements much better. 
- By following an agile process improvement program, deployable, tested, working software is delivered incrementally, and increased visibility, adaptability, and value are delivered earlier in the software development life cycle. 
- This goes a long way toward reducing the risk associated with the project. 
- There are a number of problems with the traditional development methods. 
- Research has found that waterfall-style development was a major contributing factor in software failures. 
- Other software could not meet the real needs. 
- Such projects were unable to deal with changing requirements and late integration. 
- All of this has shown that traditional development methods are a risky as well as costly way to build software. 
- Thus the majority of the industry has turned towards agile development.
- There is continuous feedback from customers and face-to-face communication among all stakeholders. 
- The business needs that agile process improvement addresses are ever-changing. 
- Organizations want quick results from what they invest. 
- They want their improvement programs to keep pace with these changing business needs. 
- Agile process improvement comprises several mechanisms through which all of this can be achieved. 
- Working iteratively lets you deliver the product to the customer ahead of the deadline. 
- It lets you deliver only what is actually required, i.e., it keeps you from wasting time on things that are not needed. 
- Also, early and regular customer feedback lets you deliver the product with the quality the customer desires.
- Agile projects are distributed in nature, i.e., the work is divided among people. 
- Agile software development is still a maturing process, and it needs further improvement for the betterment of the software industry. 
- Agile process improvement is one way to do this.


Sunday, December 9, 2012

What is Data Driven Testing technology in IBM Rational Functional Tester?


- Data-driven testing (DDT) involves testing software systems or applications using multiple sets of input and expected output data stored in a data table.
- The testing process is carried out in an environment where the controls and settings are not hard-coded, i.e., where they remain modifiable.
- Input values from the rows and columns of the table are supplied to the program, and the output obtained is compared with the expected output of the corresponding row and column.
- The values stored in the table typically correspond to input partitions and boundary values.
- When the control methodology is applied, the configuration for the test is read from a database.
- Several methodologies have been developed for implementing data-driven testing.
- These methodologies co-exist because they differ in the effort they demand for creation and subsequent maintenance.
- The main advantage of data-driven testing is the ease with which inputs can be added to the data table when new partitions are discovered or added to the system under test.
- Another point about data-driven testing is that it costs more to implement manually; an automated implementation is comparatively cheaper.
- Data-driven testing involves creating the test scripts together with the related data sets within a framework.
- Reusable test logic is obtained from this framework and is used to reduce maintenance and improve test coverage.
- Inputs and results are stored in one or more central data sources, commonly called databases, though the organization of the data and the actual data depend on the implementation.
- The data used to drive the tests consists of both the input values and the expected (or verified) output values.
- The data can be harvested from a running system via a sniffer or a purpose-built custom tool.
- Data-driven testing then plays back the harvested data, thus acting as a powerful automated tool for regression testing.
- The following things are coded in the test script (a tool-agnostic sketch of this loop appears at the end of this post):
1. Navigation through the application
2. Reading data sources
3. Logging test status and related information

- Any variability that has the potential to change is taken out of the test logic or scripts and moved to a set of external assets.
- Such a set is called a test data set or a configuration.
- Data-driven testing is one of the key technologies implemented by IBM Rational Functional Tester.
- Hard-coded scripts have a few limitations.
- Whenever a test script is recorded with literal values, the data gets hard-coded in that script.
- Such a script can then be executed with only one test case, i.e., one set of valid inputs.
- Such scripts are also difficult to reuse, and their maintenance is cumbersome.
- Rational Functional Tester separates the data from the script so that the data can be modified without affecting the test scripts, and new test cases can be added by modifying the data rather than the script.
- Three scenarios have been defined for implementing data-driven testing in Rational Functional Tester:
1. Creating a data pool while a data-driven script is being recorded within Functional Tester, and then modifying the data pool.
2. Importing an externally created data pool into Functional Tester and associating it with a test script.
3. Creating a data pool while a script is being recorded within Functional Tester, exporting the data pool and editing it externally, and then importing it back to drive the test script.
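
The sketch below is a tool-agnostic illustration of the data-driven loop described in this post: navigation is reduced to a single function call, the data source is read row by row, outputs are compared against the expected values, and the status is logged. The data table contents, column names, and the discount function are all hypothetical; this is not Rational Functional Tester code.

```python
# Hypothetical data-driven test loop: inputs and expected outputs live in an
# external table (modeled here with io.StringIO standing in for a CSV file).
import csv
import io

table = io.StringIO(
    "price,percent,expected\n"
    "100,10,90.0\n"
    "80,25,60.0\n"
    "50,0,50.0\n"
)

def discount(price: float, percent: float) -> float:
    # Stand-in for the application behavior being exercised ("navigation").
    return round(price * (1 - percent / 100), 2)

failures = 0
for row in csv.DictReader(table):                 # reading the data source
    actual = discount(float(row["price"]), float(row["percent"]))
    expected = float(row["expected"])
    status = "PASS" if actual == expected else "FAIL"
    failures += status == "FAIL"
    print(row["price"], row["percent"], "->", actual, status)   # logging test status

print("failures:", failures)
```

New rows (new partitions or boundary values) can be added to the table without touching the test logic, which is the main advantage the post describes.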


Tuesday, October 9, 2012

What is a Test Object Model in QTP?


The test object model is an important QuickTest Professional concept to understand. In this article, we focus on the test object model of QuickTest Professional itself. 


What is a Test Object Model?

- The test object model is a large set of test object classes that QuickTest Professional uses to represent the objects present in the software system or application.
- Every test object class has an associated list of properties. 
- Using the properties in this list, objects belonging to that particular class can be uniquely identified. 
- The class also identifies the set of relevant methods that QuickTest Professional can record for the object. 

First, let us clarify a few terms associated with the test object model, one by one:

1. Test object: 
This is the object that QuickTest Professional creates in the test or test component to represent the actual object present in the application under test (AUT). QuickTest Professional stores information about the object because it is needed later for purposes such as identifying the object and checking its behavior during the run session.

    2. Run-time object: 
   This is the actual object present in the application under test. It is on this object that the various methods are performed during the run session.
Whenever the user carries out an operation on an object in the application, QuickTest Professional takes the following steps (a simplified sketch in code follows this list):
a)   Identification of the test object class that represents the object on which the user performed the operation.
b)   Creation of the appropriate test object based on that identification.
c)   Capturing of the current values of the object's properties in the application under test and preparation of a list accordingly; this list is then saved along with the test object.
d)   Giving the test object a unique name that reflects the value of one of the object's prominent properties.
e)   Recording of the operation the user carried out on the test object, using the appropriate QuickTest Professional test object method.
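
The sketch below is a deliberately simplified, hypothetical model of steps (a) through (e): a test object captures a snapshot of the run-time object's property values, takes its name from a prominent property, and records the operation performed on it. The class names, properties, and methods are invented for illustration; this is not the actual QuickTest Professional implementation.

```python
# Hypothetical model of test objects vs. run-time objects.
class RunTimeObject:
    def __init__(self, properties):
        self.properties = properties                    # live values inside the AUT

class TestObject:
    def __init__(self, runtime_obj, test_class):
        self.test_class = test_class                            # steps (a)/(b)
        self.properties = dict(runtime_obj.properties)          # step (c): captured snapshot
        self.name = self.properties.get("text", "unnamed")      # step (d): prominent property
        self.recorded_steps = []

    def record(self, method, *args):                            # step (e)
        self.recorded_steps.append((method, args))

login_button = RunTimeObject({"text": "Login", "enabled": True, "html tag": "INPUT"})
test_obj = TestObject(login_button, test_class="WebButton")
test_obj.record("Click")
print(test_obj.name, test_obj.recorded_steps)   # Login [('Click', ())]
```

During a run session, the saved property snapshot plays the role the post describes: it is compared against the live objects in the application to find the matching run-time object before the recorded method is played back.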

There are certain points about the test object model which are always helpful:
  1. Every test object method executed during the recording session forms a separate step in the recorded test. When the test is run, the recorded test object method is played back on the run-time object.
  2. The object's properties are captured from the object itself. These properties are important because their values are used to identify the run-time objects while a run session is in progress.
  3. Object properties tend to change during the run session, which can make it difficult to match objects against their descriptions. To avoid this, you can manually modify the test object properties while designing the test or test component, and sometimes regular expressions can be used in place of literal property values.
  4. Test object property values can be viewed, modified, and stored through the Object Repository dialog box. 


Monday, October 8, 2012

How many ways can we parameterize data in QTP?


Parameterization is one of the important features of QuickTest Professional; it makes passing values to tests very simple. This feature enables multiple values to be passed to a test at a time.
And there is more. 
Parameterization proves to be a great help when carrying out data-driven testing. Data-driven testing is the kind of testing that involves running the same tests with multiple sets of data. 
QuickTest Professional offers several ways of carrying out parameterization:
  1. Parameterization via loop statements
  2. Parameterization via data table
  3. Dynamic test data submission
  4. Obtaining test data via front end objects
  5. Getting test data directly from spreadsheets, flat files, or other external files
  6. Getting test data directly from databases such as Oracle or MS Access
We shall now discuss these six ways of parameterizing tests in more detail.
1. Parameterization via loop statements
In this method, sequential or logical numbers can be passed to the test via loop statements; however, strings cannot be passed this way. A minimal sketch follows.
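
The snippet below is a minimal, hypothetical sketch of loop-based parameterization: the loop generates sequential numbers and feeds them to the same test logic on every iteration. The test function is invented purely for illustration and does not come from QTP.

```python
# Hypothetical loop-based parameterization: sequential numbers drive the test.
def login_test(user_id: int) -> bool:
    # Stand-in for the recorded test steps.
    return user_id % 2 == 0

for user_id in range(1, 6):          # sequential/logical numbers only, no strings
    result = "PASS" if login_test(user_id) else "FAIL"
    print(f"iteration {user_id}: {result}")
```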

2. Parameterization via data table: 
A data table (spreadsheet) is provided along with every test in QuickTest Professional. This data table can be used very effectively for data-driven testing. Furthermore, the data table serves the following three purposes:
a)   Importing test data from external spreadsheets: open the data table and place the pointer, then right-click and select the import-from-file option. Enter the path of the spreadsheet to be imported and click OK, then connect it to the test.
b)   Importing test data from external flat files: open the data table and place the pointer, then right-click and select the import-from-file option. Browse to the path of the file to be imported and click OK, then connect it to the test.
c)   Importing test data from databases: the user first needs to create a data source name (DSN) for the database. This can be done by creating a test database in MS Access, saving it with the .mdb extension, then creating a data table and filling it with data using SQL commands. Once the DSN has been created, the data can be imported. A rough, tool-agnostic sketch follows.
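
The snippet below is a hedged, tool-agnostic sketch of item (c): pulling test data from a database into a data table structure. sqlite3 stands in for the MS Access/DSN setup described above purely so the example is self-contained; the table name, columns, and rows are invented.

```python
# Hypothetical: importing test data from a database into an in-memory data table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_data (username TEXT, password TEXT, expected TEXT)")
conn.executemany(
    "INSERT INTO test_data VALUES (?, ?, ?)",
    [("alice", "secret1", "welcome"), ("bob", "wrong", "error")],
)

# The fetched rows play the role of the imported data table rows.
data_table = conn.execute("SELECT username, password, expected FROM test_data").fetchall()
for username, password, expected in data_table:
    print(username, password, expected)
```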

    3. Dynamic test data submission: 
   This also involves the use of loop statements; however, the data has to be entered again by the user on each iteration.

    4. Obtaining test data via front end objects

   5. Getting test data directly from spreadsheets, flat files, or other external files.

   6. Getting test data directly from databases such as Oracle or MS Access.

There is one more, less commonly used method for parameterizing data apart from those mentioned above: it makes use of the Dictionary object for parameterization. There are several types of parameters, namely:
  1. Data table
  2. Environment variable
  3. Random number
  4. Test and action
The data table consists of the following parameters:
  1. Input parameter
  2. Output parameter
  3. Random number
  4. Environment variable. 


How is run-time data (parameterization) handled in QTP?


Efficient run-time data handling is quite important for proper test automation with QuickTest Professional. In this article, we discuss how run-time data is parameterized and handled during a run session in QuickTest Professional.
Parameterizing run-time data is necessary in QuickTest Professional because it enhances tests and test components. 

What happens in Run-time Data Parameterization?

- In run-time data parameterization (or handling), a variable is passed as a parameter or argument from an external source, which in most cases is a data generator. 
- The variable or parameter that is passed essentially carries a value assigned by that generator. 
- The variables in the test component can be parameterized across a series of different checkpoints as well as across a series of different steps, as the situation requires.
- Apart from normal values, action data values can also be parameterized. 
- To parameterize the same value across several steps, the Data Driver wizard can be used.  

How is it handled in QTP?

- The QuickTest Professional test automation suite comes with a built-in, integrated spreadsheet: the run-time data table, which is filled with the test data. 
- This spreadsheet works much like an Excel spreadsheet, so multiple test iterations can be created, which in turn saves a great deal of programming effort. 
- The run-time data can be entered either manually or automatically by importing the data from spreadsheets, text files, databases, and so on. 
- The spreadsheets in QuickTest Professional offer the same full functionality as Excel spreadsheets. 
- Using these spreadsheets, the following tasks can be accomplished:
  1. Manipulation of the data sets
  2. Creation of multiple iterations of the tests
  3. Expanding test coverage without unnecessary programming and so on.
- In simple terms, parameterization can be defined as providing multiple sets of test data or inputs to the tests. 
- While working in QuickTest Professional, input can be supplied in the following four ways:
  1. Input through Notepad
  2. Input through the keyboard
  3. Input or import via a database
  4. Input through an Excel spreadsheet

What is Run-Time Data?

Run-time data is simply the live version of the data currently associated with the test under execution. 
Two methods are available for parameterizing the run-time data, as mentioned below:
  1. DTSheet.GetParameter: retrieves the specified parameter from the run-time data table.
  2. DTSheet.AddParameter: adds a new column (parameter) to the run-time data table.
Properties of Run-Time Data
  1. Name property: defines the name of the column in the run-time data table.
  2. Raw value property: gives the raw value of the cell in the current row of the parameter under consideration. The raw value is the actual string written in the cell before any computation, for example the actual text of a formula.
  3. Value property: the default property of the parameter, used for retrieving as well as setting the value of the cell in the active row of the run-time data table.
  4. Value by row property: used for retrieving the value of the parameter in the specified row. A simplified sketch of these operations and properties follows.
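
The sketch below is a simplified, hypothetical model of a run-time data sheet with the two operations and the parameter properties listed above (name, raw value, value). It mirrors the concepts only; it is not the QTP object model, and the formula handling is a toy stand-in for real spreadsheet computation.

```python
# Hypothetical run-time data sheet: columns are parameters with raw and computed values.
class Parameter:
    def __init__(self, name, values):
        self.name = name                 # column name in the run-time data table
        self.rows = list(values)         # raw values, one per row

    def raw_value(self, row):            # string as written, e.g. a formula
        return self.rows[row]

    def value(self, row):                # computed value of the cell
        raw = self.rows[row]
        if isinstance(raw, str) and raw.startswith("="):
            return eval(raw[1:])         # toy formula evaluation for this sketch only
        return raw

class DataSheet:
    def __init__(self):
        self.parameters = {}

    def add_parameter(self, name, values):    # adds a new column
        self.parameters[name] = Parameter(name, values)
        return self.parameters[name]

    def get_parameter(self, name):            # retrieves an existing column
        return self.parameters[name]

sheet = DataSheet()
sheet.add_parameter("price", [10, "=2+3"])
print(sheet.get_parameter("price").raw_value(1))   # "=2+3"
print(sheet.get_parameter("price").value(1))       # 5
```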


Tuesday, October 2, 2012

What is Smart Identification in QTP?


The smart identification mechanism is one of the most important and effective mechanisms of QuickTest Professional. Usually, QuickTest Professional identifies an object through the usual, i.e., normal, identification process.
But what is QuickTest Professional supposed to do when this usual object identification routine fails? At this point, the smart identification mechanism comes to the testers' rescue.

Smart Identification in QTP

- The smart identification mechanism has many positive attributes.
- It is more flexible and works more efficiently for identifying difficult objects in the application that cannot be found with the normal identification mechanism. 
- For smart identification to work, you must enable this option in the Object Identification settings. 
- The smart identification mechanism works on two types of properties, as mentioned below:
  1. Base filter properties and
  2. Optional filter properties.
- The first category, the base filter properties, consists of the most fundamental properties of a particular test object class. 
- The values of the properties in this category cannot be changed without changing the essence of the original object.
- The second category, the optional filter properties, consists of the other properties that contribute to the identification of the objects. 
- When smart identification is applied to an object, QuickTest Professional first sets aside the learned description, or the description the user entered in the object's physical description field. 
- Instead of a single object, QuickTest Professional builds a list of matching objects (such objects are called candidates), known as the candidate list. 
- This list is created based on the properties defined in the base filter property category, so every object in it matches one or more of the base filter properties. 
- Base filter properties, as the name suggests, are the major properties that help cut down the number of objects in the candidate list so that an exact match can be found; in other words, they reduce the search area. 
- The idea is to be left with only one object matching most or all of the properties in the saved description (see the sketch after this list). 
- In some cases, it may be necessary to invoke the smart identification mechanism during the run session. 
- In such cases, a warning message is generated in the test results tree, indicating that smart identification was invoked and a smart identification step was inserted. 
- It is said that smart identification is applicable only to web-based applications. 
- The object is recorded from the application under test (AUT), its properties are identified accordingly, and finally the scripts are executed. 
- Sometimes you may receive a warning alert in the test results. 
- You simply need to navigate to this message in the file that stores the messages, read what it says, and act accordingly. 
- The normal identification routine usually fails when there is a dynamic change in the properties of the object to be identified. 
- Because of such dynamic changes, the object's property values keep changing, making it difficult for QuickTest Professional to track that particular object. 
- These dynamic changes are shown only in the results; they are not stored in the local object repository and are accessible only at run time. 
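
The snippet below is a hypothetical sketch of the idea described above: build a candidate list from the base filter properties, then keep narrowing it with the optional filter properties until (ideally) a single object remains. The property names, objects, and narrowing order are invented for illustration and do not reproduce QuickTest Professional's internal algorithm exactly.

```python
# Hypothetical smart-identification style matching over property dictionaries.
def smart_identify(candidates, base_props, optional_props):
    # Candidate list: objects matching every base filter property.
    shortlist = [obj for obj in candidates
                 if all(obj.get(k) == v for k, v in base_props.items())]
    # Apply optional filter properties one at a time, never narrowing down to zero.
    for key, value in optional_props.items():
        narrowed = [obj for obj in shortlist if obj.get(key) == value]
        if not narrowed:
            break
        shortlist = narrowed
        if len(shortlist) == 1:
            break
    return shortlist[0] if len(shortlist) == 1 else None

page_objects = [
    {"html tag": "INPUT", "type": "submit", "name": "Search"},
    {"html tag": "INPUT", "type": "submit", "name": "Login"},
]
match = smart_identify(page_objects,
                       base_props={"html tag": "INPUT", "type": "submit"},
                       optional_props={"name": "Login"})
print(match)   # {'html tag': 'INPUT', 'type': 'submit', 'name': 'Login'}
```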

