
Friday, March 22, 2013

What is an Artificial Neural Network (ANN)?


- An artificial neural network, or ANN (often called simply a neural network), is a mathematical model inspired by biological neural networks. 
- The network consists of a number of interconnected artificial neurons. 
- The model computes with a connectionist approach and processes information accordingly. 
- In many cases, a neural network acts as an adaptive system, changing its structure during a learning phase. 
- These networks are particularly useful for finding patterns in data and for modeling the complex relationships between inputs and outputs. 
The obvious analogy to an artificial neural network is the network of neurons in the human brain. 
- In an ANN, the artificial nodes are called neurons, or sometimes neurodes, units, or 'processing elements'. 
They are interconnected in a way that resembles a biological neural network.
- To date, no formal definition of the artificial neural network has been agreed upon. 
- These processing elements, or neurons, exhibit complex global behavior. 
This behavior is determined by the connections between the neurons and by the neurons' parameters.
- Certain algorithms are designed to alter the strength of these connections in order to produce the desired signal flow. 
- The ANN operates by means of these algorithms. 
- As in biological neural networks, functions in an ANN are performed in parallel and collectively by the processing units.
- There is no clear delineation of the tasks assigned to individual units. 
- These neural networks are employed in various fields such as:
  1. Statistics
  2. Cognitive psychology
  3. Artificial intelligence
- Other neural network models emulate the biological central nervous system (CNS) and belong to:
  1. Computational neuroscience
  2. Theoretical neuroscience
- Modern software implementations of ANNs favor a practical approach, based on signal processing and statistics, over the biologically inspired approach. 
- The biologically inspired approach has been largely abandoned. 
- Parts of these neural networks often serve as components of larger systems that combine adaptive and non-adaptive elements.
- While the practical approach is the better one for solving real-world problems, the biologically inspired approach has more to do with the connectionist models of traditional artificial intelligence. 
- What the two have in common is the principle of distributed, non-linear, local, and parallel processing and adaptation. 
- The use of neural networks in the late eighties marked a paradigm shift: from high-level artificial intelligence (expert systems) to low-level machine learning (the dynamical-system view). 
- These models are very simple and define functions such as:
f: X → Y
- Three types of parameters are used for defining an artificial neural network:
a)   The interconnection pattern between neuron layers
b)   The learning process
c)   The activation function
- The learning process updates the weights of the connections, and the activation function converts a neuron's weighted input into its output; a minimal sketch follows below. 
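To make these three parameters concrete, here is a minimal sketch of a single fully connected layer in Python with NumPy (the layer sizes and the sigmoid activation are illustrative assumptions, not a prescription):

import numpy as np

def sigmoid(x):
    # Activation function: squashes the weighted input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Interconnection pattern: a 3-input, 2-neuron fully connected layer,
# expressed as a weight matrix plus a bias vector (sizes are illustrative).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))   # connection weights (the learned parameters)
b = np.zeros(2)               # biases

def forward(x):
    # Each neuron applies the activation function to its weighted input.
    return sigmoid(W @ x + b)

print(forward(np.array([0.5, -1.0, 2.0])))

The learning process (the second parameter) would then adjust W and b; a sketch of that appears further below.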
- The learning capability is what has attracted so much interest. 
- There are 3 major learning paradigms offered by ANNs:
  1. Supervised learning
  2. Unsupervised learning
  3. Reinforcement learning
- Training a network means selecting, from a set of allowed models, the one that best minimizes the cost. 
- A number of algorithms are available for training, most of which employ gradient descent; a sketch of the idea follows below.
- Other methods include simulated annealing, evolutionary methods, and so on.
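As a sketch of how gradient descent drives supervised learning, here is a single sigmoid neuron trained on a toy task (the data, learning rate, and epoch count are invented for illustration):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy supervised task: learn the logical OR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

w, b, lr = np.zeros(2), 0.0, 0.5           # weights, bias, learning rate

for epoch in range(2000):
    pred = sigmoid(X @ w + b)              # forward pass
    grad = (pred - y) * pred * (1 - pred)  # gradient of squared error via chain rule
    w -= lr * (X.T @ grad)                 # step down the cost surface
    b -= lr * grad.sum()

print(np.round(sigmoid(X @ w + b), 2))     # approaches [0, 1, 1, 1]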


Thursday, March 21, 2013

What are the principles of autonomic networking?


The complexity, dynamism, and heterogeneity of networks are ever on the rise. All these factors are making our network infrastructure insecure, brittle, and unmanageable. Today's world is so dependent on networking that its security and management cannot be put at risk. The answer to this problem, in networking terms, is 'autonomic networking'. 
The goal of building such systems is to realize networks capable of managing themselves according to high-level guidance provided by humans. Meeting this goal, however, calls for a number of scientific advances and new technologies.

Principles of Autonomic Networking

A number of principles, paradigms and application designs need to be considered.

Compartmentalization: a structure with extensive flexibility, which the makers of autonomic systems prefer to a layering approach. This is the first target of autonomic networking.

Function re-composition: an architectural design envisioned to provide highly dynamic, autonomic, and flexible formation of networks on a large scale. In such an architecture, functionality is composed autonomically.

Atomization: functionality is broken down into smaller atomic units. These atomic units make maximum re-composition freedom possible.

Closed control loop: one of the fundamental concepts of control theory, now also counted among the fundamental principles of autonomic networking. Such a loop controls and maintains the properties of the controlled system within desired bounds, constantly monitoring the target parameters.

The autonomic computing paradigm is inspired by the human autonomic nervous system. An autonomic system must therefore have a mechanism by which it can change its behavior in response to changes in essential environmental variables and bring itself back to a state of equilibrium. 
In autonomic networking, survivability can be viewed in terms of the following:
  1. Ability to protect itself
  2. Ability to recover from faults
  3. Ability to reconfigure itself as the environment changes
  4. Ability to keep operating at an optimal level
The following two factors affect the equilibrium state of an autonomic network:
  1. The internal environment: this includes factors such as CPU utilization, memory usage, and so on.
  2. The external environment: this includes factors such as exposure to external attacks.
There are 2 major requirements of an autonomic system:
  1. Sensor channels: sensors are required for sensing the changes.
  2. Motor channels: these channels help the system react to and overcome the effects of the changes.
The changes sensed through the sensor channels are analyzed against the viability limits of the monitored variables. If a variable is detected outside its limits, the system plans what changes to introduce in order to bring it back within bounds, returning the system to its equilibrium state; a minimal sketch of such a loop follows below. 
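The sketch below illustrates the sense-analyze-plan-act cycle in Python; the monitored variable, its viability limits, and the corrective actions are all hypothetical stand-ins:

import random
import time

LOW, HIGH = 0.20, 0.80                  # assumed viability limits

def sense():
    # Sensor channel: stand-in for a real utilization probe.
    return random.random()

def shed_load():
    print("shedding load")              # stand-in motor-channel action

def consolidate():
    print("consolidating workers")      # stand-in motor-channel action

def control_loop(iterations=10):
    for _ in range(iterations):
        value = sense()                 # monitor the target parameter
        if value > HIGH:                # analyze against the viability limits
            shed_load()                 # plan and execute a corrective action
        elif value < LOW:
            consolidate()
        # otherwise the system is in equilibrium and nothing needs to be done
        time.sleep(0.1)                 # loop period

control_loop()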


Tuesday, March 5, 2013

What is meant by Ovonic Unified Memory?


The IT industry has a strong requirement for memory that is high-speed and non-volatile too. A solution to this is provided by Ovonic Unified Memory. 
- Ovonic Unified Memory is one approach to such a memory; it also offers a reduced cost per bit. 
- Some other characteristic features of this memory are:
  1. High endurance
  2. Low power consumption
  3. Non-volatile RAM
  4. Readily scalable
  5. Simplified merged memory/logic
- Because Ovonic Unified Memory is readily scalable, it does not face the scaling barriers of flash and DRAM. 
- It represents a new semiconductor technology, created by Energy Conversion Devices, Inc. and later licensed to Ovonyx, Inc.
- The technology makes use of a reversible structural phase change between a crystalline phase and an amorphous phase. 
- The material used is a thin-film chalcogenide alloy. 
- Together, this constitutes the data storage mechanism of Ovonic Unified Memory (OUM). 
- Each memory cell contains a small volume of active medium that acts as a programmable resistor. 
- This resistor switches between low and high resistance across a dynamic range greater than 40x. 
- Phase-change technology is already in use in PD, CD-RW, DVD-RW, and DVD-RAM media. 
- The basic advantage OUM offers over its conventional counterparts, flash and DRAM, is in performance and cost. 
- OUM is also compatible with merged memory/logic designs. 
- OUM technology uses a conventional CMOS process, with some additional layers to form the memory elements. 
- The OUM products have been commercialized under various licensing agreements. 
- The alloy used in OUM contains the chalcogen elements Se and Te.
- It exhibits an electronic threshold-switching phenomenon, because of which OUM memory cells can be programmed at quite low voltages irrespective of which state they are in, conductive or resistive. 
- The information stored is read by measuring the resistance of the cell. 
- OUM devices are programmed electrically, by altering the structure of the alloy. 
- These OUM devices show metallic behavior that is largely independent of temperature.
- OUM devices are known for their excellent data retention in high-density array applications. 
- OUM cells also have a longer-than-normal life cycle: they can tolerate up to 10^13 write and erase cycles without failure. 
- These devices possess quite a large dynamic range.
- This allows them to be programmed for multi-state data storage at intermediate resistance values.
- For multi-state data storage, every cell needs to support multiple-bit storage; a sketch of the read path follows below. 
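As an illustration of the read path, here is a sketch of decoding a measured cell resistance into a 2-bit value (the threshold resistances are invented for illustration; real devices define their own levels):

# Hypothetical 2-bit-per-cell decode: map a measured resistance (ohms)
# to one of four states separated by three invented thresholds.
THRESHOLDS = [5e3, 2e4, 1e5]

def read_cell(resistance_ohms):
    # Count how many thresholds the measured resistance exceeds.
    return sum(resistance_ohms > t for t in THRESHOLDS)   # state 0..3

for r in (1e3, 1e4, 5e4, 5e5):
    print(r, "->", format(read_cell(r), "02b"))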
- The behavior of Ovonic Unified Memory is studied through device modeling. 
Simple analytical methods show the trends in material properties and size for spherically equivalent structures. 
- Other numerical models take into account mesh evaluation and device geometry.
Numerical simulation can then be used to predict the behavior of OUM devices. 
- The behavior of the OUM material depends upon its bulk properties, which can be quantified. 
- There are 3 sets of considerations in this model (see the generic sketch below for the thermal part):
  1. Phase change: this includes heat of fusion, crystal growth, and nucleation.
  2. Electrical: this includes current density, electric field, and percolation conduction.
  3. Thermal: this includes percolation conduction and the heat equations.
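As a generic illustration of the thermal consideration (not OUM-specific), here is a one-dimensional explicit finite-difference step for the heat equation, the kind of update a device simulator iterates over the cell geometry; the grid size, material constant, and time step are invented:

import numpy as np

# 1-D heat equation dT/dt = alpha * d2T/dx2, explicit scheme.
n, alpha, dx, dt = 50, 1e-6, 1e-8, 2e-11
assert alpha * dt / dx**2 <= 0.5          # stability limit of the scheme

T = np.full(n, 300.0)                     # ambient temperature (K)
T[n // 2] = 900.0                         # a programming pulse heats the center

for _ in range(200):
    lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2
    T[1:-1] += alpha * dt * lap           # diffuse heat by one time step

print(T.max())                            # peak temperature after diffusion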
Apart from cost, another advantage of OUM is its near-ideal memory qualities:
  1. Static
  2. Random access
  3. Non-destructive read




Monday, March 4, 2013

What are Software Process Improvement resources?


A supportive and effective infrastructure is required to coordinate the various activities that take place during the course of the whole program. In addition, the infrastructure should be flexible enough to support the changing demands of software process improvement (SPI) over time. 
Resources for this program include:
  1. Infrastructure and building support
  2. Sponsorship
  3. Commitment
  4. Baseline activities
  5. Technologies
  6. Coordinated training resources
  7. Planning expertise
  8. Baseline action plan and so on.
- When the program is initiated, an initial infrastructure is put in place to manage the activities the organization will carry out under SPI. 
- The resources mentioned above are also the initial accomplishments that indicate how well the infrastructure is performing. 
- The purpose of the infrastructure is to establish a link between the program's vision and mission, to monitor and guide the program, and to obtain and allocate resources.
- Once the SPI program is underway, a number of improvement activities take place across the different units of the organization. 
- These improvement activities are not performed serially; rather, they take place in parallel. 
- Configuration management, project planning, requirements management, reviews, and so on are addressed by TWGs (technical working groups). 
- All these activities, however, are tracked by the infrastructure.
- Support for the following must be provided by the infrastructure:
  1. Introducing a new technology
  2. Providing sponsorship
  3. Assessing the organizational impact
- As the program progresses, the functions to be performed by the infrastructure increase. 
- There are 3 major components of the SPI program:
  1. SEPG or software engineering process group
  2. MSG or management steering group
  3. TWG or technical work group
- It is the third component from which most of the resources are obtained, including:
  1. Human resources
  2. Finance
  3. Manufacturing
  4. Development
- The most important, however, is the first component, often simply called the process group. 
- It provides sustaining support for the SPI program and reinforces its sponsorship. 
The second component, the MSG, charters the SEPG.
- The charter is effectively a contract between the SEPG and the management of the organization. 
- Its purpose is to outline the roles, the responsibilities, and, not least, the authority of the SEPG. 
- The third component is also known as the process improvement team or process action team. 
- The different work groups created focus on different issues of the SPI program. 
- A technical work group typically addresses a software engineering domain. 
- TWGs need not address only technical domains; they can address issues such as software standardization, purchasing, travel reimbursement, and so on. 
- A team usually consists of people who have both knowledge of and experience in the area under improvement. 
- The life of a TWG is finite, however, and is defined in its charter. 
- Once they complete their duties, members return to their normal work. 
- In the early stages of an SPI program, TWGs tend to underestimate the time required to complete the objectives assigned to them. 
- They then have to request that the MSG allot them more time. 
- Another important component can be the SPIAC, or software process improvement advisory committee. 
- This is created in organizations where multiple SEPGs are functioning. 


Sunday, March 3, 2013

What is the need for Agile Process Improvement?


It is common to see change projects designed and published, yet none of them actually goes into implementation. Most of the time is wasted in writing and publishing them. This approach usually fails; we should stop working this way and develop a new methodology. Below are some common scenarios in modern business:
  1. Developing a stronger project
  2. Changing the people working on it.
  3. Threatening the project with termination
  4. Appointing a committee to analyze the project
  5. Taking examples from other organizations to see how they manage it
  6. Getting down to a dead project
  7. Tagging a dead project as still worth achieving something
  8. Putting many different projects together so as to increase the benefit
  9. Additional training
- A change is always followed by a drop in the delivery of normal work. 
- Big change projects are either dropped or rejected.
- This happens because the changes introduced by such projects are mandatory to follow.
- This threatens the normal functioning of the organization. 
- So the organization is eventually compelled to kill the whole process and start working the old way again. 
- Instead of this approach, a step-by-step process improvement can be followed: this is agile process improvement. 
By now it should be clear why agile process improvement is actually needed. 
Changes need to be adaptive; only then will the process stay balanced. 
- An example is reaching a CMMI maturity level. It takes approximately 2 years to complete, a period that can bring:
  1. Restructuring
  2. New competitors
  3. New products
- Only agile methods make the improvement adaptive to such changes.
- When followed systematically, the change cycles produce results every 2-6 weeks.
- Thus, your organization's workload and improvement effort stay balanced. 
- Issues are identified early, giving the organization a chance to resolve them early. 
- Little by little, the organization learns how to tackle problems and how to improve its work.
- In the end, it is able to adapt to the ever-changing needs of the business.
- Responsibility for deploying and evaluating each improvement is taken by PPQA (process and product quality assurance). 
- The whole process is implemented in 4 sprints:
  1. Prototyping
  2. Piloting
  3. Deploying
  4. Evaluating
- Broad participation and leadership are required for these changes to take place.
- Other agile techniques, along with Scrum, can also be used in SPI.
- Improvements can be continuously integrated into the way the organization works. 
- The way of working, including its assets and definitions, can also be refactored through step-by-step integration of the improvements.
- Pair work can be carried out on improvements. 
- Collective ownership can be created across the organization. 
- Evaluations and pilots can be used for testing. 
- To succeed with the sprints, it is important to develop only simple solutions. 
- An organization can write coaching material with the help of work description standards.
- The sprint technique helps the organization strike a balance between improvement and the normal workload. 
- In agile process improvement, simple solutions are preferred over complex ones.
- Here, the status quo and the vision are developed using CMMI and SCAMPI. 
- A status quo and a vision are necessary before software process improvement can begin.
- SPI, when conducted properly, produces useful work; otherwise it merely produces unnecessary documentation.
- An improvement in the process is an improvement in the work. 
- And improving their work is what people prefer. 


Sunday, February 17, 2013

Explain Jubula - Web Functional/Regression Test Tool


Today we have around 100 tools available for functional and regression testing of web-based applications. Jubula is one such tool and is discussed here in detail. 

About Jubula

- This is an open source tool that is available both as a standalone application and as part of the Eclipse package.  
- The former comes with an installer that is quite easy to use, and the latter can be downloaded from an update site. 
- The Jubula client can be used to view actual versus expected results, for both the current and previous test runs. 
- You can also easily view the screenshots taken automatically whenever an error occurs. 
Jubula supports many application toolkits, such as:
  1. HTML
  2. Swing
  3. SWT/RCP/GEF, and so on
- Some features of Jubula tool have been listed below:
  1. Heuristic object recognition
  2. Command line client
  3. Context sensitive help in client
  4. Multi – user database for storing projects
  5. Portability
  6. Version control
- A command line client has been included in the tool to facilitate continuous integration.
- Version control is achieved via exports in XML format. 
- The contribution in this regard was made by BREDEX GmbH (also the developer of GUIdancer). 
- GUIdancer is built on the Jubula core and offers some additional features. 
- Jubula is platform-independent: it runs on Windows, Unix, Linux, or Mac. 
- A project by the name of 'Jubula functional testing tool' was created as an open source project. 
- This project was a part of the Eclipse technology project. 
- Other functional testing tools rely on test scripts that are saved and managed as code. 

Depending upon the testing approach you follow, your test automation process may involve steps like recording, generation, or writing the code directly. 

What differentiates Jubula from other testing tools?

- It treats automated acceptance tests as being as important as the project code itself. 
- According to Jubula's philosophy, the same best practices (reusability, modularity, readability, etc.) should be used for creating these tests as well, without requiring any extra code. 
- Thus, the power of testing is placed in the hands of software testers, and at the same time customers gain better access for monitoring the tests. 
This results in a code-free approach that minimizes the need for test maintenance, allowing users to write tests from their own perspective. 
If the team focuses on the software from the viewpoint of the user, it gains a whole lot of quality information that often goes unused when only JUnit tests are run.
- Jubula comes with a library of actions that is used for creating the tests. 
- You can combine the actions contained in the library using drag and drop.
This library contains all the actions one might require for writing the tests. 
- The library is also independent of platform and application, which means you can start writing tests before the software is actually made available to you. 
- When you work with Jubula, you save time, since the tests can be executed as soon as the code is committed. 
- The time between the introduction of an error and catching it is thereby reduced.

Objectives of Jubula

Jubula has the following objectives:
  1. Providing tooling for carrying out automated GUI testing on HTML- and Java-based applications.
  2. Specifying and maintaining a model of the specifications and requirements of the tests.
  3. Providing an API for projects so that test scenarios can be generated, tests can be executed, and results can be obtained. 


Friday, February 8, 2013

What is the TOSCA Testsuite? Explain the architecture of TOSCA.


The TOSCA Testsuite is another software tool in the line of tools for automated execution of regression and functional testing. What makes it different from the other tools in the same category are the following features:
  1. Integrated test management
  2. Graphical user interface (GUI)
  3. A command line interface (CLI)
  4. Application programming interface (API)

History and Evolution of TOSCA

- TOSCA is developed by TRICENTIS Technology & Consulting GmbH, an Austrian company based in Vienna. 
- 2011 saw the inclusion of TOSCA as a 'visionary' in the Magic Quadrant report developed by Gartner, Inc. 
- In the same year, the TOSCA Testsuite was recognized by the Ross Report as the second most widely used test automation tool in New Zealand and Australia. 
- The tool was further recognized by the scientific community when it was presented at two important international conferences, namely:
  1. Euromicro SEAA and
  2. IEEE ICST
- Since then, TOSCA has received a number of awards for its web and customer support. 
- TOSCA is a software testing tool that serves the purposes of numerous other tools, such as:
  1. Test management tool
  2. Test design tool
  3. Test execution tool
  4. Data generation tool for regression and functional testing

Architecture of TOSCA

The architecture of TOSCA is composed of the following:
  1. TOSCA Commander: the central tool of TOSCA, used for the creation, administration, execution, and analysis of test cases.
  2. TOSCA Wizard: the model-building tool of TOSCA, used for building an application model and storing the related information in modules, which are essentially XML GUI maps.
  3. TOSCA Executor: the tool responsible for executing the test cases and displaying the results obtained in TOSCA Commander.
  4. TOSCA Exchange Portal: the place where customers exchange and use special modules, prebuilt TOSCA Commander components, and extensions.
  5. TOSCA Test Repository: an integrated repository responsible for holding the test assets. A number of users can access the repository at the same time.

Functionality of TOSCA

Business Dynamic Steering
- The model-driven approach is the concept behind TOSCA Commander.
- This approach focuses on making the whole test dynamic in nature, rather than just making the input data dynamic. 
- You create test cases by dragging and dropping modules and entering the actions and values to be validated. 
- Making the whole test dynamic is a great advantage, enabling business-based descriptions of both automated and manual test cases. 
- This lets non-technical users (subject-matter experts) design, specify, automate, and maintain test cases.
- TOSCA supports the following technologies for software test automation:
  1. Application development environments such as PowerBuilder and Gupta
  2. Programming frameworks and languages such as .NET, Delphi, WPF, Visual Basic, and Java Swing/AWT/SWT
  3. Host applications (5250, 3270)
  4. Web browsers, including Mozilla Firefox, Opera, and Internet Explorer
  5. Single-position application programs such as MS Excel and Outlook
  6. Key enterprise applications, including Siebel and SAP
  7. Protocols and hardware, including Flash, SOAP, USB execution, and ODBC
TOSCA is supported on the following platforms:
  1. Windows Vista Service Pack 2
  2. Windows 7 (both 32- and 64-bit)
  3. Windows XP Service Pack 2 and later
- The databases supported are:
  1. DB2 v9.1
  2. Oracle 10g
  3. Microsoft SQL Server 2005
- TOSCA is used by some 300 customers all over the world. 


Saturday, February 2, 2013

Explain Robot Framework?


Robot Framework was developed as a generic test automation framework for two major tasks, namely:
  1. Acceptance testing and
  2. ATDD, or acceptance test-driven development
- It makes use of the keyword-driven testing approach to achieve both of the tasks mentioned above. 
- It has a very simple tabular test data syntax that is quite easy to follow, and this is what makes Robot Framework so easy to use. 
- All this makes Robot Framework popular among testers.
- Test libraries implemented in either Java or Python can be used to extend the capabilities of the framework; a minimal Python sketch is given below. 
- Users also have the choice of creating new keywords from existing ones. 
This can be done using the same syntax that is available for the creation of test cases. 
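As a minimal sketch of such a Python test library (the file name, class, and keywords are invented for illustration; Robot Framework exposes the public methods of a library class as keywords):

# CalculatorLibrary.py - a hypothetical Robot Framework test library.
class CalculatorLibrary:

    def __init__(self):
        self._result = 0.0

    def add_numbers(self, a, b):
        # Becomes the keyword: Add Numbers    ${a}    ${b}
        self._result = float(a) + float(b)

    def result_should_be(self, expected):
        # Becomes the keyword: Result Should Be    ${expected}
        if self._result != float(expected):
            raise AssertionError(f"{self._result} != {expected}")

A test case would take this file into use with the Library setting and then call Add Numbers and Result Should Be directly in its tabular syntax.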
- Robot Framework is open source software. 
- It is licensed under the Apache License 2.0.
- Nokia Siemens Networks supports the development of the framework and also owns its copyright. 

Features of Robot Framework

The Robot test automation framework is rich in features:
  1. Its tabular syntax is easy to use and allows test cases to be created in a uniform way.
  2. Robot Framework can work with three approaches, namely:
- The keyword-driven testing approach
- The behavior-driven development (BDD) approach
- The data-driven testing (DDT) approach
  3. It provides the facility to create higher-level, reusable keywords from the existing keywords.
  4. The reports and logs it generates are HTML-based and quite easy to follow.
  5. The framework does not depend on any particular platform or application.
  6. The architecture of the framework is modular, which helps support the creation of tests even for systems and applications that have a number of different interfaces.
  7. It comes with a library API that is quite simple and can be used for creating customized test libraries.
  8. The XML-based outputs the framework provides allow it to be integrated into existing build infrastructure (continuous integration systems).
  9. It also provides a command line interface for the same purpose as the XML-based outputs.
  10. It has library support, including the Selenium tool, for:
- Web testing
- Java GUI testing
- Telnet
- Running processes
- SSH, and so on
  11. The framework comes with a remote library interface, enabling test libraries to be implemented in any desired programming language, as well as distributed testing.
  12. The tagging feature helps in categorizing the tests and selecting them accordingly for execution.
  13. For variables, it comes with special built-in support useful for testing in different environments.
- Test cases can be written either in HTML or in plain text.
- Any editor can be used to edit the test cases. 
- The Robot test automation framework comes with a graphical development tool called RIDE, the Robot IDE. 
- The tool has a number of framework-specific features such as syntax highlighting, code completion, and so on. 
- The Selenium library is one extension to the Robot test automation framework, and there are a number of others. 
- Some other languages, such as Perl, PHP, and JavaScript, can also be used for implementing libraries. 
- However, these languages can be used only through the remote library interface.

