
Friday, May 31, 2013

Infrastructure: Testing, VPN and network issues relating to external teams

When development or testing work is done by external teams, there are many infrastructural issues to deal with, and in a number of cases teams have not prepared any process documentation to cover these needs. Some of the external teams that can be involved in the development or testing of a software application include the following:
- External teams that are working as an extension of the testing team. This is getting more and more common, whereby the testing team can be expanded as necessary by adding vendors to the testing resources. However, when these external vendors need to start their work, they need access to the testing infrastructure such as the test case repository and the defect management software.
- External teams working on the localization (conversion of the software into different languages). Such teams typically work on both development and testing of the application in different languages and need access to the source repository, as well as the testing infrastructure (test plan and case repository, defect management software, and so on).
- Teams working on the documentation of the product. These teams typically need access to the defect management software, since many of the issues that need to be documented are recorded there.

For these three teams, and others working in similar areas, if they are located outside the organization, they may need to be provided the required VPN access to reach all these tools. In modern secure organizations, such access requests go through an approval process; to ensure that the process is accelerated, it makes sense to prep the approving authorities about these requests in advance.

- External testers / pre-release testers. Such pre-release testers typically do not need any access to the source code repository or the testing infrastructure. However, they do need a platform through which the defects they log get into the defect management software in an easy and transparent way, without any additional work on their part. In some cases this is as simple as providing a web page where they can enter the defects they have found, with the additional parameters that internal testers normally fill in being entered by default.

For all these teams, there may be other access issues that have not been thought through. Typically, when an application has a feature that connects to an online service, that service runs on staging servers that need some kind of setting or special access. To ensure that the teams listed above (and similar teams) do not get stuck, such access has to be planned for. We had a problem in the past where an important online feature was rolled out to the press reviewing team, but there was a big mix-up over access to the feature, and the team was unable to provide access for around 3 days; in that time some of the press withdrew - a big disaster. A great team plans for all of this before it looks to execute such features.

Thursday, May 30, 2013

Preparing test cases so that a subset can be extracted for use as well ..

Do a good job during the initial design and preparation, and it can be of big help later. How many times have you heard this? But it is true, and we recently had an example of how relevant it actually is. Consider that you have a test plan and test cases for the testing of a feature. It is a large feature with many workflows and a number of combinations of inputs and outputs. These input and output variables are what make testing the feature difficult and lengthy, since each of them has to be tested along with the possible combinations. In any big software product there can be many such features that need to be tested thoroughly, possibly many times during the development of the product. So, having a comprehensive set of test cases for the feature makes it easier to test on a regular basis.
Now let us consider the cases where the feature does not need complete testing. If complete testing of the feature takes more than 2 days, there will be times when you cannot spend more than a couple of hours on it. And if you do not have an automation run for these test cases, then this post applies to you (if you have built automation for these cases, the run would not take 2 days - more like the 2 hours the situation demands - but building automation also takes time and effort, and most teams cannot automate all of their test cases).
However, as you get closer to your release dates, you cannot afford the full testing cycle. You diminish the risk by controlling the changes that are made, and then do a reduced testing of the features given the constraints on the time available. And then there is the concept of dot releases or patches. These typically have far less time available for the entire project from start to end, and yet there needs to be a quick checkup of the application, including its features, before there can be a release. Another example is when the team is releasing the same application in multiple languages and operating systems. If the same application is released across Windows XP, Windows 7, Windows 8 and Mac OS, and in a number of languages (large software products are released in more than 25 languages each), then it is not realistic to test each feature on all these languages and operating systems in full detail. In fact, most testing teams do a lot of optimization of their testing strategies and try to do a minimum of testing on some of these platforms.
But how do you get there? When the testing team is preparing their test cases, they need to think of these situations. The tendency is to create test cases that flow from one to the next and are meant to be followed in sequence. But to handle the kinds of situations above, the test cases need to be structured so that they can be broken up into pieces for situations where the testing needs to be done in shorter periods of time, while still leaving the team fairly confident that the testing has been done to the extent required. This breakup information also has to be listed in a way that a tester who was not involved in preparing the test cases can later use a subset of them for one of the special needs mentioned above (which can happen all the time - the original tester may no longer be part of the team, or even of the organization).
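One simple way to prepare for this is to tag each test case with the kinds of runs it belongs to, so that a later tester can extract a subset mechanically instead of guessing. The sketch below is illustrative only; the case IDs, titles and tag names are all hypothetical:

```python
# A sketch (with hypothetical case IDs and tag names) of tagging test
# cases so that a subset can be extracted mechanically for a short run.

TEST_CASES = [
    {"id": "TC-001", "title": "Open file via menu",      "tags": {"smoke", "full"}},
    {"id": "TC-002", "title": "Open file via drag-drop", "tags": {"full"}},
    {"id": "TC-003", "title": "Save as PDF",             "tags": {"smoke", "patch"}},
    {"id": "TC-004", "title": "Save with unicode name",  "tags": {"full", "localization"}},
]

def extract_subset(cases, wanted_tag):
    """Return only the cases tagged for the given kind of run."""
    return [c for c in cases if wanted_tag in c["tags"]]

# A two-hour run before a dot release might execute only the smoke subset:
print([c["id"] for c in extract_subset(TEST_CASES, "smoke")])  # ['TC-001', 'TC-003']
```

The tag vocabulary ("smoke", "patch", "localization" and so on) would be agreed when the cases are first written and documented alongside the test plan, so that it survives team changes.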

What are the various Disk Scheduling methods?

About Disk Scheduling

The I/O system has got the following layers:
  1. User processes: The functions of this layer include making I/O calls, formatting the I/O and spooling.
  2. Device independent software: Functions are naming, blocking, protection, allocating and buffering.
  3. Device drivers: Functions include setting up the device registers and checking their status.
  4. Interrupt handlers: These perform the function of waking the I/O drivers upon the completion of the I/O.
  5. Hardware: Performing the I/O operations.
- Disk drives can be pictured as a large one-dimensional array of logical blocks, which are the smallest unit of transfer. 
- These blocks are mapped onto the disk sectors in a sequential manner. 
- It is the responsibility of the operating system to use the disk hardware efficiently, increasing both the access speed and the bandwidth of the disk. 

Algorithms for Scheduling Disk Requests

Several algorithms exist for the scheduling of disk requests:

Ø  SSTF (shortest seek time first): 
- In this method, the request with the minimum seek time from the present head position is selected. 
- This method is a modification of SJF (shortest job first) scheduling and therefore carries some possibility of process starvation.

Ø  SCAN: 
- The disk arm starts from one end of the disk and moves towards the other end, servicing requests until it reaches the opposite end. 
- At that end the head direction is reversed and the process continues. 
- This is sometimes called the elevator algorithm.

Ø  C – SCAN: 
- A better algorithm than the previous one. 
- It offers a more uniform waiting time than SCAN. 
- The head moves from one end to the other while servicing the requests encountered along the way. 
- However, the difference is that when the head reaches the other end, it immediately returns to the beginning without servicing any requests on the way back, and then starts again. 
- The cylinders are treated as a circular list that wraps around from the last cylinder to the first.

Ø  C – Look: 
- This is a modified version of C – SCAN. 
- Here the arm or head travels only up to the last request rather than going all the way to the far end of the disk. 
- The direction is then immediately reversed and the process continues.

- For disk scheduling, it is important that the method be selected as per the requirements. 
- SSTF is the most commonly used and appeals to the needs naturally. 
- For a system that often has a heavy load on the disk, the SCAN and C-SCAN methods can help. 
- The number as well as the kind of requests affects the performance in a number of ways.
- On the other hand, the file-allocation method influences the requests for disk services. 
- These algorithms should be written as an individual module of the OS, so that one can easily be replaced with a different one if required. 
- As a default algorithm, either LOOK or SSTF is a reasonable choice. 
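To make the differences concrete, here is a small Python sketch that computes the total head movement for FCFS, SSTF and LOOK on the same request queue (the cylinder numbers are the usual textbook illustration, with the head starting at cylinder 53):

```python
def fcfs(requests, head):
    """Service requests strictly in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf(requests, head):
    """Always service the pending request closest to the head."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

def look(requests, head):
    """Elevator-style: sweep up to the highest request, then reverse."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up + down:
        total += abs(r - pos)
        pos = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(queue, 53), sstf(queue, 53), look(queue, 53))  # 640 236 299
```

As expected, FCFS moves the head the most (640 cylinders here), while SSTF and LOOK cut the movement sharply; SSTF's greedy choice of the nearest request is also what creates its starvation risk for far-away requests.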

Ways to attach to a disk

There are two ways of attaching the disk:
Ø  Network attached: The attachment is made via a network; this is called network attached storage (NAS). A dedicated network connecting such storage devices is called a storage area network (SAN).
Ø  Host attached: This attachment is made via the I/O port.

All these disk scheduling methods are for the optimization of the secondary storage access and for making the whole system efficient. 

Wednesday, May 29, 2013

Explain the various File Access methods?

One of the most important functions of a mainframe operating system is its access methods, which make it possible to access data on external devices such as tape or disk. 

What are access methods?

- Access methods provide an API for transferring data from one device to another.
- On non-mainframe operating systems, this role is played by the device drivers. 
- There were several reasons behind the introduction of access methods. 
- A special program had to be written for the I/O channel, a processor entirely dedicated to controlling access to the peripheral storage device and the transfer of data to and from physical memory. 
- These channel programs are made up of special instructions known as CCWs, or channel command words.
- Writing such programs requires very detailed knowledge of the characteristics of the hardware. 

Benefits of File Access Methods

There are 3 major benefits of the file access methods:
Ø  Ease of programming: The programmer does not have to deal with device-specific procedures, recovery tactics and error detection. A program designed to process a data set will do so no matter where the data is stored.
Ø  Ease of hardware replacement: A program does not have to be altered when data is migrated from an older to a newer model of storage device, provided the new model supports the same access methods.
Ø  Ease in sharing data set access: The access methods can be trusted to manage multiple accesses to the same file. At the same time, they ensure the security of the system and data integrity.

Some File/Storage Access Methods

Ø  Basic direct access method (BDAM)
Ø  Basic sequential access method (BSAM)
Ø  Queued sequential access method (QSAM)
Ø  Basic partitioned access method (BPAM)
Ø  Indexed sequential access method (ISAM)
Ø  Virtual storage access method (VSAM)
Ø  Object access method (OAM)

- Both types of access, queued and basic, are suitable for dealing with the records of a data set. 
- The queued access methods are an improvement over the basic access methods. 
- They support a read-ahead scheme and internal blocking of data. 
- This allows multiple records to be combined into one unit, increasing performance. 
- The sequential methods assume that records will only be processed sequentially, the opposite of the direct access methods. 
- Some devices, like magnetic tape, enforce sequential access by their nature. 
- A data set can be written using sequential access and later processed in a direct manner.
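The performance gain from blocking in the queued methods can be illustrated with a short Python sketch (the record counts and blocking factor below are made-up numbers):

```python
def blocks_needed(num_records, blocking_factor):
    """Physical blocks (and hence device I/O transfers) needed to hold
    num_records logical records at blocking_factor records per block."""
    return -(-num_records // blocking_factor)  # ceiling division

# Unblocked: one logical record per physical block -> one I/O per record.
print(blocks_needed(1000, 1))   # 1000
# Blocked: ten records per block -> a tenth of the I/O operations.
print(blocks_needed(1000, 10))  # 100
```

The same idea is why read-ahead helps: when one block is transferred, several logical records arrive in memory together, and the next few reads are satisfied without touching the device.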

Today we have access methods that are network-oriented such as the following:
Ø  Basic telecommunications access method or BTAM
Ø  Queued teleprocessing access method or QTAM
Ø  Telecommunications access method or TCAM
Ø  Virtual telecommunications access method or VTAM

The term access method is also used by IMS (the IBM Information Management System) to refer to its methods for manipulating database records. 
- The access methods it uses are:
Ø  GSAM or generalized sequential access method
Ø  HDAM or hierarchical direct access method
Ø  HIDAM or hierarchical indexed direct access method
Ø  HISAM or hierarchical indexed sequential access method
Ø  HSAM or hierarchical sequential access method
Ø  PHDAM or partitioned hierarchical direct access method
Ø  PHIDAM or partitioned hierarchical indexed direct access method

Ensuring a proper kick off and knowledge transfer to external teams that also work on the software ..

When you consider a software team, you have a development team, a testing team, designers (including user interface designers) and a management team (and one could include the product managers or the product management team in the set of managers). These people have a lot of knowledge about the product and its features, especially those working on specific features. Given this, they take that level of knowledge for granted when they have a discussion with people who are not on the team, and it can be difficult for them to realize that other teams do not have the kind of knowledge that comes from working on the same product for a long period of time.
Any product team will need to work with other teams that provide services such as translating the software into other languages, doing the documentation for the software, and other related services that the product team will not do itself. The problem is that people working in these teams do not have the same knowledge of the product that the product team has, and this can be a source of friction in their interactions. In some cases, the software team does the right thing: it finds out the level of knowledge of these external teams and sets aside time to ensure that these teams get a ramp-up on the software program. But even in these cases, it is a fair approximation that they will never reach the same amount of knowledge as the product team, and expecting such a level is what leads to frustration.
In my experience, the frustration comes when the product teams work with external teams that have little or no experience with the product (and this can happen often enough when dealing with external teams, since these teams typically cannot afford to have dedicated people for handling products, and their attrition rate may be different from that of the product team). So, the product team has some amount of knowledge transfer with the external teams, and assumes that there will be some questions that would come from the other teams. However, it may turn out that it takes time for these new people to understand the product, and the queries only come later in the timeline, and this is unexpected for the product team.
The way to handle this is to have a line of managers who understand that new people take time to understand the functionality of the product, and that these people will never get the time with it that people on the product team have. So there are two parts to handling this: prepare an effective training program that provides as much information as possible without over-doing it, and equally, learn from teams that have gone through such an experience - what steps can ensure that the external teams learn faster, and what are some of the common problems and frustrations that can crop up in such a relationship. 

Tuesday, May 28, 2013

Concept of page fault in memory management

A page fault (also known as pf or #PF) can be thought of as a trap raised by the hardware for the software whenever a program tries to access a page that is mapped into an address space in virtual memory but has not been loaded into main memory. 

In most cases, the page fault is handled by the operating system by bringing the required page into an address in main (physical) memory, or sometimes by terminating the program if it makes an illegal attempt to access the page.

- The memory management unit (MMU), located in the processor, is the hardware responsible for detecting page faults. 
- The software that helps the memory management unit in handling page faults is the exception handling software, which is part of the OS. 
- A 'page fault' is not always an error.
- Page faults are often a necessary part of increasing the memory available to applications, made possible through the operating system's use of virtual memory.
- 'Hard fault' is the term Microsoft uses instead of page fault in the latest versions of the Resource Monitor.

Classification of Page Faults

Page faults can be classified into three categories, namely:

1. Minor: 
- This type of fault is also called a soft page fault and occurs when the page has been loaded into memory at the time the fault is generated, but the memory management unit has not marked it as being loaded in physical memory. 
- A page fault handler is included in the operating system, whose duty is to make an entry for the page pointed to by the memory management unit. 
- After making the entry, its task is to indicate that the page has been loaded. 
- However, it is not necessary for the page to be read into memory. 
- This is possible when different programs share memory and the page has already been loaded into memory for the other applications. 
- In operating systems that apply the technique of secondary page caching, a page can be removed from the working set of a process but not deleted or written to disk.

2. Major: 
- A major fault is actually the mechanism many operating systems use for increasing the memory available to a program on demand. 
- The operating system delays loading parts of the program from disk until the program makes an attempt to use them and generates a page fault.
- In this case, the page fault handler has to find a free page in memory, or free up a non-free one. 
- When the page is available, the operating system can read the data into the new page in main memory and make an entry for the required page.

3. Invalid: 
- This type of fault occurs whenever a reference is made to an address that does not exist in the virtual address space, and which therefore has no page corresponding to it in memory. 
- The page fault handler then has to terminate the code that made the reference and give an indication of the invalid reference. 
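The demand-paging behavior behind major faults can be sketched in a few lines of Python. This toy simulator uses FIFO replacement and made-up page numbers; a real OS tracks far more state, but the fault-counting logic is the same in spirit:

```python
from collections import deque

def count_faults(reference_string, num_frames):
    """Count page faults for a FIFO-replacement demand pager."""
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1                      # page not in memory: a fault
            if len(frames) == num_frames:    # memory full: evict oldest page
                frames.remove(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3))  # 10 faults with 3 frames
print(count_faults(refs, 4))  # 7 faults with 4 frames
```

Adding a frame reduces the fault count here, matching the intuition that more physical memory means fewer major faults (though FIFO is also known for cases where extra frames make things worse, Belady's anomaly).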

Analytics - Measuring data relating to user information - Part 6

This is a series of posts about the use of analytics in your software application. My experience is more in the nature of desktop applications, but a lot of what has been written in the earlier posts also applies to analytics for web applications; there may be some differences, but the need for analytics is the same, and the decisions that can be made on the basis of analytics are the same. Some of the guidelines and warnings are also the same; in short, whether you are working on desktop applications or web applications, the tools may be different, but there is a strong need to ensure that you have designed a strategy for this rather than doing it on an ad-hoc basis. In the previous post (Analytics - Measuring data relating to user information - Part 5), I talked about a problem where the team had made a strategy to collect data, but there were not enough people to actually analyze the data and take decisions based on it.
However, there are some pitfalls when it comes to analytics and taking decisions based on it. There is a joke about a person who would scream for data for every decision; whenever there was a need for any decision, or planning towards one, there would be a hunt for data, and if the data was not present, there were high chances that the team would be sent off to collect it. This is a joke, but I have seen managers who get too data-oriented. This may be anathema to those who are firm proponents of analytics, but there can be 2 problems with an analytics-oriented approach.
- The data may be incorrect
- There may be so much emphasis on data, that it crosses a limit and common sense is lost

Sometimes these 2 problems can also intersect, but let's take each of them separately. You cannot just wish for data to happen - this is a very obvious statement, but it goes to the heart of the problem. We had a situation where we were collecting some data sent by a particular dialog in the application, and the data was coming in beautifully. A full release went by, and nobody thought much of the code in that particular function. However, in the next release there was a defect in that section of the dialog that also affected the data being collected, and the developer who was debugging that area came across something puzzling. It turned out that the data collection code did not cover one of the paths the application could take, which we speculated was hit around 15% of the time in customer interactions, but we had no real data. The net result was that we understood our data for that particular dialog was understated by some percentage, but we did not know that percentage accurately. Hence, any decision that we made on studying the data from that dialog had a margin of error that was unacceptable. We reviewed the test cases and their execution from the time when the code was being written, and realized that because of a paucity of time, the testing for this particular part was not done as it should have been. The learning from all this was that data can be incorrect even with the best of efforts. And this takes us to the next paragraph, although not directly.
Basing business decisions on data analysis can be great if you have the correct data, and can be suicidal if your data is incorrect. Further, when important decisions are being taken, it is important that there be some sort of confirmation, or that data is used to confirm some decision rather than being the driver of the decision making. So, suppose the business end of the application wants to run a campaign based on their observing of the market information they are getting, analytics could be of great help in confirming some of the assumptions that the team is making as a part of this decision making. But, using only analytics as the base on which to make decisions, or creating an environment for the same is not recommended.
Even when collecting data, there should be a thorough analysis of the data and the data collection methods to ensure that the data that is collected is correct; in fact I had a colleague who was in favor of analytics but had also been burnt before. His advice was simple - when you are getting data from analytics, assume that the data is wrong and then prove that it is right and then use it.
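That advice can be turned into routine before-use checks. The sketch below is hypothetical (the event fields, the known-dialog list and the rules are all invented for illustration), but it shows the shape of such validation:

```python
def validate_events(events, known_dialogs):
    """Split collected analytics events into plausible and suspect lists."""
    valid, rejected = [], []
    for e in events:
        ok = (
            e.get("dialog") in known_dialogs   # source must be a dialog we ship
            and isinstance(e.get("count"), int)
            and e["count"] >= 0                # negative usage counts are impossible
        )
        (valid if ok else rejected).append(e)
    return valid, rejected

events = [
    {"dialog": "export", "count": 12},
    {"dialog": "export", "count": -3},    # corrupt: negative count
    {"dialog": "mystery", "count": 5},    # corrupt: unknown dialog name
]
valid, rejected = validate_events(events, {"export", "print"})
print(len(valid), len(rejected))  # 1 2
```

Checks like these will not catch the under-counting problem described above (data that is silently never sent), but they do catch a class of corruption cheaply, and the rejected list is itself a signal worth investigating.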

Read the next post in this series - Measuring data relating to user information - Part 7

Monday, May 27, 2013

Building community: Assigning a rating badge or similar marker to frequent contributors in forums

For every organization, there is a need to connect with the customer base. When customers feel that the company cares about them and is able to resolve their needs, they are more likely to continue using the software product and to upgrade. For this, they expect that when they have a query, an issue or a complaint, somebody is there to listen to them and, if a solution is required, to explain the next steps to them. If no solution is available, but you are able to explain this to the customer, in a number of cases they are fine with this.
However, it is expensive for a company to hire people to man their support structure - whether this be direct phone support, online chat support, or for reviewing the user forums and replying to posts. What would be ideal would be if you could build an online community where when a customer asks a question, another customer could respond to that question and satisfy the needs of the person asking the question. In such a case, the person asking the question does not care whether the answer has been given by somebody working for the company, or by another user.
In fact, in a number of cases, the users tend to have more information than the support staff. Support staff need to be trained on the software, and they do not tend to be regular users of the software for their own requirements. On the other hand, a customer who answers is likely already comfortable with the workflows that the person seeking information is using, and would be able to explain things in a much better way. There may be workarounds that the customer has developed which the support staff has no idea about, and which would work for the person asking the question.
However, building a community is not easy. Why would a person want to be there in the user forums, and why would they want to share their knowledge with other people? Well, people have different motives: an innate desire to help other people, a craving for recognition from other customers, or a desire to share something new that they know about. The organization should do its best to encourage people to get into this mode, and one of the easiest ways of doing this in a non-monetary way is by having a system of merit badges.
It is fairly easy to design a system whereby the profile of the user can have additional elements displayed such as merit badges, which are awarded based on a logic system setup by the support staff. It could be based on the number of contributions, about likes from other customers, about the number of solutions suggested, or typically many organizations design a system which uses a combination of all of these to design a merit badge. When a person sees a merit badge next to their name, for a number of people, there is a sense of satisfaction, and this can be a mighty factor that inspires them to continue their participation in these user forums.
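One possible shape for such a logic system, sketched in Python with invented weights and thresholds (a real forum would tune these to its own community):

```python
def badge_for(posts, likes, solutions):
    """Award a badge level from a weighted contribution score."""
    score = posts + 2 * likes + 5 * solutions   # accepted solutions weigh most
    if score >= 500:
        return "gold"
    if score >= 100:
        return "silver"
    if score >= 20:
        return "bronze"
    return None  # not yet at the first badge level

print(badge_for(posts=10, likes=8, solutions=1))  # "bronze" (score 31)
```

Weighting accepted solutions highest rewards exactly the behavior the forum wants more of: answers that actually resolve other customers' questions, not just raw activity.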

Sunday, May 26, 2013

Where are artificial neural networks applied?

Artificial neural networks have been applied to a number of problems in diverse fields such as engineering, finance, physics, medicine, biology and so on. 
- All these applications are based on the fact that these neural networks can simulate the capabilities of the human brain. 
- They have found potential use in classification and prediction problems. 
- These networks can be classified as non-linear, data-driven, self-adaptive approaches. 
- They come in handy as a powerful tool when the underlying data relationship is not known. 
- They find it easy to recognize and learn patterns, and can correlate between the input sets and the result values.
- Once the artificial neural networks have been trained, they can be used in the prediction of outcomes for new data. 
- They can even work when the data is not clean, i.e., when it is noisy and imprecise. 
- This is the reason why they prove to be an ideal tool for modeling agricultural data, which is often very complex. 
- Their adaptive nature is their most important feature.
- It is because of this feature that models developed using ANNs are quite appealing when data is available but there is a lack of understanding of the problem.
- These networks are particularly useful in areas where statistical methods can be employed. 
- They have uses in various fields:

    1. Classification Problems:
a)   Identification of underwater sonar currents.
b)   Speech recognition
c)   Prediction of the secondary structure of proteins.
d)   Remote sensing
e)   Image classification
f)    Speech synthesis
g)   ECG/ EMG/ EEG classification
h)   Data mining
i)     Information retrieval
j)    Credit card application screening

  2. Time series applications:
a)   Prediction of stock market performance
b)   ARIMA time – series models
c)   Machine robot/ control manipulation
d)   Financial, engineering and scientific time series forecasting
e)   Inverse modeling of vocal tract

  3. Statistical Applications:
a)   Discriminant analysis
b)   Logistic regression
c)   Bayes analysis
d)   Multiple regression

  4. Optimization:
a)   Multiprocessor scheduling
b)   Task assignment
c)   VLSI routing

  5. Real world Applications:
a)   Credit scoring
b)   Precision direct mailing

  6. Business Applications:
a)   Real estate appraisal
b)   Credit scoring: used for determining the approval of a loan based on the applicant's information.

  7. Mining Applications:
a)   Geo-chemical modeling using neural pattern recognition technology.

  8. Medical Applications:
a) Hospital patient stay length prediction system: the CRTS/QURI system was developed using a neural network for predicting the number of days a patient has to stay in hospital. The major benefits of this system were money savings and better patient care. This system required the following 7 inputs:
Ø  Diagnosis
Ø  Complications and comorbidity
Ø  Body systems involved
Ø  Procedure codes and relationships
Ø  General health indicators
Ø  Patient demographics
Ø  Admission category

  9. Management Applications: Jury summoning prediction: a system was developed that could predict the number of jurors actually required. Two inputs were supplied: the type of case and the judge number. The system is known to have saved around 70 million.
  10. Marketing Application: A neural network was developed for improving the direct mailing response rate. This network selected those individuals who were likely to respond to the 2nd mailing. 9 variables were given as the input. It saved around 35% of the total mailing cost.
  11. Energy cost prediction: A neural network was developed that could predict the price of natural gas for the next month. It achieved an accuracy of 97%. 
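To make the train-then-predict idea behind all these applications concrete, here is a minimal single-neuron (perceptron) sketch in plain Python. The task, learning the logical AND of two inputs, is a toy stand-in, not one of the applications listed above:

```python
def train(samples, epochs=20, lr=0.1):
    """Train a single perceptron with the classic error-correction rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred              # zero when the sample is classified right
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

The same loop, scaled up to many neurons, layers and inputs, is what sits behind the classification and prediction systems above: training adjusts the weights on known cases, and prediction then applies the frozen weights to new data.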

Ensuring that you have knowledge of changes in components that you integrate ...

Most modern-day software applications use a number of components within them. The advantage of using components is that a lot of specialized functionality can be implemented through them, functionality which would otherwise be difficult for the application development team to write themselves. And even if the team were able to write all of it, just maintaining all these specialized features can be a big problem. Consider the case of a software application which needs to create a video and burn it onto media such as CD/DVD/Blu-ray. These are specialized features that are done pretty well by many components, both open source and commercial. However, if the product team wrote all this functionality as part of their product, they would also need to maintain all this code. Even for this specialized job, there is a lot of maintenance involved: the code needs to work with any new optical media that is introduced (as happened when Blu-ray discs appeared), and the features should keep working as new versions of Windows and Mac operating systems emerge. For all these reasons, it is a better strategy to use components for specialized functionality.
However, some risks come with using components, and these risks exist whether the component is from another group within the organization or from an external organization. One of the biggest risks arises when the component is used by multiple software applications and your application does not have dedicated use of it. In such cases, you need to stay informed about the changes happening in the component. Consider again the case of a component that allows writing to a DVD: another product using the component may request a change for some need of theirs, and the component makers may decide that the need is genuine and put in the required feature.
There is a good chance that the new feature will not impact your application, but there is still a chance that it affects your feature workflow. For example, you may be using the component in a silent mode, whereby no dialog from the component shows up, but another organization requests a dialog for a need of theirs. In such a case, you need to know about this change, and your discussions with the component makers should ensure that there is a way to keep the new dialog out of your workflow, such as a parameter passed from your application code to the component that suppresses the dialog.
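The "parameter that keeps the dialog out of your workflow" idea can be sketched as follows. Everything here is hypothetical: `BurnerComponent` and its parameters stand in for whatever third-party component your application integrates.

```python
class BurnerComponent:
    """Stand-in for a third-party burning component (hypothetical API)."""
    def burn(self, image_path, silent=True, show_confirm_dialog=False):
        # A newer component version might add show_confirm_dialog for another
        # client product; callers with silent workflows must keep it off.
        if not silent or show_confirm_dialog:
            return "dialog shown"
        return "burned silently"

def burn_in_silent_workflow(component, image_path):
    # Pin the parameters that guarantee no UI appears, rather than relying
    # on the component's defaults, which may change in a future release.
    return component.burn(image_path, silent=True, show_confirm_dialog=False)
```

Passing such flags explicitly, instead of depending on defaults, is what protects your workflow when the component changes for another consumer's benefit.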
In all such cases, even though it is the duty of the component maker to publish a list of changes from the previous version, it is also your duty to check that none of the changes impacts your application. And if you are integrating an open source component, you may not be able to get the external team to make changes for you, so you need to check their change list very carefully and assess its impact on your application. Only if there is no impact should you incorporate the new version of the component into your application.
This is one of the risks in your project planning, and depending on the number of components you use and the profile of those components, it can be a high-level risk that needs to be checked at regular intervals.
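One lightweight way to enforce the "only adopt reviewed versions" rule is a small gate in the build or integration script. This is a minimal sketch under assumed conventions; the component name and version strings are hypothetical.

```python
# Versions whose change lists have been reviewed for impact on our application.
REVIEWED_VERSIONS = {"2.1.0", "2.1.1"}

def can_adopt(component_name, version):
    """Return True only for component versions already reviewed for impact."""
    if version not in REVIEWED_VERSIONS:
        print(f"Blocked: {component_name} {version} changelog not reviewed yet")
        return False
    return True
```

The point is simply that version adoption becomes an explicit decision recorded somewhere, rather than an automatic pickup of whatever the component maker ships next.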

Saturday, May 25, 2013

What are advantages and disadvantages of artificial neural networks?

Artificial neural networks can simulate the biological nervous system, which is one of their biggest advantages and the reason they are used in so many real-life applications.
They have made it easy to carry out complex processes such as:
Ø  Function approximation
Ø  Regression analysis
Ø  Time series prediction
Ø  Fitness approximation
Ø  Modeling
- With artificial neural networks, classification based on sequences and patterns is possible, along with harder problems such as sequential decision making and novelty detection.
- A number of data-processing operations such as clustering, filtering, compression and blind source separation are also carried out with the help of artificial neural networks.
- Artificial neural networks can be considered the backbone of the robotics engineering field.
- They are used in computer numerical control and in directing manipulators.
- They offer advantages in the following fields:
  1. System control (including process control, vehicle control and natural resources management)
  2. System identification
  3. Game playing
  4. Quantum chemistry
  5. Decision making (in games such as poker, chess, backgammon and so on)
  6. Pattern recognition (including face identification, radar systems, object recognition and so on)
  7. Sequence recognition (handwritten text recognition, speech, gestures, etc.)
  8. Medical diagnosis
  9. Financial applications, e.g., automated trading systems
  10. Data mining
  11. Visualization
  12. E-mail spam filtering
- Today several types of cancer can be diagnosed using artificial neural networks.
- HLND is an ANN-based hybrid system for detection of lung cancer.
- The diagnosis carried out with it is more accurate, and radiology throughput is higher.
- These diagnoses are then used for building models based on the patient's information.
- The following theoretical properties of ANNs are a clear advantage to the industry:
  1. Computational power: The multilayer perceptron (MLP) is a universal function approximator.
  2. Capacity: This property indicates the ability of an ANN to model almost any given function. It relates both to the notion of complexity and to the information contained in a network.
  3. Convergence: This property depends on a number of factors such as:
Ø  The number of existing local minima, which in turn depends on the model and the cost function
Ø  The optimization method used
Ø  The impracticality of some methods for a very large number of parameters
4. Generalization and statistics: Over-training is quite a prominent problem in applications where the system must generalize to unseen examples. It leads to over-specified or convoluted systems in which the network exceeds the limit of its parameters. ANNs offer two solutions to this problem:
-   Cross-validation and
-   Regularization

Disadvantages of Artificial Neural Networks

1. Artificial neural networks require a lot of diverse training before they are ready for real-world operation, a drawback that is most prominent in the robotics industry.
2. Considerable storage and processing resources are required for implementing large neural networks in software.
3. Humans process signals through a graph of neurons; simulating even a very small problem in a similar way can call for excessive disk and memory requirements.
4. The time and money cost of building ANNs is very large.
5. Furthermore, simulation of the signal transmission through all the connections and associated neurons is required.
