Showing posts with label Modeling. Show all posts

Sunday, May 26, 2013

Where are artificial neural networks applied?


Artificial neural networks (ANNs) have been applied to problems in diverse fields such as engineering, finance, medicine, physics, and biology.
- All of these applications rest on the fact that neural networks can simulate certain capabilities of the human brain.
- They have found potential use in classification and prediction problems.
- These networks are classified among the non-linear, data-driven, self-adaptive approaches.
- They come in handy as a powerful tool when the underlying data relationship is not known.
- They readily recognize and learn patterns and can correlate input sets with result values.
- Once trained, artificial neural networks can be used to predict outcomes from data.
- They can work even when the data is not clean, i.e., when it is noisy and imprecise.
- This is why they prove to be an ideal tool for modeling agricultural data, which is often very complex.
- Their adaptive nature is their most important feature.
- Because of this feature, models developed using ANNs are quite appealing when data is available but understanding of the problem is lacking.
- These networks are particularly useful in areas where statistical methods are traditionally employed.
- They have uses in various fields:

  1. Classification problems:
a) Identification of underwater sonar currents
b) Speech recognition
c) Prediction of the secondary structure of proteins
d) Remote sensing
e) Image classification
f) Speech synthesis
g) ECG/EMG/EEG classification
h) Data mining
i) Information retrieval
j) Credit card application screening

  2. Time series applications:
a) Prediction of stock market performance
b) ARIMA time-series models
c) Machine/robot control and manipulation
d) Financial, engineering, and scientific time series forecasting
e) Inverse modeling of the vocal tract

  3. Statistical applications:
a) Discriminant analysis
b) Logistic regression
c) Bayes analysis
d) Multiple regression

  4. Optimization:
a) Multiprocessor scheduling
b) Task assignment
c) VLSI routing

  5. Real-world applications:
a) Credit scoring
b) Precision direct mailing

  6. Business applications:
a) Real estate appraisal
b) Credit scoring: used to determine the approval of a loan from the applicant's information

  7. Mining applications:
a) Geo-chemical modeling using neural pattern recognition technology

  8. Medical applications:
a) Hospital patient stay-length prediction: the CRTS/QURI system was developed using a neural network to predict the number of days a patient has to stay in hospital. Its major benefits were money saved and better patient care. The system required the following 7 inputs:
- Diagnosis
- Complications and comorbidity
- Body systems involved
- Procedure codes and relationships
- General health indicators
- Patient demographics
- Admission category

  9. Management applications: jury summoning prediction: a system was developed that could predict the number of jurors actually required. Two inputs were supplied: the type of case and the judge number. The system is known to have saved around 70 million.
  10. Marketing applications: a neural network was developed to improve the direct-mailing response rate by selecting the individuals most likely to respond to a second mailing. Nine variables were given as input, and the network saved around 35% of the total mailing cost.
  11. Energy cost prediction: a neural network was developed to predict the price of natural gas for the next month, achieving an accuracy of 97%.
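As an illustration of the pattern-learning idea behind all of the applications above (and not of any of the production systems listed), here is a minimal single-neuron (perceptron) classifier that learns the logical AND function from labeled examples. The learning rate and epoch count are arbitrary illustrative choices:

```python
# Minimal perceptron sketch: a single neuron adjusts its weights from
# labeled examples until its outputs match the targets. Real applications
# use multi-layer networks; this only shows the supervised-learning loop.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single neuron on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire if the weighted sum exceeds 0.
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge the weights toward the target output.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The AND truth table as training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Once trained, the same `predict` call can be applied to unseen inputs, which is exactly the "train once, then predict" usage described above.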


Friday, May 11, 2012

Explain Agile Model Driven Development (AMDD) lifecycle?


"AMDD" is the abbreviated form of "agile model driven development," a methodology that is nowadays quite popular among developers and programmers in the field of software engineering.
AMDD was born from "MDD," or "model driven development," as its agile version: it makes use of small, agile models rather than the extensive models of pure model driven development.

Agile model driven development was formulated out of model driven development once it was realized that iterative development with model driven development is possible. And since it consists of iterations, it is categorized among the agile software development methodologies.

The agile models that drive the whole development procedure are just good enough to support the development effort. Agile model driven development is one of the most sought-after methodologies for scaling agile software development beyond small teams.

Agile Model Driven Development Lifecycle


To understand agile model driven development, one needs to be familiar with the life cycle of this development model. This article focuses on that life cycle.

The life cycle of agile model driven development is described at quite a high level. So let us see the various stages in this life cycle:

1. Envisioning: 
This stage of the life cycle comprises two sub-stages, which usually come into play during the first few weeks of the development process. This stage is included in the life cycle for the purpose of identifying the scope of the system and the kind of architecture suitable for the project. For this, the following two sub-stages come into play:

(a)  Initial requirements envisioning or modeling: 
This sub-stage may take up to several days, during which the high-level requirements are identified and the scope of the release is determined. To carry it out, the developer may require some type of usage model in order to see how the software project will be used by the customers or the users.

(b) Initial architecture modeling: 
This stage is all about setting a proper technical direction for the development of your software project.

      2. Iteration Modeling: 
     This stage involves planning what is to be done in the current iteration. Developers often ignore modeling techniques while planning the objectives of the next iteration. As we know, the requirements in every agile model are implemented in order of their priority.
     
      3. Model Storming: 
      As mentioned in the agile manifesto, a few members of the development team discuss a development issue by sketching it on a whiteboard or on paper. Sessions involving such activities are called model storming sessions. These sessions are short, lasting at most half an hour.
   
     4. Test driven development involving executable specifications: this stage covers the coding phase, using refactoring and test-first design (TFD). Agile modeling helps you address cross-entity issues, whereas test driven development lets you focus on each single entity. Above all, through the technique of refactoring the design, it is ensured that the high quality of the software project is not hampered.
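The test-first rhythm described in step 4 can be sketched in a few lines of Python. The discount rule below is a hypothetical example, not part of AMDD itself: the test (the executable specification) is written before the implementation, the simplest code that passes is then added, and refactoring follows while the test stays green.

```python
# A sketch of test-first design (TFD) with a made-up business rule.

# Step 1: write the test first. It fails until apply_discount() exists.
def test_apply_discount():
    assert apply_discount(50) == 50     # below threshold: price unchanged
    assert apply_discount(200) == 180   # 10% off at or above the threshold

# Step 2: write the simplest implementation that makes the test pass.
def apply_discount(total, threshold=100, rate=0.10):
    return total * (1 - rate) if total >= threshold else total

# Step 3: refactor freely; rerunning the test keeps the behavior pinned down.
test_apply_discount()
```

The test doubles as an executable specification: anyone reading it knows exactly what the code is required to do.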


Thursday, May 10, 2012

What are different Myths and Misconceptions surrounding Agile Model Driven Development (AMDD)?


The agile version of model driven development, AMDD (agile model driven development), is now recognized as one of the most popular development models in the field of software engineering. The methodology was born out of the need for a development process combining the agile methodology with test driven development, or TDD.

The agile development in this combination is supposed to make addressing cross-entity issues easy, while the TDD (test driven development) counterpart is supposed to focus exclusively on each individual entity of the software system or application.

About Agile Model Driven Development


Lately, agile model driven development has suffered a lot of criticism, giving rise to several myths and misconceptions regarding it.
- Agile model driven development has been characterized, to some extent, as an obsolete agile software development methodology.
- Agile model driven development is thought to involve only a little modeling but quite a lot of coding.
- The iteration effort is deemed to be split between coding activities and software modeling activities. This creates the illusion that the majority of the design is carried out as part of the implementation effort.
- Such situations are also true of many other traditional software development methodologies.
- What happens in situations like this is that the designers ultimately blame the developers without questioning their own way of running the software development.
- One misconception regarding agile model driven development is that it does not specify which types of software models are to be created. Even though agile model driven development always specifies that the right artifact is to be applied, it never specifies what that particular artifact really is.
- One of the myths is that agile model driven development works perfectly well with UCDD (use case driven development) and FDD (feature driven development).

Myths and Misconceptions of Agile Model Driven Development


Below mentioned are some of the other myths and misconceptions surrounding the agile model driven development:
  1. The agile models do not fulfill their purpose well.
  2. It is very difficult to understand the agile models.
  3. The agile models do not exhibit sufficient consistency.
  4. The agile models are not sufficiently detailed.
  5. The accuracy of the agile models is not so high.
  6. The agile models exhibit a characteristic complexity.
  7. Sometimes negative values are provided by the agile models.

Point of Argument


- Another much-argued point of agile model driven development is that agile documents and models seem to be sufficient for carrying out the development.
- From this, people develop false assumptions and inflated expectations regarding the quality of the software artifact, when the software is not as good as it is portrayed.
- It is also thought that once an artifact has fulfilled its intended purpose, any further work carried out on it is useless bureaucracy.

Benefits of Agile Model Driven Development


- Agile model driven development takes a more realistic approach and describes how the developers and stakeholders are supposed to work together in cooperation to create good models.
- Agile model driven development is quite flexible in that it allows the most primitive and unsophisticated development tools, such as paper and whiteboards, to be used for creating the models, as mentioned above.
- Agile model driven development is independent of sophisticated CASE tools, even though such tools can be used effectively by experts.


Friday, March 2, 2012

What are different interpreting data defects?

A software system or application can perform an assigned task only when it is capable of interpreting or analyzing data. Proper analysis or interpretation of the data is necessary for the proper execution of the task.
If the data interpretation itself is wrong, then you cannot expect accurate results.

STEPS INVOLVED IN INTERPRETATION OF DATA
The interpretation of data involves the following steps:
- Inspection of data
- Cleaning of data
- Transformation of data
- Modeling of data

These steps are responsible for producing only the meaningful data with conclusions and any other supportive decisions. There are many approaches and facets of the data interpretation.

DATA INTERPRETING TECHNIQUES
- Data analysis employs various different data interpretation techniques in different domains.
- Following are some data interpreting techniques:
1. Data mining
These techniques are focused on the modeling of data as well as on descriptive purposes.

2. Business intelligence
- This interpreting technique is suitable for heavy databases where a lot of aggregation work is required.
- It is basically used in the business domain.

3. Statistical Analysis
Comprises techniques such as exploratory data analysis (EDA, which discovers new features in the data), descriptive analysis, and CDA or confirmatory data analysis (responsible for testing existing hypotheses).

4. Predictive Analytics
It is employed for predictive forecasting.

5. Text Analytics
It is used for the extraction and classification of data from various sources.

CATEGORIES OF DATA TYPES
Different data types employ different interpreting techniques. The data is classified in to the following categories:

1. Qualitative Data
Data denotes the presence or absence of a particular characteristic (pass/fail).
2. Quantitative Data
Data is numerical: either a continuous value within a specified range or a whole counting number.
3. Categorical Data
Data drawn from several different or similar categories.

DATA INTERPRETATION/ANALYSIS PROCESS
The interpretation or analysis of data is not a simple process; it involves complex processes, and complex processes are very much prone to defects and errors.
- In a data interpretation process, defects can exist in every phase.
- Let us start from the first step of the process and discuss the defects as we move down in the process.
- Data cleaning involves the removal of erroneous data.
- If the program performing the task of data cleaning itself is diagnosed with some defect, then it can let in some erroneous data which in turn can cause many defects in the whole process.
- The changes made in data should be retrievable and should be documented.
- It is recommended that the data to be analyzed should be quality checked as soon as possible since the defective data is the cause of many defects in the interpretation process.
- There are several ways of checking the quality like:
# Descriptive statistics
# Normality
# Associations
# Frequency counts
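Two of the quality checks listed above, descriptive statistics and frequency counts, can be sketched with the standard library alone. The readings and categories below are made-up illustrative values:

```python
# Descriptive statistics and frequency counts as early data-quality checks.
from collections import Counter
from statistics import mean, median, stdev

readings = [4.1, 4.3, 4.2, 4.1, 98.6, 4.2]   # one suspicious outlier

# A mean far from the median hints at outliers in the column.
print("mean  :", round(mean(readings), 2))    # 19.92
print("median:", round(median(readings), 2))  # 4.2
print("stdev :", round(stdev(readings), 2))

# Frequency counts expose unexpected or impossible category values.
categories = ["pass", "pass", "fail", "pass", "??", "fail"]
print(Counter(categories))                    # the "??" entry stands out
```

Running such checks as soon as the data arrives catches the defective records before they propagate into the rest of the interpretation process.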

- In some cases, values in the data might be missing.
- This can cause the whole interpretation process to falter or even come to a halt.
- In such a case, the missing data can be imputed.
- Defects can also occur if the data is not uniformly distributed.
- To determine this, the randomization procedure should be checked for its success.
- If you have not included a randomization procedure, you can use a non-sampling randomization procedure.
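The imputation mentioned above can be as simple as filling each missing entry with the mean of the observed values. This is a minimal sketch of that strategy (mean imputation), with `None` standing in for a missing value:

```python
# Mean imputation: replace missing entries (None) with the mean of the
# observed values in the same column.
def impute_mean(values):
    observed = [v for v in values if v is not None]
    fill = sum(observed) / len(observed)   # mean of the non-missing values
    return [fill if v is None else v for v in values]

column = [10.0, None, 14.0, 12.0, None]
print(impute_mean(column))  # [10.0, 12.0, 14.0, 12.0, 12.0]
```

Mean imputation is only one choice; as the text notes, any change made to the data this way should be documented and retrievable.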

SOME DATA DISTORTIONS
There are some possible data distortions that also give rise to data interpreting defects:
1. Item Non Response
The data should be analyzed for this factor in the initial stage of the data analysis itself. The presence of randomization does not matter here.
2. Drop Out
Like item non response, the data is to be analyzed for this also in the beginning itself.
3. Quality Treatment
Poor-quality data should be treated with various manipulation checks.


Friday, July 22, 2011

How to create a behavioral model in software engineering?

While the other analysis modeling elements provide a static view of the software, behavioral modeling depicts its dynamic behavior. The behavioral model uses input from scenario-based, flow-oriented, and class-based elements to represent the states of analysis classes and of the system as a whole. To accomplish this, states are identified, the events that cause a class to make a transition from one state to another are defined, and the actions that occur as a transition is accomplished are identified. State diagrams and sequence diagrams are the UML notations used for behavioral modeling.

The behavioral model is an indication of how software responds to external events. The steps to be followed are:
- All use cases are evaluated.
- Events are identified, and their relation to classes is identified.
An event occurs whenever the system and a user exchange information. An event is not the information that is exchanged but the fact that information has been exchanged.

- A sequence is created for each use case.
- A state diagram is built.
There are two different characterizations of states in behavioral modeling: the state of a class as the system performs its function, and the state of the system as seen from outside. The system has states that represent specific externally observable behavior, whereas a class has states that represent its behavior as the system performs its functions.
- Behavioral model is reviewed for accuracy and consistency.
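The states, events, and transitions identified in the steps above can be captured directly as a transition table. This sketch uses a hypothetical order-handling flow; the state and event names are illustrative, not from any particular system:

```python
# A behavioral model as a state-transition table: (state, event) -> next state.
TRANSITIONS = {
    ("idle",       "submit"):  "validating",
    ("validating", "valid"):   "processing",
    ("validating", "invalid"): "idle",
    ("processing", "done"):    "complete",
}

def fire(state, event):
    """Return the next state; undefined events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["submit", "valid", "done"]:   # one externally observable run
    state = fire(state, event)
print(state)  # "complete"
```

Reviewing such a table for missing or contradictory entries is a concrete way to perform the accuracy-and-consistency review in the final step.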


Introduction to Class-Responsibility-Collaborator (CRC) Modeling

Class-responsibility-collaborator (CRC) modeling is a means to identify and organize the classes relevant to system requirements. A CRC model is a collection of index cards, each of which consists of three parts:

Classes: a class is a collection of similar objects.
- Entity classes, or business classes, are obtained directly from the statement of the problem. The information contained in these classes is important to users, but the classes do not display themselves.
- Boundary classes are used to create the interface which the user sees and interacts with as the software is used.
- Controller classes are designed to manage the creation or update of entity objects, the instantiation of boundary objects, communication between objects, and the validation of data.

Responsibility: something that a class knows or does. Some guidelines that can be applied for allocating responsibilities to classes are:
- System intelligence should be distributed across classes to best address the needs of the problem.
- Each responsibility should be stated as generally as possible.
- Information and the behavior related to it should reside within the same class.
- Information about one thing should be localized with a single class not distributed across multiple classes.
- Responsibilities should be shared among related classes when appropriate.

Collaborator: another class that the class interacts with to fulfill its responsibilities.
- A collaboration takes one of two forms: a request for information or a request to do something.
- If a class cannot fulfill all of its obligations itself, then a collaboration is required.
- Collaboration identifies relationships between classes.
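A CRC index card has a natural representation as a small data structure holding the three parts described above. The `Order` card below is purely illustrative; the names are not from any particular system:

```python
# A CRC index card as a data structure: class name, responsibilities,
# and collaborators.
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

order = CRCCard(
    name="Order",                                   # an entity class
    responsibilities=["know line items", "compute total"],
    collaborators=["Customer", "Inventory"],        # classes it asks for help
)
print(order.name, order.collaborators)
```

Walking through a stack of such cards, and checking that every responsibility has a home and every collaborator exists, is essentially the CRC review session on paper.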


Monday, July 18, 2011

What are the different data modeling concepts?

A data model is a conceptual representation of data structures. These data structures consist of data objects, the relationships between data objects, and the rules that govern these relationships. Often, analysis modeling begins with data modeling.
The inputs of the data model come from the planning and analysis stages. There are two outputs of the data model: the first is an entity relationship diagram, and the second is a data document. The goal of the data model is to make sure that all data objects required by the database are completely and accurately represented.

The different concepts of data modeling are:

- Data Objects are a representation of any composite information that is processed by software. A data object can be an external entity, a thing, an occurrence, an event, a role, an organizational unit, a place, or a structure. The description of a data object includes the object and its attributes. A data object contains only data.

- Data Attributes name a data object, describe its characteristics and sometimes make reference to another object. One or more attributes must be identified as a key which acts as an identifier.

- Relationships indicate the manner in which data objects are connected to one another.

- Cardinality of a relationship is the number of related occurrences for each of the two entities. It defines the maximum number of objects participating in the relationship. It does not indicate whether a data object must participate in the relationship or not.

- Modality of a relationship can be 0 or 1. It is 1 if an occurrence of the relationship is mandatory. It is 0 if an occurrence of the relationship is optional.
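The concepts above can be made concrete with a small sketch of a one-to-many relationship ("a customer places zero or more orders"). The entity and attribute names are illustrative, not from any particular schema:

```python
# Data objects with key attributes, a relationship, its cardinality,
# and its modality.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:                # data object
    order_id: int           # key attribute (identifier)
    amount: float           # descriptive attribute

@dataclass
class Customer:             # data object
    customer_id: int        # key attribute (identifier)
    # Relationship "places": cardinality is one-to-many (one customer,
    # many orders); modality on the Order side is 0 (orders are optional).
    orders: List[Order] = field(default_factory=list)

alice = Customer(customer_id=1)     # valid with zero orders (modality 0)
alice.orders.append(Order(order_id=100, amount=25.0))
print(len(alice.orders))
```

In an entity relationship diagram, the same facts would be drawn as two entities joined by a "places" relationship annotated 1:N with optional participation on the order side.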


Tuesday, July 12, 2011

Introduction to Crystal Agile Methodology and Agile Modeling

Crystal agile methodology is a software development approach applicable to projects with small teams. It is a lightweight, adaptable approach. Crystal is a human-powered methodology, which means the focus is on enhancing the work of the people. Crystal is ultra-light, which means it reduces the paperwork and overhead involved. Crystal is a stretch-to-fit methodology, which means it grows just enough to reach the right size. Crystal focuses on people, not processes.

Crystal consists of methodologies like Crystal Yellow, Crystal Orange, Crystal Clear, etc. It holds that a project requires policies, practices, and priorities as characteristics. Crystal methodology is based on the observation of various teams, and it focuses on the things that matter most and make the most difference.

Agile Modeling suggests modeling is essential for all systems, but that the complexity, type, and size of the model must be in accordance with the software that is to be built. Some principles of agile methodology are:
- Agile Modeling is a practice based methodology.
- Values, principles and practices combine together for modeling.
- Agile Modeling is not a prescriptive process.
- Agile Modeling is not a complete software process.
- Agile Modeling focus on effective modeling and documentation.
- Developers using agile modeling should model with a purpose.
- Different models should be used to present different aspect and only those models should be kept that provide value.
- Traveling light is an appropriate approach for all software engineering work. Build only those models that provide value - no more, no less.
- During modeling, content is more important than representation.
- Be aware of the models and tools that are used to create them.
- The modeling approach should be able to adapt to the needs of agile team.

Agile modeling becomes difficult to implement when teams are large, when modeling skills are lacking, or when team members are not co-located.


Wednesday, April 13, 2011

What is Software Architecture ? What are subsystems and interfaces?

Software architecture defines a set of decisions about the organization of a software system. It includes:
- how to select structural elements.
- how to select their interfaces.
- how to select the behavior.
- how to select the composition of these structural and behavioral elements into larger subsystems.
- architectural style that guides this organization.
A software architecture is a description of the sub-systems and components of a software system and the relationships between them.
Software architecture is layered structure of software components and the manner in which these components interact.
A software architecture is modeled using package diagram of UML. A package is a model element that can contain other elements.
A subsystem is a combination of package and class. The advantages of defining subsystem are:
- Development of smaller units is possible.
- Re-usability increases.
- Complexity is handled properly.
- Maintainability improves.
- Portability is supported.

An interface is a set of operations.
- Allows the separation of the declaration of behavior from the realization of the behavior.
- Serves as a contract to help in the independent development of the components by the development team, and ensures that the components can work together.
- There are two styles of communication subsystems use: client-server and peer-to-peer communication.
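The idea of an interface as "a set of operations" whose declaration is separated from its realization can be sketched with an abstract base class. The `Storage` contract below is a hypothetical example, not from any real subsystem:

```python
# An interface separates the declaration of behavior from its realization:
# clients depend only on the declared operations, so components can be
# developed and swapped independently.
from abc import ABC, abstractmethod

class Storage(ABC):                     # the interface: declaration only
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):         # one realization of the contract
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def client(store: Storage):             # a subsystem coded to the interface
    store.save("greeting", "hello")
    return store.load("greeting")

print(client(InMemoryStorage()))        # prints "hello"
```

Because `client` only uses the operations declared on `Storage`, a file-backed or networked realization could replace `InMemoryStorage` without any change to the caller, which is exactly the contract role described above.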

