

Showing posts with label Architecture.

Saturday, June 29, 2013

What are the reasons for using layered protocols?

Layered protocols are typically used in the field of networking technology. There are two main reasons for using them:
  1. Specialization and
  2. Abstraction
- A protocol creates a neutral standard that rival companies can use to create compatible programs. 
- The field requires a great many protocols, and they have to be organized properly and handed to the specialists who can work on them. 
- A software house can create a network program using layered protocols as long as it knows the guidelines of the one layer it works against. 
- Other companies can provide the services of the lower-level protocols. 
This division of labor lets everyone specialize. 
- With abstraction, each layer simply assumes that another protocol will provide the services below it. 
- A layered protocol architecture provides a conceptual framework that divides the complex task of exchanging information between hosts into much simpler tasks. 
- The responsibility of each protocol is narrowly defined. 
- Each protocol provides an interface to the protocol of the next higher layer. 
- In doing so, it hides the details of the layers that lie underneath. 
- The advantage of using layered protocols is that the same application, i.e. the user-level program, can be used over a number of diverse communication networks.
- For example, you can use the same browser whether you are connected over a dial-up line or to the Internet via a LAN. 
- Protocol layering is one of the most common techniques for simplifying networking designs. 
- The design is divided into functional layers, and protocols are assigned to carry out the tasks of each layer. 
- It is quite common to keep data delivery and connection management in separate layers.  
Therefore, we have one protocol performing the data-delivery tasks and a second one performing connection management. 
- The second is layered on top of the first. 
- Since the connection-management protocol is not concerned with data delivery, it can itself remain quite simple. 
- The OSI seven-layer model and the DoD model are among the most important layered protocol designs ever produced. 
- The modern Internet represents a fusion of both models. 
- Protocol layering produces simple protocols, each with a few well-defined tasks. 
- These protocols can then be assembled into a new whole. 
- Individual protocols can be replaced or removed as particular applications require. 
- Networking is a field that involves programmers, electricians, mathematicians, designers and so on. 
- People from these various fields have very little in common, and it is because of layering that people with such varying skills can each assume that the others are carrying out their part. 
- This is what we call abstraction. 
- Through abstraction, an application programmer can follow the protocols at one level simply assuming that the network exists, just as the electricians make their own assumptions and carry on with their work. 
- Each layer provides services to the layer above it and receives services from the layer below. 
- Abstraction is thus the fundamental foundation of layering. 
- The stack has been used to represent networking protocols since the start of network engineering. 
- Without the stack, the field would be unmanageable as well as overwhelming. 
(Figure: the layers of specialization for the first protocols derived from TCP/IP.)
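The division of labor described above can be shown with a toy example. The sketch below is plain Java with invented class names, not a real network stack; it only illustrates how an upper "connection management" layer relies on nothing but the interface of a lower "data delivery" layer, each adding its own header before handing the payload down.

// Toy illustration of protocol layering (invented names, not a real stack):
// each layer depends only on the interface of the layer below it and adds
// its own header before handing the payload down.
interface Layer {
    String send(String payload);
}

// Lowest layer: "data delivery" frames the bytes for the wire.
class DataDeliveryLayer implements Layer {
    public String send(String payload) {
        return "[frame]" + payload;   // pretend this goes onto the wire
    }
}

// Upper layer: "connection management" knows nothing about framing;
// it simply assumes the layer below will deliver the data.
class ConnectionLayer implements Layer {
    private final Layer lower;
    ConnectionLayer(Layer lower) { this.lower = lower; }
    public String send(String payload) {
        return lower.send("[conn-id:42]" + payload);
    }
}

public class LayeringDemo {
    public static void main(String[] args) {
        Layer stack = new ConnectionLayer(new DataDeliveryLayer());
        System.out.println(stack.send("hello"));   // prints [frame][conn-id:42]hello
    }
}

Swapping in a different DataDeliveryLayer, say one that encrypts the frame, would require no change to ConnectionLayer, which is exactly the replaceability described in the bullets above.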



Monday, April 8, 2013

What are features of Hyper-Threading technology?


Hyper-Threading Technology (HT or HTT) is Intel's proprietary implementation of SMT (simultaneous multi-threading), developed to improve the parallelization of the computations carried out by PC microprocessors. It was first included in Xeon server processors and later in the Pentium 4, Atom, Itanium, Core i series etc. For each physical processor core present, the operating system addresses two logical (virtual) cores, and the workload is shared between them whenever required and possible.

Features of Hyper Threading Technology

  1. Hyper-Threading reduces the number of dependent instructions in the pipeline; this is also its main purpose.
  2. Architecture: Hyper-Threading builds on a superscalar architecture, which can operate on multiple instructions in parallel with separate data. The core appears as two processors, letting the OS work with two processes simultaneously.
  3. Resource sharing: The logical processors share the same execution resources, and those resources can be re-allocated when one of the processes fails or stalls.
  4. Support for SMT: Hyper-Threading implies support for SMT through an OS that supports it. The OS needs to be specially optimized for this technology, and Intel recommends disabling HTT if the OS has not been optimized for it.
  5. Two processors: HTT duplicates certain processor sections, namely those that store the architectural state, while the main execution resources are not duplicated. Because of this, an HT processor appears to the OS as two processors, one physical and one logical, so the OS can run two threads at the same time without interference. When the current task is not using the execution resources, or the processor is stalled (because of a data dependency, a cache miss or a branch misprediction), those resources can be used to execute another task scheduled earlier (see the small illustration after this list).
  6. Support for SMP: SMP (symmetric multiprocessing) is mandatory for taking full advantage of Hyper-Threading.
  7. Transparency: The technology is largely transparent to the OS and its programs.
  8. Easy optimization: HTT allows easy optimization of OS behavior on HTT-capable multiprocessor systems.
  9. Provides support for multi-threaded code, improving both response and reaction times.
  10. Application-dependent performance: HTT improves the performance of most MPI applications, but the improvement depends largely on the nature of the running application and its cluster configuration; the gain can even be negative. Performance tools are useful for understanding which factors contribute to gains and which to degradation.
  11. Security: A malicious thread can use a timing attack to monitor the other thread's memory access patterns, which amounts to stealing cryptographic information. This can be avoided by changing the processor's cache eviction strategy. 
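As a small illustration of the "two processors" point above, the following minimal Java sketch queries how many logical processors the JVM can see; on a Hyper-Threading machine this count includes the hyper-threads, so it is typically higher than the number of physical cores (the exact figure depends on the OS and BIOS settings).

// Minimal sketch: Runtime.availableProcessors() reports *logical* processors,
// so with Hyper-Threading enabled it usually exceeds the physical core count.
public class LogicalCoreCount {
    public static void main(String[] args) {
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors visible to the JVM: " + logical);
    }
}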
Hyper-Threading has been criticized heavily for being energy inefficient. ARM has stated that power consumption under SMT is 46% higher than in dual-core designs, and has also claimed that cache thrashing increases by 42% under SMT, compared with a 37% decrease for dual-core processors. Intel, on the other hand, claims that HTT is highly efficient because it puts otherwise idle resources to use. 


Friday, February 8, 2013

What is TOSCA Test suite? Explain the architecture of TOSCA?


The TOSCA test suite is another software tool in the line of tools for automated execution of regression and functional tests. What sets it apart from other tools in the same category are the following features:
  1. Integrated test management
  2. Graphical user interface (GUI)
  3. A command line interface (CLI)
  4. Application programming interface (API)

History and Evolution of TOSCA

- TOSCA is developed by TRICENTIS Technology & Consulting GmbH, an Austrian company based in Vienna. 
- In 2011 TOSCA was included as a visionary in Gartner Inc.'s Magic Quadrant report. 
- In the same year the TOSCA test suite was recognized by the Ross report as the second most widely used test automation tool in New Zealand and Australia. 
- The tool was further recognized by the scientific community when it was presented at two important international conferences, namely:
  1. Euromicro SEAA and
  2. IEEE ICST
- Since then TOSCA has received a number of awards for its web and customer support. 
- TOSCA is a software testing tool that serves the purposes of numerous other tools, acting as a:
  1. Test management tool
  2. Test design tool
  3. Test execution tool
  4. Data generation tool for regression and functional testing

Architecture of TOSCA

The architecture of TOSCA is composed of the following:
  1. TOSCA Commander: The core tool of TOSCA, used for the creation, administration, execution and analysis of test cases.
  2. TOSCA Wizard: The model-building tool of TOSCA, used for building the application model and storing the related information in modules, which are essentially XML GUI maps.
  3. TOSCA Executor: Responsible for executing the test cases and displaying the results obtained in TOSCA Commander.
  4. TOSCA Exchange Portal: The place where customers exchange and reuse special modules, pre-built TOSCA Commander components and extensions.
  5. TOSCA Test Repository: An integrated repository that holds the test assets; a number of users can access it at the same time.

Functionality of TOSCA

Business Dynamic Steering
- The model-driven approach is the concept behind TOSCA Commander.
- This approach focuses on making the whole test dynamic rather than just the input data. 
- You create test cases by dragging and dropping modules and entering the actions and values to be validated. 
- Making the whole test dynamic is a great advantage because it enables a business-based description of both automated and manual test cases. 
- This lets non-technical users (SMEs) design, specify, automate and maintain test cases.
- TOSCA supports the following technologies for test automation:
  1. Application development environments such as PowerBuilder and Gupta
  2. Programming frameworks and languages such as .NET, Delphi, WPF, Visual Basic and Java Swing/AWT/SWT
  3. Host applications (5250, 3270)
  4. Web browsers including Mozilla Firefox, Opera and Internet Explorer
  5. Single-position applications such as MS Excel and Outlook
  6. Key applications including Siebel and SAP
  7. Protocols and hardware including Flash, SOAP, USB execution and ODBC
- TOSCA is supported on the following platforms:
  1. Windows Vista Service Pack 2
  2. Windows 7 (both 32- and 64-bit)
  3. Windows XP Service Pack 2 and later
- The databases supported are:
  1. DB2 v9.1
  2. Oracle 10g
  3. Microsoft SQL Server 2005
- TOSCA is used by some 300 customers all over the world. 


Sunday, December 16, 2012

What are Six Best Practices in Rational Unified Process?


The IBM Rational Unified Process is a commercial embodiment of approaches and practices that have been proven in the development of software systems and applications. It is based upon the following six best practices:
  1. Iterative development of the software system or application
  2. Management of requirements
  3. Use of component-based architectures
  4. Visual modeling of the software system
  5. Verification of the software system
  6. Controlling changes to the software system or application
These are called best practices not because their value can be precisely quantified, but because they are in common use by most of the successful and reputable organizations in the software industry.
In the Rational Unified Process every team member gets the templates, guidelines and tools that the whole team needs in order to reap the full advantage of these practices.

Basic Practices In Rational Unified Process in Detail

Iterative development of the software system or application:  
Software systems and applications are sophisticated enough that it is impossible to proceed purely in sequence.
- By "in sequence" we mean first defining the whole problem, then designing a solution, building the software system or application, and finally testing it. 
- Dealing with such systems requires an iterative approach, so that understanding of the problem can grow through a series of successive refinements. 
- This also helps in developing an effective solution incrementally over multiple iterations.

Management of requirements: 
The Rational Unified Process describes:
- how the required functionality and constraints are to be elicited, organized and documented, 
- how trade-offs and decisions are to be tracked and documented, and 
- how business requirements are to be captured and communicated.

Use of component-based architectures: 
- The development process focuses on the early development and base-lining of an architecture that is both robust and executable. 
- It describes how to build a resilient, flexible architecture that accommodates change easily, is easy to understand and effectively promotes reuse of existing software artifacts. 
- The Rational Unified Process provides strong support for component-based development. 
- By components we mean subsystems and non-trivial elements that fulfill a clear function.

Visual modeling of the software system: 
- The Rational Unified Process shows exactly how a software system or application can be visually modeled in order to capture the structure and behavior of its architectural components. 
- This further enables you to hide details and develop the code with the help of graphical building blocks. 
- Such visual abstractions also help communicate the different aspects of the software system or application.

Verification of the software system: 
- Poor reliability dramatically reduces the chances of a software system or application being accepted.
- It is therefore important to review quality with respect to functionality, reliability, system performance and application performance.

Controlling the changes to the software system or application: 
- The ability to manage and track changes is critical to the success of any software system or application. 
- The Rational Unified Process helps you cope with these issues as well.




Thursday, November 8, 2012

What is Silk Test Architecture?


Whenever the graphical user interface of a software system or application is tested, its windows, menus, buttons and so on are manipulated via input sources such as the keyboard and mouse. 
These windows, menus and buttons are the GUI objects that Silk Test interprets. 
Later in the test automation process, Silk Test recognizes these GUI objects by the two things that uniquely identify them, namely:
  1. Object class properties and
  2. Object methods
The operations that users perform on the application are usually keyboard input and mouse clicks. 
Silk Test simulates these events and automatically verifies the results obtained. 
This whole process is carried out by two distinct components of Silk Test:
  1. Silk host software and
  2. Silk agent software
These components are installed on different machines: the host machine and the target machine. 
- The host machine runs the Silk host software, whereas the target machine runs the agent. 
- The host component plays an important role in developing the test scripts as well as the test plan.
- Using this component, the following operations can be carried out on the test scripts:
  1. Creating
  2. Editing
  3. Deleting
  4. Compiling
  5. Running
  6. Debugging etc.
- The latter component, the Silk agent, is configured to interact with the graphical user interface of the AUT (application under test). 
- The agent is responsible for monitoring as well as driving the application under test. 
- The commands in the test scripts are written in the 4Test language. 
- These need to be translated into the equivalent GUI commands. 
- This translation is also performed by the Silk agent software. 
- One thing to take care of is that the application under test must be installed on the same machine as the agent, and on no other machine. 
- For each GUI object a matching object is created in 4Test, and each one is unique. 
- Silk Test completes test automation in four steps:
  1. Creation of a test plan
  2. Recording of the test frame
  3. Creation of the test cases
  4. Execution of the test cases and interpretation of the test results.
- Interaction between Silk Test and the application's GUI is necessary because the operations have to be submitted to the application for simulation. 
- During the simulation, Silk Test acts as a simulated user whose job is to drive the application under test. 
- Since the AUT cannot tell the difference between the simulated user and an actual user, it behaves exactly as it would for an actual user. 
- In addition, you can have an agent installed locally on the host machine. 
- Machines in the network, other than the host, on which an agent is installed are called target machines. 
- The application under test is driven by Silk Test and in turn drives the server as it normally would. 
- Silk Test is quite a powerful tool and can also drive a server directly, by running scripts that send the equivalent SQL statements to the server's database. 
- In this way the server application is manipulated directly, which supports testing scenarios in which a server is driven by a client.
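The idea of declaring each GUI object once, together with the class and properties that uniquely identify it, and then driving it from test cases can be sketched as follows. This is plain Java with invented names, purely a conceptual illustration; Silk Test's own declarations are written in 4Test and use its own syntax.

import java.util.Map;

// Conceptual illustration only: a GUI object is identified by its window class
// and identifying properties, and test cases drive it through simulated events.
// None of these names belong to the real Silk Test / 4Test API.
class GuiObject {
    final String windowClass;              // e.g. "PushButton"
    final Map<String, String> properties;  // e.g. caption -> "OK"

    GuiObject(String windowClass, Map<String, String> properties) {
        this.windowClass = windowClass;
        this.properties = properties;
    }

    void click() {
        // A real agent would translate this into native mouse events
        // sent to the application under test.
        System.out.println("Click " + windowClass + " " + properties);
    }
}

public class GuiDeclarationDemo {
    public static void main(String[] args) {
        GuiObject okButton = new GuiObject("PushButton", Map.of("caption", "OK"));
        okButton.click();   // the simulated user presses OK
    }
}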


Tuesday, June 12, 2012

Explain the concepts of Domain Analysis Process?


Domain analysis is the first of the three phases of domain engineering. Domain engineering re-uses domain knowledge in the development of new software systems and applications, and it is a key concept of software re-use. 

The application domain provides the key idea in systematic software re-use. In this article we discuss the process of domain analysis, which involves sub-processes such as:
  1. Identification of domains
  2. Bounding of the identified domains
  3. Discovering commonalities and variabilities among all the systems in a particular domain.
The knowledge obtained in these activities is captured in models, which are later used in the third phase of domain engineering, i.e. domain implementation, to create artifacts such as:
  1. Domain specific language
  2. Re- usable components
  3. Application generators

Concepts of Domain Analysis


All of the above-mentioned artifacts can be used to develop new software systems or applications within that particular domain. 
- Domain analysis is one of the three primary phases of domain engineering and focuses on multiple systems within a domain. 
- In the domain analysis phase, the system domain is defined with the help of feature models. 
- These feature models were originally part of the method called feature-oriented domain analysis.
- One of the main aims of domain analysis is to identify the common points as well as the points of variation in a particular domain. 
- Domain analysis has greatly helped in improving the development of system architectures as well as configurable requirements. 
- Apart from this, it also helps with the development of static configurations.
- Many people confuse domain analysis with requirements engineering; this is a mistake to avoid. 
- Domain analysis is an effective technique for developing configurable requirements, whereas the traditional approaches tend to be ineffective across a whole domain. 
- Domain engineering tends to be effective only if re-use of existing software artifacts is considered in the early stages of development of the software system or application. 
- In domain analysis, the features that can be re-used in new software systems or applications are selected early and then carried through the entire development life cycle. 
- The entire domain analysis process is driven primarily by past experience embodied in existing artifacts. 
- There are many potential sources for domain analysis, a few of which are mentioned below:
  1. Artifacts of the existing systems.
  2. Requirement documents
  3. Design documents
  4. Standards
  5. User manuals
  6. Customers and so on.
- Domain analysis does not consist merely of collected and formalized information; the presence of a creative component matters even more.
- This is what distinguishes domain analysis from requirements engineering. 
- During domain analysis the developers try to extend their knowledge of the domain beyond what is already known. 
- This is done to categorize the similarities and differences within the domain so that re-configurability is enhanced. 
- Domain analysis is carried out with the help of a domain model that represents the commonalities and variabilities of all the systems in that domain. 
- The domain model, in turn, assists in the creation of the system's components and architecture. 
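As a rough illustration of how commonalities and variabilities might be recorded, the sketch below models the features of a hypothetical payment domain in plain Java. The domain, the feature names and the classes are all invented for the example and do not come from any particular feature-modeling tool.

import java.util.EnumSet;
import java.util.Set;

// Hypothetical feature model: COMMON features appear in every system of the
// domain, while variable features are selected per product configuration.
enum Feature { CARD_PAYMENT, FRAUD_CHECK, INVOICE, GIFT_CARD }

class ProductConfiguration {
    // Commonalities: present in every system in the domain.
    static final Set<Feature> COMMON = EnumSet.of(Feature.CARD_PAYMENT, Feature.FRAUD_CHECK);
    // Variabilities: optional features chosen for one particular product.
    final Set<Feature> selected;

    ProductConfiguration(Set<Feature> optional) {
        this.selected = EnumSet.copyOf(COMMON);
        this.selected.addAll(optional);
    }
}

public class FeatureModelDemo {
    public static void main(String[] args) {
        ProductConfiguration webShop =
                new ProductConfiguration(EnumSet.of(Feature.GIFT_CARD));
        System.out.println(webShop.selected);  // common features plus GIFT_CARD
    }
}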


What is meant by CORBA architecture?


Software, unlike hardware, does not wear out, but it has to be modified as user needs change and technology advances. As a software system or application is modified, its complexity grows proportionally, which in turn leads to an increased rate of errors. 
It was suggested by some developers that, in order to reduce this complexity and cut down on maintenance costs and effort, development could be based on small, simple components. 
Initially this proved very helpful as a means of tackling the software crisis, and in later years it developed into what is now called "component-based software development". 

Following this software development methodology, large software systems and applications are built from small, simple components that belong to pre-existing software systems and applications. Over the years this has proved to be an effective approach for enhancing the maintainability and flexibility of the software systems built with it, and it allows a system to be assembled quickly and on quite a low budget.

Component-based development consists of four activities, namely:
  1. Component qualification
  2. Component adaptation
  3. Assembling components
  4. System evolution
In this article we discuss the CORBA architecture, which forms an important part of the third activity, i.e. assembling the components. 

What does CORBA stand for?


- Assembling the components is facilitated by a well-defined infrastructure that provides binding for the separate components. 
- CORBA is an important component technology; it stands for "Common Object Request Broker Architecture" and was developed by the OMG (Object Management Group). 
- In CORBA, the ORB (Object Request Broker) is an object-oriented, more advanced version of the older RPC (remote procedure call) technology. 
- With remote procedure calls or object request brokers, client applications can call methods (passing parameters and receiving responses) on objects across an amalgam of several different networks.

What is CORBA meant for?


- Put simply, CORBA is an effective standard mechanism by which different operations can be invoked on an object. 
- CORBA falls under the category of distributed middleware technology.
- It is meant to connect remote objects and let them inter-operate across operating systems, networks, machines and programming languages.
- This is done by means of the standard IIOP protocol. 
- CORBA makes it easy to write software components in multiple programming languages, and on multiple platforms, that need to run together. 
- With CORBA, all the components work together like a single set of integrated applications.  
- CORBA normalizes the method-call semantics between the various objects of the application, whether they reside in the same address space or in remote ones. 

More about CORBA...


- The first version, CORBA 1.0, was released in 1991. 
- CORBA uses the IDL (Interface Definition Language) to specify the interfaces that objects present to the outside world. 
- CORBA then specifies a mapping from IDL to a specific implementation language such as Java or C++. 
- Standard mappings exist for languages such as C, C++, Ruby, Smalltalk, COBOL and Python, while non-standard mappings exist for languages such as Visual Basic, Tcl, Erlang and Perl.
- In practice, the software application initializes the ORB and accesses an object adapter. 
- This object adapter maintains things like:
  1. Reference counting
  2. Object policies
  3. Instantiation policies
  4. Object lifetime policies
The IDL-to-Java mapping is one common way of putting the CORBA architecture to use.  
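As a minimal sketch of these last few points, the fragment below uses the legacy org.omg.CORBA API (bundled with the JDK up to Java 8) to initialize an ORB and obtain the root object adapter. It is only an outline of a server bootstrap; the servant class mentioned in the comment is hypothetical and would normally be generated from an IDL interface.

import org.omg.CORBA.ORB;
import org.omg.PortableServer.POA;
import org.omg.PortableServer.POAHelper;

public class CorbaServerSketch {
    public static void main(String[] args) throws Exception {
        // Initialize the ORB, which brokers requests between clients and objects.
        ORB orb = ORB.init(args, null);

        // Obtain the root Portable Object Adapter; the adapter manages object
        // references together with activation and lifetime policies.
        POA rootPoa = POAHelper.narrow(orb.resolve_initial_references("RootPOA"));
        rootPoa.the_POAManager().activate();

        // A servant implementing an IDL-defined interface would be registered
        // here, e.g. rootPoa.servant_to_reference(new HelloServant());
        // (HelloServant is hypothetical and not defined in this sketch.)

        // Hand control to the ORB so it can dispatch incoming requests.
        orb.run();
    }
}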


Sunday, June 10, 2012

What is a scrum process and how does it work?


To implement the Scrum development process, it is important to know how it actually works; most errors in development occur because of a lack of knowledge about how Scrum operates. 

Scrum works on the principle of iterative and incremental development and operates with the help of two types of roles, namely:

  1. Core roles:
     (i) Scrum master
     (ii) Development team
     (iii) Product owner
  2. Ancillary roles:
     (i) Stakeholders and
     (ii) Managers

What is a scrum process?


- The Scrum process deals in terms of sprints, which other agile software development processes usually call iterations. 
- In a typical Scrum, a sprint lasts from a week to a month. 
- Scrum is facilitated by various meetings, described below:

1. Daily scrum: 
This meeting is held every day during the sprint and covers the project status. Usually only the core roles participate, and the meeting is time-boxed to 15 minutes.

2. Story time (backlog grooming): 
The existing backlog is estimated and the acceptance criteria for the user stories are refined. These meetings are time-boxed to an hour.

3. Scrum of scrums: 
This meeting follows the daily scrum and has much the same format, but serves to coordinate multiple teams.

4. Sprint planning meeting: 
This meeting is held before the beginning of every sprint, and the tasks to be completed within that sprint are selected.

5. Sprint review meeting: 
It reviews the outcome of the sprint, including the tasks that could not be completed.

Principles on which working of scrum depends


Scrum follows three principles throughout its working:

  1. Working software is more valuable than documentation.
  2. Responding to changes in requirements is more important than following the plan.
  3. Team collaboration is more important than contract negotiation.

How does a scrum process work?


- The first few weeks of a Scrum project are usually spent working out the high-level requirements, including business needs and system architecture. 
- After this, the team produces the product backlog and the sprint backlog. 
- Together, these two backlogs define the scope of the software project by the end of the week. 
- Team members take up responsibilities and operational activities among themselves during the daily meetings. 
- At the end of some sprints, it turns out that some tasks could not be completed as planned, so they have to be included in the next sprint in addition to the other tasks. 
- One of the reasons for such situations is "scope creep".
- However, this does not turn out to be a real issue, especially when the team works closely with business owners who have a good understanding of the ongoing development process. 
- It should be understood that Scrum is a framework rather than a full methodology. 
- The client does not provide a detailed list of everything to be done; the team decides this itself.
- By the end of each sprint the coding, testing and integration of the features are done. 
- In the sprint review, the features newly added to the software are demonstrated to the product owner. 

Reasons why scrum works well


There are several reasons why Scrum works well, a few of which are mentioned below:
  1. Its iterative nature.
  2. Re-assessment of priorities between iterations.
  3. Old checkpoints are discarded when the team takes on something new.
  4. Availability of the product owner.
  5. The development team works on a single project at a time.
  6. The team has a chance to co-locate for the entire development process.

