

Showing posts with label Objectives. Show all posts

Wednesday, October 2, 2013

What is the link encryption method?

- Link encryption is one of the classic methods used in digital communications for applying cryptography.
- The link encryption method was designed for hiding secrets and preventing the forgery of data. 
- The concept is quite simple, and it fits all the types of existing applications and software used in communication.  
- Even though this method does not work well enough for most applications, it is the simplest of all. 
- Link encryption is a security measure that should be used only if your security objectives match those that the link encryption method can meet. 
- It is commonly used in applications where a boundary has to be maintained between internal users and external users. 
- With link encryption, it is easy for internal users to share data, whereas it is just the opposite for external users. 
- It provides transparent protection, except for the separation that is maintained between the two classes of users. 

Below we mention some security objectives that can be met with link encryption:

Ø  Maintaining confidentiality: Our systems of course store very sensitive data. While exchanging data with other systems, the risk of leakage involved should be kept as low as possible.

Ø  Communication with outsiders: Obviously, we do not want to share our data with unwanted outsiders and unauthorized sites, so we want these exchanges to be blocked. Such exchanges should be prevented from happening even through carelessness or accident.

Ø  Hiding data traffic: As much as possible, we want our data and its details to be shielded from outsiders. This data might contain information about the destination host and other information necessary for communication control. However, here it is assumed that the information will not be leaked by insiders.

Ø  Familiarity and safety: We rank these two factors above cost.

Ø  Protection of data transfers: We need protection for our data against any sort of tampering or forgery by outsiders during transit, and an assurance of this is important. This objective is unconditionally met by the link encryption method.

- From a security standpoint, link encryption yields a design that is highly reliable. 
- If a strong security perimeter has been established in your organization, link encryption is the best technique for maintaining it. 
- Strict control is kept over the flow of physical documents through this security perimeter. 
- Link encryption provides complementary protection for the flow of electronic documents. 
- We can have an environment in which every data link that traverses the boundary has encryptors. 
- Documents are kept within the perimeter. 
- Data leaving the perimeter is protected by means of the encryptors. 
- Link encryption has been used for years in banking organizations and military communications to provide secure links. 
- Link encryption uses in-line encryptors as its building blocks.
- These hardware devices take plain text and convert it into cipher text.
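To make the idea concrete, the behaviour of an in-line encryptor can be sketched in Python. This is only a toy keystream cipher for illustration, not a real encryptor design: production devices use vetted algorithms such as AES, and every name below is hypothetical.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a toy keystream by hashing the key with a counter (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; applying the same call again decrypts."""
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

# Each link has its own key: traffic is decrypted and re-encrypted at every hop.
link_key = b"per-link secret"
ciphertext = encrypt(link_key, b"internal document")
assert encrypt(link_key, ciphertext) == b"internal document"
```

Because each hop re-encrypts under its own link key, data is protected on the wire but appears as plain text inside every intermediate node, which is why the perimeter itself must be trusted.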

The encryptors have their own vulnerabilities, as mentioned below:
Ø  Rewrite attacks: Also known as plain-text attacks, these are used for forging messages. A few crypto algorithms are vulnerable to them.
Ø  Replay attacks: Encrypted data is often assumed to be self-validating, i.e., anything that decrypts sensibly under the right key must be genuine. But since the encrypted data is accessible to outsiders, they too can obtain a message that decrypts sensibly and retransmit it later.
Ø  Covert signaling attacks: This attack is based on the idea that if there exists an internal process that tries to leak information, there is always a way for it to do so. 
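The replay vulnerability can be demonstrated with a short sketch: a receiver that merely checks message authenticity, with no sequence numbers or nonces, will happily accept a captured frame a second time. The frame format and keys here are hypothetical.

```python
import hashlib
import hmac

key = b"shared link key"  # hypothetical shared secret between the two link ends

def make_frame(payload: bytes) -> bytes:
    """Append an HMAC tag so the receiver can verify the payload is genuine."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def accept(frame: bytes) -> bool:
    """Accept any frame whose tag verifies -- note: no replay protection."""
    payload, tag = frame[:-32], frame[-32:]
    return hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest())

frame = make_frame(b"transfer funds to account X")
assert accept(frame)  # legitimate delivery
assert accept(frame)  # the very same bytes, replayed: still accepted
```

Real protocols defend against this by binding a sequence number or timestamp into each frame, so that a repeated frame fails verification.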


Friday, September 13, 2013

What is Portability Testing?

- Portability Testing is the testing of a software/component/application to determine the ease with which it can be moved from one machine platform to another. 
- In other words, it is a process to verify the extent to which the software behaves the same way on platforms other than the one it was developed on.  
- It can also be understood as the amount of work done, or effort made, to move software from one environment to another without making any changes or modifications to the source code; in the real world, though, this is seldom possible.
For example, moving a computer application from a Windows XP environment to a Windows 7 environment, measuring the effort and time required to make the move, and hence determining whether it can be reused with ease or not.

- Portability testing is also considered a subset of system testing, as system testing covers the complete testing of software, including its reusability over different computer environments, such as different operating systems and web browsers.

What needs to be done before Portability testing is performed (pre requisites/pre conditions)? 
1.   Keep in mind the portability requirements before designing and coding the software.
2.   Unit and Integration Testing must have been performed.
3.   Test environment has been set up.

Objectives of Portability Testing
  1. To validate the system partially, i.e., to determine whether the system under consideration fulfills the portability requirements and can be ported to environments with different:
a). RAM and disk space
b). Processor and Processor speed
c). Screen resolution
d). Operating system and its version in use.
e). Browser and its version in use.
To ensure that the look and feel of the web pages is similar and functional in the various browser types and their versions.

2.   To identify the causes of failures regarding the portability requirements, which in turn helps in identifying flaws that were not found during unit and integration testing.
3.   The failures must be reported to the development teams so that the associated flaws can be fixed.
4.   To determine the potential or extent to which the software is ready for launch.
5.   Help in providing project status metrics (e.g., percentage of use case paths that were successfully tested).
6.   To provide input to the defect trend analysis effort.
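Some of the environment checks behind objective 1 can be automated. The sketch below (with hypothetical minimum requirements) collects the facts a portability test would gather on each target platform and reports which requirements fail.

```python
import platform
import shutil
import sys

# Hypothetical minimum requirements for the port target.
REQUIREMENTS = {
    "min_free_disk_bytes": 500 * 1024 * 1024,  # 500 MB
    "supported_systems": {"Windows", "Linux", "Darwin"},
}

def environment_report() -> dict:
    """Collect the environment facts that get compared across platforms."""
    _total, _used, free = shutil.disk_usage(".")
    return {
        "system": platform.system(),
        "release": platform.release(),
        "machine": platform.machine(),
        "python": sys.version.split()[0],
        "free_disk_bytes": free,
    }

def check(report: dict) -> list:
    """Return a list of portability failures (empty means all checks passed)."""
    failures = []
    if report["system"] not in REQUIREMENTS["supported_systems"]:
        failures.append("unsupported operating system: " + report["system"])
    if report["free_disk_bytes"] < REQUIREMENTS["min_free_disk_bytes"]:
        failures.append("insufficient free disk space")
    return failures

print(check(environment_report()))
```

Running the same report on each candidate environment and diffing the failure lists is one simple way to feed the project status metrics mentioned above.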



Wednesday, March 13, 2013

What are characteristics of autonomic system?


Autonomic systems bring both challenges and opportunities for future networking. The increasing number of users has increased the complexity of networks many times over. Autonomic systems provide a solution to this problem. 

Characteristics of Autonomic System

  1. High intelligence: These systems have more intelligence incorporated into them, which lets them tackle the increasing complexity easily.
  2. Business goal: They are driven by the business goal that the user's quality of experience must be high. Even with a changing environment, their goals remain the same; only the low-level configurations change. For example, when a user switches over to a low-bandwidth network, the bit rate of the video has to be reduced in order to satisfy the goals of the business.
  3. Complex operations: All the operations carried out in an autonomic system are complex in nature, even for the simplest of services, e.g., authentication, video encoding, billing, routing, shaping, QoS prioritizing, and admission control.
  4. High-level objectives: The human operator just has to specify the high-level objectives, and it is left to the system whether it chooses to optimize one or more of the goals. In order to achieve this, the system has to translate these objectives into low-level configurations.
  5. Adaptability: The system has the ability to adapt itself to the current environment.
  6. Policy continuum: There are a number of perspectives to this as mentioned below:
Ø  Business view: Includes guidelines, processes and goals.
Ø  System view: The service should be independent of the technology as well as the device that is being used.
Ø  Network view: It should be specific to technology but independent of the device.
Ø  Device view: Both technology and device specific.
Ø  Instance view: Operation should be specific to an instance.

  7. Elements: The elements of the network are assumed to be heterogeneous by autonomic communication systems, whereas in plain autonomic computing the elements are taken to be homogeneous.
  8. Distributed: These systems work in a distributed environment.
  9. Complexity: The complexity of autonomic systems is greater because of the complex autonomic loop, which includes the following operations:
Ø  Interaction between the context and the business goals
Ø  The MAPE (monitor, analyze, plan and execute) loop.

10. Reliability: In autonomic systems, the network has the authority to decide for itself, focusing on high-level objectives. Autonomic systems rely heavily upon artificial intelligence. However, there are issues associated with artificial intelligence: it becomes difficult to intervene when things go wrong, and it is quite difficult to know whether the system is doing the things it is supposed to do or not.
11. Scalability: This is another major characteristic of autonomic systems. They are required to keep track of large amounts of knowledge and information, and they have the following tools to take care of this:
Ø Distributed ontologies
Ø Distributed large-scale reasoning
Ø Exchanging only the useful information
Ø Distributing information among the different components of the autonomic network.

But in these cases, detection of conflicts is a difficult task. Efficient protocols are required for handling the various interactions taking place among the autonomic components. 
Currently two approaches have been suggested for developing the autonomic networking systems namely:
1. Evolutionary approach: Incorporating autonomic behavior into the pre-existing infrastructure. This approach consists of incremental updates until a fully autonomic system is developed. It is the more likely to be adopted, even though it requires a lot of patchwork.
2. Clean slate approach: This approach is focused upon re-designing the internet.
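The MAPE loop mentioned under characteristic 9 can be sketched as follows. The bandwidth goal and the bitrate policy here are made-up examples of translating a high-level business objective into a low-level configuration.

```python
def monitor(sensor):
    """Monitor: sample the current environment (e.g. available bandwidth in kbps)."""
    return sensor()

def analyze(bandwidth_kbps, goal_kbps):
    """Analyze: decide whether the high-level goal is at risk."""
    return bandwidth_kbps < goal_kbps

def plan(bandwidth_kbps):
    """Plan: choose a low-level configuration that still satisfies the goal."""
    return {"video_bitrate_kbps": int(bandwidth_kbps * 0.8)}

def execute(config, actuator):
    """Execute: push the new configuration to the managed network element."""
    actuator(config)

def mape_iteration(sensor, actuator, goal_kbps=1000):
    """One pass of the monitor-analyze-plan-execute loop."""
    bandwidth = monitor(sensor)
    if analyze(bandwidth, goal_kbps):
        execute(plan(bandwidth), actuator)

# A user switches to a low-bandwidth network: the loop reduces the video bitrate.
applied = []
mape_iteration(lambda: 400, applied.append)   # goal not met: reconfigure
mape_iteration(lambda: 2000, applied.append)  # goal already met: no action
```

The human operator only sets `goal_kbps`; the system derives the low-level bitrate setting itself, which is exactly the translation from high-level objectives to low-level configurations described above.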


Monday, March 4, 2013

What are Software Process Improvement resources?


A supportive and effective infrastructure is required for facilitating the coordination of the various activities that take place during the course of the whole program. In addition, the infrastructure should be flexible enough to support the changing demands of software process improvement over time. 
Resources for this program include:
  1. Infrastructure and building support
  2. Sponsorship
  3. Commitment
  4. Baseline activities
  5. Technologies
  6. Coordinate training resources
  7. Planning expertise
  8. Baseline action plan and so on.
- When this program is initiated, a primitive infrastructure is put into place for the management of the activities that will be carried out by the organization under SPI. 
- The resources mentioned above are also the initial accomplishments that tell how well the infrastructure has been performing. 
- It is the purpose of the infrastructure to establish a link between the program's vision and mission, to monitor and guide the program, and to obtain and allocate resources.
- Once the SPI program is under way, a number of improvement activities take place across the different units of the organization. 
- These improvement activities cannot be performed serially; rather, they take place in parallel. 
- Configuration management, project planning, requirements management, reviews, etc. are addressed by the TWGs (technical working groups). 
- All these activities, however, are tracked by the infrastructure.
- Support for the following issues must be provided by the infrastructure:
  1. For a technology that is to be introduced.
  2. Providing sponsorship
  3.   Assessment of the organizational impact
- As the program progresses, the functions to be performed by the infrastructure increase. 
- There are 3 major components of the SPI program:
  1. SEPG or software engineering process group
  2. MSG or management steering group
  3. TWG or technical work group
- It is the third component from which most of the resources are obtained, including:
  1. Human resources
  2. Finance
  3. Manufacturing
  4. Development
- However, the most important is the first one, which is often called the process group. 
- It provides sustaining support for SPI and reinforces the sponsorship. 
- The second component, i.e., the MSG, charters the SEPG.
- This charter is actually a contract between the SEPG and the management of the organization. 
- Its purpose is to outline the roles, the responsibilities and, not least, the authority of the SEPG. 
- The third component is also known as the process improvement team or process action team. 
- Different work groups created focus on different issues of the SPI program. 
- A software engineering domain is addressed by the technical work group. 
- It is not necessary for the TWGs to address the technical domains; they can address issues such as software standardization, purchasing, travel reimbursement and so on. 
- The team usually consists of people who have both knowledge and experience regarding the area under improvement. 
- The life of TWGs is however finite and is defined in the charter. 
- Once they complete their duties, they return to their normal work. 
- In the early stages of SPI program, the TWGs might tend to underestimate the time that would be required for the completion of the objectives assigned to them. 
- So the TWGs have to request the MSG to allot them more time. 
- Another important component could be the SPIAC or software process improvement advisory committee. 
- This is created in organizations where there are multiple functioning SEPGs. 


Wednesday, May 23, 2012

What is meant by Rational Unified Process?


RUP, or rational unified process - a refinement of the unified process - is categorized among the most popular and commonly used iterative software development process frameworks. The rational unified process has been a trademark of the IBM Corporation since 2003, when IBM acquired its developer, the Rational Software Corporation. 
The Rational Software Corporation has since been a division of IBM. 

What is a Rational Unified Process?


- Rational unified process, as the name suggests, is an adaptable process framework rather than a single, concrete, prescriptive process. 
- The rational unified process serves as a framework that can be tailored to the needs and objectives of software development organizations and of the project teams responsible for selecting the elements of the development process that fit their needs. 
- You can rightly call the rational unified process a specific implementation of the unified process. 
- Rational unified process is a software process product that was acquired by IBM from the Rational Software Corporation. 
- The rational unified process forms a part of the IBM RMC (rational method composer), using which the whole development process can be customized. 

Based on the experience of implementing the RMC for various projects, the following six practices were declared best practices for modern software engineering:
     1. Iterative development using risk as the primary iteration driver.
     2. Management of the requirements.
     3. Employment of an architecture based on components.
     4. Visual modelling of the software system or application.
     5. Continuous verification of the quality.
     6. Keeping the changes under control.

Rational unified process contributes greatly to improving the quality of the software system or application and to making software development efforts more predictable.

Aspects of Rational Unified Process


The rational unified process is characterized by its following three aspects:
     1. It can be tailored according to the needs that will guide the development process.
     2. It is a tool that can be used for the automation of the whole development process.
     3. It is a service that serves for the accelerated adoption of all the processes and the tools involved.

Rational unified process was actually developed in 1996. In 1997, the requirements and test disciplines were added to it. In 1998, two more aspects were added, namely business modelling and change management. Apart from these, some techniques were also added, including:
     1. Performance testing
     2. UI design
     3. Data engineering

With all these techniques, the rational unified process was updated to UML 1.3. The rational unified process consists of a set of building blocks that describe the functionality which is to be produced. 
Below mentioned are the main building blocks:
     1. Roles
     2. Work products
     3.  Tasks

Nine disciplines governing the tasks have been defined. The six engineering disciplines are:
     1. Deployment
     2. Implementation
     3. Requirements
     4. Business modelling
     5. Analysis and design
     6. Test
    In addition to these six disciplines, there are 3 supporting disciplines:
    1. Environment
    2. Configuration management
    3. Project management
    
   Like the normal unified process, the rational unified process also consists of four phases, namely:
     1. Inception
     2. Elaboration
     3. Construction
     4. Transition

The RMC product, with the rational unified process incorporated into it, has proved to be quite an effective tool for configuring, authoring, publishing and viewing processes. The above-mentioned six practices are now recognized as a paradigm in the field of software engineering for designing software and increasing productivity. The development cycle is said to finish when the product release milestone is reached. 


Tuesday, April 24, 2012

What is a Test Harness?


Test harness is a rarely heard concept in the field of software testing. 

What is a Test Harness?
- A test harness is also known by another name: “automated test framework”. 
- A test harness can be defined simply as a collection of test data and software that has been configured in order to test a unit of a software program.
- The program is run under various conditions. 
- Under each condition the behavior as well as working of the program is observed and the outcomes are reported. 

Parts of a Test Harness

A test harness consists of two main parts:
1.        Test execution engine and
2.        Test script repository

How is Test Harness carried out?
- The test harness process cannot be carried out without the automation of the tests. 
- The automated tests can then themselves call the concerned functions with the required parameters and execute them.
- The actual results are then compared to the expected results. 
- The test harness acts as a hook into code that has been developed to be highly testable, allowing it to be checked via an automation framework.  

What should a Test Harness do?
The below mentioned are the things that a test harness must do:
1. Run specific tests in order to allow optimization.
2. Orchestrate an environment during the run time.
3. Analyze the results

Certain objectives have been defined for a test harness:
1. Automation of the whole testing process.
2. Execution of the specified test cases.
3. Report the outcomes.
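Those three objectives can be illustrated with a minimal harness in Python. This is a hypothetical sketch, not a real framework: it runs a unit under test against a repository of cases, compares actual to expected results, and reports the outcomes.

```python
def run_harness(unit, cases):
    """Execute the unit under test for each (inputs, expected) pair and report outcomes."""
    results = []
    for inputs, expected in cases:
        try:
            actual = unit(*inputs)
            results.append(("PASS" if actual == expected else "FAIL", inputs, actual))
        except Exception as exc:  # a crashing case is reported, not propagated
            results.append(("ERROR", inputs, exc))
    return results

# Hypothetical unit under test.
def divide(a, b):
    return a / b

cases = [((6, 3), 2.0), ((1, 2), 0.5), ((1, 0), None)]  # the last case crashes
for status, inputs, outcome in run_harness(divide, cases):
    print(status, inputs, outcome)
```

In a full harness the case repository would be loaded from files and the report written out automatically, which is what allows the tests to run unattended.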

Advantages of a Test Harness
Test harness is quite an advantageous process and some of its advantages have been stated below:
1.  It increases productivity by automating the whole test process.
2.  It makes regression testing more likely to be performed.
3.  It increases the quality of the components of the software system or application.
4.  It helps ensure consistency between subsequent and previous test cases.
5.  It allows tests to be run at any time, even when the testing staff is not available.
6.  It effectively executes test scripts, including conditions that are otherwise unexecutable because they are difficult to simulate.

How does a Test Harness facilitate testing at the Integration level?

- Test harnesses have also been developed to facilitate testing at the integration level, i.e., integration testing. 
- The test stubs driven by a test harness stand in for components of the application currently under development; in a top-down design, they are replaced during testing by the working components as those are completed. 
- A test harness serves as an external aid to the software system or application under test by simulating functionalities and features which are not present in the immediate test environment.
- In other words, the test harness helps in providing substitutes in case any functionality is found to be missing during the test. 
- The same test harness, when kept outside the source code of the software system or application, can be reused across multiple projects. 
- It forms a deliverable part of the project. 
- The test harness itself is not provided with any knowledge of test cases, test suites or test reports, since it only has the capability of simulating functionality.
- The information on these aspects is fed to the test harness via associated automated testing tools and a testing framework. 
- A test harness can also have a graphical user interface for ease of operation, logging and scripting of the test cases.
- A new test harness is written for each runtime and language, since it is very difficult to write a test harness that works across all languages and runtimes.  
- The test harness generates the application which is required to run the tests by providing the required code, files, test cases and so on.


Monday, March 26, 2012

What is the difference between quality assurance and testing?

Quality assurance and testing are the processes that together keep the quality of a software system or application in check. When implemented together, these two processes ensure that the quality of the software system or application is kept as close to 100 percent as possible.

There is no software or application that can boast 100 percent customer-satisfying quality. This article is focused on these two processes and the differences between them. We are discussing the differences here because people often confuse the two.

QUALITY ASSURANCE

- The term “quality assurance” is largely self-explanatory.

- From the term itself we can make out that it refers to systematic and planned activities implemented in a quality system so that a check on its quality requirements is maintained.

- It involves the following processes:
1. Systematic measurement of the quality of the software system or application.
2. Comparison of the quality of the software system or application with the pre- defined quality standards.
3. Monitoring of the processes.
4. An associated feedback loop for ensuring error prevention.

- A typical quality assurance process also keeps a check on the quality of the tools, assemblies, equipment, testing environment, and the production, development and management processes that are involved in software testing.

- The quality of a software system or application product is defined by the clients or customers rather than by society at large.

- One thing to always keep in mind is that the quality of a software system or application cannot be described by adjectives like poor or good, since quality could be high in one aspect of the system and low in another.

PRINCIPLES OF QUALITY ASSURANCE
The whole process of the quality assurance is guided by the two following principles:

1. Fit for purpose:
The software product is deemed to fulfil the purpose for which it has been made; and
2. Right first time:
Mistakes should be avoided the first time around rather than corrected afterwards.

TESTING PROCESSES EMPLOYED IN SOFTWARE TESTING & QUALITY ASSURANCE
Below we are mentioning the testing processes that are employed for both the software testing as well as the quality assurance:

1. Testing approaches:
(a) White box testing
(b) Black box testing
(c) Grey box testing
(d) Visual testing

2. Testing levels:
(a) Test target:
(i) Unit testing
(ii) Integration testing
(iii) System testing
(b) Objectives:
(i) Regression testing
(ii) User acceptance testing
(iii) Alpha and beta testing

3. Non functional testing:
(a) Performance testing
(b) Usability testing
(c) Security testing
(d) Internationalization and localization
(e) Destructive testing

4. Testing processes:
(a) Waterfall model or CMMI
(b) Extreme or agile development model
(c) Sample testing cycle

5. Automated testing using tools and measurements

In fact, both processes are much the same but with different perspectives: software testing is aimed at eliminating the bugs from the software system, whereas quality assurance considers the overall quality of the software system.

Software testing is the way quality assurance is implemented, i.e., it provides the clients or customers with information regarding the quality of the software system or application. The testing is done to make sure of the following points:


1. The product meets the specified requirements.
2. Works as intended.
3. Is implemented with the same characteristics.


The software testing can be implemented at any point of time in the development process unlike the quality assurance that should be implemented right from the beginning to ensure maximum quality.


Wednesday, January 25, 2012

What are the different characteristics of the Certified Associate in Software Quality (CASQ)?

CASQ is the abbreviated form of Certified Associate in Software Quality. Competition in the market is increasing day by day. It therefore becomes necessary for management to have the ability to easily distinguish skilled individuals and professionals. The CASQ certification lays down a basic foundation for understanding the principles as well as the practices of quality assurance.

- Whenever an individual attains the CASQ certification, he or she attains a level of professionalism with regard to the principles and practices of software quality assurance in the field of information technology.

- The certified individual becomes a member of an acclaimed group of professionals and receives recognition for his or her competency from professional and business associates.

- It also supports faster career advancement.

OBJECTIVE OF CASQ
The Certified Associate in Software Quality credential is aimed at establishing standards of qualification. Its objective is the continued advancement of professional competence.

CASQ BENEFITS

- It defines the tasks and skills associated with the quality of the software in an appropriate order to determine the level of skill mastery.

- It brings out the will of an individual to make a professional improvement.

- It acknowledges the attainment of a duly accepted standard of professional competency.

- It aids the other organizations in the process of selection and promotion of the individuals who qualify successfully.

- It motivates the skilled individuals and professionals to maintain their professional competency and also take up their software quality responsibilities effectively.

- It assists the skilled individuals and professionals in enhancing and improving the software quality assurance programs that are carried out by their organizations.

WHAT CERTIFIED CANDIDATES SHOULD DO?
- Accepting responsibility is a distinguishing mark of professional competency.

- The certified individuals must maintain their standards with regard to their conduct.

- This helps them in discharging their responsibilities effectively.

- If an individual wants to apply for the CASQ certification, then he/she has to strictly abide by the code of ethics that guides the principles and practices of software quality assurance.

- This software certification program comprises procedures for monitoring individuals’ behaviour and whether or not they are sticking to the certification policies and ethics.

- If a certified individual or professional later fails to adhere to these policies, then he or she is subject to de-certification.

There are some common principles that one needs to adhere to and these have been mentioned below:

1. Principles
2. Quality concepts
3. Quality assessments
4. Quality models
5. Quality baselines
6. Quality practices
7. Quality planning
8. Quality assurance
9. Define
10. Build
11. Implement
12. Quality work
13. Quality metrics and measurements
14. Security
15. Internal control
16. COTS and contracting quality
17. Outsourcing


There are certain prerequisites that each candidate needs to meet, such as completing a course of a stipulated duration from an accredited institution and having some experience in information science. The candidate needs to strictly follow the guidelines and commit to the code of ethics.

EXAMINATION FOR CASQ
- The examination for obtaining this kind of certification is available in many countries.

- The QAI global institute is famous for its professionalism in software quality assurance.

- It was established in 1980 and started out in the software quality assurance industry.

- The first CASQ certification was awarded in 1985. The company launched its first formal process in 1990.

- These days, the QAI global institute has a multinational reach. The company has certified over 36,000 professionals in the IT sector in over 44 countries of the world.

