


Thursday, October 3, 2013

What is Traditional Cryptography?

- Cryptography is the practice and study of techniques for making communication secure in the presence of adversaries or third parties.
More generally, it involves constructing and analyzing protocols that overcome the influence of adversaries and that address other aspects of information security, such as the following:
Ø  Data confidentiality
Ø  Data integrity
Ø  Authentication
Ø  Non-repudiation
- Modern cryptography, in contrast to traditional cryptography, lies at the intersection of computer science, mathematics, and engineering.

Cryptography has many applications, including the following:
Ø  ATM cards
Ø  Computer passwords
Ø  Electronic commerce

- Traditional cryptography was essentially synonymous with encryption: converting information from a readable state into one that looks like utter nonsense.
- The originator of an encrypted message shared the decoding technique only with the intended recipients, so that unwanted parties were precluded from reading it.
- Cryptography has been in use since World War I; since then its methods have become far more complex and its applications far more widespread.
- Modern cryptography is founded on computer science and mathematical theory.
- Cryptographic algorithms are designed around computational hardness assumptions.
- In practice, this makes the algorithms very hard for any third party to break.
- Breaking such a system is theoretically possible, but doing so is infeasible by any known practical means.
- That is why these schemes are considered computationally secure.

These methods must be continually adapted to keep pace with:
Ø  Improvements in integer factorization algorithms.
Ø  Faster computing technology.
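
To make the idea of a computational hardness assumption concrete, here is a small sketch of my own (not from the original post): a naive trial-division factorizer in C, with helper names of my choosing. Its running time grows with the square root of the smallest prime factor, so a product of two primes near one billion already takes seconds, and the several-hundred-digit moduli used by factoring-based cryptosystems are far out of reach for this method; better factoring algorithms and faster hardware are exactly what force key sizes upward.

#include <stdio.h>

/* Naive trial division: print the prime factors of n.
   The loop runs up to sqrt(n) in the worst case, which is why factoring
   a product of two large primes this way is hopeless. */
static void factor(unsigned long long n) {
    for (unsigned long long d = 2; d * d <= n; d++) {
        while (n % d == 0) {
            printf("%llu ", d);
            n /= d;
        }
    }
    if (n > 1)
        printf("%llu", n);   /* whatever remains is prime */
    printf("\n");
}

int main(void) {
    factor(8051ULL);                  /* 83 * 97: instant */
    factor(1000000016000000063ULL);   /* (10^9 + 7) * (10^9 + 9): already takes seconds */
    return 0;
}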


- There are also schemes that are information-theoretically secure; even unlimited computing power cannot break them.
- One such scheme is the one-time pad (a minimal sketch is given at the end of this post).
- However, these schemes are considerably more difficult to implement than the schemes that are computationally secure but theoretically breakable.
- Traditionally, cryptography referred only to encryption, the conversion of ordinary information (plaintext) into ciphertext, i.e. unintelligible text.
- The reverse process is decryption.
- The pair of algorithms that carry out these two processes is called a cipher.
- Each operation of the cipher is controlled by a key, which is kept secret between the communicating parties and is needed to decrypt the ciphertext.
- Earlier, ciphers carried out encryption and decryption directly, without any integrity or authentication checks.
- Before the advent of modern cryptography, traditional cryptography was concerned only with message confidentiality, i.e., converting a message from comprehensible text into incomprehensible text and back again.
- The message was thus unreadable to eavesdroppers and interceptors who lacked the key.
- Encryption was used to ensure secrecy in communications.
- The field now extends far beyond confidentiality.
- It includes techniques for authentication and message-integrity checking, secure computation, interactive proofs, digital signatures, and so on.
- Two types of classical cipher were used: substitution ciphers and transposition ciphers.
- Substitution ciphers replace letters with other letters; the Caesar cipher sketched at the end of this post is a classic example.
- Transposition ciphers rearrange the letters.
- Other early ciphers include the Atbash cipher.
- The early ciphers were sometimes assisted by physical aids and devices.
- Eventually, with the development of digital computers, far more complex ciphers could be built.
- Any kind of data that can be represented in binary form can be encrypted.
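
Here is the Caesar cipher mentioned above, as a minimal sketch of my own (not code from the post, and the helper name is mine). It is a substitution cipher: every letter is shifted a fixed number of places, and the shift amount is the secret key shared between the communicating parties.

#include <stdio.h>
#include <ctype.h>

/* Caesar cipher: shift each letter by 'key' places; shifting by 26 - key
   undoes the encryption. */
static void caesar(char *text, int key) {
    for (char *p = text; *p; p++) {
        if (isupper((unsigned char)*p))
            *p = 'A' + (*p - 'A' + key) % 26;
        else if (islower((unsigned char)*p))
            *p = 'a' + (*p - 'a' + key) % 26;
    }
}

int main(void) {
    char msg[] = "Attack at dawn";
    caesar(msg, 3);          /* encrypt: produces "Dwwdfn dw gdzq" */
    printf("ciphertext: %s\n", msg);
    caesar(msg, 26 - 3);     /* decrypt with the same key */
    printf("plaintext:  %s\n", msg);
    return 0;
}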
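
And here is a one-time pad sketch, again my own illustration under the assumption that the key bytes come from a truly random source, are as long as the message, and are never reused; those conditions are what make the scheme information-theoretically secure, and also what make it so hard to deploy.

#include <stdio.h>
#include <stddef.h>

/* One-time pad: XOR each message byte with the corresponding key byte.
   Applying the same operation with the same key decrypts. */
static void otp_xor(unsigned char *msg, const unsigned char *key, size_t len) {
    for (size_t i = 0; i < len; i++)
        msg[i] ^= key[i];
}

int main(void) {
    unsigned char msg[] = "HELLO";
    /* A real pad must be truly random and used only once; this fixed key
       is for illustration only. */
    const unsigned char key[] = { 0x3a, 0x91, 0x5c, 0x07, 0xee };

    otp_xor(msg, key, 5);    /* encrypt: the bytes now look like random noise */
    otp_xor(msg, key, 5);    /* decrypt: restores "HELLO" */
    printf("%s\n", msg);
    return 0;
}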


Friday, December 28, 2012

What is the difference between Purify and traditional debuggers?


IBM Rational Purify helps developers deliver products whose quality, reliability, and performance meet users' expectations. The PurifyPlus bundle combines three capabilities:
  1. Bug finding from Rational Purify,
  2. Performance tuning from Rational Quantify, and
  3. Test coverage measurement from Rational PureCoverage.
Together, these capabilities make Purify a different kind of debugger from the traditional debuggers we are used to, and their benefits show up as faster development times, fewer errors, and better code.

About IBM Rational Purify

- Purify is a memory debugger, used in particular to detect memory access errors in programs written in languages such as C and C++.
- The software was originally developed by Reed Hastings of Pure Software.
- Rational Purify offers functionality similar to that of Valgrind, BoundsChecker, and Insure++.
- Like a debugger, Rational Purify supports dynamic verification, a process in which errors are discovered as the program executes.
- It also supports static verification, the counterpart of dynamic verification, which works by digging out inconsistencies in the program logic.
- When a program is linked with Purify, the corresponding verification code is automatically inserted into the executable, either by adding it to the object code or by parsing it in.
- Whenever an error occurs, the tool prints out the location of the error, its memory address, and other relevant information.
- Similarly, whenever Purify detects a memory leak, it generates a leak report as the program exits.
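
For example, a tiny C program like the following (a hypothetical illustration of mine, not sample code from the Purify documentation) contains two errors that do not crash the program; a dynamic verifier in the Purify family would flag the out-of-bounds read where it happens and list the unfreed block in the leak report at exit.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *a = malloc(10 * sizeof *a);   /* block of 10 ints */
    for (int i = 0; i < 10; i++)
        a[i] = i;

    /* Error 1: array-bounds read.  a[10] is one element past the end.
       The program usually keeps running, so a traditional debugger never
       notices, but a memory checker reports the exact line and address. */
    printf("%d\n", a[10]);

    /* Error 2: memory leak.  The block is never freed, so it appears in
       the leak report generated when the program exits. */
    return 0;
}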

Difference between Rational Purify and Traditional Debuggers

- The major difference between Rational Purify and traditional debuggers is Purify's ability to detect non-fatal errors.
- Traditional debuggers only expose the sources of fatal errors, such as a null-pointer dereference that crashes the program; they are not effective at finding non-fatal memory errors.
However, there are tasks for which traditional debuggers are more effective than Rational Purify. For example:
- A debugger can step through the code line by line and examine the program's memory at any particular instant.
- It is fair to say that the two tools are complementary and, in the hands of a skilled developer, work well together.
- Purify also ships with functionality that serves more general purposes, whereas a debugger is used only on the code itself.
- Note that Purify is most effective for programming languages that leave memory management to the developer.
- This is why memory leaks are less common in programs written in languages such as Java, Visual Basic, and Lisp.
- It is not that these languages never leak memory; leaks still occur when objects are kept alive by unnecessary references, which prevents their memory from being reclaimed.
- IBM also addresses this kind of error in another of its products, Rational Application Developer.
- Purify covers errors such as the following:
  1. Array-bounds violations
  2. Accesses to unallocated memory
  3. Freeing memory that was never allocated
  4. Memory leaks, and so on.
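
A fragment like the one below (again a hypothetical illustration of mine) commits the remaining categories in a few lines: writing past the end of an allocation, reading memory after it has been freed, and freeing memory that was never allocated on the heap.

#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(8);
    strcpy(buf, "overflow!");   /* 10 bytes copied into an 8-byte block:
                                   array-bounds write */

    free(buf);
    char c = buf[0];            /* access to memory that is no longer allocated */
    (void)c;

    char local[4];
    free(local);                /* freeing memory that was never malloc'd */
    return 0;
}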


Friday, July 27, 2012

What are the causes for the failure of traditional planning approach?

Traditional planning approaches do not always lead to satisfactory results.

Causes for Planning Failure


Cause #1: Planning is done by activity and not feature
- Traditional approaches to planning focus on the completion of activities rather than on the delivery of features.
- Activity-based plans generally lead to projects that overrun their schedules.
- When schedules slip, quality is reduced to compensate.

Cause #2: Activities do not finish early
Cause #3: Lateness is passed down the schedule
- Because traditional approaches are activity based, their schedules are driven mainly by the dependencies between activities.
- In a traditionally planned project, testing starts late if anything upstream goes worse than planned.
- Testing can start early only if everything goes better than planned.

Testing will start late if any of the following happens:
1. User interface coding finishes late.
2. Middle tier coding takes longer than planned and finishes late.
3. Middle tier coding starts late because adding the tables to the database finishes late.
4. The tester is not available.

Cause #4: Activities are not independent
- Activities are independent if the duration of one activity does not influence the duration of another.
- When activities are independent, a late finish on one can be offset by an early finish on another; when they are not, lateness compounds down the schedule.


Cause #5: Delay caused by multitasking
- Multitasking exacts a heavy toll on productivity.
- It becomes an issue once some of a project's activities finish late and the dependencies between activities become critical.
- In a traditionally planned project, multitasking becomes a problem for two reasons:
1. Work is assigned far in advance, and it is impossible to allocate work efficiently that far ahead.
2. The plan aims for a high level of utilization of every individual rather than for maintaining sufficient slack.

Cause #6: Features are not developed by priority
Cause #7: Ignoring Uncertainty
- Traditional approaches fail to acknowledge uncertainty.
- They ignore uncertainty about the product, assuming that the initial requirements analysis will yield a complete specification.
- They also ignore uncertainty about how the product will be built.
- The best way to deal with uncertainty is to iterate.


Looking at these problems with traditional approaches to planning explains why so many projects are disappointing. Planning by activity diverts attention from features, and the resulting problems make delivering late against the schedule all the more likely.










Monday, August 3, 2009

Introduction to Databases

Databases play an important role in almost all areas where they are used including business, engineering, medicine, law, education, and library science, to name a few.
A database is a collection of related data, where data means recorded facts. A typical database represents some aspect of the real world and is used for specific purposes by one or more groups of users. Databases are not specific to computers; examples of non-computerized databases abound: phone books, dictionaries, almanacs, etc. A database has the following implicit properties:
1. A database represents some aspect of the real world.
2. A database is a logically coherent collection of data with some inherent meaning.
3. A database is designed, built, and populated with data for a specific purpose.
4. A database can be of any size and of varying complexity.
5. A database may be generated and maintained manually or it may be computerized.
A database management system (DBMS) is a collection of programs that enables users to create and maintain a database. The DBMS is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for different applications.
Defining a database involves specifying the data types, structures, and constraints for the data to be stored in the database.
Constructing a database is the process of storing the data itself on some storage medium that is controlled by the DBMS.
Manipulating a database includes functions such as querying the database to retrieve specific data, updating the database, and generating reports from the data.
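
As a concrete sketch of these three steps (my own illustration, using the embedded SQLite DBMS and its C API, neither of which is mentioned in the post; build with -lsqlite3), defining corresponds to the CREATE TABLE statement, constructing to the INSERT, and manipulating to the SELECT:

#include <stdio.h>
#include <sqlite3.h>

/* Print one row returned by the SELECT below. */
static int print_row(void *unused, int ncols, char **values, char **names) {
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s  ", names[i], values[i] ? values[i] : "NULL");
    printf("\n");
    return 0;
}

int main(void) {
    sqlite3 *db;
    char *err = NULL;

    sqlite3_open("library.db", &db);

    /* Defining: specify the structure, data types and constraints. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS book ("
        "  id INTEGER PRIMARY KEY, title TEXT NOT NULL, year INTEGER);",
        NULL, NULL, &err);

    /* Constructing: store the data itself on the storage medium. */
    sqlite3_exec(db,
        "INSERT INTO book (title, year) VALUES ('Database Systems', 2003);",
        NULL, NULL, &err);

    /* Manipulating: query the database to retrieve specific data. */
    sqlite3_exec(db, "SELECT id, title, year FROM book;", print_row, NULL, &err);

    sqlite3_close(db);
    return 0;
}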

Database System

CHARACTERISTICS THAT DISTINGUISH THE DATABASE APPROACH FROM TRADITIONAL FILE-PROCESSING APPLICATIONS:
- Existence of a catalog: The catalog contains information such as the structure of each file, the type and storage format of each data item, and various constraints on the data. The information stored in the catalog is called meta-data.
- Program-data independence: In traditional file processing, the structure of a file is embedded in the access programs, so any change to the structure of a file may require changing all the programs that access it. By contrast, the structure of the data files is stored in the DBMS catalog separately from the access programs. This property is called program-data independence.
- Program-operation independence: Users can define operations on the data as part of database applications. An operation is specified in two parts: its interface (the operation name and the data types of its arguments) and its implementation, which is specified separately and can be changed without affecting the interface. This is called program-operation independence.
- Data abstraction: The characteristic that allows program-data independence and program-operation independence is called data abstraction.
- Support of multiple user views.
- Sharing of data among multiple transactions.

Main Categories of Database users are :
- Administrators.
- Designers.
- End users.
- System analysts and application programmers.
- DBMS system designers and implementers.
- Tool Developers.
- Operators and maintenance personnel.

Advantages of using Databases :
- Potential for enforcing standards.
- Reduced application development time.
- Flexibility.
- Availability of up-to-date information to all users.
- Economies of scale.

