

Thursday, September 25, 2014

Which languages support logic programming?

In this article we discuss the languages that support logic programming. There are many such languages, and we divide them into the following two categories:
- Functional logic programming languages
- Prolog programming language family

First we shall discuss some languages falling under the first category, the functional logic programming languages:
1. Algebraic Logic Functional programming language, or ALF: Combines the techniques of logic programming and functional programming. The language is based on Horn clause logic, and the resolution rules for solving literals and evaluating functions form the foundation of its operational semantics. It follows a leftmost innermost narrowing strategy to reduce the number of steps needed to solve a problem, which makes its operational semantics more efficient and powerful than those produced by the resolution strategy of Prolog.
2. Alice ML: This programming language, designed at Saarland University, is a dialect of Standard ML extended with support for concurrency (including both distributed computing and multithreading), lazy evaluation, and constraint programming. The language uses a concept called a 'promise': a commitment by one thread to deliver a value that other threads see as a future. Data-flow synchronization is thus possible in Alice ML through future-typed variables and promises.
3. Ciao
4. Curry
5. Leda: A multi-paradigm programming language that mixes the features of logic-based, object-oriented, functional, and imperative programming in a single language.
6. Mercury
7. Metal
8. Mozart
9. Oz: Another multi-paradigm programming language, designed in part for programming-language education. It was first developed in 1991 in Gert Smolka's group at Saarland University. The primary implementation of Oz is the Mozart Programming System, which is open source and supports many platforms, including Microsoft Windows, Mac OS X, Linux, and Unix.

Now we shall see the second category of languages, i.e., the Prolog programming language family:
1. B-Prolog: A high-level implementation of Prolog with additional features such as event-handling rules, matching clauses, arrays, declarative loops, tabling, and constraint solving. It was developed in 1994 and is now a popular CLP (constraint logic programming) system. Though B-Prolog is a commercial product, it is free for research purposes. A matching clause is a clause in which the input/output unifications and the determinacy are denoted explicitly.
The compiler translates matching clauses into matching trees and generates indexes for them; this compilation is easier than the compilation of normal Prolog clauses. B-Prolog makes up for Prolog's lack of a facility for programming active subgoals by introducing Action Rules (AR), a simple but powerful language for this purpose. The subgoals are called agents, and activation of an agent is followed by the execution of an action. The finite-domain solver of B-Prolog was heavily influenced by the CHIP system. For creating arrays, B-Prolog provides a built-in called new_array(X, Dims), where X is an uninitialized variable and Dims is a list of positive integers specifying the dimensions of the array.
2. ECLiPSe
3. GNU Prolog
4. JProlog
5. KL0 and KL1
6. Logtalk
7. Objlog: A frame-based language combining objects and Prolog II, from CNRS, Marseille.
8. Prolog
9. Prolog++
10. Strawberry Prolog
11. tuProlog
12. Visual Prolog
13. YAP


What is a programming paradigm?

The fundamental style in which you design and write computer programs is called a programming paradigm: it is the way you build the elements and structures of your programs. Presently we have the following six kinds of programming paradigms:
- Imperative
- Declarative
- Functional
- Object-oriented
- Logic
- Symbolic

Just as different methodologies define software engineering, different paradigms define programming languages. Different languages are designed to support different paradigms. For example,
- The object-oriented paradigm is adopted by Smalltalk
- Functional programming is adopted by Haskell

Though there are certain programming languages that support only one kind of paradigm, there are many others that can work with multiple paradigms. Examples are:
- Object Pascal
- C++
- Java
- C#
- Scala
- Visual Basic
- Common Lisp
- Scheme
- Perl
- Python
- Ruby
- Oz
- F#

Object Pascal and C++ programs can be purely object-oriented, purely procedural, or a mix of both; it is up to the programmers how they want to use the paradigms. In object-oriented programming, a program is viewed as a collection of objects that interact with each other, whereas in functional programming a program is viewed as a sequence of evaluations of stateless functions.
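To make this contrast concrete, here is a minimal Python sketch (the Counter class and the incremented function are our own illustrations, not taken from any particular library): the object-oriented version accumulates state inside an object, while the functional version produces the same result through evaluations of a stateless function.

```python
# Object-oriented style: state lives in an object, behavior in its methods.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self, amount=1):
        self.value += amount  # mutates internal state
        return self.value


# Functional style: a stateless function that maps inputs to outputs.
def incremented(value, amount=1):
    return value + amount  # no mutation; a new value is returned


counter = Counter()
counter.increment()
counter.increment(5)
print(counter.value)              # 6 (state accumulated inside the object)

v = incremented(incremented(0), 5)
print(v)                          # 6 (same result from pure evaluations)
```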
Process-oriented programming helps in designing programs as a set of processes running concurrently on systems with multiple processors, communicating through shared data structures. Programming paradigms permit some techniques and forbid others. For example, pure functional programming forbids the use of side effects, and structured programming forbids the dangerous goto statement. Partly because of this, modern programming paradigms are sometimes considered overly strict compared with their older counterparts.
Avoiding certain techniques can make it easier to prove theorems about a program's correctness and to understand its behavior. Programming paradigms are sometimes compared with programming models, which are abstractions of computer systems. An example is the von Neumann model, which is used in sequential computers; there are also a number of models for computers that use parallel processing.
Most of these models are based on message passing, shared memory, or a hybrid of the two. Machine code and assembly language instructions form the lowest-level programming paradigms; assembly language uses mnemonics for operations, and the two are often referred to as first-generation languages. Assembly language is still used for programming embedded systems, where it gives direct control over the machine. The next generation of languages is represented by the procedural languages, such as COBOL, Fortran, ALGOL, PL/I, BASIC, and C, all of which adopt the procedural paradigm.
The experience and ability of the programmer affect the efficiency and efficacy of the problem's solution. Later, object-oriented languages came onto the scene, such as Smalltalk, Simula, Java, Eiffel, and C++. These languages model real-world problems using objects, which couple data with the functions (methods) that handle it, and an object's methods are the only way to access its data. The object-oriented paradigm has even made object-oriented assembly languages possible; for example, HLA (High Level Assembly) provides support for advanced data types. In declarative programming paradigms, the computer is told what the problem is, not how to solve it: the program consists of properties that can be used to find the expected result, rather than a procedure.
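As a small, hedged illustration of the declarative idea, the following Python sketch (the table and data are invented for the example) computes the same answer twice: the imperative loop spells out how to find it, while the SQL query only states what is wanted and leaves the method to the database engine.

```python
import sqlite3

# Imperative style: spell out HOW to find the result, step by step.
people = [("Ada", 36), ("Alan", 41), ("Grace", 45)]
oldest = None
for name, age in people:
    if oldest is None or age > oldest[1]:
        oldest = (name, age)
print(oldest)

# Declarative style (SQL): state WHAT is wanted; the engine picks the method.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)", people)
print(conn.execute(
    "SELECT name, age FROM people ORDER BY age DESC LIMIT 1").fetchone())
```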


Wednesday, September 24, 2014

What is a reasoning system?

In the field of information technology, a reasoning system is a software application that draws conclusions from the knowledge available to it. Reasoning systems work on the principles of logical induction, deduction, and other reasoning techniques, and they fall under the broader category of intelligent systems. Reasoning systems play a very important role in the fields of artificial intelligence and knowledge engineering.
In these systems, knowledge that has already been acquired is manipulated to generate new knowledge. By knowledge we mean here the symbolic representation of propositional statements and facts based on assumptions, beliefs, and assertions. Sometimes the knowledge representations used are connectionist, or sub-symbolic; an example is a trained neural net. Reasoning systems automate the process of inferring and deriving new knowledge via logic, and they provide support for procedural attachments that apply knowledge in a particular situation or domain. Reasoning systems are used in a wide range of fields:
- Scheduling
- Business rule processing
- Problem solving
- Complex event processing
- Intrusion detection
- Predictive analysis
- Robotics
- Computer vision
- Natural language processing
As we mentioned above, reasoning systems use logic to generate knowledge. However, there is a lot of variation in which system of logic is used and how formally it is used. The majority of reasoning systems use symbolic and propositional logic. The variations seen are usually representations of first-order logic (FOL), along with their extensions and hybrid versions, which are mathematically very precise.
Additional logic types, such as temporal, modal, and deontic logics, may be implemented explicitly by reasoning systems. There are also reasoning systems that implement semi-formal and imprecise approximations of recognized logic systems; these support a number of semi-declarative and procedural techniques for modeling various reasoning strategies.
The emphasis of such systems is on pragmatism rather than formality, and they depend on attachments and other custom extensions for solving real-world problems. Other reasoning systems use deductive reasoning to draw inferences. Inference engines support both backward reasoning and forward reasoning to draw conclusions via modus ponens; applied recursively, these methods are called backward chaining and forward chaining respectively.
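A minimal sketch of forward chaining in Python follows; the rule format and the example facts are our own invention, not any particular reasoning system's API.

```python
# A minimal forward-chaining sketch: rules are (premises, conclusion) pairs,
# and modus ponens is applied repeatedly until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)  # premises all hold, so conclude
                changed = True
    return facts

rules = [
    (["human"], "mortal"),
    (["mortal", "greek"], "classical-example"),
]
print(forward_chain(["human", "greek"], rules))
# {'human', 'greek', 'mortal', 'classical-example'}
```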
Though the majority of systems use deductive inference, a small portion also use inductive, abductive, and defeasible reasoning, and heuristics may be used to find acceptable solutions to intractable problems. Reasoning systems adopt either the open world assumption (OWA) or the closed world assumption (CWA); the former is associated with the semantic web and ontological knowledge representation. Different systems also take different approaches to negation.
Apart from bitwise and logical complement, reasoning systems support other existing forms of negation, both strong and weak, such as inflationary negation and negation-as-failure. Reasoning systems also divide into those using monotonic reasoning and those using non-monotonic reasoning. Many reasoning systems are capable of reasoning under uncertainty, which is particularly useful for situated reasoning agents that must deal with uncertain representations of the world. Some common approaches include:
- Probabilistic methods: Bayesian inference, Dempster-Shafer theory, fuzzy logic, etc. (see the sketch after this list)
- Certainty factors
- Connectionist approaches
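As an illustration of the probabilistic approach, here is a minimal Bayesian-update sketch in Python; the sensor numbers are hypothetical.

```python
# A minimal Bayesian-update sketch for reasoning under uncertainty:
# P(H | E) = P(E | H) * P(H) / P(E), with P(E) expanded over H and not-H.
def bayes_update(prior, likelihood, false_positive_rate):
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a sensor fires on a real intrusion 95% of the time,
# fires spuriously 5% of the time, and intrusions have a 1% base rate.
posterior = bayes_update(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(posterior, 3))  # ~0.161: one alarm alone is weak evidence
```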
The main types of reasoning system are:
- Constraint solvers
- Theorem provers
- Logic programs
- Expert systems
- Rule engines


Tuesday, September 23, 2014

What is concurrency control with respect to databases?

Concurrency control is a technique applied in operating systems, databases, computer programming, and multiprocessing to ensure that concurrent operations produce correct results, and produce them as quickly as possible. Both the hardware and the software parts of a computer are made up of smaller modules and components, each designed and programmed to work correctly according to some consistency rules. Concurrent interaction between these components, through messages or shared data, can lead to results that violate those rules.
The basic idea of concurrency control is to provide methodologies, theories, and rules for enforcing consistency in the whole system. Implementing concurrency control reduces performance, because it applies constraints on the components that reduce overall speed. The goal, then, is to achieve consistency as efficiently as possible, without reducing performance below acceptable levels.
The drawbacks of concurrency control include additional complexity and extra overhead: a concurrent algorithm generates more overhead than its sequential counterpart. Moreover, if the concurrency control mechanism fails, it can lead to torn read and write operations and can corrupt the data.
In this article we talk about concurrency with respect to databases. Concurrency control is implemented in DBMSs, distributed applications (including cloud computing and grid computing), and some other transactional objects, to ensure that concurrent data transactions are accomplished without damaging the integrity of the data. It is an essential part of any system in which two or more transactions, overlapping in time, can operate on the same data, which happens in almost any general-purpose database management system.
Research on this concept has been going on since the advent of database systems. Serializability theory is the best-established theory for defining concurrency control, and it lets us design and analyze concurrency control methods and mechanisms as effectively as possible. Another theory addresses concurrency control for atomic transactions over abstract data types; though wider in scope and more refined, it adds more complexity to the system. Both theories have advantages and disadvantages, and since they are complementary to some extent, merging them might help.
To ensure proper concurrency control and correct execution of transactions, the system generates and executes only serializable transactions and schedules. In some cases the system may intentionally relax serializability to increase performance, but only where this cannot produce incorrect output.
There are many cases in which transactions fail, so the system must have the recoverability property to recover from the damage. A good database system also ensures that the results of committed transactions are not lost if the system crashes or is accidentally switched off, and, on the other hand, that the incomplete results of aborted transactions are erased and their actions rolled back. Transactions are characterized by the ACID rules mentioned below (a minimal sketch of atomicity in action follows the list):
- Atomicity: Either the effects of all of a transaction's operations remain or none of them do; a transaction never takes effect only in part.
- Consistency: Every transaction must leave the database in a consistent state; what counts as consistent depends largely on the user.
- Isolation: Every transaction should execute in isolation, i.e., without interfering with others.
- Durability: The results of committed transactions should persist.
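As a concrete, minimal illustration of atomicity and rollback, the following Python sketch uses the standard sqlite3 module; the accounts table and the simulated crash are our own example.

```python
import sqlite3

# A money transfer either happens in full or not at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 "
                     "WHERE name = 'alice'")
        raise RuntimeError("simulated crash between the two updates")
        # the matching credit to bob would have gone here
except RuntimeError:
    pass

print(list(conn.execute("SELECT name, balance FROM accounts ORDER BY name")))
# [('alice', 100), ('bob', 0)] -- the half-finished transfer was rolled back
```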
Nowadays, as database systems become more distributed, the focus is increasingly on distributing the concurrency control mechanism itself.


Saturday, September 20, 2014

What are some concurrency control mechanisms (with respect to databases)?

In this article we discuss the different types of concurrency control mechanisms implemented in databases.
- Optimistic: This type of concurrency control delays transaction checking: it does not check the integrity and isolation rules (recoverability, serializability, and so on) until the transaction has finished executing all of its operations, and it does not block any of the transaction's operations. If, upon commitment, the transaction's operations turn out to violate these rules, the transaction is aborted and immediately re-executed, which generates a restart and re-execution overhead. This mechanism is a good choice when not too many transactions end up aborted. (A minimal sketch follows this list.)
- Pessimistic: The methods in this category block a transaction from carrying out any operation if it is suspected of violating the rules, and do not allow it to execute until the risk of a rule violation has passed. Such prolonged blocking of transactions, however, reduces performance drastically.
- Semi-optimistic: These mechanisms block transactions only in those situations where it is important to do so, and do not block while rules are being checked, as optimistic mechanisms do.
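As a rough sketch of the optimistic approach, the following Python example (the VersionedRecord class and the retry loop are our own illustration) validates a version number at commit time and restarts on conflict.

```python
# A minimal optimistic-concurrency sketch: read freely, then validate at
# commit time and restart on conflict. In a real engine the validation and
# the write would be performed atomically; here they are sequential for
# clarity, so this is an illustration rather than thread-safe code.
class VersionedRecord:
    def __init__(self, value):
        self.value, self.version = value, 0

def optimistic_update(record, transform, max_retries=10):
    for _ in range(max_retries):
        snapshot_value, snapshot_version = record.value, record.version
        new_value = transform(snapshot_value)    # work without taking locks
        if record.version == snapshot_version:   # validation phase
            record.value, record.version = new_value, snapshot_version + 1
            return new_value
        # validation failed: another writer committed first; restart
    raise RuntimeError("too many conflicts; transaction aborted")

account = VersionedRecord(100)
print(optimistic_update(account, lambda v: v + 50))  # 150, version bumped
```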

The performance of these types of concurrency control mechanisms differs: they have different throughputs (rates of transaction completion) depending on factors such as the level of parallelism and the mix of transaction types. The trade-offs between the categories should be considered, and the one providing the highest performance in the situation at hand should be chosen. Two transactions that mutually lock each other produce a deadlock: the transactions involved wait forever and cannot complete. Non-optimistic concurrency control mechanisms are observed to produce more deadlocks, and the deadlocks are resolved by aborting transactions. Deadlocks, blocking, and their resolution all introduce performance delays, and these are the major trade-off factors between the types. Below we mention some major concurrency control methods, each with many variants falling under the categories above:
- Locking: Variants include two-phase locking (2PL). This mechanism controls access to data through locks acquired by transactions. If a transaction tries to acquire a lock on a piece of data already locked by another transaction, it may be blocked until the latter releases the lock, depending on the type of access operation and the type of lock.
- Serialization graph checking (precedence graph checking): This mechanism checks for cycles in the schedule's precedence graph and, if any are found, breaks them by aborting the transactions involved (see the sketch after this list).
- Timestamp ordering: Timestamps are assigned to transactions, and access to data is controlled by constantly checking and comparing the timestamps.
- Commitment ordering: Transactions are checked in the order of their commitment so as to keep that order compatible with their precedence order.
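As a minimal illustration of serialization graph checking, the following Python sketch (the edge format is our own) detects a cycle in a precedence graph with a depth-first search.

```python
# Nodes are transactions; an edge Ti -> Tj means Ti performed a conflicting
# operation before Tj on the same item. A cycle means the schedule is not
# conflict-serializable, and one transaction on the cycle must be aborted.
def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY  # on the current DFS path
        for m in graph[n]:
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK  # fully explored, no cycle through here
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# T1 wrote x before T2 read it, and T2 wrote y before T1 read it:
print(has_cycle([("T1", "T2"), ("T2", "T1")]))  # True: abort one of them
```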

Some other concurrency control methods are used alongside the major types above:
- Index concurrency control: Access operations on indexes, rather than on user data, are synchronized; specialized methods can yield substantial performance gains.
- Multi-version concurrency control (MVCC): Each write of an object generates a new copy of that object, so that other transactions can still read the object. This increases concurrency without sacrificing performance. (A minimal sketch follows this list.)
- Private workspace model: Each transaction maintains a private workspace for accessing data, and any changes it makes become visible to outside transactions only after it commits.
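A minimal MVCC sketch in Python appears below; the MVCCStore class and its timestamp scheme are our own simplification of the idea.

```python
# Every write appends a new version stamped with a commit timestamp, so a
# reader sees the newest version no later than its own start timestamp and
# is never blocked by writers.
class MVCCStore:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)
        self.clock = 0

    def write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def read(self, key, as_of):
        candidates = [(ts, v) for ts, v in self.versions.get(key, [])
                      if ts <= as_of]
        return max(candidates)[1] if candidates else None

store = MVCCStore()
store.write("x", "v1")               # committed at ts=1
snapshot_ts = store.clock            # a reader starts here
store.write("x", "v2")               # a later writer commits at ts=2
print(store.read("x", snapshot_ts))  # 'v1': the reader's snapshot is stable
print(store.read("x", store.clock))  # 'v2': a new reader sees the latest
```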


Friday, September 19, 2014

What are some best practices for concurrency control with respect to databases?

Today almost all service-oriented businesses have grown highly dependent on reliable and speedy access to their data, and most global enterprises need this access around the clock, without interruption. These reliability, availability, and performance needs are met by database management systems (DBMS). A DBMS is thus responsible for two things: firstly, for protecting the data it stores; secondly, for providing correct, reliable, round-the-clock access to that data. The concurrency control and recovery mechanisms of the DBMS are responsible for carrying out these functions properly.
The concurrency control mechanism ensures that you see only your own transaction executing, even though hundreds of users are accessing the database at the same time. The recovery mechanism ensures that the database is able to recover from faults. It is because of these two functionalities that programmers feel free to add new parts to a system without having much to worry about. A transaction is nothing but a unit of work consisting of several operations and updates, and every transaction is expected to obey the ACID rules. In this article we discuss some best practices for concurrency control.
- Two-phase locking: Locking is perhaps the most widely used technique for concurrency control. This mechanism provides two types of lock: the shared lock (S) and the exclusive lock (X). A compatibility matrix defines how they interact: an S lock on a data item can be held by two different transactions at the same time, but an X lock cannot. Under this policy multiple read operations can be carried out concurrently; in other words, read access to an item is protected by S locks, while write access is protected by X locks. Put simply, no transaction can obtain a lock on a data item already locked by another transaction if the two locks conflict, and a transaction that requests a lock which cannot be granted at that instant is blocked until the holding transaction releases its lock. (A minimal sketch of the compatibility matrix follows this list.)
- Hierarchical locking: In practice, the notion of conflicting locks operates at different levels of granularity, and choosing the proper granularity for locking involves a trade-off between locking overhead and concurrency. Locking at the granularity of a tuple (one row) keeps concurrency at its maximum. The disadvantage is that a transaction accessing multiple tuples must lock each of them, issuing one call to the lock manager per tuple and generating substantial overhead. This can be avoided by choosing a coarser granularity, but at the expense of more false conflicts.
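The following Python sketch (our own simplification, not any DBMS's actual lock manager) shows the S/X compatibility matrix in action.

```python
# Many transactions may share an S lock; an X lock is exclusive. Conflicting
# requests are refused here; a real lock manager would enqueue and block the
# requester instead of returning False.
COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
              ("X", "S"): False, ("X", "X"): False}

class LockManager:
    def __init__(self):
        self.held = {}  # item -> list of (txn, mode)

    def acquire(self, txn, item, mode):
        for other_txn, other_mode in self.held.get(item, []):
            if other_txn != txn and not COMPATIBLE[(other_mode, mode)]:
                return False  # caller must wait for the lock to be released
        self.held.setdefault(item, []).append((txn, mode))
        return True

    def release_all(self, txn):  # the release phase of strict 2PL
        for item in self.held:
            self.held[item] = [(t, m) for t, m in self.held[item] if t != txn]

lm = LockManager()
print(lm.acquire("T1", "row42", "S"))  # True: first reader
print(lm.acquire("T2", "row42", "S"))  # True: S locks are compatible
print(lm.acquire("T3", "row42", "X"))  # False: writer must wait for readers
```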

Two-phase locking is categorized as a pessimistic technique, since it assumes that there will be interference among transactions and takes measures against it. Optimistic concurrency control provides an alternative: transactions carry out their operations without having to acquire locks, and to ensure that serializability is not violated, a validation phase is performed before they can commit. A number of optimistic protocols have been proposed. The validation process makes sure that the read and write operations of two concurrently running transactions do not conflict; if such a conflict is detected during validation, the transaction is immediately aborted and forced to restart. Thus, to ensure isolation, optimistic mechanisms rely on restarting transactions, whereas locking policies rely on blocking them.


Wednesday, September 17, 2014

What are some challenges with respect to database concurrency control?

During sequential execution of transactions, the execution periods do not overlap, so no transaction concurrency exists. But if we allow transactions to interleave in an improperly controlled manner, we are bound to get undesirable results, and there are many situations in which the concurrency of a database might be harmed.
In this article we discuss such challenges. Transaction models based on the ACID rules have proved quite durable over time, and they serve as a foundation for present-day database and transaction systems. Many kinds of environments, parallel as well as distributed, have used this basic model to implement their own complex database systems, although such environments require some additional techniques, such as the two-phase commit protocol. Though the model has been a great success, it suffers from a few limitations.
The model lacks flexibility and thus cannot represent some particular kinds of interactions between organizations and complex systems. In collaborative environments, moreover, a piece of data cannot be strictly isolated even when that is desirable. And since the ACID transaction model suits systems with short, simple transactions, it is not so appropriate for workflow management systems, which require richer transaction models with a multi-level notion of transaction. Nor does the model suffice for other environments, such as mobile wireless networks, where long disconnection periods are expected. Then there is the Internet, a loosely coupled wide area network (WAN), which cannot fully adopt the ACID model because availability is low. We need techniques that help the ACID model adjust to such widely varying situations.
Research is ongoing into new techniques for maintaining concurrency control, and for recovery, in dissemination-oriented environments, heterogeneous systems, and so on. Another problem is that the model cannot exploit data and application semantics through a general mechanism; knowledge of such semantics can improve the performance of a system by a significant margin. A separate line of research concerns recovery and concurrency control together. The recovery component of a DBMS is responsible for the durability as well as the atomicity of ACID transactions, and here distinguishing between volatile and non-volatile storage becomes absolutely necessary. The three types of failure mentioned below pose a challenge to the proper working of a DBMS:
- Transaction failure: Sometimes a transaction reaches a state during execution from which it cannot commit successfully. In such cases all updates made by that transaction must be erased from the database to preserve atomicity; this is called transaction rollback.
- System failure: A system failure often causes the loss of volatile memory contents. It must be ensured that the updates made by all transactions committed before the crash persist in the database, and that the updates of unsuccessful transactions are removed.
- Media failure: Here the non-volatile storage becomes corrupted, making it impossible to recover the online version of the data. The option then is to restore the database from an archive and re-apply all updates using the operation logs.
Recovering from the last kind of failure requires additional mechanisms. For all of this to work, the recovery component itself must be reliable: flaws in the recovery system of a database put it at high risk of losing data.
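To make the redo idea concrete, here is a minimal write-ahead-logging sketch in Python; the log record format is our own invention.

```python
# Every update is appended to a log before being applied, so that after a
# crash the committed work can be replayed (redo) while uncommitted work is
# simply never reapplied.
def apply_log(log):
    db, pending = {}, {}
    for record in log:
        if record[0] == "update":
            _, txn, key, value = record
            pending.setdefault(txn, []).append((key, value))
        elif record[0] == "commit":
            _, txn = record
            for key, value in pending.pop(txn, []):
                db[key] = value        # redo only committed transactions
    return db  # updates of transactions with no commit record are discarded

log = [
    ("update", "T1", "x", 1),
    ("commit", "T1"),
    ("update", "T2", "y", 2),          # T2 crashed before committing
]
print(apply_log(log))  # {'x': 1}: T1's work survives, T2's is discarded
```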

