A temporal database is a database with built-in support for time aspects, for example a temporal data model and a temporal version of Structured Query Language (SQL). More specifically, the temporal aspects usually include valid time and transaction time; together these form bitemporal data.
* Valid time denotes the time period during which a fact is true with respect to the real world.
* Transaction time is the time period during which a fact is stored in the database.
* Bitemporal data combines both valid time and transaction time.
- A temporal DBMS manages time-referenced data: times from the modeled reality are associated with database entities.
- Fact: any logical statement that can meaningfully be assigned a truth value, i.e., that is either true or false.
- Valid time (VT):
  * The collection of times at which the fact is true, possibly spanning the past, present, and future.
  * Every fact has a valid time.
- Transaction time (TT):
  * The time during which a fact is current in the database.
  * May be associated with any database entity, not only with facts.
  * The TT of an entity has a duration: from insertion to deletion.
  * Deletion is a purely logical operation; rows are not physically removed.
- The time domain may be discrete or continuous; in databases it is typically assumed to be finite and discrete.
- Time is assumed to be totally ordered.
- Uniqueness of "NOW":
  * The current time is ever-increasing, and all activities happen at the current time.
  * The current time separates the past from the future.
  * "NOW" <> "HERE": time, unlike space, cannot be reused, which is a challenge for temporal database management.
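These concepts can be made concrete with a small sketch: a table whose rows carry both a valid-time and a transaction-time period. The table, column names, and sentinel date below are invented for illustration, using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee_salary (
        emp_id   INTEGER,
        salary   INTEGER,
        vt_start TEXT,  -- valid time: when the fact holds in the real world
        vt_end   TEXT,
        tt_start TEXT,  -- transaction time: when the row is current in the DB
        tt_end   TEXT   -- '9999-12-31' is a sentinel meaning "still current"
    )
""")

# The salary 50000 was true in reality from 2009-01-01; the row became
# current in the database when it was recorded on 2009-02-15.
conn.execute("INSERT INTO employee_salary VALUES "
             "(1, 50000, '2009-01-01', '9999-12-31', '2009-02-15', '9999-12-31')")

# A "deletion" is purely logical: we close the transaction-time period
# instead of physically removing the row, so history is preserved.
conn.execute("UPDATE employee_salary SET tt_end = '2009-06-01' WHERE emp_id = 1")

current = conn.execute("SELECT COUNT(*) FROM employee_salary "
                       "WHERE tt_end = '9999-12-31'").fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM employee_salary").fetchone()[0]
print(current, total)  # 0 1 : no row is current, but the fact is still stored
```

The closed transaction-time period is exactly the "duration from insertion to deletion" mentioned above.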
Most applications of database technology are temporal in nature:
- Financial applications: portfolio management, accounting, and banking.
- Record-keeping applications: personnel, medical records, and inventory management.
- Scheduling applications: airline, car, and hotel reservations, and project management.
- Scientific applications: weather monitoring.
Monday, September 7, 2009
Temporal Database Concepts
Posted by Sunflower at 9/07/2009 11:31:00 PM | 0 comments
Labels: Data Model, Databases, Temporal Databases
Sunday, September 6, 2009
Deductive Object-Oriented Databases
Deductive Object Oriented databases (DOODs) came about through the integration of the OO paradigm and logic programming. The following broad approaches have been adopted in the design of DOOD systems:
- Language Extension : An existing deductive language model is extended with object oriented features. For example, Datalog is extended to support identity, inheritance, and other OO features.
- Language Integration : A deductive language is integrated with an imperative programming language in the context of an object model or type system. The resulting system supports a range of standard programs, while allowing different and complementary programming paradigms to be used for different tasks, or for different parts of the same task.
- Language Reconstruction : An object model is reconstructed, creating a new logic language that includes object-oriented features. In this strategy, the goal is to develop an object logic that captures the essentials of the object-oriented paradigm and that can also be used as a deductive programming language in DOODs.
VALIDITY combines deductive capabilities with the ability to manipulate complex objects. The ability to declaratively specify knowledge as deduction and integrity rules brings knowledge independence. Moreover, the logic-based language of deductive databases enables advanced tools, such as those for checking the consistency of a set of rules, to be developed. VALIDITY provides the following :
- A DOOD data model and language, called DEL (Datalog Extended Language).
- An engine working along a client-server model.
- A set of tools for schema and rule editing, validation, and querying.
The DEL data model provides object-oriented capabilities, similar to those offered by the ODMG data model, and includes both declarative and imperative features. The declarative features include deductive and integrity rules, with full recursion, stratified negation, disjunction, grouping, and quantification. The imperative features allow functions and methods to be written.
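The flavor of such deductive rules, including full recursion, can be illustrated with the classic Datalog ancestor rule evaluated bottom-up to a fixpoint. This is a generic sketch in Python, not actual DEL syntax:

```python
# Facts: parent(X, Y).  Rules (note the recursion in the second rule):
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
parent = {("alice", "bob"), ("bob", "carol")}

def ancestors(parent_facts):
    """Evaluate the two rules bottom-up until no new facts are derivable."""
    derived = set(parent_facts)  # first rule: every parent pair is an ancestor pair
    while True:
        # second rule: join parent facts with already-derived ancestor facts
        new = {(x, y) for (x, z) in parent_facts
                      for (z2, y) in derived if z == z2}
        if new <= derived:       # fixpoint reached: nothing new can be derived
            return derived
        derived |= new

result = sorted(ancestors(parent))
print(result)  # [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]
```

A deductive engine performs this kind of inference declaratively: the user states the rules, and the system finds the fixpoint.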
Posted by Sunflower at 9/06/2009 11:40:00 PM | 0 comments
Labels: Data Model, Deductive Databases, Object Oriented, validity
Monday, August 10, 2009
Functionality of Data Warehouses and Building a Data Warehouse
Data warehouses exist to facilitate complex, data-intensive, and frequent ad hoc queries. The data warehouse access component supports enhanced spreadsheet functionality, efficient query processing, structured and ad hoc queries, data mining, and materialized views. These offer pre-programmed functionalities such as :
- Roll-up : Data is summarized with increasing generalization.
- Drill-down : Increasing levels of detail are revealed.
- Pivot : Cross tabulation is performed.
- Slice and dice : Projection operations are performed on the dimensions.
- Sorting : Data is sorted by ordinal value.
- Selection : Data is available by value or range.
- Derived attributes : Attributes are computed by operations on stored and derived values.
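Roll-up and slicing can be shown on a toy sales fact table; all names and figures below are invented for illustration, using SQLite:

```python
import sqlite3

# A tiny fact table with two dimensions (location, year) and one measure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, city TEXT, year INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("East", "Boston",  2008, 100),
    ("East", "Boston",  2009, 120),
    ("East", "NYC",     2009,  80),
    ("West", "Seattle", 2009,  90),
])

# Roll-up: summarize with increasing generalization (city -> region).
rollup = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rollup)        # [('East', 300), ('West', 90)]

# Slice: fix one dimension to a single value (year = 2009).
slice_2009 = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE year = 2009"
).fetchone()[0]
print(slice_2009)    # 290
```

Drill-down is simply the inverse of the roll-up query: grouping by region *and* city restores the finer level of detail.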
Building a Data Warehouse :
In constructing a data warehouse, builders should take a broad view of the anticipated use of the data warehouse. Acquisition of data for the warehouse involves the following steps :
- The data must be extracted from different sources.
- Data must be formatted for consistency within the data warehouse. Names, meanings, and domains of data from unrelated sources must be reconciled.
- Data must be cleaned to ensure validity.
- The data must be fitted into the data model of the data warehouse.
- The data must be loaded into the warehouse.
Loading the warehouse raises further questions:
- How up-to-date must the data be?
- Can the warehouse go off-line, and for how long?
- What are the data interdependencies?
- What is the storage availability?
- What are the distribution requirements?
- What is the loading time?
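The acquisition steps above can be sketched end to end: extract from two hypothetical sources with inconsistent names and domains, reconcile and clean the data, then load it into the warehouse's model. All source and column names are invented for this example:

```python
import sqlite3

# Two "sources" with unrelated naming conventions and domains.
source_a = [{"emp": "Ann", "sal": "50000"}]           # salary stored as a string
source_b = [{"employee_name": "BOB", "salary_usd": 60000}]

def extract_and_reconcile():
    """Reconcile names, meanings, and domains into one target format."""
    rows = []
    for r in source_a:
        rows.append({"name": r["emp"].title(), "salary": int(r["sal"])})
    for r in source_b:
        rows.append({"name": r["employee_name"].title(), "salary": r["salary_usd"]})
    return rows

def clean(rows):
    """Drop rows that fail a validity check (here: non-positive salary)."""
    return [r for r in rows if r["salary"] > 0]

# Fit the data into the warehouse's data model and load it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee VALUES (:name, :salary)",
                 clean(extract_and_reconcile()))

loaded = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
print(loaded)  # 2
```

Real ETL pipelines add staging areas, incremental refresh, and error handling, but the extract-reconcile-clean-load shape is the same.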
Data warehouses must also be designed with full consideration of the environment in which they will reside. Important design considerations include the following :
- Usage projections.
- The fit of the data model.
- Characteristics of available sources.
- Design of the metadata component.
- Modular component design.
- Design for manageability and change.
- Consideration of distributed and parallel architecture.
Posted by Sunflower at 8/10/2009 10:28:00 PM | 0 comments
Labels: Building a data warehouse, Data Model, data warehousing, Functionality
Monday, August 3, 2009
Database System Concepts - Data Model, Schemas and Database state
A data model is a collection of concepts that can be used to describe the structure of a database. By structure of the database we mean the data types, relationships, and constraints that should hold on the data. Most data models also include a set of basic operations for specifying retrievals and updates on the database.
Categories of Data Models:
- High level or Conceptual data models : These models provide concepts that are close to the way many users perceive data. They use concepts such as entities, attributes, and relationships. An entity represents a real-world object or concept such as an employee or a project. An attribute represents a property of interest that describes an entity, such as an employee's salary or name. A relationship represents an interaction among the entities.
- Representational data models : These models provide concepts that may be understood by end users but that are not too far removed from the way data is organized within the computer. They are used most frequently in traditional commercial DBMSs, and they include the widely used relational model as well as the network and hierarchical models. These models represent data by using record structures and hence are sometimes called record-based data models.
- Low level or Physical data models : These models provide concepts that describe the details of how the data is stored in the computer by representing information such as record formats, record orderings, and access paths. An access path is a structure that makes the search for particular database records efficient.
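An access path can be made concrete: creating an index lets the DBMS search for particular records instead of scanning the whole table. The table and index names below are illustrative, using SQLite's query-plan output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, salary INTEGER)")
conn.execute("CREATE INDEX idx_name ON employee (name)")

# Ask the query planner how it would evaluate an equality search on name.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT salary FROM employee WHERE name = ?", ("Ann",)
).fetchall()

# The last column of each plan row is a human-readable description;
# with the index in place it reports an index search, not a full scan.
print(plan)
```

Without `idx_name`, the same query would be answered by scanning every record, which is exactly the inefficiency a physical data model's access paths are designed to avoid.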
Schemas:
The description of a database in any data model is called the database schema, which is specified during database design and is not expected to change frequently. A displayed schema is called a schema diagram.
A schema diagram displays only some aspects of a schema, such as names of record types and data items, and some types of constraints.
Database State or Instance: The actual data in a database changes every time data is inserted, deleted, or modified. The data in the database at a particular moment in time is called a database state or a snapshot. It is also called the current set of occurrences or instances in the database.
Distinguish between Database State and Database Schema:
When a new database is defined, we specify only its database schema to the DBMS. At this point, the corresponding database state is the empty state. The initial state of the database is obtained when the database is first populated or loaded with initial data. From then on, every time an update operation is applied to the database, we get another database state.
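The distinction is easy to demonstrate in a few lines: defining the schema yields the empty state, and each update produces a new state. Table and value names below are illustrative, using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project (name TEXT)")  # schema defined; state is empty

state0 = conn.execute("SELECT COUNT(*) FROM project").fetchone()[0]

# Populating the database for the first time yields the initial state.
conn.execute("INSERT INTO project VALUES ('warehouse')")
state1 = conn.execute("SELECT COUNT(*) FROM project").fetchone()[0]

print(state0, state1)  # 0 1 : the schema is unchanged, but the state differs
```

The schema (the `CREATE TABLE` description) stayed fixed throughout; only the state changed with each update.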
Posted by Sunflower at 8/03/2009 11:03:00 AM | 0 comments
Labels: Categories, Conceptual data model, Data Model, Database state, Databases, Instance, physical data model, representational data model, Schemas