C-STORE: A COLUMN-ORIENTED DBMS

C-STORE: A COLUMN-ORIENTED DBMS
presented by
JACOB K JOSE
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
COLLEGE OF ENGINEERING TRIVANDRUM
2007-11 Batch



Contents


Abstract
1. Introduction
2. Features of C-Store
3. Data models
3.1 Storage Keys
3.2 Join Indices
4. Internal architecture
5. Readable Store (RS)
6. Writeable Store (WS)
7. Storage Management
8. Updates and Transactions
8.1 Providing Snapshot isolation
9. Recovery
10. Tuple Mover
11. C-Store Query Optimization
12. Performance Comparison
13. Conclusions
Bibliography


ABSTRACT


Column-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table’s columns becomes faster, at the potential expense of excessive disk-head seeking from column to column for scattered reads or updates. This paper presents the design of a read-optimized relational DBMS that contrasts sharply with most current systems, which are write-optimized. Among the many differences in its design are: storage of data by column rather than by row, careful packing of objects into storage including main memory, storing an overlapping collection of column-oriented projections rather than the current fare of tables and indexes, a non-traditional implementation of snapshot isolation for read-only transactions, and the extensive use of bitmap indexes to complement B-tree structures.

1. Introduction

Most major DBMS vendors implement record-oriented storage systems, where the attributes of a record (or tuple) are placed contiguously in storage. With this row store architecture, a single disk write suffices to push all of the fields of a single record out to disk. Hence, high-performance writes are achieved, and we call a DBMS with a row store architecture a write-optimized system. These are especially effective on OLTP-style applications. In contrast, systems oriented toward ad-hoc querying of large amounts of data should be read-optimized. Read-mostly applications include customer relationship management (CRM) systems, electronic library card catalogs, and other ad-hoc inquiry systems. In such environments, a column store architecture, in which the values for each single column (or attribute) are stored contiguously, should be more efficient. This efficiency has been demonstrated in the warehouse marketplace by products like Sybase IQ.
In this seminar we present a new DBMS, C-Store, which combines the strengths of both row stores and column stores. Current relational DBMSs were designed to pad attributes to byte or word boundaries and to store values in their native data format. It was thought that it was too expensive to shift data values onto byte or word boundaries in main memory for processing. However, CPUs are getting faster at a much greater rate than disk bandwidth is increasing. Hence, it makes sense to trade CPU cycles, which are abundant, for disk bandwidth, which is not. This tradeoff appears especially profitable in a read-mostly environment.


2. Features of C-Store



1. A hybrid architecture with a WS component optimized for frequent insert and update and an RS component optimized for query performance.

2. Redundant storage of elements of a table in several overlapping projections in different orders, so that a query can be solved using the most advantageous
projection.

3. Heavily compressed columns using one of several coding schemes.

4. A column-oriented optimizer and executor, with different primitives than in a row-oriented system.

5. High availability and improved performance through K-safety using a sufficient number of overlapping projections.

6. The use of snapshot isolation to avoid 2PC and locking for queries.

Clearly, collections of off-the-shelf “blade” or “grid” computers will be the cheapest hardware architecture for computing and storage intensive applications such as DBMSs. Hence, any new DBMS architecture should assume a grid environment in which there are G nodes (computers), each with private disk and private memory. We propose to horizontally partition data across the disks of the various nodes in a “shared nothing” architecture. Grid computers in the near future may have tens to hundreds of nodes, and any new system should be architected for grids of this size. Of course, the nodes of a grid computer may be physically co-located or divided into clusters of co-located nodes. Since database administrators are hard pressed to optimize a grid environment, it is essential to allocate data structures to grid nodes automatically. In addition, intra-query
parallelism is facilitated by horizontal partitioning of stored data structures, and we follow the lead of Gamma in implementing this construct.



3. Data models

C-Store supports the standard relational logical data model, where a database consists of a collection of named tables, each with a named collection of attributes (columns). As in most relational systems, attributes (or collections of attributes) in C-Store tables can form a unique primary key or be a foreign key that references a primary key in another table. The C-Store query language is assumed to be SQL, with standard SQL semantics. Data in C-Store is not physically stored using this logical data model. Whereas most row stores implement physical tables directly and then add various indexes to speed access, C-Store implements only projections. Specifically, a C-Store projection is anchored on a given logical table, T, and contains one or more attributes from this table. In addition, a projection can contain any number of other attributes from other tables, as long as there is a sequence of n:1 (i.e., foreign key) relationships from the anchor table to the table containing an attribute.
To form a projection, we project the attributes of interest from T, retaining any duplicate rows, and perform the appropriate sequence of value-based foreign-key joins to obtain the attributes from the non-anchor table(s). Hence, a projection has the same number of rows as its anchor table. Of course, much more elaborate projections could be allowed, but we believe this simple scheme will meet our needs while ensuring high performance. We note that we use the term projection slightly differently than is common practice, as we do not store the base table(s) from which the projection is derived.


Table 1: Sample EMP Data



Given below is a possible set of projections for the sample EMP and DEPT data stored in C-Store:
EMP1 (name, age)
EMP2 (dept, age, DEPT.floor)
EMP3 (name, salary)
DEPT1 (dname, floor)
Tuples in a projection are stored column-wise. Hence, if there are K attributes in a projection, there will be K data structures, each storing a single column, each of which is sorted on the same sort key.
Clearly, to answer any SQL query in C-Store, there must be a covering set of projections for every table in the database such that every column in every table is stored in at least one projection. However, C-Store must also be able to reconstruct complete rows of tables from the collection of stored segments. To do this, it will need to join segments from different projections, which we accomplish using storage keys and join indexes.
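To make the column-wise layout concrete, here is a minimal Python sketch (not C-Store code) of a projection stored as one array per column, all sharing a single sort order. The sample data follows Table 1, and the choice of age and salary as the sort keys of EMP1 and EMP3 is an assumption made to match the ordering shown in Table 2 below.

# Minimal illustrative sketch (not C-Store code): a projection stored
# column-wise, with every column kept in the order of a single sort key.

def build_projection(rows, columns, sort_key):
    """Store rows (a list of dicts) as one list per column, ordered on sort_key."""
    ordered = sorted(rows, key=lambda r: r[sort_key])
    return {col: [r[col] for r in ordered] for col in columns}

emp = [
    {"name": "BOB", "age": 25, "salary": "10K"},
    {"name": "BILL", "age": 27, "salary": "50K"},
    {"name": "JILL", "age": 24, "salary": "80K"},
]

# Assumed sort keys: EMP1 (name, age) on age, EMP3 (name, salary) on salary.
# (The salary strings happen to sort correctly as text for this sample.)
emp1 = build_projection(emp, ["name", "age"], sort_key="age")
emp3 = build_projection(emp, ["name", "salary"], sort_key="salary")

print(emp1)  # {'name': ['JILL', 'BOB', 'BILL'], 'age': [24, 25, 27]}
print(emp3)  # {'name': ['BOB', 'BILL', 'JILL'], 'salary': ['10K', '50K', '80K']}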
3.1 Storage Keys
Each segment associates every data value of every column with a storage key, SK. Values from different columns in the same segment with matching storage keys belong to the same logical row. We refer to a row of a segment using the term record or tuple. Storage keys are numbered 1, 2, 3, … in RS and are not physically stored, but are inferred from a tuple’s physical position in the column (see Section 5 below). Storage keys are physically present in WS and are represented as integers, larger than the largest integer storage key for any segment in RS.
3.2 Join Indices
To reconstruct all of the records in a table T from its various projections, C-Store uses join indexes. If T1 and T2 are two projections that cover a table T, a join index from the M segments in T1 to the N segments in T2 is logically a collection of M tables, one per segment, S, of T1, consisting of rows of the form: (s: SID in T2, k: Storage Key in Segment s).



Projection EMP1
NAME   AGE
JILL   24
BOB    25
BILL   27

Projection EMP3
NAME   SALARY
BOB    10K
BILL   50K
JILL   80K

Join Index (EMP3 to EMP1)
SID   KEY
1     2
1     3
1     1

Table 2: A join index from EMP3 to EMP1

In practice, we expect to store each column in several projections, thereby allowing us to maintain relatively few join indices. This is because join indexes are very expensive to store and maintain in the presence of updates, since each modification to a projection requires every join index that points into or out of it to be updated as well. The segments of the projections in a database and their connecting join indexes must be allocated to the various nodes in a C-Store system. The C-Store administrator can optionally specify that the tables in a database must be K-safe. In this case, the loss of K nodes in the grid will still allow all tables in a database to be reconstructed (i.e., despite the K failed sites, there must exist a covering set of projections and a set of join indices that map to some common sort order). When a failure occurs, C-Store simply continues with K-1 safety until the failure is repaired and the node is brought back up to speed.
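As a worked example of the machinery described above, the following Python sketch (not C-Store code) follows the Table 2 join index from EMP3 to EMP1 to reassemble (name, age, salary) rows; the single-segment layout and 1-based storage keys are assumptions made for the illustration.

# Illustrative sketch (not C-Store code): reconstructing rows by following
# the join index of Table 2 from projection EMP3 to projection EMP1.

# Projections stored column-wise (one segment each; positions are the
# implicit 1-based storage keys of RS).
emp1 = {"name": ["JILL", "BOB", "BILL"], "age": [24, 25, 27]}
emp3 = {"name": ["BOB", "BILL", "JILL"], "salary": ["10K", "50K", "80K"]}

# Join index from EMP3 to EMP1: for the i-th record of EMP3, the (SID, key)
# of the matching record in EMP1 (Table 2 above).
join_index = [(1, 2), (1, 3), (1, 1)]
emp1_segments = {1: emp1}  # a single EMP1 segment in this toy example

for i, (sid, key) in enumerate(join_index):
    target = emp1_segments[sid]
    row = (emp3["name"][i], target["age"][key - 1], emp3["salary"][i])
    print(row)
# ('BOB', 25, '10K')
# ('BILL', 27, '50K')
# ('JILL', 24, '80K')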
4. Internal architecture

C-Store approaches the tension between read and write optimization from a fresh perspective. Specifically, we combine in a single piece of system software both a read-optimized column store and an update- or insert-oriented writeable store, connected by a tuple mover, as noted in Figure 1. At the top level, there is a small Writeable Store (WS) component, which is architected to support high-performance inserts and updates. There is also a much larger component called the Read-optimized Store (RS), which is capable of supporting very large amounts of information. RS, as the name implies, is optimized for read and supports only a very restricted form of insert, namely the batch movement of records from WS to RS, a task that is performed by the tuple mover of Figure 1.


[Figure: Writeable Store (WS) -> Tuple Mover -> Readable Store (RS)]

Fig 1: Architecture of C-Store

Of course, queries must access data in both storage systems. Inserts are sent to WS, while deletes must be marked in RS for later purging by the tuple mover. Updates are implemented as an insert and a delete. In order to support a high-speed tuple mover, we use a variant of the LSM-tree concept, which supports a merge-out process that moves tuples from WS to RS in bulk by an efficient method of merging ordered WS data objects with large RS blocks, resulting in a new copy of RS that is installed when the operation completes.



5. Readable Store (RS)

RS is a read-optimized column store. Hence any segment of any projection is broken into its constituent columns, and each column is stored in order of the sort key for the projection. The storage key for each tuple in RS is the ordinal number of the record in the segment. This storage key is not stored but calculated as needed.
Columns in RS are compressed using one of four encodings, chosen according to whether the column is ordered by its own values (self-order) or by the values of another column in the projection (foreign-order), and according to how many distinct values it contains:
Type 1: Self-order, few distinct values: A column encoded using Type 1 encoding is represented by a sequence of triples, (v, f, n), such that v is a value stored in the column, f is the position in the column where v first appears, and n is the number of times v appears in the column.
Type 2: Foreign-order, few distinct values: A column encoded using Type 2 encoding is represented by a sequence of tuples, (v, b), such that v is a value stored in the column and b is a bitmap indicating the positions in which the value is stored.
Type 3: Self-order, many distinct values: The idea for this scheme is to represent every value in the column as a delta from the previous value in the column.
Type 4: Foreign-order, many distinct values: If there are a large number of distinct values, then it probably makes sense to leave the values unencoded. We are still investigating possible compression techniques for this situation.
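The following Python sketch (not C-Store code) shows simplified Type 1 and Type 2 encoders as described above; the exact on-disk layout in C-Store differs, so treat this only as an illustration of the (v, f, n) and (v, bitmap) representations.

# Illustrative sketch (not C-Store code) of the Type 1 and Type 2 encodings.

def encode_type1(column):
    """Self-order, few distinct values: (value, first position, run length)."""
    triples, i = [], 0
    while i < len(column):
        v, f = column[i], i + 1            # positions are 1-based here
        n = 1
        while i + n < len(column) and column[i + n] == v:
            n += 1
        triples.append((v, f, n))
        i += n
    return triples

def encode_type2(column):
    """Foreign-order, few distinct values: (value, bitmap of positions)."""
    pairs = {}
    for pos, v in enumerate(column):
        bitmap = pairs.setdefault(v, [0] * len(column))
        bitmap[pos] = 1
    return list(pairs.items())

print(encode_type1([10, 10, 10, 20, 20, 30]))
# [(10, 1, 3), (20, 4, 2), (30, 6, 1)]
print(encode_type2([0, 0, 1, 1, 2, 1]))
# [(0, [1, 1, 0, 0, 0, 0]), (1, [0, 0, 1, 1, 0, 1]), (2, [0, 0, 0, 0, 1, 0])]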











6. Writeable Store (WS)

In order to avoid writing two optimizers, WS is also a column store and implements the identical physical DBMS design as RS. Hence, the same projections and join indexes are present in WS. However, the storage representation is drastically different because WS must be efficiently updatable transactionally. The storage key, SK, for each record is explicitly stored in each WS segment. A unique SK is given to each insert of a logical tuple in a table T. The execution engine must ensure that this SK is recorded in each projection that stores data for the logical tuple. This SK is an integer, larger than the number of records in the largest segment in the database.
For simplicity and scalability, WS is horizontally partitioned in the same way as RS. Hence, there is a 1:1 mapping between RS segments and WS segments. A (sid, storage_key) pair identifies a record in either of these containers. Since we assume that WS is trivial in size relative to RS, we make no effort to compress data values; instead we represent all data directly. Therefore, each projection uses B-tree indexing to maintain a logical sort-key order. Every column in a WS projection is represented as a collection of pairs, (v, sk), such that v is a value in the column and sk is its corresponding storage key. Each pair is represented in a conventional B-tree on the second field. The sort key(s) of each projection is additionally represented by pairs (s, sk) such that s is a sort key value and sk is the storage key describing where s first appears. Again, this structure is represented as a conventional B-tree on the sort key field(s). To perform searches using the sort key, one uses the latter B-tree to find the storage keys of interest, and then uses the former collection of B-trees to find the other fields in the record.
Join indexes can now be fully described. Every projection is represented as a collection of pairs of segments, one in WS and one in RS. For each record in the “sender,” we must store the sid and storage key of a corresponding record in the “receiver.” It will be useful to horizontally partition the join index in the same way as the “sending” projection and then to co-locate join index partitions with the sending segment they are associated with. In effect, each (sid, storage key) pair is a pointer to a record which can be in either the RS or WS.
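To illustrate the two-level lookup described above, here is a hedged Python sketch (not C-Store code) in which plain dictionaries stand in for the B-trees that WS builds over BerkeleyDB; the column values and storage keys are invented for the example.

# Illustrative sketch (not C-Store code): the WS representation of an
# EMP3-like projection sorted on salary.  Plain dicts stand in for the
# conventional B-trees; storage keys 101-103 are made-up values.

# Each column is a collection of (v, sk) pairs, indexed here by sk.
ws_columns = {
    "name":   {101: "BOB", 102: "BILL", 103: "JILL"},
    "salary": {101: "10K", 102: "50K", 103: "80K"},
}

# The sort key is additionally represented as (s, sk) pairs: the storage key
# at which each sort-key value first appears.
ws_sortkey_index = {"10K": 101, "50K": 102, "80K": 103}

def search_by_sort_key(s):
    """Use the sort-key index to find the SK, then the per-column structures
    to assemble the rest of the record."""
    sk = ws_sortkey_index[s]
    return {col: values[sk] for col, values in ws_columns.items()}

print(search_by_sort_key("50K"))  # {'name': 'BILL', 'salary': '50K'}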

7. Storage Management

The storage management issue is the allocation of segments to nodes in a grid system; C-Store will perform this operation automatically using a storage allocator. It seems clear that all columns in a single segment of a projection should be co-located. As noted above, join indexes should be co-located with their “sender” segments. Also, each WS segment will be co-located with the RS segments that contain the same key range. Using these constraints, we are working on an allocator. This system will perform initial allocation, as well as reallocation when load becomes unbalanced. The details of this software are beyond the scope of this paper. Since everything is a column, storage is simply the persistence of a collection of columns. Our analysis shows that a raw device offers little benefit relative to today’s file systems. Hence, big columns (megabytes) are stored in individual files in the underlying operating system.



















8. Updates and Transactions

An insert is represented as a collection of new objects in WS, one per column per projection, plus the sort key data structure. All inserts corresponding to a single logical record have the same storage key. The storage key is allocated at the site where the update is received. To prevent C-Store nodes from needing to synchronize with each other to assign storage keys, each node maintains a locally unique counter to which it appends its local site id to generate a globally unique storage key. Keys in the WS will be consistent with RS storage keys because we set the initial value of this counter to be one larger than the largest key in RS.
We are building WS on top of BerkeleyDB; we use the B-tree structures in that package to support our data structures. Hence, every insert to a projection results in a collection of physical inserts on different disk pages, one per column per projection. To avoid poor performance, we plan to utilize a very large main memory buffer pool, made affordable by the plummeting cost per byte of primary storage. As such, we expect “hot” WS data structures to be largely main memory resident.
C-Store’s processing of deletes is influenced by our locking strategy. Specifically, C-Store expects large numbers of ad-hoc queries with large read sets interspersed with a smaller number of OLTP transactions covering few records. If C-Store used conventional locking, then substantial lock contention would likely be observed, leading to very poor performance. Instead, in C-Store, we isolate read-only transactions using snapshot isolation. Snapshot isolation works by allowing read-only transactions to access the database as of some time in the recent past, before which we can guarantee that there are no uncommitted transactions. For this reason, when using snapshot isolation, we do not need to set any locks. We call the most recent time in the past at which snapshot isolation can run the high water mark (HWM) and introduce a low-overhead mechanism for keeping track of its value in our multi-site environment. If we let read-only transactions set their effective time arbitrarily, then we would have to support general time travel, an onerously expensive task. Hence, there is also a low water mark (LWM) which is the earliest effective time at which a read-only transaction can run. Update transactions continue to set read and write locks and obey strict two-phase locking.
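The storage key allocation scheme described above can be sketched as follows. This is a hedged Python illustration, not C-Store code: the text only states that a locally unique counter is combined with the site id and seeded above the largest RS key, so the particular bit packing used here is an assumption.

# Illustrative sketch (not C-Store code) of per-site storage key generation.
# The bit layout (counter in the high bits, site id in the low bits) is one
# possible reading of "appends its local site id".

SITE_ID_BITS = 8  # hypothetical width reserved for the site id

class StorageKeyAllocator:
    def __init__(self, site_id, largest_rs_key):
        self.site_id = site_id
        self.counter = largest_rs_key + 1  # start one past the largest RS key

    def next_key(self):
        """Return a globally unique SK larger than any RS storage key."""
        sk = (self.counter << SITE_ID_BITS) | self.site_id
        self.counter += 1
        return sk

alloc = StorageKeyAllocator(site_id=3, largest_rs_key=5_000_000)
print(alloc.next_key(), alloc.next_key())  # two distinct, growing storage keys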
8.1 Providing Snapshot isolation

The key problem in snapshot isolation is determining which of the records in WS and RS should be visible to a read-only transaction running at effective time ET. To provide snapshot isolation, we cannot perform updates in place. Instead, an update is turned into an insert and a delete. Hence, a record is visible if it was inserted before ET and deleted after ET. To make this determination without requiring a large space budget, we use coarse-granularity “epochs,” described with Fig. 2, as the unit for timestamps. Hence, we maintain an insertion vector (IV) for each projection segment in WS, which contains for each record the epoch in which the record was inserted. We program the tuple mover (described in Section 10) to ensure that no records in RS were inserted after the LWM. Hence, RS need not maintain an insertion vector. In addition, we maintain a deleted record vector (DRV) for each projection, which has one entry per projection record, containing a 0 if the tuple has not been deleted; otherwise, the entry contains the epoch in which the tuple was deleted. Since the DRV is very sparse (mostly zeros), it can be compactly coded using the Type 2 algorithm described earlier. We store the DRV in the WS, since it must be updatable. The runtime system can now consult IV and DRV to make the visibility calculation for each query on a record-by-record basis.
To maintain the HWM, we designate one site the timestamp authority (TA) with the responsibility of allocating timestamps to other sites. The idea is to divide time into a number of epochs; we define the epoch number to be the number of epochs that have elapsed since the beginning of time. We anticipate epochs being relatively long, e.g., many seconds each, but the exact duration may vary from deployment to deployment. We define the initial HWM to be epoch 0 and start the current epoch at 1. Periodically, the TA decides to move the system to the next epoch; it sends an end-of-epoch message to each site, each of which increments the current epoch from e to e+1, thus causing new transactions that arrive to be run with a timestamp of e+1. Each site waits for all the transactions that began in epoch e (or an earlier epoch) to complete and then sends an epoch-complete message to the TA. Once the TA has received epoch-complete messages from all sites for epoch e, it sets the HWM to be e, and sends this value to each site. Figure 2 illustrates this process. After the TA has broadcast the new HWM with value e, read-only transactions can begin reading data from epoch e or earlier and be assured that this data has been committed. To allow users to refer to a particular real-world time when their query should start, we maintain a table mapping epoch numbers to times, and start the query as of the epoch nearest to the user-specified time.
To keep epoch numbers from growing without bound and consuming extra space, we plan to “reclaim” epochs that are no longer needed. We will do this by “wrapping” timestamps, allowing us to reuse old epoch numbers as in other protocols, e.g., TCP. In most warehouse applications, records are kept for a specific amount of time, say 2 years. Hence, we merely keep track of the oldest epoch in any DRV, and ensure that wrapping epochs through zero does not overrun it.
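A minimal Python sketch (not C-Store code) of the per-record visibility test driven by the IV and DRV is shown below; the epoch values and the exact boundary conditions (at-or-before versus strictly-before ET) are assumptions made for the example.

# Illustrative sketch (not C-Store code) of snapshot-isolation visibility
# using the insertion vector (IV) and deleted record vector (DRV).

LWM, HWM = 3, 7   # low / high water marks (epochs), assumed values

def visible(insert_epoch, delete_epoch, query_epoch):
    """A record is visible at effective epoch query_epoch if it was inserted
    at or before that epoch and not deleted at or before it (0 = not deleted)."""
    assert LWM <= query_epoch <= HWM, "read-only queries run between LWM and HWM"
    inserted = insert_epoch <= query_epoch
    deleted = delete_epoch != 0 and delete_epoch <= query_epoch
    return inserted and not deleted

iv  = [1, 2, 5, 6]      # epoch in which each WS record was inserted
drv = [0, 4, 0, 0]      # epoch in which each record was deleted (0 = live)

query_epoch = 5
print([visible(i, d, query_epoch) for i, d in zip(iv, drv)])
# [True, False, True, False]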



Fig 2: Snapshot isolation
Illustration showing how the HWM selection algorithm works. Gray arrows indicate messages from the TA to the sites or vice versa. We can begin reading tuples with timestamp e when all transactions from epoch e have committed. Note that although T4 is still executing when the HWM is incremented, read-only transactions will not see its updates because it is running in epoch e+1.



9. Recovery


As mentioned above, a crashed site recovers by running a query (copying state) from other projections. Recall that C-Store maintains K-safety; i.e. sufficient
projections and join indexes are maintained, so that K sites can fail within t, the time to recover, and the system will be able to maintain transactional consistency. There are three cases to consider. If the failed site suffered no data
loss, then we can bring it up to date by executing updates that will be queued for it elsewhere in the network. Since we anticipate read-mostly environments, this roll forward operation should not be onerous. Hence, recovery from the most common type of crash is straightforward. The second case to consider is a catastrophic failure which destroys both the RS and WS. In this case, we have no
choice but to reconstruct both segments from other projections and join indexes in the system. The only needed functionality is the ability to retrieve auxiliary data structures (IV, DRV) from remote sites. After restoration, the queued updates must be run as above. The third case occurs if WS is damaged but RS is intact. Since RS is written only by the tuple mover, we expect it will typically escape damage. The original paper [1] discusses this common case in detail.

10. Tuple Mover

The job of the tuple mover is to move blocks of tuples in a WS segment to the corresponding RS segment, updating any join indexes in the process. It operates as a background task looking for worthy segment pairs. When it finds one, it performs a merge-out process, MOP, on this (RS, WS) segment pair.
MOP finds all records in the chosen WS segment with an insertion time at or before the LWM, and then divides them into two groups:
1. Records deleted at or before the LWM. These are discarded, because the user cannot run queries as of a time when they existed.
2. Records that were not deleted, or were deleted after the LWM. These are moved to RS.
MOP creates a new RS segment that we name RS'. It then reads in blocks from columns of the RS segment, deletes any RS items with a value in the DRV less than or equal to the LWM, and merges in column values from WS. The merged data is written out to the new RS' segment, which grows as the merge progresses. The most recent insertion time of a record in RS' becomes the segment’s new tlastmove and is always less than or equal to the LWM. This old-master/new-master approach will be more efficient than an update-in-place strategy, since essentially all data objects will move. Also, notice that records receive new storage keys in RS', thereby requiring join index maintenance. Since RS items may also be deleted, maintenance of the DRV is also mandatory. Once RS' contains all the WS data and join indexes are modified on RS', the system cuts over from RS to RS'. The disk space used by the old RS can now be freed.
Periodically the timestamp authority sends out to each site a new LWM epoch number. Hence, LWM “chases” HWM, and the delta between them is chosen to mediate between the needs of users who want historical access and the WS space constraints.
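The merge-out process can be sketched as follows. This is a hedged Python illustration, not C-Store code: the record layout, and the use of a simple sort to stand in for the ordered merge and join-index maintenance, are assumptions made for the example.

# Illustrative sketch (not C-Store code) of MOP: WS records at or below the
# LWM are either discarded (deleted at or before the LWM) or merged with the
# surviving RS data into a new segment RS'.

LWM = 4

def merge_out(rs_records, ws_records, rs_drv):
    """rs_records: list of values; rs_drv[i]: delete epoch (0 = live);
    ws_records: list of (value, insert_epoch, delete_epoch)."""
    # Keep RS items not deleted at or before the LWM.
    new_rs = [v for v, d in zip(rs_records, rs_drv) if d == 0 or d > LWM]
    # Move qualifying WS records (inserted at or before the LWM, not purged).
    for value, ins, dele in ws_records:
        if ins <= LWM and (dele == 0 or dele > LWM):
            new_rs.append(value)
    # A real MOP merges in sort-key order and rebuilds join indexes; a plain
    # sort stands in for that step here.
    return sorted(new_rs)

rs  = [10, 20, 30]
drv = [0, 2, 0]                              # value 20 deleted in epoch 2 (<= LWM)
ws  = [(15, 3, 0), (25, 3, 4), (40, 6, 0)]   # 25 deleted by the LWM, 40 too new
print(merge_out(rs, ws, drv))                # [10, 15, 30]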







11. C-Store Query Optimization

The query optimizer will accept a SQL query and construct a query plan of execution nodes. In this section, we describe the nodes that can appear in a plan and then the architecture of the optimizer itself. C-Store operators have the capability to operate on both compressed and uncompressed input. The ability to process compressed data is key to the performance benefits of C-Store. The major optimizer decision is which set of projections to use for a given query. Obviously, it will be time consuming to construct a plan for each possibility and then select the best one, so our focus will be on pruning this search space. In addition, the optimizer must decide where in the plan to mask a projection according to a bitstring. For example, in some cases it is desirable to push the Mask early in the plan (e.g., to avoid producing a bitstring while performing selection over Type 2 compressed data), while in other cases it is best to delay masking until a point where it is possible to feed a bitstring to the next operator in the plan (e.g., COUNT) that can produce results solely by processing the bitstring.
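As an illustration of why delayed masking can pay off, the following Python sketch (not C-Store code) answers a COUNT with an equality predicate directly from a Type 2 bitmap, without reconstructing any tuples; the column contents are invented for the example.

# Illustrative sketch (not C-Store code): a COUNT over a Type 2 compressed
# column is answered from the bitmaps alone, with no tuple reconstruction.

# Type 2 representation of an l_returnflag-like column: (value, bitmap).
type2_column = [
    ("R", [1, 0, 0, 1, 1, 0]),
    ("A", [0, 1, 0, 0, 0, 1]),
    ("N", [0, 0, 1, 0, 0, 0]),
]

def count_where_equals(column, value):
    """COUNT(*) WHERE col = value, computed directly on the bitstring."""
    for v, bitmap in column:
        if v == value:
            return sum(bitmap)
    return 0

print(count_where_equals(type2_column, "R"))  # 3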
















12. Performance Comparison

Consider the example schema given below:

CREATE TABLE LINEITEM (
L_ORDERKEY INTEGER NOT NULL,
L_PARTKEY INTEGER NOT NULL,
L_SUPPKEY INTEGER NOT NULL,
L_LINENUMBER INTEGER NOT NULL,
L_QUANTITY INTEGER NOT NULL,
L_EXTENDEDPRICE INTEGER NOT NULL,
L_RETURNFLAG CHAR(1) NOT NULL,
L_SHIPDATE INTEGER NOT NULL);

CREATE TABLE ORDERS (
O_ORDERKEY INTEGER NOT NULL,
O_CUSTKEY INTEGER NOT NULL,
O_ORDERDATE INTEGER NOT NULL);

CREATE TABLE CUSTOMER (
C_CUSTKEY INTEGER NOT NULL,
C_NATIONKEY INTEGER NOT NULL);

The following seven queries are used to briefly evaluate the performance of C-Store.
Q1. Determine the total number of lineitems shipped for
each day after day D.
SELECT l_shipdate, COUNT (*)
FROM lineitem
WHERE l_shipdate > D
GROUP BY l_shipdate

Q2. Determine the total number of lineitems shipped for
each supplier on day D.
SELECT l_suppkey, COUNT (*)
FROM lineitem
WHERE l_shipdate = D
GROUP BY l_suppkey

Q3. Determine the total number of lineitems shipped for
each supplier after day D.
SELECT l_suppkey, COUNT (*)
FROM lineitem
WHERE l_shipdate > D
GROUP BY l_suppkey

Q4. For every day after D, determine the latest shipdate
of all items ordered on that day.
SELECT o_orderdate, MAX (l_shipdate)
FROM lineitem, orders
WHERE l_orderkey = o_orderkey AND
o_orderdate > D
GROUP BY o_orderdate

Q5. For each supplier, determine the latest shipdate of an
item from an order that was made on some date, D.
SELECT l_suppkey, MAX (l_shipdate)
FROM lineitem, orders
WHERE l_orderkey = o_orderkey AND
o_orderdate = D
GROUP BY l_suppkey
Q6. For each supplier, determine the latest shipdate of an
item from an order made after some date, D.
SELECT l_suppkey, MAX (l_shipdate)
FROM lineitem, orders
WHERE l_orderkey = o_orderkey AND
o_orderdate > D
GROUP BY l_suppkey
Q7. Return a list of identifiers for all nations represented
by customers along with their total lost revenue for
the parts they have returned. This is a simplified
version of query 10 (Q10) of TPC-H.
SELECT c_nationkey, sum(l_extendedprice)
FROM lineitem, orders, customer
WHERE l_orderkey=o_orderkey AND
o_custkey=c_custkey AND
l_returnflag='R'
GROUP BY c_nationkey

The tables given below show the results.


Table 3: Space constraint

Table 4: Execution time


C-Store is much faster than either commercial product. The main reasons are:

• Column representation – avoids reads of unused
attributes (same as competing column store).
• Storing overlapping projections, rather than the whole
table – allows storage of multiple orderings of a column
as appropriate.
• Better compression of data – allows more orderings in
the same space.
• Query operators operate on the compressed representation – mitigates the storage barrier problem of current processors.
In summary, for this seven query benchmark, C-Store is on average 164 times faster than the commercial row-store and 21 times faster than the commercial column-store in the space-constrained case. For the case of unconstrained space, C-Store is 6.4 times faster than the commercial row-store, but the row-store takes 6 times the space. C-Store is on average 16.5 times faster than the commercial column-store, but the column-store requires 1.83 times the space.

13. Conclusions

This paper has presented the design of C-Store, a radical departure from the architecture of current DBMSs. Unlike current commercial systems, it is aimed at the “read-mostly” DBMS market. The innovative contributions embodied in C-Store include:
• A column store representation, with an associated
query execution engine.
• A hybrid architecture that allows transactions on a column store.
• A focus on economizing the storage representation on disk, by coding data values and dense-packing the data.
• A data model consisting of overlapping projections of tables, unlike the standard fare of tables, secondary indexes, and projections.
• A design optimized for a shared nothing machine
environment.
• Distributed transactions without a redo log or two-phase commit.
• Efficient snapshot isolation.


Bibliography


[1] Mike Stonebraker, Daniel J. Abadi, Adam Batkin, Xuedong Chen, Mitch Cherniack, Miguel Ferreira, Edmond Lau, Amerson Lin, Sam Madden, Elizabeth O’Neil, Pat O’Neil, Alex Rasin, Nga Tran, Stan Zdonik, “C-Store: A Column-oriented DBMS”, Proceedings of the 31st VLDB Conference, Trondheim, Norway, 2005.









