
Challenges in Embedded Database System Administration

Margo I. Seltzer, Harvard University
Michael A. Olson, Sleepycat Software

Database configuration and maintenance have historically been complex tasks, often requiring expert knowledge of database design and application behavior. In an embedded environment, it is not possible to require such expertise or to perform ongoing database maintenance. This paper discusses the database administration challenges posed by embedded systems and describes how Sleepycat Software's Berkeley DB architecture addresses these challenges.

1 Introduction

Embedded systems provide a combination of opportunities and challenges in application and system configuration and management. As an embedded system is most often dedicated to a single application or small set of cooperating tasks, the operating conditions of the system are typically better understood than those of general-purpose computing environments. Similarly, as embedded systems are dedicated to a small set of tasks, one expects that the software to manage them would be small and simple. On the other hand, once an embedded system is deployed, it must continue to function without interruption and without administrator intervention.

Database administration has two components: initial configuration and ongoing maintenance. Initial configuration includes database design, manifestation, and tuning. The instantiation of the design includes decomposing the data into tables, relations, or objects and designating proper indices and their implementations (e.g., B-trees, hash tables, etc.). Tuning the design requires selecting a location for the log and data files, selecting appropriate database page sizes, specifying the size of in-memory caches, and determining the practical limits of multi-threading and concurrency. As we expect that an embedded system is created by experienced and knowledgeable engineers, requiring expertise during the initial system configuration process is acceptable, and we focus our efforts on the ongoing maintenance of the system. In this way, our emphasis differs from other projects that focus on automating database design, such as Microsoft's AutoAdmin project [3] and the "no-knobs" administration that is identified as an area of important future research by the Asilomar authors [1].

In this paper, we focus on what the authors of the Asilomar report call "gizmo" databases [1], databases that reside in devices such as smart cards, toasters, or telephones. The key characteristics of these databases are that their functionality must be completely transparent to users, no explicit database operations or database maintenance is ever performed, the database may crash at any time and must recover instantly, the device may undergo a hard reset at any time (requiring that the database return to its initial state), and the semantic integrity of the database must be maintained at all times. In Section 2, we provide more detail on the sorts of tasks typically performed by database administrators (DBAs) that must be automated in an embedded system.

The rest of this paper is structured as follows. In Section 2, we outline the requirements for embedded database support. In Section 3, we discuss how Berkeley DB is conducive to the hands-off management required in embedded systems. In Section 4, we discuss novel features that enhance Berkeley DB's suitability for embedded applications. In Section 5, we discuss issues of footprint size. In Section 6, we discuss related work, and we conclude in Section 7.

2 Embedded Database Requirements

Historically, much of the commercial database industry has been driven by the requirements of high-performance online transaction processing (OLTP), complex query processing, and the industry-standard benchmarks that have emerged (e.g., TPC-C [9], TPC-D [10]) to allow for system comparisons. As embedded systems typically perform fairly simple queries and only rarely require high, sustained transaction rates, such metrics are not nearly as relevant for embedded database systems as are ease of maintenance, robustness, and a small memory and disk footprint. Because of continuously falling hardware prices, robustness and ease of maintenance are the key issues of these three requirements. Fortunately, robustness and ease of maintenance are side effects of simplicity and good design. These, in turn, lead to a small size, contributing to the third requirement of an embedded database system.

2.1 The User Perspective

Users must be able to trust the data stored in their devices and must not need to manually perform any database or system administration in order for their gizmo to perform correctly.

In the embedded database arena, it is the ongoing maintenance tasks that must be automated, not the initial system configuration. There are five tasks traditionally performed by DBAs that must be performed automatically in embedded database systems: log archival and reclamation, backup, data compaction/reorganization, automatic and rapid recovery, and reinitialization from an initial state.

Log archival and backup are tightly coupled. Database backups are part of any recoverable database installation, and log archival is analogous to incremental backup. It is not clear what the implications of backup and archival are for an embedded system. Consumers do not back up their telephones or refrigerators, yet they do (or should) back up their personal computers or personal digital assistants. For the remainder of this paper, we assume that some form of backup is required for gizmo databases (consider manually re-programming a telephone that includes functionality to compare and select long-distance services based on the caller's calling pattern and connection times). Furthermore, these backups must be completely transparent and must not interrupt service, as users should not be aware that their gizmos are being backed up, will not remember to initiate the backups explicitly, and are unwilling to wait for service.

Data compaction or reorganization has traditionally required periodic dumping and restoration of database tables and the re-creation of indices in order to bound lookup times and minimize database growth. In an embedded system, compaction and reorganization must happen automatically.

Recovery issues are similar in embedded and traditional environments, with two significant exceptions. While a few seconds or even a minute of recovery is acceptable for a large server installation, consumers are unwilling to wait for an appliance to reboot. As with archival, recovery must be nearly instantaneous in an embedded product. Additionally, it is often the case that a system will be completely reinitialized, rather than performing any type of recovery, especially in systems that do not incorporate non-volatile memory. In this case, the embedded database must be restored to its initial state and must not leak any resources. This is not typically an issue for large, traditional database servers.

2.2 The Developer Perspective

In addition to the maintenance-free operation required of embedded database systems, there are a number of requirements based on the constrained resources typically available to the "gizmos" using gizmo databases. These requirements are a small disk and memory footprint; a short code path; a programmatic interface (for tight application coupling and to avoid the time and size overhead of interfaces such as SQL and ODBC); support for entirely memory-resident operation (e.g., systems where file systems are unavailable); application configurability and flexibility; and support for multi-threading.

A small footprint and short code path are self-explanatory; however, what is not as obvious is that the programmatic interface requirement is their logical result. Traditional interfaces, such as ODBC and SQL, add a significant size overhead and frequently add multiple context/thread switches per operation, not to mention several IPC calls. An embedded product is unlikely to require the complex and powerful query processing that SQL enables. Instead, in the embedded space, the ability for an application to obtain its specific data quickly is more important than a general query interface.

As some systems do not provide storage other than memory, it is essential that an embedded database work seamlessly in memory-only environments. Similarly, many embedded operating systems have a single-address-space architecture, so a fast, multi-threaded database architecture is essential for applications requiring concurrency.

In general, embedded applications run on gizmos whose native operating system support varies tremendously. For example, the embedded OS may or may not support user-level processes or multi-threading. Even if it does, a particular embedded application may or may not need it; e.g., not all applications need more than one thread of control. An embedded database must provide mechanisms to developers without deciding policy. For example, the threading model in an application is a matter of policy, and depends not on the database software, but on the hardware, operating system and library interfaces, and the application's requirements. Therefore, the data manager must provide for the use of multi-threading, but not require it, and must not itself determine what a thread of control looks like.

3 Berkeley DB: A Database for Embedded Systems

The current Berkeley DB package, as distributed by Sleepycat Software, is a descendant of the hash and B-tree access methods that were distributed with the 4BSD releases from the University of California, Berkeley. The original software (usually referred to as DB 1.85) was originally intended as a public domain replacement for the dbm and ndbm libraries that were proprietary to AT&T. Instead, it rapidly became widely used as a fast, efficient, and easy-to-use data store. It was incorporated into a number of Open Source packages including Perl, Sendmail, Kerberos, and the GNU standard C library. Versions 2.0 and later were distributed by Sleepycat Software (although they remain Open Source software) and added functionality for concurrency, logging, transactions, and recovery.

Berkeley DB is the result of implementing database functionality using the UNIX tool-based philosophy. Each piece of base functionality is implemented as an independent module, which means that the subsystems can be used outside the context of Berkeley DB. For example, the locking subsystem can be used to implement general-purpose locking for a non-DB application, and the shared memory buffer pool can be used for any application performing page-based file caching in main memory. This modular design allows application designers to select only the functionality necessary for their application, minimizing memory footprint and maximizing performance. This directly addresses the small footprint and short code-path criteria mentioned in the previous section.
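To illustrate the kind of reuse this modularity enables, the sketch below shows a minimal LRU page cache of the sort a standalone buffer-pool module provides. It is an illustration of the idea only, not the Berkeley DB mpool API; the class and callback names are our own.

```python
from collections import OrderedDict

class PageCache:
    """Minimal LRU cache of file pages (illustrative only)."""
    def __init__(self, capacity, fetch):
        self.capacity = capacity    # maximum number of cached pages
        self.fetch = fetch          # callback to read a page on a miss
        self.pages = OrderedDict()  # page number -> page bytes, LRU order
        self.misses = 0

    def get(self, pageno):
        if pageno in self.pages:
            self.pages.move_to_end(pageno)  # mark most recently used
            return self.pages[pageno]
        self.misses += 1
        page = self.fetch(pageno)
        self.pages[pageno] = page
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
        return page
```

An application needing only page-based file caching could link a module like this without pulling in the rest of a data manager.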

As Berkeley DB grew out of a replacement for dbm, its primary implementation language has always been C and its interface has been programmatic. The C interface is the native interface, unlike many database systems where the programmatic API is a layer on top of an already-costly query interface (e.g., embedded SQL). Berkeley DB's heritage is also apparent in its data model: it has none. The database stores unstructured key/data pairs, specified as variable-length byte strings. This leaves schema design and representation issues the responsibility of the application, which is ideal for an embedded environment. Applications retain full control over specification of their data types, representation, index values, and index relationships. In other words, Berkeley DB provides a robust, high-performance, keyed storage system, not a particular database management system. We have designed for simplicity and performance, trading off the complex, general-purpose support found in historic data management systems. While that support is powerful and useful in many applications, the overhead imposed (regardless of its usefulness to any single application) is unacceptable in embedded applications.
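Because the store sees only byte strings, serialization belongs entirely to the application. The sketch below shows that division of labor with a plain dictionary standing in for the keyed store; the record layout and helper names are hypothetical, not the Berkeley DB API.

```python
import struct

# The application, not the database, decides how a record maps to a
# key byte string and a data byte string.
def encode_record(phone_number, name, minutes_used):
    key = phone_number.encode("ascii")
    # 2-byte big-endian counter followed by a UTF-8 name
    data = struct.pack(">H", minutes_used) + name.encode("utf-8")
    return key, data

def decode_data(data):
    minutes_used = struct.unpack(">H", data[:2])[0]
    return data[2:].decode("utf-8"), minutes_used

store = {}  # stand-in for a keyed byte-string store
k, v = encode_record("5551212", "Olson", 42)
store[k] = v
name, minutes = decode_data(store[b"5551212"])
```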

Another element of Berkeley DB's programmatic interface is its customizability. Applications can specify Btree comparison and prefix compression functions, hash functions, error routines, and recovery models. This means that an embedded application is able to tailor the underlying database to best suit its data demands and programming model. Similarly, the utilities traditionally bundled with a database manager (e.g., recovery, dump/restore, archive) are implemented as tiny wrapper programs around library routines. This means that it is not necessary to run separate applications for the utilities. Instead, independent threads can act as utility daemons, or regular query threads can perform utility functions. Many of the current products built on Berkeley DB are bundled as a single large server with independent threads that perform functions such as checkpoint, deadlock detection, and performance monitoring.
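As an example of why a custom comparison function matters, an application storing 4-byte little-endian integer keys needs a comparator so the tree orders them numerically rather than bytewise. The comparison logic below is the point; how it would be registered with the library is not shown.

```python
import struct
import functools

def int_key_compare(a, b):
    """Compare two 4-byte little-endian keys numerically.
    Bytewise comparison would sort the key for 2 after the key for 257,
    because the first byte of 2 (0x02) exceeds that of 257 (0x01)."""
    x = struct.unpack("<I", a)[0]
    y = struct.unpack("<I", b)[0]
    return (x > y) - (x < y)

# A sorted key list stands in for the Btree's internal ordering.
keys = [struct.pack("<I", n) for n in (257, 2, 65536)]
ordered = sorted(keys, key=functools.cmp_to_key(int_key_compare))
```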

As described earlier, living in an embedded environment requires flexible management of storage. Berkeley DB does not require any preallocation of disk space for log or data files. While typical commercial database systems take complete control of a raw disk device, Berkeley DB cooperates with the native system's file system, and can therefore safely and easily share the file system with other applications. All databases and log files are native files of the host environment, so whatever standard utilities are provided by the environment (e.g., UNIX cp) can be used to manage database files as well.

Berkeley DB provides three different memory models for its management of shared information. Applications can use the IEEE Std 1003.1b-1993 (POSIX) mmap interface to share data, they can use system shared memory, as frequently provided by the shmget family of interfaces, or they can use per-process heap memory (e.g., malloc). Applications that require no permanent storage on systems that do not provide shared memory facilities can still use Berkeley DB by requesting strictly private memory and specifying that all databases be memory-resident. This provides pure-memory operation.
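The models differ chiefly in where the region lives: an anonymous mmap'd region can be shared with forked children, while heap memory is strictly private to one process. A minimal illustration of that distinction using Python's mmap module (our sketch, not Berkeley DB's region code):

```python
import mmap

def make_region(nbytes, private=False):
    """Return a byte-addressable region: per-process heap memory, or an
    anonymous mmap'd region that forked children could share."""
    if private:
        return bytearray(nbytes)  # heap model (malloc)
    return mmap.mmap(-1, nbytes)  # anonymous mapping (mmap model)

shared = make_region(4096)
shared[0:5] = b"hello"
heap = make_region(16, private=True)
heap[0:2] = b"ok"
```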

Finally, Berkeley DB is designed for rapid start-up, e.g., recovery happens automatically as part of system initialization. This means that Berkeley DB works correctly in environments where gizmos are shut down without warning and restarted.

4 Extensions for Embedded Environments

All the features described in the previous section are useful both in conventional systems and in embedded environments. In this section, we discuss a number of features and "automatic knobs" that are specifically geared toward the more constrained environments found in gizmo databases.

4.1 Automatic compression

Following the programmatic interface design philosophy, we support application-specific (or default) compression routines. These can be geared toward the particular data types present in the application's dataset, thus providing better compression than a general-purpose routine. Applications can specify encryption functions as well, and create encrypted databases instead of compressed ones. Alternately, the application might specify a function that performs both compression and encryption.

As applications are also permitted to specify comparison and hash functions, an application can choose to organize its data based either on uncompressed and clear-text data or on compressed and encrypted data. If the application indicates that data should be compared in its processed form (i.e., compressed and encrypted), then the compression and encryption are performed on individual data items and the in-memory representation retains these characteristics. However, if the application indicates that data should be compared in its original form, then entire pages are transformed upon being read into or written out of the main memory buffer cache. These two alternatives provide the flexibility to choose the correct relationship between CPU cycles, memory and disk space, and security.
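A sketch of the item-level "processed form" path: the application supplies one function composing compression with an encryption step, applied to each data item before storage. The XOR transform below is a placeholder standing in for a real cipher, not encryption, and the function names are ours.

```python
import zlib

XOR_KEY = 0x5A  # placeholder only; a real application would use a cipher

def process(item):
    """Compress an item, then apply a toy stand-in for encryption."""
    return bytes(b ^ XOR_KEY for b in zlib.compress(item))

def unprocess(stored):
    """Invert the toy transform, then decompress."""
    return zlib.decompress(bytes(b ^ XOR_KEY for b in stored))

record = b"caller=5551212 minutes=42 " * 8  # repetitive, so it compresses well
stored = process(record)
```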

4.2 In-memory logging & transactions

One of the four key properties of transaction systems is durability. This means that transaction systems are designed for permanent storage (most commonly disk). However, as described above, embedded systems do not necessarily contain any such storage. Nevertheless, transactions can be useful in this environment to preserve the semantic integrity of the underlying storage. Berkeley DB optionally provides logging functionality and transaction support regardless of whether the database and logs are on disk or in memory.

4.3 Remote Logs

While we do not expect users to back up their television sets and toasters, it is certainly reasonable that a set-top box provided by a cable carrier will need to be backed up by the cable carrier. The ability to store logs remotely can provide "information appliance" functionality, and can also be used in conjunction with local logs to enhance reliability. Furthermore, remote logs provide for true catastrophic recovery, e.g., loss or destruction of the gizmo.

4.4 Application References to Database Buffers

In typical database systems, data must be copied from the data manager's buffer cache (or data pages) into the application's memory when it is returned to the application. However, in an embedded environment, the robustness of the total software package is of paramount importance, and maintaining an artificial isolation between the application and the data manager offers little advantage. As a result, it is possible for the data manager to avoid copies by giving applications direct references to data items in the shared memory cache. This is a significant performance optimization that can be allowed when the application and data manager are tightly integrated.

4.5 Recoverable database creation/deletion

In a conventional database management system, the creation of database tables (relations) and indices is a heavyweight operation that is not recoverable. This is not acceptable in a complex embedded environment where instantaneous recovery and robust operation in the face of all types of database operations are essential. Berkeley DB provides transaction-protected file system utilities that allow us to recover both database creation and deletion.

4.6 Adaptive concurrency control

The Berkeley DB package uses page-level locking by default. This trades off fine-grained concurrency control for simplicity during recovery. (Finer-grained concurrency control can be obtained by reducing the page size in the database or the maximum number of items permitted on each page.) However, when multiple threads of control perform page locking in the presence of writing operations, there is the potential for deadlock. As some environments do not need or desire the overhead of logging and transactions, it is important to provide the ability for concurrent access without the potential for deadlock.

Berkeley DB provides an option to perform coarser-grained, deadlock-free locking. Rather than locking on pages, locking is performed at the interface to the database. Multiple readers or a single writer are allowed to be active in the database at any instant in time, with conflicting requests queued automatically. The presence of cursors, through which applications can both read and write data, complicates this design. If a cursor is currently being used for reading, but will later be used to write, the system will be deadlock-prone unless special precautions are taken. To handle this situation, the application must specify any future intention to write when a cursor is created. If there is an intention to write, the cursor is granted an intention-to-write lock, which does not conflict with readers, but does conflict with other intention-to-write and write locks. The end result is that the application is limited to a single potentially writing cursor accessing the database at any point in time.
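The lock compatibility this scheme yields can be written down directly. In the sketch below (our notation, not Berkeley DB source), True means the held mode blocks the requested mode:

```python
READ, IWRITE, WRITE = "read", "iwrite", "write"

# conflicts[held][requested]: does a held lock block the request?
# Intention-to-write is compatible with readers but conflicts with
# other intention-to-write locks and with write locks.
conflicts = {
    READ:   {READ: False, IWRITE: False, WRITE: True},
    IWRITE: {READ: False, IWRITE: True,  WRITE: True},
    WRITE:  {READ: True,  IWRITE: True,  WRITE: True},
}

def can_grant(held_modes, requested):
    """Grant a request only if no currently held lock conflicts with it."""
    return not any(conflicts[h][requested] for h in held_modes)
```

This is exactly why at most one potentially writing cursor can be active: any second intention-to-write request is queued behind the first.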

Under periods of low contention (but potentially high throughput), the normal page-level locking provides the best overall throughput. However, as contention rises, so does the potential for deadlock. At some cross-over point, switching to the less concurrent, but deadlock-free, locking protocol will ultimately result in higher throughput, as operations need never be retried. Given the operating conditions of an embedded database manager, it is useful to make this change automatically as the system detects high contention in the application's data access patterns.
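One plausible way to automate the switch (our sketch, not Berkeley DB's published mechanism) is to track the fraction of recent operations aborted by deadlock and change protocols when it crosses a threshold; the window and threshold values here are assumptions.

```python
class LockingModeSelector:
    """Switch from page-level to interface-level (deadlock-free) locking
    when the deadlock rate over a sampling window exceeds a threshold."""
    def __init__(self, window=100, threshold=0.05):
        self.window = window
        self.threshold = threshold
        self.ops = 0
        self.deadlocks = 0
        self.mode = "page-level"

    def record(self, deadlocked):
        self.ops += 1
        self.deadlocks += bool(deadlocked)
        if self.ops >= self.window:
            rate = self.deadlocks / self.ops
            self.mode = ("interface-level" if rate > self.threshold
                         else "page-level")
            self.ops = self.deadlocks = 0  # start a fresh window
        return self.mode
```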

4.7 Adaptive synchronization

In addition to the logical locks that protect the integrity of the database pages, Berkeley DB must synchronize access to shared memory data structures, such as the lock table, in-memory buffer pool, and in-memory log buffer. Each independent module uses a single mutex to protect its shared data structures, under the assumption that operations requiring the mutex are very short and the potential for conflict is low. Unfortunately, in highly concurrent environments with multiple processors present, this assumption is not always true. When this assumption becomes invalid (that is, we observe significant contention for the subsystem mutexes), Berkeley DB can switch over to a finer-grained concurrency model for the mutexes. Once again, there is a performance trade-off. Fine-grained mutexes impose a penalty of approximately 25% (due to the increased number of mutex acquisitions and releases for each operation), but allow for higher overall throughput. Using fine-grained mutexes under low contention would cause a decrease in performance, so it is important to monitor the system carefully, so that the change happens only when it will increase system throughput without jeopardizing latency.
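A back-of-envelope model (ours, with assumed numbers, not an analysis from the paper) shows why the switch pays off only under contention: if fine-grained mutexes make each operation roughly 25% more expensive but allow p operations to proceed in parallel instead of serializing on one subsystem mutex, fine-grained locking wins only once p exceeds 1.25.

```python
def throughput(parallelism, per_op_cost, fine_grained):
    """Operations per unit time under a crude contention model: a single
    subsystem mutex serializes everything (effective parallelism 1),
    while fine-grained mutexes allow full parallelism at 1.25x cost."""
    if fine_grained:
        return parallelism / (1.25 * per_op_cost)
    return 1.0 / per_op_cost
```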

4.8 Source code availability

Finally, full source code is provided for Berkeley DB. This is an absolute requirement for debugging embedded applications. As the library and the application share an address space, an error in the application can easily manifest itself as an error in the data manager. Without complete source code for the library, debugging easy problems can be difficult, and debugging difficult problems can be impossible.

5 Footprint of an Embedded System

Embedded database systems compete on disk and memory footprint as well as the traditional metrics of features, price, and performance. In this section of the paper, we focus on footprint.

Oracle reports that Oracle Lite 3.0 requires 350 KB to 750 KB of memory and approximately 2.5 MB of hard disk space [7]. This includes drivers for interfaces such as ODBC and JDBC. In contrast, Berkeley DB ranges in size from 75 KB to under 200 KB, foregoing heavyweight interfaces such as ODBC and JDBC. At the low end, applications requiring a simple single-user access method can choose from extended linear hashing, B+ trees, or record-number-based retrieval and pay only the 75 KB space requirement. Applications requiring all three access methods will observe the 110 KB footprint. At the high end, a fully recoverable, high-performance system occupies less than a quarter megabyte of memory. This is a system you can easily incorporate in a toaster oven. Table 1 shows the per-module breakdown of the Berkeley DB library. (Note, however, that this does not include memory used to cache database pages.)

6 Related Work

Every three to five years, leading researchers in the database community convene to identify future directions in database research. They produce a report of this meeting, named for the year and location of the meeting. The most recent of these reports, the 1998 Asilomar report, identifies the embedded database market as one of the high-growth areas in database research [1]. Not surprisingly, market analysts identify the embedded database market as a high-growth area in the commercial sector as well [5].

The Asilomar report identifies a new class of database applications, which they term "gizmo" databases: small databases embedded in tiny mobile appliances, e.g., smart cards, telephones, personal digital assistants. Such databases must be self-managing, secure, and reliable. They will require plug-and-play data management with no database administrator (DBA), no human-configurable parameters, and the ability to adapt to changing conditions. More specifically, the Asilomar authors claim that the goal is self-tuning, including defining the physical and logical database designs, generating reports, and executing utilities [1]. To date, few researchers have accepted this challenge, and there is a dearth of research literature on the subject.

Our approach to embedded database administration is fundamentally different from that described by the Asilomar authors. We adopt their terminology, but view the challenge in supporting gizmo databases to be that of self-sustenance after initial deployment. We find it not only acceptable, but desirable, to expect application developers to control initial database design and configuration. To the best of our knowledge, none of the published work in this area addresses this approach.

As the research community has not provided guidance in this arena, most work in embedded database administration has been done by the commercial vendors. These vendors can be partitioned into two groups: companies selling databases specifically designed for embedding or programmatic access, and the major database vendors (e.g., Oracle, Informix, Sybase). The embedded vendors all acknowledge the need for automatic administration, but normally fail to identify precisely how their products accomplish this. A notable exception is Interbase, whose white-paper comparison with Sybase and Microsoft's SQL servers explicitly addresses features of maintenance ease. Interbase states that as they use no log files, there is no need for log reclamation, checkpoint tuning, or other tasks normally associated with log management. However, Interbase uses Transaction Information Pages, and it is unclear how these are reused or reclaimed [6]. Additionally, with a log-free system, they must use a FORCE policy (flushing all modified pages to disk at commit), as defined by Haerder and Reuter [4]. This has serious performance consequences for disk-based systems. Berkeley DB does use logs and therefore requires log reclamation, but provides interfaces such that applications may reclaim logs safely and programmatically. While Berkeley DB requires checkpoints, the goal of tuning the checkpoint interval is to bound recovery time. The checkpoint interval can be expressed as an amount of log data written. The application designer sets a target recovery time, selects the amount of log data that can be read in that interval, and specifies the checkpoint interval appropriately. Even as load changes, the time to recover does not.
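Concretely, the calculation the application designer performs might look like the following; the numbers are illustrative assumptions, not figures from the paper.

```python
def checkpoint_interval_bytes(target_recovery_secs, log_read_bytes_per_sec):
    """Bound recovery time by checkpointing after this much log data:
    recovery must replay at most one checkpoint interval of log, so the
    interval is the recovery budget times the log replay rate."""
    return target_recovery_secs * log_read_bytes_per_sec

# e.g., a 2-second recovery budget on a device that replays 1 MB of
# log per second => checkpoint after every 2 MB of log written.
interval = checkpoint_interval_bytes(2, 1_000_000)
```

Because the interval is denominated in log bytes rather than wall-clock time, a heavier write load simply triggers checkpoints more often, and the recovery bound holds as load changes.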

The backup approaches taken by Interbase and Berkeley DB are similar in that they both allow online backup, but rather different in their effect on transactions running during backup. As Interbase performs backups as transactions [6], concurrent queries potentially suffer long delays. Berkeley DB uses native operating system utilities and recovery for backups, so there is no interference with concurrent activity, other than potential contention on disk arms.

There are a number of other vendors selling products into the embedded market (e.g., Raima, Centura, Pervasive, Faircom), but none of these highlight the special requirements of embedded database applications. On the other end of the spectrum, the major vendors (e.g., Oracle, Sybase, Microsoft) are all clearly aware of the importance of the embedded market. As discussed earlier, Oracle has announced its Oracle Lite server for embedded use. Sybase has announced its UltraLite platform, an "application-optimized, high-performance, SQL database engine for professional application developers building solutions for mobile and embedded platforms" [8]. We believe that SQL is fundamentally incompatible with the gizmo database environment and the truly embedded systems for which Berkeley DB is most suitable. Microsoft Research is taking a different approach, developing technology to assist in automating initial database design and index specification [2][3]. As described earlier, such configuration is not only acceptable in the embedded market, but desirable, so that application designers can tune their entire applications for the target environment.

Subsystem                   Object size in bytes
                              Text   Data   Bss
Btree-specific routines      28812      0     0
Recno-specific routines       7211      0     0
Hash-specific routines       23742      0     0
Memory Pool                  14535      0     0
Access method common code    23252      0     0
OS compatibility              4980     52     0
Support utilities             6165      0     0
Full Btree                   77744     52     0
Full Recno                   84955     52     0
Full Hash                    72674     52     0
All Access Methods          108697     52     0
Locking                      12533      0     0
Recovery                     26948      8     4
Logging                      37367      0     0
Full Package                185545     60     4

Table 1. Berkeley DB memory footprint.

7 Conclusions

The coming wave of embedded systems poses a new set of challenges for data management. The traditional server-based, large-footprint systems designed for high performance on big iron are not the correct technical answer for this environment. Instead, application developers need small, fast, versatile data management tools that can be integrated easily within specific application environments. In this paper, we have identified several of the key issues in designing these systems and shown how Berkeley DB meets many of these requirements.

8 References

[1] Bernstein, P., Brodie, M., Ceri, S., DeWitt, D., Franklin, M., Garcia-Molina, H., Gray, J., Held, J., Hellerstein, J., Jagadish, H., Lesk, M., Maier, D., Naughton, J., Pirahesh, H., Stonebraker, M., Ullman, J., "The Asilomar Report on Database Research," SIGMOD Record 27(4): 74-80, 1998.

[2] Chaudhuri, S., Narasayya, V., "AutoAdmin 'What-If' Index Analysis Utility," Proceedings of the ACM SIGMOD Conference, Seattle, 1998.

[3] Chaudhuri, S., Narasayya, V., "An Efficient, Cost-Driven Index Selection Tool for Microsoft SQL Server," Proceedings of the 23rd VLDB Conference, Athens, Greece, 1997.

[4] Haerder, T., Reuter, A., "Principles of Transaction-Oriented Database Recovery," Computing Surveys 15(4): 237-318, 1983.

[5] Hostetler, M., "Cover Is Off A New Type of Database," Embedded DB News, http://www.theadvisors.com/embeddeddb-news.htm, 5/6/98.

[6] Interbase, "A Comparison of Borland InterBase 4.0, Sybase SQL Server and Microsoft SQL Server," http://web.interbase.com/products/doc_info_f.html.

[7] Oracle, "Oracle Delivers New Server, Application Suite to Power the Web for Mission-Critical Business," http://www.oracle.com.sg/partners/news/newserver.htm, May 1998.

[8] Sybase, Sybase UltraLite, http://www.sybase.com/products/ultralite/beta.

[9] Transaction Processing Council, "TPC-C Benchmark Specification, Version 3.4," San Jose, CA, August 1998.

[10] Transaction Processing Council, "TPC-D Benchmark Specification, Version 2.1," San Jose, CA, April 1999.