What are mixed files used for? What are other types of primary file organizations?

Short Answer

Mixed files are used to store records of varying types and lengths in the same file. Other types of primary file organizations include sequential (ordered) files, indexed files, direct (or hashed) files, and pile (unordered) files.

Step by step solution

01

Definition of Mixed Files

Mixed files, also known as multiple-record-type files, contain records of varying types and lengths within the same file. They are used when different types of related data need to be stored together in a single file; each record typically carries a record type field so that programs can tell the record types apart. For example, a business that wants to keep track of its employees and its customers in the same file, each with a different set of attributes, might use a mixed file to do so.
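As an illustration, here is a minimal sketch in Python of one way a mixed file can be laid out: each record starts with a type tag and a length field, so records of different types and lengths can share one file. The tag values, record layouts, and file name are assumptions made for this example, not details from the textbook.

    import struct

    # Illustrative record type tags (assumed, not from the textbook).
    EMPLOYEE, CUSTOMER = 1, 2

    def append_record(f, rec_type: int, payload: bytes) -> None:
        # Each record = 1-byte type tag + 2-byte length + variable payload,
        # so records of different types and lengths can share one file.
        f.write(struct.pack(">BH", rec_type, len(payload)) + payload)

    def read_records(f):
        # Read the fixed 3-byte header, then exactly `length` payload bytes.
        while header := f.read(3):
            rec_type, length = struct.unpack(">BH", header)
            yield rec_type, f.read(length)

    with open("mixed.dat", "wb") as out:
        append_record(out, EMPLOYEE, b"123456789|Smith|D4")      # employee record
        append_record(out, CUSTOMER, b"C-0042|Acme Corp|NET30")  # customer record

    with open("mixed.dat", "rb") as inp:
        for rec_type, data in read_records(inp):
            print(rec_type, data)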
02

Overview of Other Primary File Organizations

Apart from mixed files, the main types of primary file organization are:

1) Sequential (ordered) files: records are stored sorted on a key field, in ascending or descending order, which makes range scans and binary search efficient.
2) Indexed files: a separate index structure is maintained over the file to speed up record retrieval.
3) Direct (or hashed) files: a hash function applied to a key field determines the storage location, so a record can be fetched directly from its bucket.
4) Pile files: records are stored in order of arrival, with no particular organization; insertion is cheap, but retrieval requires a linear scan.
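To make the trade-offs concrete, here is an illustrative sketch of how a lookup differs across a pile file, a sequential file, and a hashed file; the key values are made up for the example, and an indexed file would layer a separate search structure, much like the sorted list below, on top of the data file.

    import bisect

    keys = [2369, 3760, 4692, 4871, 5659, 1821, 1074]   # illustrative key values

    # Pile file: records kept in arrival order; lookup is a linear scan, O(n).
    def pile_lookup(key):
        return key in keys

    # Sequential (ordered) file: records kept sorted; binary search, O(log n).
    ordered = sorted(keys)
    def sequential_lookup(key):
        i = bisect.bisect_left(ordered, key)
        return i < len(ordered) and ordered[i] == key

    # Direct (hashed) file: the hash of the key picks the bucket directly, ~O(1).
    M = 8
    buckets = [[] for _ in range(M)]
    for k in keys:
        buckets[k % M].append(k)

    def hashed_lookup(key):
        return key in buckets[key % M]

    print(pile_lookup(5659), sequential_lookup(5659), hashed_lookup(5659))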

Most popular questions from this chapter

Can you think of techniques other than chaining to handle bucket overflow in external hashing?

Suppose that a disk unit has the following parameters: seek time \(s = 20\) msec; rotational delay \(rd = 10\) msec; block transfer time \(btt = 1\) msec; block size \(B = 2400\) bytes; interblock gap size \(G = 600\) bytes. An EMPLOYEE file has the following fields: SSN, 9 bytes; LASTNAME, 20 bytes; FIRSTNAME, 20 bytes; MIDDLEINIT, 1 byte; BIRTHDATE, 10 bytes; ADDRESS, 35 bytes; PHONE, 12 bytes; SUPERVISORSSN, 9 bytes; DEPARTMENT, 4 bytes; JOBCODE, 4 bytes; deletion marker, 1 byte. The EMPLOYEE file has \(r = 30{,}000\) records, fixed-length format, and unspanned blocking. Write appropriate formulas and calculate the following values for the above EMPLOYEE file: a. The record size \(R\) (including the deletion marker), the blocking factor \(bfr\), and the number of disk blocks \(b\). b. Calculate the wasted space in each disk block because of the unspanned organization. c. Calculate the transfer rate \(tr\) and the bulk transfer rate \(btr\) for this disk unit (see Appendix B for definitions of \(tr\) and \(btr\)). d. Calculate the average number of block accesses needed to search for an arbitrary record in the file, using linear search. e. Calculate in msec the average time needed to search for an arbitrary record in the file, using linear search, if the file blocks are stored on consecutive disk blocks and double buffering is used. f. Calculate in msec the average time needed to search for an arbitrary record in the file, using linear search, if the file blocks are not stored on consecutive disk blocks. g. Assume that the records are ordered via some key field. Calculate the average number of block accesses and the average time needed to search for an arbitrary record in the file, using binary search.
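For reference, the arithmetic behind parts (a)-(g) can be checked with a short script. This is a sketch under the usual textbook cost model (one seek plus one rotational delay before a consecutive, double-buffered scan; a seek and rotational delay for every block when blocks are scattered); it is not part of the original exercise.

    import math

    # Disk and file parameters given in the exercise
    s, rd, btt = 20.0, 10.0, 1.0     # seek, rotational delay, block transfer (msec)
    B, G, r = 2400, 600, 30_000      # block size, gap size (bytes), record count

    # (a) record size, blocking factor, number of blocks
    R = 9 + 20 + 20 + 1 + 10 + 35 + 12 + 9 + 4 + 4 + 1  # fields + marker = 125
    bfr = B // R                     # records per block (unspanned) = 19
    b = math.ceil(r / bfr)           # disk blocks needed = 1579

    # (b) wasted space per block due to unspanned blocking
    waste = B - bfr * R              # = 25 bytes

    # (c) transfer rate and bulk transfer rate
    tr = B / btt                     # = 2400 bytes/msec
    btr = tr * B / (B + G)           # gaps lower the effective rate: 1920

    # (d) linear search reads half the blocks on average
    avg_blocks = b / 2               # = 789.5

    # (e) consecutive blocks + double buffering: one seek + rd, then B/btr each
    t_consec = s + rd + avg_blocks * (B / btr)       # ~1016.9 msec

    # (f) scattered blocks: seek + rotational delay for every block read
    t_scattered = avg_blocks * (s + rd + btt)        # ~24474.5 msec

    # (g) binary search on the ordered file
    accesses = math.ceil(math.log2(b))               # = 11 block accesses
    t_binary = accesses * (s + rd + btt)             # = 341 msec

    print(R, bfr, b, waste, tr, btr, avg_blocks,
          t_consec, t_scattered, accesses, t_binary)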

Can you think of techniques other than an unordered overflow file that can be used to make insertions in an ordered file more efficient?

A PARTS file with Part# as hash key includes records with the following Part# values: 2369, 3760, 4692, 4871, 5659, 1821, 1074, 7115, 1620, 2428, 3943, 4750, 6975, 4981, 9208. The file uses eight buckets, numbered 0 to 7. Each bucket is one disk block and holds two records. Load these records into the file in the given order, using the hash function \(h(K) = K \bmod 8\). Calculate the average number of block accesses for a random retrieval on Part#.
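The load can be simulated in a few lines. This sketch assumes a simple cost model: one block access for a record found in its home bucket, and two when a chained overflow block must also be read.

    keys = [2369, 3760, 4692, 4871, 5659, 1821, 1074, 7115,
            1620, 2428, 3943, 4750, 6975, 4981, 9208]

    CAPACITY = 2                      # each bucket is one block holding two records
    buckets = [[] for _ in range(8)]
    overflow = []                     # records that did not fit in their home bucket

    for k in keys:                    # load in the given order with h(K) = K mod 8
        home = k % 8
        if len(buckets[home]) < CAPACITY:
            buckets[home].append(k)
        else:
            overflow.append(k)

    # Cost: 1 block access if the record is in its home bucket,
    # 2 if a chained overflow block must be followed as well.
    total = sum(1 if k in buckets[k % 8] else 2 for k in keys)
    print(total / len(keys))          # average accesses per random retrieval (~1.13)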

Suppose we have a sequential (ordered) file of 100,000 records where each record is 240 bytes. Assume that \(B = 2400\) bytes, \(s = 16\) ms, \(rd = 8.3\) ms, and \(btt = 0.8\) ms. Suppose we want to make \(X\) independent random record reads from the file. We could make \(X\) random block reads, or we could perform one exhaustive read of the entire file looking for those \(X\) records. The question is to decide when it would be more efficient to perform one exhaustive read of the entire file than to perform \(X\) individual random reads. That is, for what value of \(X\) is an exhaustive read of the file more efficient than \(X\) random reads? Develop this as a function of \(X\).
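A sketch of the break-even computation, assuming the exhaustive read is a single seek and rotational delay followed by a scan of consecutive blocks (no double-buffering or interblock-gap refinement):

    r, R, B = 100_000, 240, 2400     # records, record size, block size (bytes)
    s, rd, btt = 16.0, 8.3, 0.8      # seek, rotational delay, block transfer (ms)

    bfr = B // R                      # 10 records per block
    b = r // bfr                      # 10,000 blocks in the file

    cost_random = s + rd + btt        # one random record (block) read = 25.1 ms
    cost_exhaustive = s + rd + b * btt   # one seek, then scan every block = 8024.3 ms

    # The exhaustive read wins once X * cost_random exceeds cost_exhaustive:
    X = cost_exhaustive / cost_random
    print(X)                          # ~319.7, so a full scan is cheaper for X >= 320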
