Chapter 13: Problem 22
What are mixed files used for? What are other types of primary file organizations?
Can you think of techniques other than chaining to handle bucket overflow in external hashing?
Suppose that a disk unit has the following parameters: seek time \(s = 20\) msec; rotational delay \(rd = 10\) msec; block transfer time \(btt = 1\) msec; block size \(B = 2400\) bytes; interblock gap size \(G = 600\) bytes. An EMPLOYEE file has the following fields: SSN, 9 bytes; LASTNAME, 20 bytes; FIRSTNAME, 20 bytes; MIDDLE_INIT, 1 byte; BIRTHDATE, 10 bytes; ADDRESS, 35 bytes; PHONE, 12 bytes; SUPERVISORSSN, 9 bytes; DEPARTMENT, 4 bytes; JOBCODE, 4 bytes; deletion marker, 1 byte. The EMPLOYEE file has \(r = 30{,}000\) records, fixed-length format, and unspanned blocking. Write appropriate formulas and calculate the following values for the above EMPLOYEE file:
a. The record size \(R\) (including the deletion marker), the blocking factor \(bfr\), and the number of disk blocks \(b\).
b. Calculate the wasted space in each disk block because of the unspanned organization.
c. Calculate the transfer rate \(tr\) and the bulk transfer rate \(btr\) for this disk unit (see Appendix B for definitions of \(tr\) and \(btr\)).
d. Calculate the average number of block accesses needed to search for an arbitrary record in the file, using linear search.
e. Calculate in msec the average time needed to search for an arbitrary record in the file, using linear search, if the file blocks are stored on consecutive disk blocks and double buffering is used.
f. Calculate in msec the average time needed to search for an arbitrary record in the file, using linear search, if the file blocks are not stored on consecutive disk blocks.
g. Assume that the records are ordered via some key field. Calculate the average number of block accesses and the average time needed to search for an arbitrary record in the file, using binary search.
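A minimal Python sketch of these calculations, under assumed formulas (\(bfr = \lfloor B/R \rfloor\), \(b = \lceil r/bfr \rceil\), \(tr = B/btt\), \(btr = tr \cdot B/(B+G)\), and the usual linear- and binary-search cost estimates); treat it as a worked check under those assumptions, not a definitive solution:

```python
import math

# Disk and file parameters from the problem statement
s, rd, btt = 20.0, 10.0, 1.0      # seek time, rotational delay, block transfer time (msec)
B, G = 2400, 600                  # block size and interblock gap (bytes)
r = 30_000                        # number of records

# a. Record size R, blocking factor bfr, number of blocks b
R = 9 + 20 + 20 + 1 + 10 + 35 + 12 + 9 + 4 + 4 + 1   # field sizes incl. deletion marker = 125 bytes
bfr = B // R                       # records per block (unspanned) = 19
b = math.ceil(r / bfr)             # blocks needed = 1579

# b. Wasted space per block due to the unspanned organization
wasted = B - bfr * R               # bytes unused in each block

# c. Transfer rate and bulk transfer rate (assumed definitions: tr = B/btt, btr = tr * B/(B+G))
tr = B / btt                       # bytes/msec
btr = tr * B / (B + G)             # bytes/msec, accounting for interblock gaps

# d. Linear search: on average half the blocks are read
avg_blocks_linear = b / 2

# e. Linear search time, consecutive blocks with double buffering:
#    one seek + one rotational delay, then blocks stream at the bulk rate
ebt = B / btr                      # effective block transfer time (msec)
t_linear_consecutive = s + rd + avg_blocks_linear * ebt

# f. Linear search time, non-consecutive blocks: every block needs seek + rotation
t_linear_random = avg_blocks_linear * (s + rd + btt)

# g. Binary search on the ordered file
blocks_binary = math.ceil(math.log2(b))
t_binary = blocks_binary * (s + rd + btt)

print(R, bfr, b, wasted, tr, btr, avg_blocks_linear,
      t_linear_consecutive, t_linear_random, blocks_binary, t_binary)
```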
Can you think of techniques other than an unordered overflow file that can be used to make insertions in an ordered file more efficient?
A PARTS file with Part# as hash key includes records with the following Part# values: 2369, 3760, 4692, 4871, 5659, 1821, 1074, 7115, 1620, 2428, 3943, 4750, 6975, 4981, 9208. The file uses eight buckets, numbered 0 to 7. Each bucket is one disk block and holds two records. Load these records into the file in the given order, using the hash function \(h(K) = K \bmod 8\). Calculate the average number of block accesses for a random retrieval on Part#.
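A small Python sketch of the loading step, assuming bucket overflow is handled by chaining (overflow records go into a chained overflow block, costing one extra block access on retrieval); the `overflow` list and the access-counting rule are illustrative assumptions:

```python
# Load the Part# values into 8 buckets of capacity 2 using h(K) = K mod 8,
# then estimate the average number of block accesses for a random retrieval.
part_numbers = [2369, 3760, 4692, 4871, 5659, 1821, 1074, 7115,
                1620, 2428, 3943, 4750, 6975, 4981, 9208]

NUM_BUCKETS, BUCKET_CAPACITY = 8, 2
buckets = [[] for _ in range(NUM_BUCKETS)]     # primary buckets (one block each)
overflow = [[] for _ in range(NUM_BUCKETS)]    # chained overflow records per bucket

for k in part_numbers:
    h = k % NUM_BUCKETS                        # hash function h(K) = K mod 8
    if len(buckets[h]) < BUCKET_CAPACITY:
        buckets[h].append(k)
    else:
        overflow[h].append(k)                  # goes to the bucket's overflow chain

# Accesses for a random retrieval: 1 for a record in its primary bucket,
# 1 + number of overflow blocks followed otherwise (overflow blocks assumed
# to hold BUCKET_CAPACITY records each).
total_accesses = 0
for h in range(NUM_BUCKETS):
    total_accesses += len(buckets[h])
    for i, _ in enumerate(overflow[h]):
        total_accesses += 1 + (i // BUCKET_CAPACITY + 1)

average = total_accesses / len(part_numbers)
print(buckets, overflow, average)
```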
Suppose we have a sequential (ordered) file of 100,000 records where each record is 240 bytes. Assume that \(B = 2400\) bytes, \(s = 16\) ms, \(rd = 8.3\) ms, and \(btt = 0.8\) ms. Suppose we want to make \(X\) independent random record reads from the file. We could make \(X\) random block reads, or we could perform one exhaustive read of the entire file looking for those \(X\) records. The question is to decide when it would be more efficient to perform one exhaustive read of the entire file than to perform \(X\) individual random reads. That is, for what value of \(X\) does an exhaustive read of the file become more efficient than \(X\) random reads? Develop this as a function of \(X\).
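One way to set up the comparison (a sketch, assuming each random read costs one seek plus rotational delay plus a block transfer, and the exhaustive read scans all blocks consecutively after a single seek and rotational delay, without double buffering): with \(R = 240\) bytes and \(B = 2400\) bytes, \(bfr = \lfloor B/R \rfloor = 10\) and \(b = 100{,}000 / 10 = 10{,}000\) blocks, so
\[
T_{\text{random}}(X) = X\,(s + rd + btt) = 25.1\,X \text{ ms},
\qquad
T_{\text{exhaustive}} = s + rd + b \cdot btt = 8024.3 \text{ ms}.
\]
Under these assumptions the exhaustive read becomes more efficient once \(25.1\,X > 8024.3\), i.e. for \(X\) greater than roughly 320.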