Chapter 13: Problem 36
Write pseudocode for the insertion algorithms for linear hashing and for extendible hashing.
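The exercise does not fix a notation, so one way to express the pseudocode is as a minimal runnable Python sketch of both insertion algorithms. The split trigger used for linear hashing (split whenever any bucket overflows), the bucket capacities, and the use of low-order hash bits to index the extendible directory are illustrative choices, not requirements of either algorithm.

```python
class LinearHashFile:
    """Sketch of linear hashing insertion (illustrative choices throughout)."""
    def __init__(self, n0=4, bucket_cap=2):
        self.n0 = n0            # initial number of buckets
        self.cap = bucket_cap   # records per bucket before overflow
        self.level = 0          # i: current split round
        self.split = 0          # n: next bucket to be split
        self.buckets = [[] for _ in range(n0)]

    def _addr(self, key):
        b = key % (self.n0 * 2 ** self.level)          # h_i(K)
        if b < self.split:                             # already split this round
            b = key % (self.n0 * 2 ** (self.level + 1))  # h_{i+1}(K)
        return b

    def insert(self, key):
        self.buckets[self._addr(key)].append(key)
        # Split trigger used here: the target bucket overflowed. Note that
        # linear hashing always splits bucket `self.split`, which need not
        # be the bucket that overflowed; the overflow chain is the list tail.
        if len(self.buckets[self._addr(key)]) > self.cap:
            self._split()

    def _split(self):
        old = self.buckets[self.split]
        self.buckets.append([])        # new bucket n0*2^i + n
        self.buckets[self.split] = []
        for k in old:                  # redistribute with h_{i+1}
            self.buckets[k % (self.n0 * 2 ** (self.level + 1))].append(k)
        self.split += 1
        if self.split == self.n0 * 2 ** self.level:   # round complete
            self.split, self.level = 0, self.level + 1


class ExtendibleHashFile:
    """Sketch of extendible hashing insertion, indexing by low-order bits."""
    def __init__(self, bucket_cap=2):
        self.cap = bucket_cap
        self.gd = 0                        # global depth d
        bucket = {"depth": 0, "recs": []}  # local depth d' and records
        self.dir = [bucket]                # directory of 2^d pointers

    def _bucket(self, key):
        return self.dir[key & ((1 << self.gd) - 1)]   # low-order gd bits

    def insert(self, key):
        b = self._bucket(key)
        if len(b["recs"]) < self.cap:
            b["recs"].append(key)
            return
        if b["depth"] == self.gd:          # directory must double first
            self.gd += 1
            self.dir = self.dir + self.dir
        # Split the full bucket: one more hash bit separates the two halves.
        b["depth"] += 1
        new = {"depth": b["depth"], "recs": []}
        mask = 1 << (b["depth"] - 1)
        old_recs, b["recs"] = b["recs"], []
        for j, p in enumerate(self.dir):   # repoint half the directory entries
            if p is b and j & mask:
                self.dir[j] = new
        for k in old_recs:
            self._bucket(k)["recs"].append(k)
        self.insert(key)                   # retry; may trigger further splits
```

A real implementation would store records rather than bare keys and would page buckets to disk; the control flow above is the part the exercise asks for.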
What are mixed files used for? What are other types of primary file organizations?
Suppose that a disk unit has the following parameters: seek time \(s = 20\) msec; rotational delay \(rd = 10\) msec; block transfer time \(btt = 1\) msec; block size \(B = 2400\) bytes; interblock gap size \(G = 600\) bytes. An EMPLOYEE file has the following fields: SSN, 9 bytes; LASTNAME, 20 bytes; FIRSTNAME, 20 bytes; MIDDLEINIT, 1 byte; BIRTHDATE, 10 bytes; ADDRESS, 35 bytes; PHONE, 12 bytes; SUPERVISORSSN, 9 bytes; DEPARTMENT, 4 bytes; JOBCODE, 4 bytes; deletion marker, 1 byte. The EMPLOYEE file has \(r = 30{,}000\) records, fixed-length format, and unspanned blocking. Write appropriate formulas and calculate the following values for the above EMPLOYEE file: a. The record size \(R\) (including the deletion marker), the blocking factor \(bfr\), and the number of disk blocks \(b\). b. Calculate the wasted space in each disk block because of the unspanned organization. c. Calculate the transfer rate \(tr\) and the bulk transfer rate \(btr\) for this disk unit (see Appendix B for definitions of \(tr\) and \(btr\)). d. Calculate the average number of block accesses needed to search for an arbitrary record in the file, using linear search. e. Calculate in msec the average time needed to search for an arbitrary record in the file, using linear search, if the file blocks are stored on consecutive disk blocks and double buffering is used. f. Calculate in msec the average time needed to search for an arbitrary record in the file, using linear search, if the file blocks are not stored on consecutive disk blocks. g. Assume that the records are ordered via some key field. Calculate the average number of block accesses and the average time needed to search for an arbitrary record in the file, using binary search.
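The calculations above can be sketched numerically. The formulas assumed here are the usual ones (\(tr = B/btt\), \(btr = tr \cdot B/(B+G)\), effective block transfer time \(ebt = B/btr\) for double-buffered consecutive reads); treat the result as a worked sketch under those assumptions, not an authoritative answer key.

```python
import math

# Given disk and file parameters
s, rd, btt = 20.0, 10.0, 1.0    # seek, rotational delay, block transfer (msec)
B, G, r = 2400, 600, 30000      # block size, gap size (bytes), record count

# (a) record size, blocking factor, number of blocks
R = 9 + 20 + 20 + 1 + 10 + 35 + 12 + 9 + 4 + 4 + 1   # sum of field sizes
bfr = B // R                    # unspanned: whole records per block
blocks = math.ceil(r / bfr)

# (b) wasted space per block under unspanned organization
waste = B - bfr * R

# (c) transfer rate and bulk transfer rate (assumed definitions)
tr = B / btt                    # bytes/msec
btr = tr * B / (B + G)          # bytes/msec, accounting for gaps
ebt = B / btr                   # effective time to transfer one block (msec)

# (d-f) linear search: half the blocks on average
lin_accesses = blocks / 2
t_consecutive = s + rd + lin_accesses * ebt       # double buffering, contiguous
t_scattered = lin_accesses * (s + rd + btt)       # seek + delay per block

# (g) binary search on the ordered file
bin_accesses = math.ceil(math.log2(blocks))
t_binary = bin_accesses * (s + rd + btt)

print(R, bfr, blocks, waste)    # 125 19 1579 25
```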
Why are disks, not tapes, used to store online database files?
Suppose we want to create a linear hash file with a file load factor of 0.7 and a blocking factor of 20 records per bucket, which is to contain 112,000 records initially. a. How many buckets should we allocate in the primary area? b. What should be the number of bits used for bucket addresses?
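The arithmetic for this exercise is short enough to check directly: at a load factor of 0.7, each 20-record bucket is expected to hold 14 records, and the address size must cover every primary bucket.

```python
import math

records, load_factor, bfr = 112_000, 0.7, 20
primary_buckets = math.ceil(records / (load_factor * bfr))  # 14 records/bucket
address_bits = math.ceil(math.log2(primary_buckets))        # bits per address
print(primary_buckets, address_bits)   # 8000 13
```

With 13 bits, \(2^{13} = 8192 \geq 8000\) buckets are addressable, while 12 bits (4096) would not suffice.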
A PARTS file with Part# as hash key includes records with the following Part# values: 2369, 3760, 4692, 4871, 5659, 1821, 1074, 7115, 1620, 2428, 3943, 4750, 6975, 4981, 9208. The file uses eight buckets, numbered 0 to 7. Each bucket is one disk block and holds two records. Load these records into the file in the given order, using the hash function \(h(K) = K \bmod 8\). Calculate the average number of block accesses for a random retrieval on Part#.
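Loading and counting can be simulated directly. The sketch below assumes each record beyond the two-record bucket capacity goes to a chained overflow block, costing one extra access; that chaining model is an assumption, though it is the common one for static hashing.

```python
CAP = 2   # records per bucket (one disk block)
keys = [2369, 3760, 4692, 4871, 5659, 1821, 1074, 7115,
        1620, 2428, 3943, 4750, 6975, 4981, 9208]

buckets = [[] for _ in range(8)]
for k in keys:
    buckets[k % 8].append(k)    # h(K) = K mod 8; tail of list = overflow chain

# A record in its home block costs 1 access; each overflow block in the
# chain adds one more access for the records stored there.
accesses = sum(1 + pos // CAP for b in buckets for pos, _ in enumerate(b))
average = accesses / len(keys)
print(accesses, round(average, 3))   # 17 1.133
```

Two records (the third arrivals in buckets 4 and 7) land in overflow, so 13 records cost 1 access and 2 cost 2, giving \(17/15 \approx 1.13\) accesses on average.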