
16.50. Suppose we have a sequential (ordered) file of 100,000 records where each record is 240 bytes. Assume that B = 2,400 bytes, s = 16 ms, rd = 8.3 ms, and btt = 0.8 ms. Suppose we want to make X independent random record reads from the file. We could make X random block reads or we could perform one exhaustive read of the entire file looking for those X records. The question is to decide when it would be more efficient to perform one exhaustive read of the entire file than to perform X individual random reads. That is, what is the value for X when an exhaustive read of the file is more efficient than X random reads? Develop this as a function of X.

Answers (1)
Answer and Explanation:

Given: total number of records in the file = 100,000

Each record is R = 240 bytes

Block size B = 2,400 bytes

Total blocks in the file b = (100,000 records × 240 bytes) / 2,400 bytes per block

= 10,000 blocks

Time for one exhaustive (sequential) read of the file (one seek, one rotational delay, then all blocks transferred back to back)

= s + rd + b × btt

= 16 + 8.3 + 10,000 × 0.8

= 8,024.3 ms
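
As a quick sanity check, here is a minimal Python sketch of this arithmetic (the variable names are illustrative, not from the textbook):

```python
# Parameters from the problem statement (times in ms, sizes in bytes).
R = 240        # record size
B = 2_400      # block size
n = 100_000    # number of records
s = 16.0       # average seek time
rd = 8.3       # average rotational delay
btt = 0.8      # block transfer time

b = (n * R) // B                 # 10,000 blocks in the file
exhaustive = s + rd + b * btt    # one seek + one rotation, then b transfers
print(b, exhaustive)             # 10000 8024.3
```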

Now let X be the number of records to be read at random. Each random read costs one seek, one rotational delay, and one block transfer (s + rd + btt), so X random reads take longer than one exhaustive read when:

X × (s + rd + btt) > 8,024.3

X × (16 + 8.3 + 0.8) > 8,024.3

X × 25.1 > 8,024.3

X > 8,024.3 / 25.1 ≈ 319.69

Hence, for X ≥ 320 random record reads, one exhaustive read of the entire file is more efficient than X individual random reads.
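
Continuing the sketch, the break-even point can be computed directly (again using the same illustrative names):

```python
import math

s, rd, btt = 16.0, 8.3, 0.8
exhaustive = s + rd + 10_000 * btt   # 8,024.3 ms, from the step above
per_random_read = s + rd + btt       # 25.1 ms per random block read

x = exhaustive / per_random_read
print(f"break-even X = {x:.2f}")                        # break-even X = 319.69
print(f"exhaustive read wins for X >= {math.ceil(x)}")  # ... for X >= 320
```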