<html><head>
<title>HPFS: Performance issues</title>
</head>

<body>
<center>
<h1>Performance issues</h1>
</center>

The HPFS attacks potential bottlenecks in disk throughput at multiple levels.
It uses advanced data structures, contiguous sector allocation, intelligent
caching, read-ahead, and deferred writes in order to boost performance.
First, the HPFS matches its data structures to the task at hand:
sophisticated data structures (B-Trees and B+ Trees) for fast random access
to filenames, directory names, and lists of sectors allocated to files or
directories, and simple compact data structures (bitmaps) for locating chunks
of free space of the appropriate size. The routines that manipulate these
data structures are written in assembly language and have been painstakingly
tuned, with special focus on the routines that search the free space bitmaps
for patterns of set bits (unused sectors). Next, the HPFS's main goal -- its
prime directive, if you will -- is to assign consecutive sectors to files
whenever possible. The time required to move the disk's read/write head from
one track to another far outweighs the other possible delays, so the HPFS
works hard to avoid or minimize such head movements by allocating file space
contiguously and by keeping control structures such as Fnodes and free space
bitmaps near the things they control.
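<p>
To make the bitmap search concrete, here is a small C sketch of how a run of
consecutive free sectors might be located. It is an illustration only: the
real HPFS routines are hand-tuned assembly language, and the function name,
the bit convention (a set bit marks a free sector), and the byte layout are
assumptions made for clarity.
<pre>
/* Illustration only: the real HPFS bitmap searches are hand-tuned assembly.
   Convention assumed here: one bit per sector, a set bit marks a FREE sector.
   Returns the index of the first run of 'need' consecutive free sectors,
   or -1 if no run that long exists. */
long find_free_run(const unsigned char *bitmap, unsigned long nbits,
                   unsigned long need)
{
    unsigned long i, run = 0;

    for (i = 0; i < nbits; i++) {
        if (bitmap[i / 8] & (1u << (i % 8))) {   /* sector i is free       */
            if (++run == need)
                return (long)(i - need + 1);     /* start of the run       */
        } else {
            run = 0;                             /* run broken, start over */
        }
    }
    return -1;                                   /* no run long enough     */
}
</pre>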
<p>

Highly contiguous files also help the file system make fewer requests of the
disk driver for more sectors at a time, allow the disk driver to exploit the
multisector transfer capabilities of the disk controller, and reduce the
number of disk completion interrupts that must be serviced. Of course, trying
to keep files from becoming fragmented in a multitasking system in which many
files are being updated concurrently is no easy chore. One strategy the HPFS
uses is to scatter newly created files across the disk -- in separate bands,
if possible -- so that the sectors allocated to the files as they are extended
will not be interleaved. Another strategy is to preallocate approximately 4Kb
of contiguous space to the file each time it must be extended and give back
any excess when the file is closed. If an application knows the ultimate size
of a new file in advance, it can assist the file system by specifying an
initial file allocation when it creates the file. The system will then search
all the free space bitmaps to find a run of consecutive sectors large enough
to hold the file. Failing that, it will search for two runs that are half
the size of the file, and so on.
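<p>
The fallback search just described (one run the size of the file, then two
half-size runs, and so on) can be sketched in C as follows. Again this is an
illustration only, reusing the hypothetical find_free_run() helper from the
previous sketch rather than the HPFS's real per-band bitmaps and allocation
records.
<pre>
long find_free_run(const unsigned char *bitmap, unsigned long nbits,
                   unsigned long need);        /* from the previous sketch */

/* Try to place 'sectors' sectors as one contiguous run; if that fails,
   retry with runs of half the size, and so on. */
long place_initial_allocation(const unsigned char *bitmap, unsigned long nbits,
                              unsigned long sectors)
{
    unsigned long run_len;

    for (run_len = sectors; run_len != 0; run_len /= 2) {
        long start = find_free_run(bitmap, nbits, run_len);
        if (start >= 0)
            return start;    /* caller allocates this run, then repeats
                                the search for whatever is still unplaced */
    }
    return -1;               /* not even a single free sector was found   */
}
</pre>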
<p>

The HPFS relies on several different kinds of caching to minimize the number
of physical disk transfers it must request. Naturally, it caches sectors, as
did the FAT file system. But unlike the FAT file system, the HPFS can manage
very large caches efficiently and adjusts sector caching on a per-handle basis
to the manner in which a file is used. The HPFS also caches path names and
directories, transforming disk directory entries into an even more compact and
efficient in-memory representation. Another technique that the HPFS uses to
improve performance is to preread data it believes the program is likely to
need. For example, when a file is opened, the file system will preread and
cache the Fnode and the first few sectors of the file's contents. If the file
is an executable program or the history information in the file's Fnode shows
that an open operation has typically been followed by an immediate sequential
read of the entire file, the file system will preread and cache much more of
the file's contents. When a program issues relatively small read requests, the
file system always fetches data from the file in 2Kb chunks and caches the
excess, allowing most read operations to be satisfied from the cache. Finally,
the OS/2 operating system's support for multitasking makes it possible for the
HPFS to rely heavily on lazy writes (sometimes called deferred writes or write
behind) to improve performance. When a program requests a disk write, the data
is placed in the cache and the cache buffer is flagged as dirty (that is,
inconsistent with the state of the data on disk). When the disk becomes idle
or the cache becomes saturated with dirty buffers, the file system uses a
captive thread from a daemon process to write the buffers to disk, starting
with the oldest data. In general, lazy writes mean that programs run faster
because their read requests will almost never be stalled waiting for a write
request to complete. For programs that repeatedly read, modify, and write a
small working set of records, it also means that many unnecessary or redundant
physical disk writes may be avoided. Lazy writes have their dangers, of course,
so a program can defeat them on a per-handle basis by setting the write-through
flag in the Open Mode parameter for DosOpen, or it can commit data to disk on a
per-handle basis with the DosBufReset function.
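<p>
A minimal C sketch of those two escape hatches follows. DosOpen, DosWrite,
DosBufReset, and DosClose are the real 16-bit OS/2 calls, but the flag names
are invented here and the bit values and prototypes are quoted from memory,
so they should be checked against the OS/2 toolkit headers before use.
<pre>
/* Rough sketch only; prototypes and flag values are assumptions, not taken
   from the HPFS or any shipped sample. */
#define INCL_DOSFILEMGR
#include &lt;os2.h&gt;

#define OPENFLAG_OPEN_EXISTING  0x0001   /* open the file if it exists    */
#define OPENFLAG_CREATE_NEW     0x0010   /* create it if it does not      */
#define OPENMODE_READWRITE      0x0002   /* read/write access             */
#define OPENMODE_DENYWRITE      0x0020   /* deny other writers            */
#define OPENMODE_WRITE_THROUGH  0x4000   /* defeat lazy writes (assumed)  */

USHORT save_record(PSZ pszFile, PVOID pvRec, USHORT cbRec)
{
    HFILE  hf;
    USHORT usAction, cbWritten, rc;

    /* The write-through bit in the Open Mode parameter applies to this
       handle only; other handles on the same file still get lazy writes. */
    rc = DosOpen(pszFile, &hf, &usAction, 0L, 0 /* normal attribute */,
                 OPENFLAG_OPEN_EXISTING | OPENFLAG_CREATE_NEW,
                 OPENMODE_READWRITE | OPENMODE_DENYWRITE | OPENMODE_WRITE_THROUGH,
                 0L);
    if (rc != 0)
        return rc;

    rc = DosWrite(hf, pvRec, cbRec, &cbWritten);

    /* A handle opened without the write-through bit can instead commit its
       dirty cache buffers on demand with DosBufReset. */
    if (rc == 0)
        rc = DosBufReset(hf);

    DosClose(hf);
    return rc;
}
</pre>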
<p>
<hr>

< <a href="ifs.html">[Installable File Systems]</a> |
<a href="hpfs.html">[HPFS Home]</a> |
<a href="faultol.html">[Fault Tolerance]</a> >

<hr>

<font size=-1>
Html'ed by <a href="http://www.seds.org/~spider/">Hartmut Frommert</a>
</font>

</body></html>