Next: Read Caching Implementation Up: Asynchronous Writing and Lookup Previous: Asynchronous Writing

Lookup Caching

Lookups of files and retrieval of file attributes constitute a significant fraction of NFS traffic. Clients can reduce this traffic by caching file and directory attributes for a few seconds, so that attributes of recently accessed files can be retrieved without contacting the server.

The current Linux implementation of lookup caching has two shortcomings. Accessing the cache is expensive, since a linear search is used to find entries (requiring a complete pass over the cache for each miss). Partly because of this slow search, the size of the cache is fixed at 64 entries. This small cache limits the number of hits, reducing the cache's effectiveness in avoiding RPCs; since the lookup cache is a system-wide resource, 64 entries are easily exhausted. By caching more attributes, we observe fewer cache misses, which reduces network traffic and server load.

We have replaced the linear array with a hashing scheme to allow for more efficient operations, thus reducing client CPU usage and permitting larger cache sizes. The hash function is computed from the directory inode and the filename. Collisions are resolved with side chaining. To avoid a kmalloc() call for each cache entry insertion, the cache is preallocated in a single block and initially organized into a free list. If we attempt to add an entry to the cache and the free list is exhausted, a scavenging pass runs over all the entries in the cache and moves expired ones to the free list. In the unlikely event that this scavenging fails to produce any free slots, the new entry is dropped instead of being added to the cache. It is common when searching the cache for a file to find an expired entry for that file. In this case the entry is immediately moved to the free list, so that when the lookup RPC completes, the insertion will always find a free spot without scavenging.
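As a rough illustration, the scheme above can be sketched in user-space C as follows. The structure layout, hash function, constants, and names here are our own assumptions for the sketch, not the actual kernel code; the pool is deliberately tiny.

```c
#include <string.h>
#include <time.h>

#define CACHE_SIZE 16          /* small for illustration; the paper's cache holds 511 */
#define HASH_BUCKETS 8
#define NAME_MAX_CACHED 32     /* longer names are simply not cached */

struct cache_entry {
    unsigned long dir_ino;          /* directory inode number */
    char name[NAME_MAX_CACHED + 1];
    unsigned long file_ino;         /* cached lookup result */
    time_t expires;                 /* entry is stale after this time */
    struct cache_entry *next;       /* side chain / free-list link */
};

static struct cache_entry pool[CACHE_SIZE]; /* one preallocated block: no kmalloc() per insert */
static struct cache_entry *buckets[HASH_BUCKETS];
static struct cache_entry *free_list;

/* Hash computed from the directory inode and the filename. */
static unsigned hash(unsigned long dir_ino, const char *name)
{
    unsigned h = (unsigned)dir_ino;
    while (*name)
        h = h * 31 + (unsigned char)*name++;
    return h % HASH_BUCKETS;
}

void cache_init(void)
{
    int i;
    memset(buckets, 0, sizeof(buckets));
    free_list = NULL;
    for (i = 0; i < CACHE_SIZE; i++) {   /* chain the whole pool onto the free list */
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* Scavenging pass: move every expired entry from its chain to the free list. */
static void scavenge(time_t now)
{
    int b;
    for (b = 0; b < HASH_BUCKETS; b++) {
        struct cache_entry **pp = &buckets[b];
        while (*pp) {
            if ((*pp)->expires <= now) {
                struct cache_entry *e = *pp;
                *pp = e->next;
                e->next = free_list;
                free_list = e;
            } else {
                pp = &(*pp)->next;
            }
        }
    }
}

int cache_insert(unsigned long dir_ino, const char *name,
                 unsigned long file_ino, time_t now, time_t ttl)
{
    struct cache_entry *e;
    unsigned b;
    if (strlen(name) > NAME_MAX_CACHED)
        return 0;
    if (!free_list)
        scavenge(now);      /* free list exhausted: reclaim expired entries */
    if (!free_list)
        return 0;           /* scavenging found nothing: drop the new entry */
    e = free_list;
    free_list = e->next;
    e->dir_ino = dir_ino;
    strcpy(e->name, name);
    e->file_ino = file_ino;
    e->expires = now + ttl;
    b = hash(dir_ino, name);
    e->next = buckets[b];
    buckets[b] = e;
    return 1;
}

/* Returns the file inode, or 0 on a miss. An expired entry found here is
 * unlinked immediately, so a later insertion finds a free slot without scavenging. */
unsigned long cache_lookup(unsigned long dir_ino, const char *name, time_t now)
{
    struct cache_entry **pp = &buckets[hash(dir_ino, name)];
    while (*pp) {
        struct cache_entry *e = *pp;
        if (e->dir_ino == dir_ino && strcmp(e->name, name) == 0) {
            if (e->expires <= now) {
                *pp = e->next;
                e->next = free_list;
                free_list = e;
                return 0;
            }
            return e->file_ino;
        }
        pp = &e->next;
    }
    return 0;
}
```

Note how insertion never allocates: a slot either comes off the free list, is reclaimed by the scavenging pass, or the entry is dropped.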

A complication in lookup caching is that the cache can be accessed in two ways. The usual method is to look up the file inode given the directory inode and the filename; the cache is optimized to handle this case efficiently. The second access method arises when we have the file inode and want to update the file attributes stored in the lookup cache. This case occurs, for instance, when a read RPC returns the current file attributes along with the read data. We avoid this case whenever possible because it requires a linear-time search of the entire cache. The need to perform a linear search limits how large we can make the cache before the client CPU time spent in these searches overwhelms the advantage in lookup speed. (We chose not to provide a reverse hash to avoid requiring even more precious kernel memory for the lookup cache.)
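A minimal sketch of this second access path, assuming a flat pool of entries and a stand-in attribute structure (names and fields here are hypothetical, chosen only to show why the path is O(n)):

```c
#include <stddef.h>

#define CACHE_SIZE 511   /* the size used by the paper's implementation */

struct fattr { long size; long mtime; };   /* stand-in for NFS file attributes */

struct cache_entry {
    unsigned long file_ino;   /* 0 marks a free slot in this sketch */
    struct fattr attrs;
};

static struct cache_entry pool[CACHE_SIZE];

/* Update every cached copy of the attributes for file_ino. The hash is keyed
 * on (directory inode, name), not on the file inode, so with no reverse index
 * this must scan the whole pool. Returns the number of entries updated. */
int cache_update_attrs(unsigned long file_ino, const struct fattr *fresh)
{
    int i, hits = 0;
    for (i = 0; i < CACHE_SIZE; i++) {
        if (pool[i].file_ino == file_ino) {
            pool[i].attrs = *fresh;
            hits++;
        }
    }
    return hits;
}
```

This pass touches all 511 entries regardless of how many match, which is the cost the text trades against the memory a reverse hash would consume.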

Another limiting factor is that kmalloc() cannot allocate large blocks of memory. To reduce the size of each cache entry, we cache only files whose names are 32 characters or fewer (NFS allows names of up to 255 characters, but longer names are unusual). With this restriction, we are able to provide a cache of 511 entries (512 entries are allocated, but the zeroth is reserved for free-list management).
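To see why the 32-character cap matters, consider an illustrative entry layout; the field set and names below are our assumptions, not the kernel's actual struct, and the arithmetic only shows the shape of the trade-off:

```c
#include <stddef.h>

/* Illustrative entry layout (fields are assumptions, not the kernel struct). */
struct lookup_entry {
    unsigned long dir_ino;       /* directory inode number */
    unsigned long file_ino;      /* cached lookup result */
    char name[32 + 1];           /* a 255-char field would dominate the entry size */
    long expires;                /* expiry time for the cached attributes */
    struct lookup_entry *next;   /* chain / free-list link */
};

/* Total size of the single preallocated block holding all 512 entries. */
size_t cache_block_bytes(void) { return 512 * sizeof(struct lookup_entry); }
```

With a 33-byte name field the whole 512-entry block stays in the tens of kilobytes, whereas 256-byte name fields would push the allocation several times larger for rarely seen long names.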


Greg Badros
1998-04-23