
MDC1-GC: lessons learned



A Word 97 version of the same text (with better formatting) is also attached.

Some lessons learned - MDC1                     10/6/98

The most important lesson was that the software we tested was extremely
resilient.  We experienced no crashes at all, and only one hang, caused
by an unusual race condition.

Although expected, we were able to demonstrate that the most dramatic
improvement comes from clustering events to match queries, thus
minimizing the number of files that have to be read, even when
processing time per event is long.

We were also able to show that using a caching policy that looks at what
is already in the cache to determine which file to cache next is a big
win.  This was achieved even without information about which file
resides on which tape, although so far we have used very few tapes.  As
mentioned below (item 2), scheduling file caching based on tape
information should further improve efficiency.
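The cache-aware policy described above can be sketched as follows (a Python illustration only; the query lists, file names, and the "most-demanded file first" scoring rule are my assumptions, not the actual Storage Manager code):

```python
def next_file_to_cache(pending_queries, cache):
    """Pick the uncached file needed by the most pending queries;
    return None if every needed file is already cached."""
    demand = {}
    for query_files in pending_queries:
        for f in query_files:
            if f not in cache:
                demand[f] = demand.get(f, 0) + 1
    if not demand:
        return None
    # Highest demand wins; sorting first makes ties deterministic.
    return max(sorted(demand), key=lambda f: demand[f])

queries = [["f1", "f2"], ["f2", "f3"], ["f2"]]
cache = {"f1"}
print(next_file_to_cache(queries, cache))  # f2 (needed by 3 queries)
```

The point is only that the decision looks at the cache contents, not at a fixed request order.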

Below are some items that I think will be useful for future test
planning and system design.  We may want to discuss them in the
post-MDC1 meeting.  This reflects the collective experience of Henrik,
Luis, and Dave Z. during the tests.  Please feel free to add items to
the list, especially if you think we should discuss them in the
Post-MDC1 meeting.

Arie.

1. Minimizing tape dismounts
 
Observation: HPSS was dismounting a tape even if the next file requested
was on the same tape.
Reason: the dismount timeout on HPSS was set to 15 seconds.  If no
request is pending for a file from the same tape, HPSS dismounts the tape.
Lesson: it is better to schedule multiple requests to the same tape
ahead of time.
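The lesson above, scheduling requests to the same tape together, can be sketched like this (illustrative Python; the request list and the tape lookup table are assumptions, since getting tape IDs from HPSS is exactly the open issue noted in item 2):

```python
def batch_by_tape(requests, tape_of):
    """Group pending file requests by tape so all files on a mounted
    tape are read before it is dismounted.  tape_of maps file -> tape;
    obtaining it dynamically from HPSS is assumed, not shown."""
    batches = {}
    for f in requests:
        batches.setdefault(tape_of[f], []).append(f)
    # Serve the fullest tape first to amortize the mount cost.
    return sorted(batches.items(), key=lambda kv: -len(kv[1]))

tapes = {"a": "T1", "b": "T2", "c": "T1", "d": "T1"}
print(batch_by_tape(["a", "b", "c", "d"], tapes))
# [('T1', ['a', 'c', 'd']), ('T2', ['b'])]
```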

2.  Long dismount/mount times

Observation: it takes on average 2 min to dismount and mount a tape.
Reason: it takes 90 sec to rewind a full tape and 90 sec to seek to the
end of a tape.  Mounting/dismounting takes about 17 sec.  We observed
longer times.
Lesson: this confirms that it is important to schedule multiple reads
out of the same tape when possible.
Implication: we need to find a way to get tape IDs dynamically and use
that in the caching policy.  Currently this means having a client API to
HPSS.
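The numbers in the observation above add up to roughly the 2 min observed, and they show why batching reads per mount pays off (constants taken from the text; the amortization per file is my arithmetic):

```python
REWIND = 90   # sec to rewind a full tape
SEEK = 90     # sec to seek to the end of a tape
MOUNT = 17    # sec for a mount/dismount operation

switch_cost = REWIND + SEEK + MOUNT  # cost of changing tapes
print(switch_cost)                   # 197 sec, roughly the 2 min observed

# Amortized switch overhead per file if k files are read per mount:
for k in (1, 5, 10):
    print(k, round(switch_cost / k))  # 197, 39, 20 sec/file
```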

3. Transfer rate between caches

Observation: most of the time we got about 2 MB/s.  Sometimes we got as
much as 5 or 6 MB/s, but we often observed 0.5 to 1 MB/s.
Reason: network is shared.
Lesson: the transfer rate between HPSS cache and local cache can
dominate.  Even if we avoid dismounts, and transfer from tape to
hpss_cache is fast, the effective transfer rate is determined by the
transfer rate between the caches.
Implication: caching ahead into the local cache can be beneficial, but
if the processing time per file is long, prefetching should be limited
to one file.
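The lesson above amounts to a bottleneck argument: when the stages are pipelined, throughput is set by the slowest link, regardless of how fast the tape-to-cache leg is (a sketch under that pipelining assumption; the rates are illustrative):

```python
def effective_rate(stage_rates_mb_s):
    """For a pipelined transfer, end-to-end throughput is limited
    by the slowest stage (rates in MB/s)."""
    return min(stage_rates_mb_s)

# A fast tape-to-HPSS-cache leg (say 10 MB/s) is wasted if the shared
# network between the caches only delivers 2 MB/s:
print(effective_rate([10.0, 2.0]))  # 2.0
```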

4.  HPSS misbehavior

Observation: we got several unexpected errors from HPSS, such as "can't
mount tape" (error 17), "path name too long", etc.  We also had HPSS
malfunctions and fixes during test runs.
Lesson: we need to be prepared for all such behavior.  In one test we
resumed operation automatically after HPSS malfunction by periodically
requesting transfer of the last file requested.
Implication: we need to decide what is appropriate for the Storage
Manager to do if we get errors from HPSS and Objectivity.
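The recovery tactic mentioned in the lesson, periodically re-requesting the last file until HPSS comes back, can be sketched as follows (a Python illustration; `transfer` is a hypothetical stand-in for the real client call, and the timing parameters are assumptions):

```python
import time

def request_with_retry(transfer, filename, poll_sec=60, max_tries=10):
    """Periodically re-request the last file until HPSS recovers.
    transfer(filename) is assumed to return True on success and to
    raise OSError (or return False) on failure."""
    for attempt in range(max_tries):
        try:
            if transfer(filename):
                return True
        except OSError:
            pass  # e.g. "can't mount tape" during an HPSS malfunction
        time.sleep(poll_sec)
    return False
```

A policy like this only makes sense for transient errors; permanent ones ("path name too long") should fail fast instead.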

5.  Observing system during tests

Observation: it was useful to observe SAMMI during tests, but quite
difficult to follow what was happening, such as repeated dismounts.  We
also had no quick way to see the status of each query, the status of
the cache, etc.
Reason: we have no access to logs of HPSS actions for our COS, and
observing our own logs is awkward.
Lesson: it is worth developing or using some graphical tools to display
historical and dynamic system behavior.  If possible, we should get
access to HPSS logs.

6.  Emptying cache during tests

Observation: emptying the HPSS cache for our COS was the most costly
step in terms of lost tests; as a result, we had many wasted tests.
Reason: we have no direct control over purging of files from the HPSS
cache, so we had to use a special setup of the migration policy for the
COS.
Lesson: it is better to empty the cache directly with a client API.

7.  Cache size matters

Observation: we observed (as expected) that when processing time per
file is short, having a large cache makes a big difference; less so if
processing times are long.
Lesson: it is important to determine a fairly accurate estimate of
processing times, and perhaps use that in caching policies.
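One way processing-time estimates could feed a caching policy, tying together items 3 and 7, is to choose the prefetch depth so that the next file arrives just before the current one is finished (a heuristic sketch only; the parameters and the formula are my assumptions, not the tested policy):

```python
import math

def prefetch_depth(file_size_mb, rate_mb_s, proc_sec_per_file, cache_slots):
    """Stage just enough files ahead that transfer overlaps processing:
    roughly transfer_time / processing_time files, clamped to the cache."""
    transfer_sec = file_size_mb / rate_mb_s
    depth = math.ceil(transfer_sec / max(proc_sec_per_file, 1e-9))
    return max(1, min(depth, cache_slots))

print(prefetch_depth(100, 2.0, 50, 8))  # transfer 50 s ~ proc 50 s -> 1
print(prefetch_depth(100, 2.0, 10, 8))  # transfer 50 s, proc 10 s -> 5
```

With long processing times this collapses to the "limit prefetch to one file" rule from item 3.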

8.  Objectivity overhead appears low

Observation: the observed overhead for getting 5000 events from
Objectivity was only 100 sec.  However, the test was made with "minimal
user code", and the user code was on the same machine as Objectivity.
Lesson: we should plan tests where the user code runs on a PC-Linux
machine over a network to see the effect.
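For reference, the per-event cost implied by the observation above is small (simple arithmetic on the numbers quoted in the text):

```python
events = 5000
total_sec = 100
per_event_ms = total_sec / events * 1000
print(per_event_ms)  # 20.0 ms of Objectivity overhead per event
```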

9. Non-uniform file sizes

Observation: file sizes for this test varied from 4 MB to 600 MB.  Our
test plans were based on the assumption that file sizes are fairly
uniform.
Lesson: it was impossible to perform a controlled experiment without
knowing ahead of time the file sizes, the number of events qualifying
for a query, AND the order in which they will be read.  For example, the
no-policy test requires that we know this information to set the cache
size.
Implication: to design tests, we will need to extract the file size and
tape ID, and run the QE and QM to get the caching order.
