
Test setup for MDC1



Planned tests for MDC1				Aug. 27th, 1998

This is a summary of a discussion among the people involved with the
Storage Manager (Alex, Henrik, Luis, Arie) regarding the test setup for
MDC1.  We suggest proceeding as described below.

1)  Tests will be conducted in two configurations:
     a) A stand-alone Storage Manager exercised by the INQ_EI test tool
         (Objectivity is not involved).
     b) A full system that includes the Query Object, the Event
         Iterators, and Objectivity.

The idea is to check the overhead introduced by the Storage Manager
separately and to help isolate sources of problems if they arise.
Identical queries will be run in both configurations.  Configuration a)
should be similar to the "no touch" option of configuration b) in that
Objectivity is not invoked.

2)  In both configurations we'd like to control the following:
      a) which queries to run
      b) when to start each query relative to the others
      c) the length of time that each file is processed

3)  To make the processing time realistic, we think it should be
proportional to the number of events in a file.  There will be two
methods of doing that:
      a) a fixed parameter per event that we can vary for each test
           (e.g. 1, 30, 1000 sec, corresponding to different analysis
           types: hadron, event-by-event, etc.)
      b) a random perturbation of the parameter to simulate the
           variation of real analysis
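The processing-time model in item 3 can be sketched as follows (a
minimal Python sketch; the function name, the spread of the
perturbation, and the defaults are all our assumptions, not part of the
plan):

```python
import random

def processing_time(n_events, per_event_sec, perturb=False, spread=0.2, rng=None):
    """Simulated processing time for one file.

    per_event_sec is the fixed per-event parameter from item 3a
    (e.g. 1, 30, or 1000 sec).  When perturb is set (item 3b), the
    total is jittered by up to +/- spread to mimic the variation of
    real analysis jobs.  The 20% spread is an assumed value.
    """
    rng = rng or random.Random()
    t = n_events * per_event_sec
    if perturb:
        t *= 1.0 + rng.uniform(-spread, spread)
    return t

# A file with 500 events at 30 sec/event takes 15000 sec unperturbed:
print(processing_time(500, 30))
```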

4)  For the stand-alone Storage Manager configuration, we will develop a
parameterized script that lets us set the queries to be run (an ordered
list), the delay between starting consecutive queries, the
processing-time-per-event parameter, and a yes/no perturbation option.
The script will also open a window for each query, so we can monitor
progress dynamically.
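The driver described in item 4 might look roughly like this (a Python
sketch under our own assumptions; the actual script, the `launch`
callback, and its arguments are placeholders for whatever starts INQ_EI
and its monitoring window):

```python
import time

def run_tests(queries, start_delay_sec, per_event_sec, perturb, launch):
    """Launch each query in the ordered list, waiting start_delay_sec
    between consecutive launches.

    'launch' is a placeholder for whatever actually starts a query
    (e.g. spawning INQ_EI in its own window); it receives the query id,
    the per-event time parameter, and the perturbation flag.
    """
    for i, qid in enumerate(queries):
        if i:
            time.sleep(start_delay_sec)
        launch(qid, per_event_sec, perturb)
```

With a zero delay and a callback that just records query ids, the
queries are launched strictly in list order.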

5)  A similar setup needs to be made for the full system configuration.

6)  The components of the Storage Manager will write log files as
follows:
    a)  the QE will log: query request and termination times
    b)  the QM will log: query arrival times, file request and caching
         times, and EI request and termination times.
    c)  the CM will log: file caching request and termination times

Log files will use the format:
time (in time_t units), what_is_measured, QID (string), FID (int), EI_ID
(int), and a string for special info from each module.  Fields can be
left null when appropriate.  The reason for a common file format across
the SM modules is to make the logs easy to sort and search.
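One way to emit the common log format above (a sketch; the
comma-separated layout, the field spellings, and the function name are
our assumptions, since the plan fixes only the field order):

```python
import time

def log_line(what, qid="", fid=None, ei_id=None, info=""):
    """One record in the common SM log format:
    time(time_t), what_is_measured, QID, FID, EI_ID, info.

    Unused fields are left empty so every line has the same six
    fields and the logs stay easy to sort and grep.
    """
    fields = [str(int(time.time())), what, qid,
              "" if fid is None else str(fid),
              "" if ei_id is None else str(ei_id),
              info]
    return ",".join(fields)
```

For example, the QM logging a file request for query Q17 on file 42
would call log_line("file_request", qid="Q17", fid=42) (names
hypothetical).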

7)  Cache size setting will be done in the Config file.

8)  We need to have a way of removing all files from the cache after
each test.
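If the cache lives in an ordinary directory, flushing it between tests
could be as simple as the following sketch (assuming a flat cache
directory; the real cache layout and any CM bookkeeping are not
addressed here):

```python
import os

def clear_cache(cache_dir):
    """Remove every cached file so each test starts from a cold cache.

    Only plain files directly under cache_dir are removed; any CM
    metadata kept elsewhere would need separate handling.
    """
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path):
            os.remove(path)
```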


We also discussed the possibility of running SLAC's oofs instead of our
Cache Manager (if oofs will be available at RHIC).  The purpose is to
emulate the effect of order-optimized EIs without caching overlap.  By
using our "dummy CM" and turning off the use of query overlap in the
Caching Policy module, we can use the Storage Manager directly with
oofs.  oofs will still share files while they are in cache, but we can
see the effect of the lack of optimization across all queries.

In addition to measuring performance, we hope that the above controlled
experiments will reveal the bottleneck in the system for various query
mixes.  If the event processing time is very small, we expect the
caching of files from tape to be the bottleneck.  If the event
processing time is very long, we expect the cache size to be the
bottleneck.  It will be interesting to find the event processing time
that balances the two.
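As a back-of-the-envelope sketch of that balance point: processing and
tape staging take roughly equal time when per-event processing time
equals staging time divided by events per file.  All numbers and names
below are illustrative assumptions, not measured MDC1 values:

```python
def balance_point(file_size_mb, tape_mb_per_sec, mount_sec, n_events):
    """Per-event processing time (sec) at which processing a file takes
    as long as staging it from tape, assuming staging cost is a fixed
    mount/seek overhead plus transfer at the tape drive's rate."""
    stage_sec = mount_sec + file_size_mb / tape_mb_per_sec
    return stage_sec / n_events

# Illustrative numbers: a 1000 MB file, 10 MB/s tape drive, 60 s mount,
# 500 events/file -> staging takes 160 s, balancing at 0.32 s/event.
print(balance_point(1000, 10, 60, 500))
```

Runs well below that per-event time should be tape-bound; runs well
above it should be cache-size-bound.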

Another test we may want to do (if we have the time) is to launch lots
of queries (100-1000) simultaneously to test the robustness of the
system, and perhaps the performance of Objectivity in such cases.

(BTW, Alex has now added to the Caching Policy the passing of a file to
all queries that need it immediately, even if they are out of order in
the query queue.)

Please comment on this plan quickly (especially Dave Malon and Doug
Olson), so we can modify it if you have any suggestions.
