Grand Challenge Collaboration Meeting
Minutes of the Working Group on Simulations and Data Generation
---------------------------------------------------------------
submitted by G. Odyniec/LBL

The WG on Simulations and Data Generation discussed the following topics:

1. event format for the generator
2. quantity of required events and their sources (generator, GEANT, ...)
3. data volume and CPU requirements
4. quantity of "direct" and "duplicated" data (by "duplicated" data we mean
   the original events modified/randomized in some fashion)
5. "special" events (events with an artificially enhanced signal)

Ad. 1
The time scale of the RHIC experiments does not allow for a full
implementation of an OO format, whereas for the LHC experiments it is still
feasible. We foresee that ATLAS will go exclusively to an OO format, whereas
for STAR/PHENIX an "intermediate" solution was proposed: the essential
information extracted from the event/track tables will be stored in the form
of an OO header and subsequently passed to Arie Shoshani's
clustering/cataloging algorithms. The transformation between tables and
objects could be done by hand (following a template provided by Ed May) or
automatically.

Ad. 2
No more than 100 K events/sample are needed to test all physics goals except
two (event-by-event and HBT). The Venus code was selected as the primary
generator for reasons of convenience and practicality (CPU and time
requirements); we are aware that there are serious problems with its
treatment of hard processes. The following systems will serve as a startup:

heavy ions:  100,000 events/sample
             Au+Au @ 200 GeV/N and 60 GeV/N
             S+S   @ 200 GeV/N and 60 GeV/N
pp:          1,000,000 events

For the heavy-ion samples this will require 17 K hours of T3E computer time
(generator only). For pp, one week of PDSF (Suns) is needed. The Venus
events will be processed with the GEANT code and a slow/fast simulator. For
heavy ions this appears to be a very CPU-, time- and manpower-intensive
process.
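As a back-of-envelope check of the generator CPU estimate above, the figures imply roughly 2.5 CPU minutes per heavy-ion event. A minimal sketch (event counts and the 17 K hour total are from the minutes; the assumption that the four samples cost about the same per event is ours):

```python
# Back-of-envelope check of the heavy-ion generator CPU estimate.
# Counts and total hours are from the minutes; equal per-event cost
# across the four samples is an assumption.
samples = 4                    # Au+Au and S+S, each at 200 and 60 GeV/N
events_per_sample = 100_000
total_t3e_hours = 17_000       # generator only

total_events = samples * events_per_sample
minutes_per_event = total_t3e_hours * 60 / total_events

print(f"{total_events:,} events -> {minutes_per_event:.2f} CPU min/event")
```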
We will discuss the technical side of the operation at the upcoming STAR
Collaboration software meeting (next week).

Ad. 3
Data volume and CPU requirements:

CPU:
STAR:   generator: 4 * 100,000 events ->  17 K hours
        GEANT:                            70 K hours
                                         -----------
                                         ~90 K hours
ATLAS:  generator: 1,000,000 events   ->  138 hours
        GEANT:     ?
                                         much less than for heavy ions

STORAGE:
STAR:   1 event = ~20 MB (Au+Au @ 200 GeV/N)
        100,000 events -> ~2.5 TB
        all samples    -> ~5 TB
ATLAS:  1 event = ~10 kB
        1,000,000 events -> 10 GB (generator only, but still a very small
        number)

With the current G.C. allocation there is enough CPU and storage for the
present G.C. needs.

Ad. 4 / Ad. 5
100,000 events/sample for direct production; the amount of "duplicated"
events is t.b.d. The "special" events will be provided by physicists working
on the specific physics topics. For the moment F. Wang, N. Xu and
G. Odyniec from LBL, and R. Longacre from BNL have agreed to generate
samples of "special" events.
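The storage figures above follow from a simple events-times-size estimate. A minimal sketch (event sizes and counts are from the minutes; note that the straight multiplication gives ~2 TB for one STAR Au+Au sample, so the quoted ~2.5 TB presumably includes some overhead not modeled here):

```python
# Sketch of the storage arithmetic in Ad. 3. Event sizes and counts are
# from the minutes; the plain events-times-size model is an assumption
# and ignores any per-file or catalog overhead.
kB = 10**3
MB = 10**6
GB = 10**9
TB = 10**12

star_sample = 100_000 * 20 * MB      # Au+Au @ 200 GeV/N, ~20 MB/event
atlas_total = 1_000_000 * 10 * kB    # pp, ~10 kB/event, generator only

print(f"STAR, one Au+Au sample: ~{star_sample / TB:.0f} TB")
print(f"ATLAS, 1M pp events:    ~{atlas_total / GB:.0f} GB")
```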