In order to make data set sizes more manageable, the Skim1 process split the reconstructed data into six superstreams containing summarized information rather than the full information output by Pass1. Each superstream contains data satisfying requirements for physics in one or two broad categories (see Table recon:superstreams). About half of the events surviving the Pass1 process were written out by Skim1, with many of those events being written into multiple superstreams.
|Superstream|Physics categories|Institution|
|2|Topological vertexing and|Illinois|
|3|Calibration and rare decays|CBPF, Brazil|
|5|Diffractive (light quark states)|California, Davis|
|6|Hadronic meson decays|California, Davis|
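The routing of events into superstreams can be illustrated with a short sketch. Because superstreams are defined by independent physics requirements, one event can satisfy several of them at once, which is why many events were written into multiple superstreams. The predicates and event fields below are purely hypothetical stand-ins for the actual physics selection criteria:

```python
# Sketch of superstream routing: an event is written to every
# superstream whose selection predicate it satisfies, so a single
# event can land in multiple output streams.
# All predicates and event fields here are hypothetical.

STREAMS = {
    2: lambda ev: ev.get("n_vertices", 0) >= 2,      # topological vertexing
    3: lambda ev: ev.get("is_calibration", False),   # calibration / rare decays
    5: lambda ev: ev.get("is_diffractive", False),   # diffractive light-quark states
    6: lambda ev: ev.get("n_hadrons", 0) >= 2,       # hadronic meson decays
}

def route(event):
    """Return the list of superstream numbers this event belongs to."""
    return [s for s, passes in STREAMS.items() if passes(event)]

event = {"n_vertices": 3, "is_diffractive": True}
print(route(event))  # -> [2, 5]
```

The key design point is that the streams are not mutually exclusive: `route` tests every predicate rather than stopping at the first match.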
As with Pass1, Skim1 used clusters of computers to take advantage of the parallelism inherent in high energy physics data. However, in Skim1, data were analyzed as whole disk files, each containing about 40,000 events, rather than Pass1's much smaller chunks of data. Because Pass1 did not save the reconstructed calorimetry information, Skim1 executed the calorimetry algorithms again. Skim1 also re-ran the Vee reconstruction, since that algorithm was improved during Pass1 production.
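File-granularity parallelism of this kind can be sketched as follows. The function name, file names, and worker count below are illustrative placeholders, not the actual Skim1 code:

```python
# Sketch of file-level parallel skimming: each worker takes one
# whole disk file (~40,000 events) as its unit of work, mirroring
# how Skim1 distributed work across cluster nodes.
from multiprocessing import Pool

def skim_file(path):
    # Placeholder for processing one input file: re-run the
    # calorimetry and Vee reconstruction, then skim the events.
    n_events = 40_000  # nominal number of events per disk file
    return (path, n_events)

if __name__ == "__main__":
    input_files = [f"pass1_output_{i:04d}.dat" for i in range(8)]
    with Pool(processes=4) as pool:      # e.g. one worker per node
        results = pool.map(skim_file, input_files)
    total = sum(n for _, n in results)
    print(f"skimmed {total} events from {len(results)} files")
```

Using the disk file as the unit of work keeps scheduling overhead low and lets each node run independently, at the cost of coarser load balancing than Pass1's smaller chunks.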
Six output files, one for each superstream, were generated from each input file. These files were concatenated and output to six sets of 200-500 8 mm tapes. The data were also transferred over the Internet to Fermilab for easy access by experimenters.
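The fan-out and subsequent per-superstream concatenation might look like this sketch, where the file-naming scheme is invented for illustration:

```python
# Sketch: each input file yields six per-superstream output files;
# these are then grouped so each superstream's pieces can be
# concatenated onto its own tape set. Naming is invented.
from collections import defaultdict

def group_by_stream(output_files):
    """Group per-file outputs, given as (stream_id, filename) pairs,
    by superstream, preserving input order within each stream."""
    archives = defaultdict(list)
    for stream_id, fname in output_files:
        archives[stream_id].append(fname)
    return dict(archives)

outputs = [(s, f"file{i}_stream{s}.skim")
           for i in range(2) for s in (1, 2, 3, 4, 5, 6)]
archives = group_by_stream(outputs)
print(len(archives))   # -> 6 (one archive list per superstream)
print(archives[2])     # -> ['file0_stream2.skim', 'file1_stream2.skim']
```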
Skim1 was run on two computer clusters of about 4000 MIPS each, located at the University of Colorado and Vanderbilt University. The University of Colorado cluster consisted entirely of Digital workstations using the Alpha CPU. The Vanderbilt University system was a mixed system of Alpha workstations and workstations based on the Intel Pentium II processor running Linux. Skim1 began in October 1998 and finished in February 1999. An overview of the Skim1 process at Colorado is shown in Figure recon:skim1_overview. The process used at Vanderbilt was similar.