WEDNESDAY, NOVEMBER 14

  • TIME MIGRATION IN THE OIL INDUSTRY
    Chair: Ray Paden, IBM
    Time: 3:30-5:00 PM

    Scalability Analysis of Distributed 3D Prestack Time Migration
    Kevin Hellman, Aliant Geophysical

    3D prestack time migration is a seismic imaging application that is well suited to parallel computation on distributed-memory clusters. The basic algorithm aggregates data at a volume of output locations, using (potentially) all of the input data at each output location. Parallelization may be designed in either the output or the input domain. Since the majority of the processing time is spent in the summation kernel, time migration is often regarded as "embarrassingly" parallel, and little importance is attached to the parallelization scheme. For seismic surveys of actual exploration size, however, the details of parallelization can have a dramatic impact on the scalability, and hence the runtime, of prestack time migration. Simple timing models for three common approaches to parallelization will be introduced that characterize the total throughput time and parallel efficiency of the process with respect to machine size, CPU performance, and speed of data movement. The turnaround time of production-sized jobs turns out to depend strongly on the choice of parallel algorithm, and that choice itself changes with the parameters of the project.
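
    To make the kind of timing model discussed here concrete, the sketch below (illustrative only, not taken from the talk) compares throughput-time estimates for two of the common decompositions, output-domain and input-domain; all parameter names, model forms, and example numbers are assumptions.

    # Illustrative throughput-time models (assumptions, not the talk's models) for
    # two common parallelizations of 3D prestack time migration on p nodes.
    # flops = total summation-kernel work, rate = per-node flop rate,
    # d_in = input data volume (bytes), v_out = output image size (bytes),
    # bw = effective per-node data-movement bandwidth (bytes/s).

    def t_output_domain(p, d_in, v_out, flops, rate, bw):
        """Each node owns a slab of the output image, so every node must see
        the entire input dataset: data movement does not shrink with p."""
        compute = flops / (p * rate)
        movement = d_in / bw
        return compute + movement

    def t_input_domain(p, d_in, v_out, flops, rate, bw):
        """Each node migrates 1/p of the input into a private full-size image;
        the partial images are then summed (reduced) across the machine."""
        compute = flops / (p * rate)
        movement = d_in / (p * bw)
        reduction = v_out * (p - 1) / (p * bw)
        return compute + movement + reduction

    if __name__ == "__main__":
        # A made-up "production-sized" job: 1 TB of input traces, a 100 GB image,
        # 10^16 flops of summation, 10 Gflop/s and 100 MB/s per node.
        d_in, v_out, flops, rate, bw = 1e12, 1e11, 1e16, 1e10, 1e8
        for p in (16, 64, 256, 1024):
            print(p,
                  round(t_output_domain(p, d_in, v_out, flops, rate, bw)),
                  round(t_input_domain(p, d_in, v_out, flops, rate, bw)))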

    Computational Elements, Requirements and Tradeoffs for Imaging Normal-Incidence Seismic Data
    Jim McClean, PGS Research
    Authors: Jim McClean and Steve Kelly, PGS Research

    Exploration seismic recordings are often processed to simulate an experiment in which the source and receiver are coincident at a single surface location. We outline an algorithm for imaging preprocessed recordings of this type using an approximate form of the scalar wave equation. The outline describes the various approximations used to reduce the computational cost while retaining acceptable accuracy.
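
    For reference, a commonly used starting point for imaging such normal-incidence (zero-offset) data is the exploding-reflector model, in which the recorded surface wavefield is propagated back into the subsurface with half the medium velocity and imaged at time zero; the specific approximate form of the scalar wave equation used by the authors may differ and is not reproduced here.

    % Exploding-reflector sketch for zero-offset imaging (a reference
    % formulation, not the paper's approximate form):
    \[
      \frac{\partial^{2} p}{\partial t^{2}}
        = \Bigl(\frac{v(\mathbf{x})}{2}\Bigr)^{2} \nabla^{2} p ,
      \qquad
      \text{image}(\mathbf{x}) = p(\mathbf{x},\, t = 0),
    \]
    % i.e. the recorded data p(x, z = 0, t) are extrapolated downward with
    % half the medium velocity and evaluated at time zero.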

    This method generally handles very large datasets. Additional constraints include available disk and memory capacities, I/O speed, and the underlying computational requirements of the algorithm. We discuss the impact of these constraints on our processing methodology.
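
    As a rough illustration of the scale involved (the numbers below are hypothetical, not the paper's), even a modest 3D survey exceeds typical per-node memory, which is what brings disk capacity and I/O speed into the cost picture.

    # Hypothetical back-of-envelope sizing for a zero-offset data volume;
    # every number here is an assumption chosen only to illustrate scale.
    inlines, crosslines = 2000, 2000        # surface grid of traces
    samples_per_trace = 4000                # e.g. 8 s recorded at 2 ms sampling
    bytes_per_sample = 4                    # 32-bit floating point

    dataset_bytes = inlines * crosslines * samples_per_trace * bytes_per_sample
    print(f"input volume : {dataset_bytes / 1e9:.0f} GB")   # ~64 GB

    node_memory_bytes = 8e9                 # assumed per-node memory
    print(f"fits in one node's memory? {dataset_bytes < node_memory_bytes}")
    # When it does not fit, the data must be tiled and staged from disk, so
    # I/O speed and disk capacity enter the cost model alongside the flops.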

    We also comment on the style of parallelization that is most effective for the algorithm, as well as its scalability.