The coupled atmosphere-ocean model is the primary tool by which climate scientists simulate the behavior of the Earth's climate system. As a first step in porting a fully coupled model to massively parallel processor systems, scientists at several sites are redesigning separate atmosphere and ocean model elements for execution on scalable parallel systems.



A parallel ocean model, referred to as the Parallel Ocean Program (POP), has demonstrated gigaflop performance during early data-parallel experiments on the 1,024-node Thinking Machines CM-5 at DOE's High Performance Computing Research Center of Los Alamos National Laboratory (LANL). This research effort is being funded under DOE's Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) program. LANL scientists collaborate on the project with scientists at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), the Naval Postgraduate School (NPS), and NSF's National Center for Atmospheric Research (NCAR).



Over the past decade, the U.S. academic community has made extensive use of NCAR's Community Climate Model (CCM) for global climate research. Recently, researchers at two DOE laboratories, Argonne National Laboratory and Oak Ridge National Laboratory, also funded under the CHAMMP program, have been collaborating with NCAR scientists to develop parallel versions of the latest CCM code to run on next-generation massively parallel systems. This model, referred to as the Parallel Community Climate Model 2 (PCCM2), is now demonstrating near-gigaflop performance on the 512-processor Intel Touchstone Delta. A data-parallel version for the Thinking Machines CM-5 is also being prepared. In ongoing investigations, researchers are exploring new parallel algorithms designed to improve performance on message-passing systems and incorporating new numerical methods that will improve climate simulations.



Scientists at the Lawrence Livermore National Laboratory (LLNL) are simultaneously developing message-passing versions of the models described above. These versions treat the inherent parallelism of climate problems in a different fashion and can be executed on the class of parallel systems built around the distributed-memory architectural approach.



A high-resolution atmospheric general circulation model, known as SKYHI, has been developed and used by scientists at NOAA's GFDL to investigate stratospheric circulation and ozone depletion for a number of years. The ozone depletion study was presented in "Grand Challenges 1993" and is an example of some of the scientific results obtained from this model. GFDL scientists are now working with LANL scientists to reconstruct the programs that make up the SKYHI model in order to execute it on massively parallel systems that support data-parallel and message-passing programming paradigms.



These various model development efforts all have the same ultimate objective: to develop a coupled atmosphere-ocean climate model that will execute efficiently on scalable parallel computers.



Just as demand for more computational resources is growing within the climate research community, so also is the need to distribute the computing load among different computers, both at one site and across the country when appropriate. With this objective in mind, researchers at the University of California, Los Angeles (UCLA) are collaborating with scientists at NSF's San Diego Supercomputer Center (SDSC), the California Institute of Technology (Caltech), and NASA's Jet Propulsion Laboratory (JPL) to demonstrate the feasibility of distributed supercomputing of a coupled climate model. In this research, separate components of the atmosphere and ocean codes in a single climate computation are distributed over multiple supercomputers connected via a high-speed network. Initial prototype experiments have been performed between Cray Research Y-MP systems at SDSC and NCAR, connected by a 1.5-megabit-per-second T-1 data link. The imminent availability of a gigabit-per-second network under the CASA gigabit testbed project will soon allow for a much more advanced communications capability. Plans then call for com-





