LANL: 12 9 2008


Community Ice Sheet Model development workshop, Los Alamos Nat. Laboratory

On August 18-20, 2008, the COSIM group at Los Alamos National Laboratory held a workshop on the development of a U.S. Community Ice Sheet Model (CISM), which will ultimately become a component of the U.S. Community Climate System Model (CCSM). There were approximately 35 attendees from roughly 20 universities, national laboratories, and research centers.

Workshop goals

A number of recent national and international workshops and meetings have focused on identifying how next-generation ice sheet models must be improved in order to accurately capture recent, dramatic ice sheet behavior. This workshop focused more narrowly on (1) prioritizing the model improvements necessary for better sea-level rise forecasts in the next 2-5 years, (2) identifying specific tasks for making those improvements, and (3) assigning those tasks to individual researchers and/or research groups. To organize the effort, a number of focus areas were identified. The group for each focus area met separately for several hours to set specific goals and devise a work plan for achieving them. On the final day of the workshop, the leader of each focus area reported back to the full group.

Focus areas

Although some overlap was inevitable, four focus areas were identified prior to the meeting:

  1. ice dynamics and physics
  2. ice shelf / ocean interactions
  3. software design and coupling
  4. initialization, verification, and validation

Summary

Leaders of the four focus areas are currently compiling their summaries into a single report, which will be released as a white paper or will form the basis of a contribution to EOS. This page will be updated as those summaries develop. For now, the following serves as a rough summary of the major points to come out of the workshop.

  • models based on a first-order approximation to the Stokes equations (e.g. Blatter/Pattyn-type models) will likely form the dynamical core of the ice sheet/shelf model, at least for the next 2-5 years, with full-Stokes modeling as a longer-term (5-10 year) goal
  • with respect to the modeling of grounded ice, the single most important improvement will be the inclusion of basal hydrology models and their coupling to the basal boundary condition (allowing the basal boundary condition to evolve over time)
  • other important submodels and/or parameterizations seen as priorities for improved grounded-ice modeling include:
  1. links between surface, englacial, and basal hydrology
  2. "hard bed" sliding (including its evolution over time)
  3. sliding over plastic till (including its evolution over time)
  4. accurate representation of grounding line behavior
  • for the modeling of ice shelf/ocean interactions, the following improvements were considered important:
  1. calving
  2. better constraints on melt/freeze rates at the shelf base, and better representation of those rates in models
  3. better understanding of the ocean mixed layer and of heat entrainment to the shelf base (which models treat this accurately?)
  4. verify that coupled ice/ocean models can reproduce the "warm" and "cold" water circulation regimes beneath the Pine Island Glacier and Ross/Filchner-Ronne ice shelves, respectively
  5. determine the importance of sea ice to the temperature of water masses circulating beneath the shelves
  6. determine whether small-scale details (e.g. tides, presence/absence of sediment, local topography) are important to grounding line motion
  • software and coupling issues/goals:
  1. avoid derived types; instead, migrate the source code to clear, descriptive argument lists for subroutines
  2. refine the "signatures" of major subroutines (i.e. identify the variables passed in and out) for greater modularity
  3. alter the dynamical core to use the POP/CICE advection (remapping) scheme for thickness evolution and tracers
  4. alter the dynamical core to use the POP/CICE parallelization utilities
  5. allow flexibility for stand-alone, one-way, and two-way coupling
  • initialization, verification, and validation goals/issues:
  1. build a set of standardized, gridded datasets for initialization (in a standard format, e.g. NetCDF)
  2. version the standard datasets in order to keep track of who used which data for which model runs
  3. build similar datasets for time-series forcing and for initialization through spin-up
  4. include a "verification" suite and a regular build/test suite in the code from the start
  5. consider the use of standardized, pre-computed initial states
  6. initialization remains an unresolved issue; some "zeroth-order" standards are needed for how it should be done in the absence of more complete and accurate methods
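The first software goal above (explicit argument lists rather than opaque derived types) can be sketched in miniature. The actual CISM code base is Fortran, so the following is only an illustrative Python sketch with invented names; the point is that every field the routine touches is visible in its signature, which makes the routine's "contract" obvious to both readers and couplers.

```python
# Illustrative sketch only: all names here are hypothetical, not from the
# real CISM source. The point of the design goal is that each input and
# output appears explicitly in the argument list, rather than being buried
# inside one opaque derived type / container object.

def evolve_thickness(thickness, flux_div_x, flux_div_y, accumulation, dt):
    """Advance ice thickness one step: dH/dt = -div(flux) + accumulation.

    Every quantity the routine needs is named in the signature, so a
    reader (or a coupler) can see exactly what goes in and what comes out.
    """
    new_thickness = [
        h - dt * (dfx + dfy) + dt * acc
        for h, dfx, dfy, acc in zip(thickness, flux_div_x, flux_div_y,
                                    accumulation)
    ]
    # Thickness cannot go negative.
    return [max(h, 0.0) for h in new_thickness]
```

A routine written this way is also easier to retrofit into the stand-alone, one-way, and two-way coupling modes listed above, since the coupler can supply each argument from a different source without the routine changing.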
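For the dataset-versioning goal in the last list, one simple approach (a sketch of an option, not something decided at the workshop) is to record a content checksum alongside each standard dataset, so that every model run can log exactly which bytes it read. The function and file names below are hypothetical.

```python
import hashlib
import json

def dataset_fingerprint(path, name, version):
    """Return a small provenance record for a standard input dataset.

    The SHA-256 digest of the file contents identifies the exact data a
    model run used, independent of file name or location; 'name' and
    'version' are human-readable labels maintained with the dataset.
    """
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large gridded datasets need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return {"name": name, "version": version, "sha256": sha.hexdigest()}

def log_inputs(records, logfile):
    """Dump the provenance records of all inputs used by a run to a JSON log."""
    with open(logfile, "w") as f:
        json.dump(records, f, indent=2)
```

A run script would call `dataset_fingerprint` on each input file and write the records next to the model output, giving the "who used what for which runs" audit trail the list asks for.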