Minutes 26th Feb 2010

From SourceWiki
Revision as of 11:44, 27 February 2010

Discussed Twiki: we need to register when it's back up.

AJP reported back from Boulder meeting:

Much funding in the US for ice sheet model developments, looks like we can make use of some of these, in particular solver parallelisation.

Immersed boundary technique, used in z-coord ocean models to represent ice shelf base - ocean boundary, may be useful for calving front.

Monsoon was discussed:

We currently have UM 4.7 installed on Monsoon.

We currently use UM 4.5 for the ice sheet coupling; it is not clear how much work porting the coupling to UM 4.7 would be.

A back-of-envelope calculation suggested that glimmer-cism might want around 1 GB of RAM, possibly comparable to HadCM3. DAGW and RMG established that we can allocate at least 8 GB on either the Monsoon or BC2 head nodes. We anticipate this limit might be higher on the Monsoon compute nodes (perhaps up to 64 GB), but the same on the BC2 compute nodes. In either case, it looks like we have sufficient RAM available.
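For illustration only (not part of the minutes): the runtime-allocation check RG and DAGW carried out could be sketched roughly as below. The function name and sizes are hypothetical; the real check would of course be run on the Monsoon/BC2 nodes themselves, most likely in Fortran.

```python
import array

def try_allocate(gib):
    """Attempt to allocate and touch roughly `gib` GiB as 8-byte doubles.

    Returns True if the allocation succeeds, False on MemoryError --
    a crude stand-in for probing a node's per-process memory limit.
    """
    n = int(gib * (1 << 30)) // 8          # number of 8-byte doubles
    try:
        buf = array.array('d', [0.0]) * n  # allocate and zero-fill
    except MemoryError:
        return False
    buf[0] = buf[-1] = 1.0                 # touch both ends so pages are committed
    return True
```

Calling `try_allocate(1.0)` would confirm the ~1 GB estimate fits; stepping the argument upward probes where the node's limit actually lies.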

Currently the only compiler available is xlf (an IBM compiler). Do we want others? If ifort, should we compile the UM with ifort?

Need to find out how to submit a parallel job other than via the UMUI (this might be documented on the Twiki; will look when it is back up).

Actions:

1. All register on Twiki when it is back up (expected Monday).

2. RG to install glimmer-cism-lanl on Monsoon.

3. RG to establish account request procedure for Monsoon.

4. RG to establish whether Twiki is backed up.

5. VL to talk to Jenny and Ron about organising a postdocs' response to the IT review.

6. SS to send Doxygen link to SP and JJ.

7. RG and DAGW to check for limits to runtime array size allocation (done, see above).

8. DAGW and RMG to investigate port forwarding as a way to allow easier Monsoon connectivity.