
While starting a simulation with RegCM, I executed the command

mpirun -np 2 ./bin/regcmMPI sensitivity01.in

which gives me the following output:

This is RegCM trunk
   SVN Revision:  compiled at: data : Sep  9 2021  time: 18:24:31

: this run start at  : 2021-09-27 12:04:43-0400
: it is submitted by : jorgenava
: it is running on   : jorgenava-HP-Laptop-15-dy1xxx
: it is using        : 1 processors
: in directory       : /home/jorgenava/Modelos_de_Simulacion/RegCM-master/Regrun

CPUS DIM1 = 1 CPUS DIM2 = 1

Reading model namelist file
Using default dynamical parameters.
Using default hydrostatc parameters.
Using default cloud parameter.
-------------- FATAL CALLED ---------------
Fatal in file: mod_params.F90 at line: 2699
Error reading GRELLPARAM


mod_params.F90 : 2699: 1
Abort called by computing node 0 at 2021-09-27 12:04:43.080 -0400
Execution terminated because of runtime error


MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 DUP FROM 0 with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them.



It seems that [at least] one of the processes that was started with mpirun did not invoke MPI_INIT before quitting (it is possible that more than one process did not invoke MPI_INIT -- mpirun was only notified of the first one, which was on node n0).

mpirun can only be used with MPI programs (i.e., programs that invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program to run non-MPI programs over the lambooted nodes.
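
For what it's worth, a minimal Fortran MPI program like the sketch below would let me test whether mpirun can launch something that calls MPI_INIT and MPI_FINALIZE, independently of RegCM (the file name and compiler wrapper here are my own assumptions, not anything from RegCM):

 ! mpi_hello.f90 -- sanity check only, not part of RegCM.
 ! Build (wrapper name is an assumption; with LAM/MPI it may be mpif77):
 !   mpif90 mpi_hello.f90 -o mpi_hello
 ! Run the same way as the model:
 !   mpirun -np 2 ./mpi_hello
 program mpi_hello
   use mpi            ! very old MPIs may need: include 'mpif.h'
   implicit none
   integer :: ierr, rank, nproc
   call MPI_Init(ierr)                             ! what mpirun expects each process to call
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)  ! rank of this process
   call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr) ! total number of processes
   write(*,*) 'rank ', rank, ' of ', nproc
   call MPI_Finalize(ierr)                         ! matching finalize before exit
 end program mpi_hello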


The contents of the file sensitivity01.in are as follows:

!dominio 25 km CORDEX SUDAMERICA
!GRELL/LAND-EMANUEL/OCEAN=sensit01
 &dimparam
 iy     = 278,
 jx     = 243,
 kz     = 18,
 nsg    = 1,
 /
 &geoparam
 iproj = 'ROTMER',
 ds = 30.0, 
 ptop = 5.0,
 clat = -21.11,
 clon = -60.3,
 plat = -21.11,
 plon = -60.3,
 truelatl = -30.0,
 truelath = -60.0,
 i_band = 0,
 /
 &terrainparam
 domname = 'sensit01',
 lakedpth = .false.,
 fudge_lnd   = .false.,
 fudge_lnd_s = .false.,
 fudge_tex   = .false.,
 fudge_tex_s = .false.,
 dirter = '/home/jorgenava/Modelos_de_Simulacion/RegCM-master/Regrun/input',
 inpter = '/home/jorgenava/Modelos_de_Simulacion/Globedat/',
 /
 &debugparam
 debug_level = 0,
 /
 &boundaryparam
 nspgx  = 12,
 nspgd  = 12,
 /
 &globdatparam
 ibdyfrq = 6,
 ssttyp = 'OI_WK',
 dattyp = 'EIN15',
 gdate1 = 2000010100,
 gdate2 = 2000013100,
 dirglob = '/home/jorgenava/Modelos_de_Simulacion/RegCM-master/Regrun/input',
 inpglob = '/home/jorgenava/Modelos_de_Simulacion/Globedat',
 /
 &globwindow
 lat0 = 0.0
 lat1 = 0.0
 lon0 = 0.0
 lon1 = 0.0
 /
 &restartparam
 ifrest  = .false. ,
 mdate0  = 2000010100,
 mdate1  = 2000010100,
 mdate2  = 2000013100,
 /
 &timeparam
 dtrad   =    30.,
 dtabem  =    18.,
 dtsrf   =   600.,
 dt      =   30.,
 /
 &outparam
 ifsave  = .false. ,
   savfrq  =    0.,
 ifatm  = .true. ,
   atmfrq  =     6.,
 ifrad   = .false. ,
   radfrq  =     6.,
 ifsrf   = .true. ,
 ifsub   = .false. ,
   srffrq  =     3.,
 ifchem  = .false.,
   chemfrq =     6.,
 dirout='/home/jorgenava/Modelos_de_Simulacion/RegCM-master/Regrun/output',
 /
 &physicsparam
 iboudy  =          5,
 ibltyp  =          1,
 !idiffu   =          1, ! Diffusion scheme
 icup_lnd =         2,
 icup_ocn =         4,
 ipptls  =          1,
 iocnflx =          2,
 ipgf    =          0,
 iemiss  =          0,
 lakemod =          0,
 ichem   =          0,
 scenario = 'A1B',
 idcsst = 0,
 iseaice = 0,
 idesseas = 0,
 iconvlwp = 0,
 /
 &subexparam
 qck1land  = 0.0005,  ! Autoconversion Rate for Land
 qck1oce   = 0.0005,  ! Autoconversion Rate for Ocean
 gulland   = 0.65,    ! Fract of Gultepe eqn (qcth) when prcp occurs (land)
 guloce    = 0.30,    ! Fract of Gultepe eqn (qcth) for ocean
 cevaplnd  = 1.0e-5,  ! Raindrop evap rate coef land [[(kg m-2 s-1)-1/2]/s]
 cevapoce  = 1.0e-5,  ! Raindrop evap rate coef ocean [[(kg m-2 s-1)-1/2]/s]
 caccrlnd  = 6.0,     ! Raindrop accretion rate land  [m3/kg/s]
 caccroce  = 4.0,     ! Raindrop accretion rate ocean [m3/kg/s]
 conf      = 1.00,    ! Condensation efficiency 
 /
! &subexparam
! qck1land =   .250E-03,
! qck1oce  =   .250E-03,
! cevaplnd =   .100E-02,
! caccrlnd =      3.000,
! cftotmax =      0.75,

 /
 &grellparam
 igcc        = 1,      ! Cumulus closure scheme
                       ! 1 => Arakawa & Schubert (1974)
                       ! 2 => Fritsch & Chappell (1980)
 gcr0        = 0.0020, ! Conversion rate from cloud to rain
 edtmin      = 0.20,   ! Minimum Precipitation Efficiency land
 edtmin_ocn  = 0.20,   ! Minimum Precipitation Efficiency ocean
 edtmax      = 0.80,   ! Maximum Precipitation Efficiency land
 edtmax_ocn  = 0.80,   ! Maximum Precipitation Efficiency ocean
 edtmino     = 0.20,   ! Minimum Tendency Efficiency (o var) land
 edtmino_ocn = 0.20,   ! Minimum Tendency Efficiency (o var) ocean
 edtmaxo     = 0.80,   ! Maximum Tendency Efficiency (o var) land
 edtmaxo_ocn = 0.80,   ! Maximum Tendency Efficiency (o var) ocean
 edtminx     = 0.20,   ! Minimum Tendency Efficiency (x var) land
 edtminx_ocn = 0.20,   ! Minimum Tendency Efficiency (x var) ocean
 edtmaxx     = 0.80,   ! Maximum Tendency Efficiency (x var) land
 edtmaxx_ocn = 0.80,   ! Maximum Tendency Efficiency (x var) ocean
 shrmin      = 0.30,   ! Minimum Shear effect on precip eff. land
 shrmin_ocn  = 0.30,   ! Minimum Shear effect on precip eff. ocean
 shrmax      = 0.90,   ! Maximum Shear effect on precip eff. land
 shrmax_ocn  = 0.90,   ! Maximum Shear effect on precip eff. ocean
 pbcmax      = 50.0,   ! Max depth (mb) of stable layer b/twn LCL & LFC
 mincld      = 150.0,  ! Min cloud depth (mb).
 htmin       = -250.0, ! Min convective heating
 htmax       = 500.0,  ! Max convective heating
 skbmax      = 0.4,    ! Max cloud base height in sigma
 dtauc       = 30.0    ! Fritsch & Chappell (1980) ABE Removal Timescale (min)
 /
 &emanparam
 elcrit_ocn = 0.0011D0,
 elcrit_lnd = 0.0011D0,
 coeffr     = 1.0D0,
 /
 &holtslagparam
 /
 &clm_inparm
 fpftcon     = 'pft-physiology.c130503.nc',
 fsnowoptics = 'snicar_optics_5bnd_c090915.nc',
 fsnowaging  = 'snicar_drdt_bst_fit_60_c070416.nc',
 /
 &clm_soilhydrology_inparm
 /
 &clm_hydrology1_inparm
 /
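
The &grellparam block above is the one the read fails on. As I understand it, RegCM parses these blocks as standard Fortran namelists, so the error amounts to a nonzero iostat from a namelist read; a stripped-down sketch of that kind of read (my own illustration with only two of the variables, not the actual mod_params.F90 code) would look like this:

 ! nml_check.f90 -- illustration only, not RegCM code.
 ! A namelist read returns a nonzero iostat when the group's contents
 ! cannot be parsed (for example, an entry that is not a member of the
 ! declared group, or a malformed line); RegCM then prints
 ! "Error reading GRELLPARAM" and aborts.
 ! (Note: because this sketch declares only two members, running it on
 ! the full file would itself trip the "not a member" case.)
 program nml_check
   implicit none
   integer :: igcc = 1       ! values for illustration only
   real    :: gcr0 = 0.0020
   integer :: ierr
   namelist /grellparam/ igcc, gcr0
   open(10, file='sensitivity01.in', status='old', action='read')
   read(10, nml=grellparam, iostat=ierr)
   if (ierr /= 0) then
     write(*,*) 'Error reading GRELLPARAM, iostat = ', ierr
   else
     write(*,*) 'grellparam read OK: igcc =', igcc, ' gcr0 =', gcr0
   end if
   close(10)
 end program nml_check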

What could be causing this problem?

EDIT: If instead I use the following command:

mpirun -n 4 ./bin/regcmMPI sensitivity01.in

then I get the following output:

-----------------------------------------------------------------------------
Synopsis:       mpirun [options] <app> 
                mpirun [options] <where> <program> [<prog args>]

Description: Start an MPI application in LAM/MPI.

Notes:      [options]    Zero or more of the options listed below
            <app>        LAM/MPI appschema
            <where>      List of LAM nodes and/or CPUs (examples below)
            <program>    Must be a LAM/MPI program that either invokes
                         MPI_INIT or has exactly one of its children
                         invoke MPI_INIT
            <prog args>  Optional list of command line arguments to <program>

Options:    -c <num>         Run <num> copies of <program> (same as -np)
            -client <rank> <host>:<port>
                             Run IMPI job; connect to the IMPI server <host>
                             at port <port> as IMPI client number <rank>
            -D               Change current working directory of new
                             processes to the directory where the
                             executable resides
            -f               Do not open stdio descriptors
            -ger             Turn on GER mode
            -h               Print this help message
            -l               Force line-buffered output
            -lamd            Use LAM daemon (LAMD) mode (opposite of -c2c)
            -nger            Turn off GER mode
            -np <num>        Run <num> copies of <program> (same as -c)
            -nx              Don't export LAM_MPI_* environment variables
            -O               Universe is homogeneous
            -pty / -npty     Use/don't use pseudo terminals when stdout is a tty
            -s <nodeid>      Load <program> from node <nodeid>
            -sigs / -nsigs   Catch/don't catch signals in MPI application
            -ssi <n> <arg>   Set environment variable LAM_MPI_SSI_<n>=<arg>
            -toff            Enable tracing with generation initially off
            -ton, -t         Enable tracing with generation initially on
            -tv              Launch processes under TotalView Debugger
            -v               Be verbose
            -w / -nw         Wait/don't wait for application to complete
            -wd <dir>        Change current working directory of new processes to <dir>
            -x <envlist>     Export environment vars in <envlist>

Nodes:      n<list>, e.g., n0-3,5
CPUS:       c<list>, e.g., c0-3,5
Extras:     h (local node), o (origin node), N (all nodes), C (all CPUs)

Examples:   mpirun n0-7 prog1
            Executes "prog1" on nodes 0 through 7.

            mpirun -lamd -x FOO=bar,DISPLAY N prog2
            Executes "prog2" on all nodes using the LAMD RPI.
            In the environment of each process, set FOO to the value
            "bar", and set DISPLAY to the current value.

            mpirun n0 N prog3
            Run "prog3" on node 0, *and* all nodes.  This executes *2*
            copies on n0.

            mpirun C prog4 arg1 arg2
            Run "prog4" on each available CPU with command line
            arguments of "arg1" and "arg2".  If each node has a
            CPU count of 1, the "C" is equivalent to "N".  If at
            least one node has a CPU count greater than 1, LAM
            will run neighboring ranks of MPI_COMM_WORLD on that
            node.  For example, if node 0 has a CPU count of 4 and
            node 1 has a CPU count of 2, "prog4" will have
            MPI_COMM_WORLD ranks 0 through 3 on n0, and ranks 4
            and 5 on n1.

            mpirun c0 C prog5
            Similar to the "prog3" example above, this runs "prog5"
            on CPU 0 *and* on each available CPU.  This executes
            *2* copies on the node where CPU 0 is (i.e., n0).
            This is probably not a useful use of the "C" notation;
            it is only shown here for an example.

Defaults: -c2c -w -pty -nger -nsigs

What could the problem be?
