-
Hi @mcthreems Thanks for your patience with a reply. Several LIS team members were on leave in July/August, and we have lost some staff. I'm not sure why you are getting this error after ~8 months of simulation. Did you check all of your lislog files for a message? Again, your configure.lis file might help as well. It's also possible that your simulation has run out of memory and crashed.
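If it helps, one quick way to sweep every lislog file at once is a small one-off script like the sketch below. This is not LIS tooling, and the lislog.NNNN naming is an assumption, so adjust the glob pattern to whatever your run directory actually contains.

```python
# Sketch only: scan all lislog files in the run directory for messages that
# commonly precede a crash. The "lislog.*" naming is an assumption.
import glob
import re

pattern = re.compile(r"ERR|WARN|STOP|NaN|abort", re.IGNORECASE)

for path in sorted(glob.glob("lislog.*")):
    with open(path, errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.rstrip()}")
```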
-
I attempted to run a simulation using a customized land cover input file, based on the igbp.bin file included in the distribution. My input is based on the MODIS land cover data for 2013. I was unable to find the MODIS files that match the built-in MODIS option available in the model, so I tried this as an alternative approach. The LDT processing of the customized file completes without errors, but when running LIS with the LDT output, the model becomes unstable after ~8 months and crashes. There is no error message or acknowledgement of the crash in the log file; however, the terminal output does print an MPI error (copied below). I have attached the ldt config, lis config, land cover input file, and log file. My main question is whether there's an obvious issue with the custom input file, and if so, how should I correct it?
discussion_zip.zip
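As a first diagnostic on the custom input itself, a minimal sanity check of the raw land cover values could look like the sketch below. The file name, grid dimensions, data type, and byte order here are all placeholders, not values from the attached files; they must match what the landcover entries in ldt.config expect.

```python
# Sketch only: sanity-check a flat-binary land cover file before LDT ingests it.
# File name, dtype, byte order, and grid shape are placeholders -- set them to
# match the landcover configuration in ldt.config.
import numpy as np

nrows, ncols = 600, 1440                      # placeholder grid dimensions
data = np.fromfile("custom_igbp_2013.bin", dtype="<f4").reshape(nrows, ncols)

print("min/max:", data.min(), data.max())
print("unique values (first 30):", np.unique(data)[:30])

# IGBP-style maps should only carry integer class codes (1-17 for standard IGBP,
# up to 20 for IGBP-NCEP variants, plus an undefined/fill value). Fractional values,
# negatives, or fill values such as -9999 leaking into land cells are red flags.
expected = np.arange(0, 21)
bad = ~np.isin(data, expected)
print("cells outside the expected class range:", int(bad.sum()))
```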
MPI Error Message:
emitted longwave <0; skin T may be wrong due to inconsistent
input of SHDFAC with LAI
1 1 SHDFAC= 0.656851172 VAI= 3.49986291 TV= 332.304230 TG= 418.069336
LWDN= 324.148468 FIRA= -2471.71777 SNOWH= 0.00000000
STOP in Noah-MP
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_DIVIDE_BY_ZERO
mpirun has exited due to process rank 1 with PID 0 on
node mz-dtn exiting improperly. There are three reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

3. this process called "MPI_Abort" or "orte_abort" and the mca parameter
orte_create_session_dirs is set to false. In this case, the run-time cannot
detect that the abort call was an abnormal termination. Hence, the only
error message you will receive is this one.
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
You can avoid this message by specifying -quiet on the mpirun command line.
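The Noah-MP stop above is complaining that the vegetation fraction (SHDFAC, derived from greenness) is inconsistent with the LAI it sees, which is the kind of mismatch a problematic land cover map can introduce. One way to hunt for it is to cross-check the LDT-generated parameter file. The file name, variable names, array layouts, and class codes in the sketch below are assumptions, not confirmed LIS conventions; verify them against your own LDT output with `ncdump -h` and your classification table.

```python
# Sketch: flag cells where greenness is high but the dominant land cover class is
# non-vegetated -- the sort of SHDFAC/LAI mismatch Noah-MP aborts on.
# File name, variable names, array layouts, and class codes are assumptions.
import numpy as np
from netCDF4 import Dataset

with Dataset("lis_input.d01.nc") as nc:
    gvf = np.asarray(nc.variables["GREENNESS"][:])   # assumed (month, y, x) greenness fraction
    lc = np.asarray(nc.variables["LANDCOVER"][:])    # assumed (class, y, x) tile fractions

max_gvf = gvf.max(axis=0)            # peak greenness over the year, per cell
dominant = lc.argmax(axis=0) + 1     # assumes classes stored in order 1..N along axis 0

# Placeholder IGBP codes for urban (13), barren (16), water (17); adjust to your scheme.
non_veg = np.isin(dominant, [13, 16, 17])
suspect = non_veg & (max_gvf > 0.5)
print("cells with high greenness on a non-vegetated class:", int(np.count_nonzero(suspect)))
```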