Logging into argon to access our server and submit jobs
"Argon" is our high performance computer (HPC) that allows us to run software using clusters of computers linked together. This allows us to have access to a much more powerful computer than we could afford to have for each individual in the lab. Nevertheless each of us has access to it!
Others use this resource as well, so think of argon as a community of scientists sharing a community of computers. Learning the expectations for use is part of being a good citizen in the local HPC community :)
- How common is it for neuroimaging labs to use HPCs? It's becoming more common. For an overview of lab computing models for neuroimaging (or computing-intensive work generally), see here.
- argon usage wiki
- Apply for access to argon here
- If you're off campus, you'll need to be logged into the VPN
- It's often helpful at first to also have our LSS server (://itf-rs-store15.hpc.uiowa.edu/vosslabhpc/) mapped to your local drive. This lets you browse the same filesystem you're accessing through argon in the terminal.
- On your local computer, open a shell terminal
- At the prompt, type `ssh [email protected]` (replace `hawkid` with your own HawkID)
- Complete the two-step security login when prompted
- The login message includes information & links about argon usage
- Get to know where you are: type `pwd` to see the present working directory and `ls` to list the contents of the directory (see the example session after this list)
- Access our server by going to `/Shared/vosslabhpc/`: type/paste `cd /Shared/vosslabhpc/`
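For reference, a first session after logging in might look like the sketch below. The exact output will vary; your home directory path depends on your account setup:

```bash
# Check where you land after logging in
pwd    # prints your present working directory (your home directory at first)
ls     # lists the contents of that directory

# Move to the lab's shared server and look around
cd /Shared/vosslabhpc/
ls
```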
You can now run software on data stored on our server, using the computing power of the HPC! But before you can do that, you need to learn a little more about how software is run on a computing cluster. Because it's a shared community resource, it works differently from running software interactively on your local computer. The basic components you need are:

1. The concept of a "job" submission: you submit what you want to do, written as a shell script, to the cluster's scheduler.
2. Commands in that shell script that specify how much computing power you need, who to contact when issues come up, and how to load the software you need while running inside the cluster ecosystem (sketched below).
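As a concrete illustration, here is a minimal sketch of what a job script could look like, assuming argon's SGE-style scheduler and `module` system. The queue name, email address, and module name are placeholders, so check the argon usage wiki for the current values:

```bash
#!/bin/bash
#$ -N example_job            # a name for the job
#$ -q UI                     # queue to run on (placeholder; check the argon wiki)
#$ -pe smp 4                 # request 4 cores
#$ -M [email protected]   # who to email about this job (placeholder address)
#$ -m bea                    # send email when the job begins, ends, or aborts
#$ -cwd                      # run from the directory you submit from

# Load the software you need inside the cluster environment
# (module name is a placeholder; `module avail` lists what's installed)
module load fsl

# The actual work: run commands on data stored on our server
cd /Shared/vosslabhpc/
echo "Job running on $(hostname)"
```

You would submit a script like this with `qsub myjob.sh` and check on it with `qstat -u hawkid`.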