Primary documentation: https://docs.rcc.uchicago.edu/
Common pitfalls (worth reading early): https://
Accounts
Reference: https://
Request a general RCC account
Associate the account with a PI. For access to the Schmidt nodes, use pi-dfreedman.
Connecting
Reference: https://
Open OnDemand: https://docs.rcc.uchicago.edu/open_ondemand/open_ondemand/
VS Code: https://docs.rcc.uchicago.edu/software/apps-and-envs/vscode/main/
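A minimal first connection, assuming standard SSH access to a Midway3 login node (the hostname here is an assumption based on the cluster name; your CNetID is the username):

    # Log in to a Midway3 login node (CNetID password + two-factor prompt)
    ssh your-cnetid@midway3.rcc.uchicago.edu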
Software
Software catalog: https://docs.rcc.uchicago.edu/software/
Compilers: https://docs.rcc.uchicago.edu/software/compilers/
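Cluster software is exposed through environment modules. A quick sketch (the module name is illustrative; check what is actually installed):

    # List available software, load a compiler, confirm the environment
    module avail
    module load gcc      # illustrative; pick a specific version from `module avail`
    module list          # show what is currently loaded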
Storage
Docs: https://
Home: /home/$USER (~30 GB). Configuration and small scripts; private and cluster-specific.
Scratch: /scratch/midway3/$USER (~100 GB). Temporary job data and active processing.
Node-local scratch: found via the $TMPDIR / $SLURM_TMPDIR environment variables, typically /tmp/jobs/${SLURM_JOB_ID}. Deleted when the job ends, so copy results out before the job finishes (see the staging sketch below).
Check quota:
quota -u $USER
Copying data: scp/sftp, rsync, Samba, Globus (recommended for large data); see the rsync example below.
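A sketch of staging through node-local scratch inside a job script (filenames and the processing step are illustrative):

    # Inside a Slurm job: stage input to fast node-local scratch,
    # work there, then copy results back before the job exits
    cp /scratch/midway3/$USER/input.dat "$SLURM_TMPDIR"/
    cd "$SLURM_TMPDIR"
    ./process input.dat -o results.dat        # illustrative workload
    cp results.dat /scratch/midway3/$USER/    # node-local files vanish at job end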
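For scp/rsync transfers from a workstation, a minimal example (the login hostname is an assumption, and paths are illustrative):

    # Push a local directory to your cluster scratch space
    rsync -avP ./dataset/ your-cnetid@midway3.rcc.uchicago.edu:/scratch/midway3/your-cnetid/dataset/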
Running jobs
Key points:
Login nodes are for editing and submitting jobs, not compute
Jobs on compute nodes generally do not have internet access
Common Schmidt partition: schmidt-gpu. Max wall time is often 36 hours (longer by request); see the sample script below.
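A minimal batch-script sketch for the Schmidt GPU partition, using standard Slurm directives (resource values and the workload are illustrative; the partition and account names are from the sections above):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=schmidt-gpu
    #SBATCH --account=pi-dfreedman     # PI account from the Accounts section
    #SBATCH --gres=gpu:1               # request one GPU (illustrative)
    #SBATCH --time=24:00:00            # stay under the 36-hour cap
    #SBATCH --output=%x-%j.out

    module load cuda                   # illustrative; see `module avail`
    ./run_experiment                   # illustrative workload

Submit from a login node with sbatch. Since compute nodes generally have no internet access, fetch data and dependencies before submitting.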
Partitions: https://docs.rcc.uchicago.edu/partitions/
Containers (Singularity): https://docs.rcc.uchicago.edu/software/apps-and-envs/singularity/
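A minimal container invocation (the image path and command are hypothetical; --nv exposes the host GPUs inside the container):

    # Run a command inside a Singularity image, with GPU passthrough
    singularity exec --nv /path/to/image.sif python train.py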
Allocations
Reference: https://
Allocations cover:
Service Units (compute)
Storage
Support
Walk-in Lab (Regenstein): Monday–Friday, 9am–5pm
Phone: 773-702-3374
Email: info@rcc.uchicago.edu