/scratch is a shared high-performance storage system on Flux that provides access to large amounts of disk for short periods of time at much higher speed than /home.
/scratch is temporary storage only, and is purged periodically by ARC admins.
Directories in /scratch
Upon the creation of a new Flux project, a directory will be created in /scratch for each user with access to that project. The paths will be of the form /scratch/projectname_flux/username. These directories are owned by the user and their default group, and are set up so that their default UNIX file permissions are 0700.
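As a quick way to confirm these defaults, the minimal Python sketch below reads a directory's permission bits; the path shown is a hypothetical placeholder, so substitute your own project and login ID.

```python
import os
import stat

# Hypothetical path; substitute your own project and login ID.
path = "/scratch/projectA_flux/YOUR_LOGIN_ID"

# Mask off the file-type bits, leaving only the permission bits.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(f"{path}: {oct(mode)}")  # expect 0o700: owner rwx, no group/other access
```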
Upon request of someone with access to make changes to the project, we will modify the project's user directories within /scratch in the following way:
- Create a UNIX group whose membership matches the users with access to the project
- Set the group ownership of the user directories under the project's root within /scratch to this new group
- Set the permissions on these directories to 2750 (see the sketch below)
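Mode 2750 is the setgid bit (2) on top of rwxr-x--- (750): the owner keeps full access, members of the project group can read and traverse the directory, others have no access, and the setgid bit makes newly created files inherit the project group. The sketch below shows the equivalent change in Python; on Flux this step is performed by ARC, and both the path and the GID here are assumed placeholders.

```python
import os
import stat

# Hypothetical values; on Flux, ARC makes this change, not individual users.
path = "/scratch/projectA_flux/YOUR_LOGIN_ID"
project_gid = 12345  # GID of the UNIX group created for the project (assumed)

os.chown(path, -1, project_gid)       # -1 leaves the owner unchanged
os.chmod(path, stat.S_ISGID | 0o750)  # 2750: setgid + owner rwx, group r-x
```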
In order to allow /scratch to be controlled systematically, other modifications to the root of a project's /scratch directory are not permitted.
User Directory Example
If you have access to Flux projects with the names projectA_flux and projectB_flux, your directories in /scratch will be:
/scratch/projectA_flux/YOUR_LOGIN_ID
/scratch/projectB_flux/YOUR_LOGIN_ID
Be careful to put data related to a particular project only in that project's area, as it may be deleted when the allocations associated with that project expire. See Policies below for details.
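Because every path follows the /scratch/projectname_flux/username pattern, you can enumerate your own scratch directories with a short sketch like this; the glob pattern is an assumption derived from the naming convention above.

```python
import getpass
import glob

# List this user's directory in every *_flux project area.
user = getpass.getuser()
for path in sorted(glob.glob(f"/scratch/*_flux/{user}")):
    print(path)
```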
Policies
- Only data for currently running jobs should be left on /scratch. The system is not backed up and is vulnerable to data loss.
- Data that has not been accessed in the past 90 days will be automatically deleted.
- 14 days after an allocation expires, the top-level directories will be made unreadable, provided there are no currently running jobs (otherwise, on the day after all of the jobs associated with that project have completed).
- 60 days after an allocation expires, the directory and its contents will be permanently removed.
- If an allocation is renewed within 60 days, the permissions on the directory will be restored and the project members will again have access to the data therein.
- Users should clean up their data often; old data will be removed to maintain working space. Important data should be moved to /home or AFS to avoid loss. A sketch for finding at-risk files follows this list.
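To see which of your files are at risk under the 90-day rule, a minimal sketch like the one below walks a directory and flags files whose last access time (atime) is older than 90 days; it assumes the purge keys on atime, and the root path is a placeholder.

```python
import os
import time

CUTOFF = time.time() - 90 * 24 * 3600  # 90 days ago, in seconds

# Hypothetical path; substitute your own scratch directory.
root = "/scratch/projectA_flux/YOUR_LOGIN_ID"

for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.stat(path).st_atime < CUTOFF:
                print(path)  # candidate for removal under the 90-day policy
        except OSError:
            pass  # file disappeared mid-walk; skip it
```

The script only reports candidates; actually deleting files or moving them to /home or AFS is left to the user.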
Details
The filesystem uses Lustre, a cluster file system that scales to hundreds of GB/s of I/O. Flux's implementation comprises 2 metadata servers (for redundancy) and 4 storage servers (for redundancy, performance, and capacity), each with 4 storage targets, providing a total usable storage space of 1.5 PB. Measured single-client performance over InfiniBand is 1 GB/s; total filesystem performance is 5 GB/s under optimal conditions.
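As a rough sanity check on those figures, assuming the 1.5 PB is spread evenly across the storage targets:

```python
# 4 storage servers x 4 storage targets each = 16 OSTs (from the text above)
osts = 4 * 4
total_tb = 1.5 * 1000  # 1.5 PB in TB (decimal units assumed)
print(f"~{total_tb / osts:.0f} TB usable per storage target")  # ~94 TB
```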