Locker is a cost-optimized, high-capacity, large-file storage service for research data. Locker provides high-performance for large files, and allows investigators across U-M Ann Arbor to connect their data to computing resources necessary for their research, including U-M’s HPC clusters.
Locker can only be used for research data. It is designed for data that does not require frequent access, such as data at the end of its active life cycle. It is tuned for large files (1 MB or greater); large numbers of small files should not be stored on the service without first being aggregated in some fashion (for example, with tar). Locker can be used in combination with the Globus data management and sharing system for hosting and sharing data with external collaborators and institutions.
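As a minimal sketch of the aggregation step mentioned above, many small files can be bundled into a single archive with tar before being copied to Locker (the directory and file names below are illustrative):

```shell
# Sketch: bundle many small files into one archive before storing on Locker.
# Directory and file names are illustrative.
mkdir -p small_files
for i in 1 2 3; do echo "data $i" > "small_files/file_$i.txt"; done

# Aggregate into a single compressed tar archive
tar -czf small_files.tar.gz small_files/

# List the archive contents to verify, without extracting
tar -tzf small_files.tar.gz
```

This turns many small-file entries into one large object, which fits Locker's large-file tuning and file-count quota.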
- Unreplicated: $2.28 per terabyte per month ($27.36 per year).
- Replicated: $4.53 per terabyte per month ($54.37 per year).
- A U-M shortcode is required to order this service.
- Not sure if this is the right storage service for you? Try the ITS Data Storage Finder to find storage solutions to meet your needs.
Visit our Ordering Storage page to see an overview of what is needed to set up a new Locker volume. Use your UMICH (Level-1) credentials and Duo two-factor authentication to log in via the ITS Service Request (SRS) Portal or the ITS General Computing provision page.
Information about the SRS and how to use it can be found throughout the SRS system.
Here are some additional resources:
Requesting or Modifying a Locker Storage Volume
Globus Server Endpoint
Locker can be made available on existing ARC Globus servers to provide high-performance transfers, data sharing, and access to Locker from off campus. To access Locker via Globus, request that your Locker volume be added to Globus.
ARC Compute System Support
Locker can be accessed from any ARC compute service that supports the same data classifications as your export. To have your Locker export added to an ARC resource, contact us with the export name and system name. At a minimum, Locker will be available on all login and data transfer nodes.
Mounts will be located at
Research groups may also request system group creation to control group access to Locker volumes.
Replication – (Recommended) Optional second copy of all data in a different geographic location.
Snapshots – (Highly Recommended) Tracking of how data in a volume changes over time, allowing users to recover deleted, modified, or otherwise damaged data.
Access snapshots at:
Mounting on Windows CIFS
Instructions provided when provisioned
Mounting on Linux NFS
Instructions provided when provisioned
Mounting on Apple OSX
Instructions provided when provisioned
Storage Resource Software
If you are unsure which of our storage services should host your data, we provide software you can download and run to analyze your files: it reports how much of your data is stored in large files, how much of your data has been accessed recently, and the distribution of file sizes and access times. Use the Storage Guidance Tool to find file statistics and get a storage recommendation.
This software does not examine the contents of any data files; it only scans file attributes, and it does not store any file names after scanning the filesystem.
If you have any questions about this software, email us at firstname.lastname@example.org. If you are unsure about any of the recommendations the tool gives you, contact us at email@example.com.
Group Access Controls
Linux Set GID
Setting the Set GID (SGID) bit on a directory forces all files created in that directory to inherit the group of the parent directory, even if the creating user's primary or effective group is different. Combined with the creation of a group on shared systems, this ensures that all new files are owned by, and accessible (by default) to, members of that group.
# List the groups you belong to
groups
# Set the directory's group ownership
chgrp <groupname> folder
# Set the SGID bit so new files inherit the group
chmod g+s folder
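A quick way to confirm the SGID bit took effect is to inspect the directory's permission string (the directory name below is illustrative):

```shell
# Sketch: create a shared directory and verify the SGID bit is set.
# "shared_folder" is an illustrative name.
mkdir -p shared_folder
chmod g+s shared_folder

# The group execute position shows "s" when SGID is set, e.g. drwxr-sr-x
ls -ld shared_folder
```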
Windows AD Groups
Small File Limitation
Locker's target audience is research projects with massive data volumes stored in large files. Because of this design, each 1 TB of Locker capacity provides a quota of only 1 million files (e.g., 10 TB provides 10 million files), which works out to an average file size of 1 MB.
Sensitive Data (ePHI, HIPAA, ITAR, EAR, CUI)
Locker is not currently supported for PHI or other sensitive data types. It is scheduled to be reviewed for such support at a later date.
Abuse of Locker, whether intentional or not, may result in performance or access being limited to preserve performance and access for other users. If this happens, staff will contact the affected users to engineer a solution.
Frequently Asked Questions
Q: How do I Check Locker Space and File Usage?
A: In a Linux or OSX terminal, use:
Space: df -h <mount-path>
Files: df -h -i <mount-path>
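For example (using /tmp as a stand-in for a real Locker mount path):

```shell
# Sketch: check space and file (inode) usage on a mounted filesystem.
# /tmp stands in for your actual Locker mount path.
df -h /tmp      # human-readable space usage
df -h -i /tmp   # inode usage, i.e. file-count quota consumption
```

The inode report matters on Locker because the file-count quota (1 million files per TB) can be exhausted before the space quota.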
Q: Can Locker be Mounted on All ARC-TS Cluster Compute Nodes?
A: Currently we do not allow Locker to be mounted by very large numbers of clients. This could change in the future, so let us know if this would help you. For now, we recommend using Globus to stage data between cluster scratch and Locker between runs; Globus provides a CLI so you can script transfers.
Q: Can I Simultaneously Access Locker from Linux and Windows?
A: Currently Locker supports NFS (Linux) or CIFS (Windows); Apple OSX supports both. Simultaneous NFS and CIFS access is known as multi-protocol access. Because Linux and Windows have different permission schemes, this is complex to manage; we do not currently support it on Locker, but we do support it on Turbo. As a workaround, we recommend using Globus to ease data movement between Locker and systems that cannot mount it natively.
Q: Why can’t we use Locker as general purpose storage?
A: To maintain performance, encryption, professional support, and a low cost, Locker's design does not make it well suited for general purpose primary storage. For general purpose storage, see the Turbo and MiStorage services.
Q: I deleted data but Locker still reports full?
A: Your export likely has snapshots enabled. Snapshots store changes to Locker exports over time, so deleted data is simply 'moved' into a snapshot. Snapshots eventually age out and free space on their own, but snapshot consumption does count against the volume space used. To delete or disable snapshots and free space early, contact support.
Q: I have free space but Locker reports full?
A: Likely you are at your file quota and your average file size is smaller than 1 MB. This use case is outside Locker's design, and the small files should be moved to another storage service.
Q: I don’t see my .snapshots folder?
A: Your volume might not have snapshots enabled. If it does, the folder is hidden; in Linux and OSX terminals, use ls -a to view all files, including hidden ones. How to show hidden files in the OSX and Windows user interfaces varies by version and can be found in their documentation and online.
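A minimal illustration of how dot-prefixed entries behave in a terminal (the directory names below are illustrative, not your actual volume):

```shell
# Sketch: hidden entries (dot-prefixed, like a snapshots folder) only
# appear when listing with the -a flag. Names are illustrative.
mkdir -p demo_volume/.snapshots
touch demo_volume/data.txt

ls demo_volume      # shows only data.txt
ls -a demo_volume   # also shows . .. and .snapshots
```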
Q: My volume shows 2x the size I requested!
A: The system Locker is built on tracks all copies of data in its file system. If a volume requests replication (two copies of all data), the reported total includes both the primary and the replica copy. Thus 1 TB of new data will consume 2 TB of Locker space.
Locker consists of two DDN GS14KX-E GRIDScaler clusters running IBM Spectrum Scale. The clusters are located in different data centers with dedicated fiber for data replication between the two sites. Each GS14KX-E cluster can hold 1,680 hard drives, for a usable capacity of 10 PB using 8 TB drives. Each hard drive is 7200 RPM, self-encrypting, and can be added to the system online. If at capacity, additional GS14KX-E clusters can be added to increase performance and capacity.
By not including dedicated metadata or flash/NVMe storage, we are able to keep the cost of Locker lower than other solutions such as Turbo. As a result, Locker will not perform well with small I/O operations and is built for capacity, which is why we offer both services. The GS14KX-E does support adding NVMe/flash for metadata and tiering at a later date, should the price of such devices become more reasonable.
Locker is directly connected to the Data Den archive via dedicated data movers, and to the ARC research network by two IBM Cluster Export Services (CES) nodes, also called protocol nodes. Each CES node has a 100 Gbps network connection, and the pair works in an active-active high-availability configuration. Outside the ARC network, performance is limited to 40 Gbps by the campus backbone.