U-M Research Computing Package – Eligibility


Sign up for the U-M Research Computing Package

The U-M Research Computing Package is available to all researchers on all campuses (Ann Arbor, Dearborn, Flint, and Michigan Medicine), including:

Faculty and Staff 

  • All professors and lecturers with an active appointment 
  • Emeritus faculty
  • Guest faculty who are visiting for one to two years
  • Principal investigators (e.g., research scientists) on self-funded research activities
  • Postdocs who have their own research grant

PhD Students

PhD students may qualify for their own UMRCP resources depending on who oversees their research and their relationship with their advisor. Students should consult with their PhD program administrator to determine their eligibility. ITS will confirm this status when a UMRCP request is submitted.

Undergraduate and Masters Students

Undergraduate and Masters students do not currently qualify for their own UMRCP. They can be added as users or administrators of another person’s UMRCP. Students can also access other ITS programs such as Great Lakes for Course Accounts, and Student Teams.

U-M Research Computing Package – FAQ


Sign up for the U-M Research Computing Package

I already have some of the ARC services listed. Do I still need to apply?

Yes. Existing ARC resources are not moved under UMRCP support until the owner of those resources applies. Once the UMRCP is created, existing resources can be migrated to it without moving data or changing workflows.

Can I sign someone else up for the UMRCP?

No. To ensure that we can best meet your needs, it is important that researchers sign up directly for the allocations. It is not possible at this time to sign up on behalf of someone else.

I have multiple students/collaborators in my research team. Can they share resources?

Yes. All UMRCP and ARC resources are designed to be collaborative and multidisciplinary. UMRCP recipients can delegate both administrative and user access to anyone with a valid campus uniqname.

I used up / need more resources than the UMRCP provides. Can I get more?

Yes. All services have options to purchase additional capacity. Ways to get help are shown on the ARC Help Page; be sure to include the service name and your needs when requesting additional resources. In addition, consumable resources like High Performance Computing hours are replenished each fiscal year.

I’m a faculty member who already has the UMRCP. How do I renew?

No action is required. Your no-cost allocations automatically renew on July 1.

I’m a PhD student with my own funded research. How do I renew my UMRCP?

No action is required. Your no-cost allocations automatically renew on July 1.

I’m a PhD, Master’s, or undergraduate student, and my advisor has left the university. My research is continuing so I can finish my dissertation. What should I do?

If your faculty advisor has left the university (or is about to) but you remain, an alternative administrator (a staff or faculty member who can verify the ongoing need for the allocation while activities related to the former research group are wrapped up) must be assigned to the account so that the allocations can continue uninterrupted. See How do I delegate administrative management of my UMRCP or other ARC resources?

I’m an active faculty member but received a UMRCP off boarding message warning that my resources won’t be renewed. What do I need to do to keep them?

Most likely, your MCommunity entry is private or has another status (e.g., Retired/Emeritus) that makes your active research status ambiguous, so it needs to be confirmed. Reply to the automated message or contact arc-support@umich.edu to confirm your active status and renew your resources for the upcoming year.

How do I make changes if I have already submitted the UMRCP request form?

If you have already submitted the UMRCP request form and you want to modify it, open a ticket by sending an email to ARC Support, arc-support@umich.edu. Be sure to include the name of your project.

How do I delegate administrative management of my UMRCP or other ARC resources?

Visit the ARC Resource Management Portal. On your project overview page, look for Project Admins. Select the Add button and add a trusted university member by entering their uniqname. This is helpful when the project owner is leaving the university but the resources are still needed to allow students to finish their dissertation work, as the newly delegated admin can confirm the ongoing need each June for UMRCP renewal. Admins can make any change to your project, so be sure to add only members you trust.

U-M Research Computing Package


Sign up for the U-M Research Computing Package

The University of Michigan Research Computing Package (UMRCP), provided by ITS, is an investment in the U-M research community, offering simple, dependable access to several ITS-provided high-performance computing clusters and data storage resources.

What is the UMRCP

As a research faculty member at the University of Michigan, it’s very likely you are eligible for Advanced Research Computing (ARC) resources through the University of Michigan Research Computing Package program.

This program makes research computing resources widely available for eligible faculty and researchers. Every UMRCP is customized:

  • The specific, and total, UMRCP allocations that would be available to you depend on the selections you make in the UMRCP request form.
  • Once the requests have been approved and the resources created, each research project enrolled in our UMRCP program has one or more cost-covered annual allocations of research computing or storage resources from ARC.

Some research scientists and PhD students are also eligible for U-M Research Computing Package allocations with ARC. The UMRCP program is designed so that most U-M researchers will have access to at least one set of UMRCP resources.

What is included with the UMRCP

The U-M Research Computing Package provides:

80,000 CPU HOURS OF HIGH-PERFORMANCE COMPUTING

  • CPU credits are allocated on the Great Lakes cluster or the Armis2 protected data cluster; they can be used for standard, larger memory, or GPU resources, and are renewed every July 1. Please note: if you have an existing high-performance computing account, it cannot be included under the UMRCP. Any request for computing hours through the UMRCP requires the creation of a new account.

10 TB OF REPLICATED, HIGH-PERFORMANCE STORAGE

  • The Turbo Research Storage service can be used on many ARC/ITS, unit, Globus.org, and laboratory systems. Turbo is best for short-term, active storage that researchers access regularly. Capacity can be split across sensitive and non-sensitive science data.

100 TB OF ARCHIVE STORAGE

  • Replicated, durable storage for massive data volumes. The Data Den Research Archive storage service also supports reliable data transfer and international data sharing via Globus.org. Capacity can be split across sensitive and non-sensitive science data. Data Den is best for storing data that researchers do not need regularly, but need to retain because of NIH and NSF grant requirements.

16 GB of RAM for Secure Enclave Services

  • The Secure Enclave Services provide U-M researchers with high-performance, secure, and flexible computing environments enabling the analysis of sensitive data sets restricted by federal privacy laws, proprietary access agreements, or confidentiality requirements. The Secure Enclave Services are a virtual server environment designed for the secure hosting and analysis of restricted research data. The UMRCP provides researchers up to 16GB of RAM (memory) per machine, per month at no cost. Beginning July 1, 2023, each active virtual machine is charged a provisional rate of $7.00 per GB of RAM (memory) per month for any usage beyond 16 GB (for example, a machine running with 24 GB of RAM would be charged for the 8 GB above the threshold, or $56.00 per month).

ADDITIONAL HPC ALLOCATIONS FOR COURSEWORK

Additional allocations are available for instructors and student teams. This is not part of the UMRCP, but these allocations are covered by ITS.

How to get ARC resources through the UMRCP

The UMRCP request form allows you to choose the ARC resource options that will best meet your research needs.

Each completed set of UMRCP resources will be a customized set of research computing allocations. If you decide to submit a request for UMRCP resources, the first step will be determining which resources to select, based on your research needs. The UMRCP request form includes multiple options to choose from for each type of resource.

Please note that only the ARC resource allocations specifically requested through selections in the form will be created and allocated for your project.

  • The request form allows you to create a customized UMRCP by opting-in to the ARC services that will best meet your research needs.
  • Only the resources selected in the form will be included in your request.

Determine research needs: HPC, HPC storage, longer-term data retention requirements, data sensitivity

To make sure all ARC resource options that would best meet your research needs are selected in the UMRCP request form, please review the information on this site:

In addition to a form option to select an annual high-performance computing allocation — with the option of selecting either (or both) the Great Lakes cluster or Armis2 (HIPAA-aligned) cluster for this annual allocation — there are also high-performance computing storage and archival storage options available.

The UMRCP Requesting Resources Guide includes ARC resource overviews, as well as additional information through links, that describe the differences between the options that are available.

Depending on your research needs, you can plan on at least 30 minutes to review the information in the Requesting Resources Guide and complete the request form.

Complete the UMRCP request form

Only the eligible PI can submit the UMRCP request form:

  • The form validation process creates groups, and if the request is approved these groups and all allocated ARC resources will be linked to the person logged in — who can only be the eligible primary faculty member or researcher.

After submitting the UMRCP form an approval process is initiated. You will be contacted if any additional information is required from you for determining your eligibility.

Once approved, the creation and allocation process for each ARC resource involves multiple steps for each resource, and coordination between multiple teams.

  • As each ARC resource allocation that was requested is completed, a notification is sent to let you know that the resource is ready to be accessed.
  • The notification you receive will include information about your ARC resource.

If after reviewing the information provided in one of these notifications you have any questions, or there are any issues, please feel free to contact ARC directly.

Manage access to ARC resources provided through the UMRCP

Once UMRCP resource allocations have been created, project owners (PIs), or any additional project administrators who have been designated, can grant any person who has a fully active U-M account access to those ARC resources.

Access to ARC resources is restricted:

  • Only project owners, and any designated ARC project administrators, are permitted to manage user access.

Most ARC projects can be managed by project owners, and any designated ARC project administrators, in the Resource Management Portal application.

Turbo User Guide


Turbo Storage User Guide

Turbo is a high-capacity, fast, reliable and secure data storage service for researchers across the University of Michigan.

Turbo is configured to be easily shareable with on-campus resources such as the Great Lakes HPC cluster, as well as off-campus systems and collaborators. Researchers can purchase storage space via the Ordering Storage page after determining the option that best fits their needs. More details can be found on the Turbo Specifications page.

This user guide is intended to provide basic information and troubleshooting assistance for Turbo Research Storage. However, if at any point you have problems or questions, feel free to contact us at arc-support@umich.edu.  Please include the following information when contacting us:

  1. Name of the Turbo volume
  2. Hostname or IP address of your system
  3. Client operating system


What is Turbo?

Turbo is a high-capacity, high-performance network storage solution, providing secure and reliable data storage to users across the University of Michigan. Available to our High Performance Computing Clusters (such as Great Lakes, Lighthouse, and Armis2) as well as to local hosts in labs (or at home), Turbo enables our researchers to access and leverage their data quickly and easily.

Turbo is also designed to be highly flexible. While it is tuned for large files (roughly 1 megabyte in size or larger), it is still capable of efficiently storing small files, such as Word documents, spreadsheets, and image files. Additionally, Turbo supports the storing of sensitive data on HIPAA-compliant systems, such as the Armis2 cluster managed by ARC.

Why Use ARC Network Storage?

Now, you may be asking yourself: I can get hard drives much cheaper than Turbo storage, so why should I use ARC network storage?

  • Reliability — Turbo offers numerous levels of data protection to ensure the accessibility and integrity of user data.
  • Accessibility — Turbo storage can be accessed from multiple devices and multiple locations simultaneously.
  • Flexibility — Turbo can accommodate multiple file types of various sizes.
  • Ease of Management — changes to storage size, the addition or removal of snapshots, and many other options can easily be configured.

Storage Resource Software

If you are unsure which of our storage services should be used to host your data, we have written some software that you can download and execute to analyze your files to understand how much of your data is stored in large files, how much of your data has been accessed recently, and the distribution of file sizes and access times. Download the software from the Storage Guidance Tool.

This software doesn’t examine the contents of any data files; it merely scans file attributes. It also does not store any file names after searching through the filesystem.

If you have any questions on this software or if you are unsure about any of the recommendations the tool sends you, please contact us at arcts-support@umich.edu for assistance.


Turbo Access Methods

Turbo Research Storage is what’s called “Network Attached Storage,” or NAS for short. A NAS is simply a storage device that is available to users over the network. So, if your computer is on the University of Michigan network (or U-M VPN), you can access Turbo! The wonderful thing about a NAS is that it requires no formatting or disk management by the user. A NAS provides file-level access to storage, just as if you were browsing through the documents folder on your local computer.

  • NFS – “Network File System” is used for Unix and Linux based operating systems (this includes MacOS).
  • CIFS – “Common Internet File System” is used for Windows and MacOS operating systems.
  • Multi-Protocol – Allows for both CIFS and NFS Connections to the same Turbo Volume, providing access for both Windows and Linux Systems simultaneously. If you wish to request a multi-protocol volume, select the “Turbo Research Storage NFS” and set the “Multiprotocol” option to “Yes”


Mounting your Turbo volume

Mounting NFS Volumes

For Linux based systems:

To successfully mount NFS storage on your machine, you will need:

  1. A static hostname/IP address for your client.
  2. The client hostname added to the NFS Client Export List for this volume in the SRS.
  3. NFS utilities installed.

Mounting an NFS Volume:

>> mkdir [MOUNT LOCATION]
>> mount -t nfs [STORAGE VOLUME] [MOUNT LOCATION]

Storage Volume = [NAME].turbo.storage.umich.edu:/[NAME]

Mount Location = Local Directory the NFS Storage is attached to (usually /mnt/[NAME])
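
Putting this together, a minimal sketch for a hypothetical volume named lsa-example (the volume name and mount point are illustrative; run as root or with sudo):

>> mkdir /mnt/lsa-example
>> mount -t nfs lsa-example.turbo.storage.umich.edu:/lsa-example /mnt/lsa-example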

A key security feature of NFS storage is that access to a volume is allowed only if the requesting system is included in the NFS Client Export List for that volume. The NFS Client Export List is managed on Turbo itself by the ARC storage team; requests to add or remove clients on the NFS export list for a Turbo volume can be made through the SRS.

There are many options for mounting and configuring NFS storage (version, persistence, autofs, security, etc.). We recommend working with unit IT staff to ensure consistency, ease of access, and security compliance when accessing your Turbo storage.

Note that if you are using multiple NFS systems to access the same Turbo volume (e.g., Great Lakes and a local system), you will need to ensure that the user groups are synchronized between these systems; otherwise you will not be able to access your files as expected. If you have problems or questions about synchronizing UNIX user groups, contact your unit IT support team for assistance.

Alternatively, if your Turbo Storage was provisioned as Multi-Protocol and mounted through NFS on one system, you can safely use CIFS on any number of other systems, without issue.

Mounting CIFS Volumes

Using CIFS/SMB, Turbo is accessible to any clients on U-M networks (or the VPN with the campus profile). CIFS connections are secured entirely through their Active Directory (AD) permissions, so if you are a valid user in one of the permitted AD user groups, you will be able to mount your Turbo storage through CIFS.

For Windows:

Directions are for Windows 10

  1. Open the “This PC” Menu from the Windows “File Explorer” or Start Menu
  2. Click on “Computer” in the menu and then select “Map Network Drive”
  3. Select a Drive Letter from the Drop Down List. Choose a letter after the letter H
  4. In the “Folder” Box, enter the CIFS Volume path
    \\[NAME]-win.turbo.storage.umich.edu\[NAME]
  5. Check the “Reconnect at login” box to have your computer remember this drive mapping
  6. Click on the “Connect using a different user name” and use the following settings:
    • User Name: UMROOT\your-uniqname
    • Password: your UMICH Kerberos (Level-1) password
  7. Click the “OK” button.
  8. Click the “Finish” button.
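
The same mapping can also be done from a Windows Command Prompt. A minimal sketch, assuming the Z: drive letter is free and [NAME] is your volume name:

net use Z: \\[NAME]-win.turbo.storage.umich.edu\[NAME] /user:UMROOT\your-uniqname /persistent:yes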

For Mac:

  1. In the Finder, click on the “Go” menu and select “Connect to Server…”
  2. In the “Server Address” field, type the server name and path:
    smb://[NAME]-win.turbo.storage.umich.edu/[NAME]
  3. Click the “Connect” button.
  4. When prompted, enter your uniqname and UMICH Kerberos (Level-1) password.
  5. Click the “Connect” button.
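
Mac users can also mount the volume from a terminal. A minimal sketch, assuming a local mount point of ~/turbo (the mount point is illustrative):

mkdir -p ~/turbo
mount_smbfs //your-uniqname@[NAME]-win.turbo.storage.umich.edu/[NAME] ~/turbo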

On the HPC clusters

If you requested that your volume be mounted on any of the HPC clusters (Armis2, Great Lakes, or Lighthouse), you can access it at /nfs/turbo/<volume_name>. On the command line of any of those clusters, simply type:

cd /nfs/turbo/<volume_name>

where <volume_name> is the full name of the volume including the prefix, such as umms-jones or lsa-johnson. If you get a ‘path not found’ or ‘directory not found’ error, submit a ticket to arc-support@umich.edu and we’ll resolve the issue.


Groups and Permissions

UNIX Permissions

If you are accessing your Turbo storage through NFS, your access to files and directories is based on their UNIX permissions, which are assigned at the object level through the user and group ownership of each file. By default, only the creator of a file and the users in the group inherited from the parent directory will be able to access it.

For more information on UNIX Permissions, feel free to check out this guide, or ask your Unit IT Staff for help!
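
As a quick illustration, assuming a hypothetical Unix group named mylab that already exists on the system, you could open a project directory to your group like so:

$ chgrp -R mylab /nfs/turbo/<volume_name>/project
$ chmod -R g+rX /nfs/turbo/<volume_name>/project

The capital X grants execute (traverse) permission only on directories and on files that are already executable, which is the usual choice for shared data trees.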

Active Directory Permissions

If you are accessing your Turbo Storage through CIFS, your access to those files/folders will be based on their Active Directory (AD) Group Membership. AD Group membership is owned and managed by Unit IT Staff. If you need to change permissions or access to CIFS/SMB Volume, contact your unit IT team for assistance!

Unix Groups

In order to set appropriate permissions and provide access for multiple users on our clusters, local Unix groups must be created by ARC to properly administer your NFS Turbo volume. Any user additions or deletions for Unix groups on any ARC service must be requested by contacting us at arcts-support@umich.edu.

AD Groups

In order to set appropriate permissions and provide access for multiple users on a CIFS volume, AD groups must be created by your local IT department to properly administer your CIFS volume.

Multi-protocol volumes require advanced synchronization between your Unix groups and AD groups. Contact your unit IT staff to help build this synchronization, and we will work with them to set it up.

Any user change requests will need to be directed to your local IT department.


Snapshots and replication

What is replication?

Replication is the process of making (and maintaining) an identical duplicate copy from one storage system to another. In the case of Turbo, replicated copies are sent offsite to provide data integrity, site redundancy, and increased availability for Turbo data in the event of a systems outage.

What are Snapshots?

Snapshots are a tool provided by Enterprise Storage Systems, such as Turbo, that provide point-in-time rollback for any files/folders that are changed or deleted. Snapshots catalog changes to files/folders (on a nightly basis) and maintain copies of any files that were changed or deleted since the last snapshot. So, if a file is accidentally deleted, it may be recoverable through the previous days’ snapshots.

  • Snapshots can be requested at no additional charge, although the snapshots will count against your available capacity.
  • Your snapshots can be found in /nfs/turbo/<volume_name>/.snapshot/
  • Snapshots are available with the following options:
    • Daily snapshot, retained for 7 days (default, if selected)
    • Daily snapshot, retained for 1 day
    • Daily snapshot, retained for 3 days
    • Daily snapshot, retained for 7 days, and 1 bi-weekly snapshot
  • Snapshots are taken on a nightly basis, so if a file is created and deleted on the same day, it may not be recoverable.
  • Note: If a large volume of files is changed or deleted in a short period of time, the snapshots may consume a considerable amount of your allocated capacity.
  • If you suspect snapshots are taking up too much space, please contact arcts-support@umich.edu and we will be happy to help resolve the issue.

Recovering Lost Files

Turbo volumes that are configured with snapshots will save previous versions of files. Only files that have been snapshotted overnight are recoverable; files that are lost on the same day they were created may not be recoverable.

From Linux based clients: To recover files lost from your directory, navigate to the .snapshot directory at the root of your share.

$ cd /nfs/turbo/flux-admin/.snapshot
$ ls -1
daily_2015-08-24-_23-30
daily_2015-08-25-_23-30
daily_2015-08-26-_23-30
daily_2015-08-27-_23-30
daily_2015-08-28-_23-30
daily_2015-08-29-_23-30
daily_2015-08-30-_23-30

You can navigate to the snapshot directories and copy files back to your file share.
Note: The .snapshot directory may not be visible until you try to enter it with cd.
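
For example, to restore a single file from the August 28 snapshot shown above (the file path is illustrative):

$ cp /nfs/turbo/flux-admin/.snapshot/daily_2015-08-28-_23-30/results/data.csv /nfs/turbo/flux-admin/results/data.csv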

From Windows clients: You can recover lost files from snapshots natively:

  • Open the directory that the deleted file was held in.
  • Right-click in the directory where the file or folder was stored and select “Properties”.
  • Click on the “Previous Versions” tab when the Properties window opens.
  • A list of snapshots will be displayed.
  • Select the snapshot from which you wish to restore data.
  • In the new window, locate the file(s) you wish to restore.
  • Simply drag the file(s) or folder to their correct locations.


Sensitive Data

Turbo storage (along with the Armis2 research cluster) is certified for storing sensitive and restricted data. With full data encryption in transit and at rest, along with strict access controls, Turbo provides a full suite of technical protections for sensitive data. This includes HIPAA, FERPA, export-controlled, and most other regulated data types.

If you have specific questions or need further details with regard to Turbo’s data protection capabilities, please check out our Data Security Policies, or feel free to contact us at arcts-support@umich.edu.


Turbo Storage FAQ

How do I make changes to a volume?

  • Go to the ITS Service Request System (SRS) portal.
  • Select either Turbo NFS or Turbo CIFS, depending on the type of volume you have.
  • Select “Modify” and on the following page choose the Turbo volume you wish to modify
  • Near the bottom of the page, select “Edit.”
  • Enter in any changes you wish to make.
  • Near the bottom of the page, select “Save” when you are done.
  • The changes will be sent to the ARC Storage team so they can process the request. This can take up to a business day to complete. They will update you as soon as the changes have taken effect.

Can I store sensitive data on my volume?

Yes. All sensitive data must comply with our Data Security policies.

Is it possible to distribute charges for a volume across multiple Shortcodes?

Yes. If you are from Michigan Medicine (Med) or the College of Literature, Science, and the Arts (LSA) please contact your local unit support team to assist you with this process. If you are not in one of these units please contact ARC at arcts-support@umich.edu.


Transferring data

Globus

Note: If you have a multi-protocol or NFS Turbo volume that is mounted on Great Lakes, there is no need to set up a separate Globus endpoint for it.

  1. Install Globus Connect Personal. Select the OS you are using and follow the documentation to install Globus Connect Personal and set up a local Globus endpoint.
  2. Go to https://www.globus.org/ and click on “Login” in the upper right corner.
  3. Select “University of Michigan” as your organization in the drop-down field and select “Continue”
  4. This will prompt you for your uniqname and Level-1 password. Enter these and select “Login”, then authorize with Duo two-factor authentication
  5. You should be redirected to the “File Manager” tab on the left side of the screen
  6. Select the “Collection” field and type your local Globus endpoint.
  7. On the right side of the screen, select “Transfer or Sync to…”.
  8. Select the new blank “Collection” field and type in the Great Lakes Globus endpoint. The endpoint is umich#greatlakes.
  9. By default, you will see into your Home directory. You can change your path under the “Path” field.
  10. When you’ve selected the files you wish to transfer, select the “Start” button near the bottom of the screen. A notification for the transfer will appear in the “Activity” tab on the left side of the screen.
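
If you prefer to script transfers, the separately installed Globus CLI (the globus-cli Python package) can drive the same endpoints. A minimal sketch, with placeholder endpoint IDs (SRC_ID and DEST_ID are assumptions; look up the real UUIDs as shown):

# One-time authentication (opens a browser window)
globus login
# Look up endpoint IDs by name
globus endpoint search "umich#greatlakes"
# Start a transfer; SRC_ID and DEST_ID are the endpoint UUIDs found above
globus transfer SRC_ID:/path/to/file DEST_ID:/nfs/turbo/<volume>/file --label "laptop-to-turbo"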

Cyberduck

  1. Cyberduck can be used on Mac or Windows
  2. Open Cyberduck and click the Open connection button.
  3. Set the “Server:” field to greatlakes-xfer.arc-ts.umich.edu
  4. Set your Username: to be your uniqname.
  5. Enter your Kerberos password.
  6. Click connect.
  7. Drag and drop files between the two systems. Click the Disconnect button when completed.

WinSCP

Connecting to a remote system:

  1. Select the protocol you would like to use (sftp or scp recommended).
  2. In the host name field, type the hostname you would like to connect to (e.g., greatlakes.arc-ts.umich.edu).
  3. The port number will auto-populate based on the protocol you set.
  4. In the user name field, type your uniqname.
  5. In the password field, type your U-M Kerberos Level-1 password.
  6. Click the “Login” button.
  7. If “Continue connecting to an unknown server and add its host key to a cache?” pops up, click “Yes”.
  8. A Duo prompt will appear. Select your preferred method and authenticate with Duo.

Transferring files:

  1. The left panel of WinSCP shows your local machine, and the right panel shows the remote machine you’ve connected to.
  2. On your local machine, navigate to the file/directory you wish to transfer to the remote machine.
  3. Drag the file/directory to the remote machine’s panel. This will initiate the transfer.

SCP (via the command line)

SCP uses the basic syntax:

> scp [SOURCE LOCATION] [DESTINATION]

To copy a local file to a remote destination:

> scp /path/to/file uniqname@greatlakes-xfer.arc-ts.umich.edu:/path/to/destination

To copy a local directory to a remote destination:

> scp -r /path/to/directory uniqname@greatlakes-xfer.arc-ts.umich.edu:/path/to/destination

To copy a remote file to the local machine:

> scp uniqname@greatlakes-xfer.arc-ts.umich.edu:/path/to/file /path/to/destination

To copy a remote directory to the local machine:

> scp -r uniqname@greatlakes-xfer.arc-ts.umich.edu:/path/to/directory /path/to/destination
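
If you need restartable or incremental copies, rsync over SSH is a common alternative to scp (assuming rsync is installed on both ends; the -avz flags shown are typical, not required):

> rsync -avz /path/to/directory uniqname@greatlakes-xfer.arc-ts.umich.edu:/path/to/destination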

SFTP (via command line)

Connect to a remote server with SFTP:

> sftp uniqname@hostname
> sftp user1@greatlakes.arc-ts.umich.edu

Transfer a local file to the remote system:

> put localfile /remote-directory

Transfer a remote file to the local machine:

> get /remote-directory/remotefile

Transfer multiple local files to the remote system:

> mput local-files* /remote-directory

Transfer multiple remote files to the local machine:

> mget /remote-directory/*
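
sftp can also run non-interactively from a batch file, which is useful for scripting transfers. A minimal sketch; the batch file name and paths are illustrative:

> cat transfers.txt
put localfile /remote-directory/
get /remote-directory/remotefile
> sftp -b transfers.txt uniqname@greatlakes.arc-ts.umich.edu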


Storage policies

Maintaining the overall stability of the system is paramount to us. System availability is based on our best efforts. We are staffed to provide support during normal business hours. We try very hard to provide support as broadly as possible, but cannot guarantee support on a 24-hour-per-day basis.

Additionally, we perform system maintenance on a periodic basis, driven by the availability of software updates, staffing availability, and input from the user community. We do our best to schedule around your needs, but there will be times when the system is unavailable. Scheduled outages are announced at least one month in advance on the ARC home page; unscheduled outages are announced as quickly as we can, with as much detail as we have, on that same page. You can also follow ARC on Twitter at @umichARC.



Turbo Research Storage


Part of the UMRCP provided by ITS. Turbo is a high-capacity, reliable, secure, and fast storage solution. It is tuned for large files (roughly 1 megabyte in size or larger), but it is still capable of efficiently storing small files, such as Word documents, spreadsheets, and image files.

Turbo enables investigators across the University of Michigan to store and access data needed for their research via a local computer in a lab, office or our High Performance Computing Clusters, such as: Great Lakes, Lighthouse, and Armis2. Additionally, it supports the storing of sensitive data on HIPAA compliant systems, such as the Armis2 cluster managed by ARC.


Turbo Rates

  • 10TB Turbo storage is available as part of the no-cost U-M Research Computing Package (UMRCP). Additional allocation is available for a fee as noted below (shortcode required).
  • Unreplicated: $6.49 per terabyte per month ($77.88 per year)
  • Replicated: $13.02 per terabyte per month ($156.22 per year)
  • Not sure if this is the right storage service for you? Try the ITS Data Storage Finder to find storage solutions to meet your needs.

Get started

Visit our Ordering Storage page to see an overview of what is needed to set up a new Turbo volume. Use your UMICH (Level-1) credentials and Duo two-factor authentication to log in via the ITS Service Request (SRS) Portal or the ITS General Computing provision page.

SRS support

Information about the SRS and how to use it can be found throughout the SRS system.


If you have any questions, please contact us at arc-support@umich.edu.

Be aware that if you have a multi-protocol or NFS Turbo volume that is mounted on the Great Lakes HPC Cluster, there is no need to set up a separate Globus endpoint.

Order Service

10TB Turbo storage is available as part of the no-cost U-M Research Computing Package (UMRCP). Additional allocation is available for a fee as noted in the Rates section (shortcode required).

Order the Turbo Research Storage service via the ITS Service Request System (SRS).

For detailed instructions on filling out the order form please visit our Ordering Storage page.

Not sure if this is the right storage service for you? Try the ITS Data Storage Finder to find storage solutions to meet your needs.


Data Den Research Archive


Part of the UMRCP provided by ITS. Data Den is a service for preserving electronic data generated from research activities. It is a low-cost, highly durable storage system and is the largest storage system operated by ARC.

Storing of sensitive data (including HIPAA and FERPA) is now supported (visit the Sensitive Data Guide for full details). This service is part of the U-M Research Computing Package (UMRCP) that provides storage allocations to researchers. Most researchers will not have to pay for Data Den.

Data Den is a disk-caching, tape-backed archive optimized for data that is not regularly accessed for extended periods of time (weeks to years). Data Den can hold research data for many years, meeting grant data-retention requirements such as those of NIH and NSF.

Data Den is not meant to replace active storage services like Turbo and Locker. It is best used for data that is set aside and is not being used at all, but still needs to be stored. Data Den is best accessed through the Globus data management sharing system to move data into and out of the service. Data Den is only available in a replicated format.

Data Den can be part of a well-organized data management plan providing international data sharing, encryption, and data durability.


Data Den Rates

  • 100TB of Data Den storage is available as part of the no-cost U-M Research Computing Package (UMRCP). Additional allocation is available for a fee as noted below.
  • Data Den service rates are $20.04 per replicated terabyte (TB) per year ($1.67/TB/month).
  • Starting July 1, 2023, the new LSA Research Storage Funding Model will provide unlimited Data Den Research Archive Storage. Please see the LSA Research Storage Portal to learn more.
  • Not sure if this is the right storage service for you? Try the ITS Data Storage Finder to find storage solutions to meet your needs.

Getting Started – UPDATED 2023

  • Project admins can use the Portal to update user access lists.
  • Use the Globus.org collection “UMich ARC Non-Sensitive Data Den Volume Collection”; your path will be /<volume>.
  • Active Archive users who have archives mounted on ARC cluster login nodes will find them at /nfs/dataden/<volume>.

Requesting or Modifying a Data Den Storage Volume

Most should start with the U-M Research Computing Package rather than these instructions.

Visit the ITS Service Request System (SRS).

Please include the following:

  • Amount of storage needed (in 1TB increments)
  • MCommunity Group name (group members will receive service-related notification, and can request service changes)
  • Numeric group ID of the group that will have access to files at the top level directory.
  • Numeric user ID of person who will administer the top level directory and grant access to other users.

Globus Server Endpoint

Data Den supports the use of Globus servers to provide high performance transfers, data sharing and access to Data Den from off campus. To access Data Den via Globus, request your volume be added to Globus. The globus collection is UMich ARC Non-Sensitive Data Den Volume Collection for non-sensitive data.

Bundling Files

Because of the design of Data Den, projects will often need to be bundled to form larger single-file archives. The most common tool to do this with is tar. Tar can optionally compress the data as well, but this can take much longer. We also recommend the use of archivetar when using the ARC HPC clusters; it is also available as a container for Linux systems.

On the ARC HPC systems, running archivetar will do the needed tar bundling and upload to Data Den. The following commands will sort data into tar files of 100GB each and upload them to the selected folder on Data Den using Globus. You can also use tar, zip, or other commands manually.

cd <folder to archive>
module load archivetar
# Preview what would be archived without creating or uploading anything
archivetar --prefix my-archive --dryrun
# Bundle into tar files and upload to the Data Den volume via Globus
archivetar --prefix my-archive --destination-dir /<dataden-volume>/<folder>

The following tar command will bundle all files in a directory, store them in the file bundle.tar.bz2, and compress the archive with bzip2. It will also create a small text file, bundle.index.txt, that can be stored alongside the bundle to quickly reference which files it contains.

tar -cvjf bundle.tar.bz2 directory | tee bundle.index.txt

To extract the bundle:

tar -xvjf bundle.tar.bz2

Optionally omit -j to save time compressing, and omit -v to not print the bundle progress as it runs.

Compressing an archive can be accelerated on multi-core systems using pigz and lbzip2. The following will work on all ARC systems:

tar --use-compress-program=lbzip2 -cvf bundle.tar.bz2 directory | tee bundle.index.txt

To extract the bundle:

tar --use-compress-program=lbzip2 -xvf bundle.tar.bz2

Storage Resource Software

If you are unsure which of our storage services should be used to host your data, we have written some software that you can download and execute to analyze your files to understand how much of your data is stored in large files, how much of your data has been accessed recently, and the distribution of file sizes and access times. Use the Storage Guidance Tool to find file statistics and get a storage recommendation.

This software doesn’t examine the contents of any data files; it merely scans file attributes. It also does not store any file names after searching through the filesystem.

If you have any questions about this software, or are unsure about any of the recommendations it gives you, contact us at arc-support@umich.edu.

Policies

Small File Limitation and Optimal Size

Data Den’s underlying technology does not work well with small files. Because of this design, each 1TB of Data Den capacity provides only 10,000 files, and only files 100 MByte or larger are migrated to tape. The optimal file size ranges from 10 to 200 GBytes.

Maximum File Size

The maximum file size is 8 TByte, but files should optimally be no larger than 1 TByte. Larger archives can be split before uploading to Data Den with the split -b 200G filename command, as shown below.
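
For example, to split a large archive into 200 GByte pieces before upload and reassemble it after recall (the file names are illustrative):

# Split into 200G chunks named bundle.tar.part-aa, bundle.tar.part-ab, ...
split -b 200G bundle.tar bundle.tar.part-
# After recalling all the pieces, reassemble the original archive:
cat bundle.tar.part-* > bundle.tar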

Sensitive Data — ePHI/HIPAA/ITAR/EAR/CUI

Data Den now supports sensitive data such as HIPAA-regulated data, PHI, and other restricted data types. Refer to the Sensitive Data Guide for full details.

System Abuse

Abuse of Data Den, intentional or not (generally excessive recalls of data better suited for active storage), may result in performance or access being limited in order to preserve performance and access for other users. In the event this happens, staff will contact the users involved to engineer more appropriate solutions.

Frequently Asked Questions

Q: How do I check Data Den space and file usage?
A: Contact arc-support@umich.edu

Q: I’m getting “Disk Quota Exceeded” when uploading multiple TB of data, but I’m well below my quota. Why?
A: This means you overran the allocated cache space for your volume. This space is sized by estimating how much new data would be ingested in 48 hours. Leave the transfer running and contact arc-support@umich.edu. We will increase the cache space for the ingest, and Globus will automatically resume the transfer where it left off.

Q: How can I use Data Den in my data management plan?
A: Data Den can provide low-cost storage, and data sharing with Globus allows access to select data on Data Den from anywhere in the world, on demand. Most researchers can get Data Den at no cost through the UMRCP.

Q: Why can’t Data Den be accessed directly from hosts or my personal machine?
A: Data Den can appear as a local disk, but this is not currently allowed in most cases, as it presents many opportunities to disrupt use of the service by other users. Anyone who has more than 100TB of data and wishes to automate their data management may be interested in Data Den Active Archive and should contact arc-support@umich.edu.

Q: Can Data Den encryption be used with my restricted data?
A: Always refer to the Sensitive Data Guide as the definitive source for allowed data types. Data Den now supports sensitive data.

Q: Why can’t we use Data Den as general purpose storage?
A: Data Den’s design is inherently linear and cannot respond to user requests quickly the way general-purpose storage does. Those with massive data volumes (100TB+) may find Active Archive a good solution for controlling storage costs while keeping a portion of their data active and fast.

Q: I need to retrieve a lot (more than 50) of files from Data Den. Can you help me?
A: Yes, contact arc-support@umich.edu with the list of files you want to recall, and we will optimize the recall from tape, speeding access.

Q: I use Locker heavily and want to automate moving data as it ages to my Data Den space. Can you help me?
A: Yes. Data Den and Locker are inherently connected. Any Locker volume can potentially have a custom data migration policy automating data flow to Data Den via Active Archive. Contact arc-support@umich.edu to set up a custom migration policy.

Advanced Topics

System Configuration

Data Den consists of two IBM Spectrum Archive LTFSEE clusters. Each cluster is connected to a corresponding Locker cluster. The LTFSEE cluster consists of an IBM TS4500 tape library with 20PB of uncompressed capacity and 9 Linear Tape-Open (LTO) 8 drives. Each drive provides optimally 360 MByte/s of uncompressed performance. Each library can also add additional frames and drives for new tape technology, increasing capacity and performance.

Each cluster has a copy of all data, separated by miles, protecting it against fire and other disasters. Each cluster has 2 data movers, whose purpose is to apply policies to specific exports in Locker. As data are written to these exports, the data mover moves the largest and oldest files to tape in the TS4500 library, encrypting them in the process. The data mover leaves a 1 MByte stub, preserving the file layout and allowing the first 1 MByte of data to be read without involving the tape. When a file that has been migrated is read, the data mover instructs the TS4500 to recall the data from tape and place it back on the faster Locker disk, transparently to the user.

Thus Data Den is able to ingest new data quickly to disk and, over time, send it to tape as it ages and goes unused. If data is recalled, it remains on disk until it is again migrated by policy.

Order Service

100 TB Data Den storage is available as part of the no-cost U-M Research Computing Package (UMRCP). Additional allocation is available for a fee. Data Den service rates are $20.04 per replicated terabyte (TB) per year ($1.67/TB/month).

Go to the ITS Service Request System (SRS).

  • Amount of storage needed in 1 TB increments
  • MCommunity Group name (group members will receive service-related notification, and can request service changes)
  • Numeric group ID of the group that will have access to files at the top level directory.
  • Numeric user ID of person who will administer the top level directory and grant access to other users

Not sure if this is the right storage service for you? Try the ITS Data Storage Finder to find storage solutions to meet your needs.

Contact arc-support@umich.edu with any questions.


ARC-TS Storage


Several levels of data storage are provided with an allocation of ARC-TS HPC services, varying by capacity, I/O rate, and longevity of storage.

/tmp
  Description: Local directory unique to each node. Not shared.
  Best used for: High-speed reads and writes of small files (less than 10GB).

/home
  Description: Shared across the entire cluster. Only for use with currently running jobs. Quota of 80GB per user.
  Best used for: Currently running jobs.

/scratch
  Description: Lustre-based parallel file system shared across all Flux nodes.
  Best used for: Large reads and writes of very large data files. Checkpoint/restart files and large data sets that are frequently read from or written to are common examples, as is code that uses MPI.
  Access and policy details: ARC-TS /scratch information

AFS
  Description: AFS is a filesystem maintained and backed up by ITS. It is the only storage option available for Flux that is regularly backed up, and is therefore the most secure choice. It is only available on Flux login nodes and can provide up to 10GB of backed-up storage.
  Best used for: Storing important files. NOT available for running jobs on compute nodes.
  Access and policy details: ARC-TS AFS information

Turbo
  Description: Turbo Research Storage is a high-speed storage service providing NFSv3 and NFSv4 access. It is available only for research data. Data stored in Turbo can be easily shared with collaborators when used in combination with the Globus file transfer service.
  Best used for: Storing research data.
  Access and policy details: ARC-TS Turbo page

Long-term storage
  Description: Users who need long-term storage can purchase it from ITS MiStorage. Once established, it can be mounted on the Flux login and compute nodes.
  Best used for: Long-term storage.
  Access and policy details: ITS MiStorage page

Researchers in the Medical School and College of Literature, Science, and the Arts can take advantage of free or subsidized storage options through their respective academic units.

Locker Large-File Storage


Locker is a cost-optimized, high-capacity, large-file storage service for research data. Locker provides high performance for large files and allows investigators across U-M Ann Arbor to connect their data to the computing resources necessary for their research, including U-M’s HPC clusters.

Locker can only be used for research data. It is designed for data that does not require frequent access, for example, data that is at the end of its useful life cycle. It is tuned for large files (1MB or greater); large numbers of small files should not be stored on the service without being aggregated in some fashion (e.g., with tar). Locker can be used in combination with the Globus data management sharing system for hosting and sharing data with external collaborators and institutes.


Locker rates

  • Unreplicated: $2.28 per terabyte per month ($27.36 per year)
  • Replicated: $4.53 per terabyte per month ($54.37 per year).
  • A U-M shortcode is required to order this service.
  • Not sure if this is the right storage service for you? Try the ITS Data Storage Finder to find storage solutions to meet your needs.

Get started

Visit our Ordering Storage page to see an overview of what is needed to set up a new Locker volume. Use your UMICH (Level-1) credentials and Duo two-factor authentication to log in via the ITS Service Request (SRS) Portal or the ITS General Computing provision page.

SRS support

Information about the SRS and how to use it can be found throughout the SRS system.

Getting Started

Requesting or Modifying a Locker Storage Volume

Globus Server Endpoint

Locker can be made available on existing ARC Globus servers to provide high-performance transfers, data sharing, and access to Locker from off campus. To access Locker via Globus, request that your Locker volume be added to Globus.

ARC Compute System Support

Locker can be accessed from any ARC compute service that supports the same data classifications as your export. To have your Locker export added to an ARC resource, contact us with the export name and system name. Locker will be available on all login and data transfer nodes at a minimum.

Mounts will be located at
/nfs/locker/<export-name>/

Research groups may also request system group creation to control group access to Locker volumes.

Optional Features

Replication – (Recommended) An optional second copy of all data in a different geographic location.

Snapshots – (Highly Recommended) Tracking of how data in a volume changes over time, allowing users to recover deleted, modified, or otherwise damaged data.

Access snapshots at:
<mount-location>/.snapshots/<date>

Using Locker

Mounting on Windows CIFS
Instructions provided when provisioned

Mounting on Linux NFS
Instructions provided when provisioned

Mounting on Apple OSX
Instructions provided when provisioned

Storage Resource Software

If you are unsure which of our storage services should be used to host your data, we have written some software that you can download and execute to analyze your files to understand how much of your data is stored in large files, how much of your data has been accessed recently, and the distribution of file sizes and access times. Use the Storage Guidance Tool to find file statistics and get a storage recommendation.

This software doesn’t examine the contents of any data files; it merely scans file attributes. It also does not store any file names after searching through the filesystem.

If you have any questions about this software, or are unsure about any of the recommendations it gives you, contact us at arc-support@umich.edu.

Group Access Controls

Linux Set GID

Using Set GID (SGID) on a directory forces all files created in that directory to inherit the group of the parent directory, even if the creating user’s primary or effective group is different. The benefit of this, combined with the creation of a group on shared systems, is that all files will be created owned by, and accessible (by default) to, members of that group.

# List the groups available to you
groups
# Assign the shared group to the folder and set the SGID bit
chgrp <groupname> folder
chmod g+s folder
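
You can verify the setup afterwards; the lowercase s in the group permission bits shows SGID is set (the output line is illustrative):

ls -ld folder
# drwxr-sr-x 2 youruser groupname 4096 Jan 1 12:00 folder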

Windows AD Groups

Contact arc-support@umich.edu

Policies

Small File Limitation

Locker’s target audience is research projects with massive data volumes in large files. Because of this design, each 1 TByte of Locker capacity provides only 1 million files; e.g., 10 TByte provides 10 million files. This works out to an average file size of 1 MByte.

Sensitive Data (ePHI, HIPAA, ITAR, EAR, CUI)

Locker is not currently approved for PHI or other sensitive data types. It is scheduled to be reviewed for such support at a later date.

System Abuse

Abuse of Locker, intentional or not, may result in performance or access being limited in order to preserve performance and access for other users. In the event this happens, staff will contact the users involved to engineer solutions.

Frequently Asked Questions

Q: How do I check Locker space and file usage?

A: In a Linux or OSX terminal, use:

    Space: df -h <mount-path>
    Files: df -h -i <mount-path>

Q: Can Locker be Mounted on All ARC-TS Cluster Compute Nodes?

A: Currently we do not allow Locker to be mounted by very large numbers of clients. This could change in the future, so let us know if it would help you. For now, we recommend using Globus to stage data between cluster scratch and Locker between runs. Globus provides a CLI so you can script transfers.

Q: Can I Simultaneously Access Locker from Linux and Windows?

A: Currently Locker supports NFS (Linux) or CIFS (Windows); Apple OSX supports both. Simultaneous NFS and CIFS access is known as multi-protocol. Because Linux and Windows have different permission schemes, this is complex to manage; we don’t currently support it on Locker, but we do support it on Turbo. To work around this, we recommend using Globus to ease data movement between Locker and systems that cannot mount it natively.

Q: Why can’t we use Locker as general purpose storage?

A: To maintain performance, encryption, professional support, and a low cost, Locker’s design does not make it well suited for general-purpose primary storage. For that, see the Turbo and MiStorage services.

Q: I deleted data but Locker still reports full?

A: Likely your export has snapshots enabled. Snapshots store changes to Locker exports over time, so deleted data is just ‘moved’ to a snapshot. Eventually snapshots age out and free space on their own. Snapshot consumption does count against volume space used. To delete or disable snapshots to free space early, contact support.

Q: I have free space but Locker reports full?

A: Likely you are at your file quota and your average file size is smaller than 1 MByte. This use case is outside Locker’s design, and the small files should be moved to another storage service.

Q: I don’t see my .snapshots folder?

A: Your volume might not have snapshots enabled. If it does, the folder is hidden; in Linux and OSX terminals, use ls -a to view all files, including hidden ones. How to show hidden files in the OSX and Windows user interfaces varies by version and can be found in their documentation and online.

Q: My volume shows 2x the size I requested!

A: The system Locker is built on tracks all copies of data in its file system. If a volume requests replication (2 copies of all data) total space will represent the primary and replica copy in total. Thus 1TB of new data will consume 2TB of Locker space.

Advanced Topics

System Configuration

Locker consists of two DDN GS14KX-E Gridscaler clusters running IBM Spectrum Scale. Each cluster is located in a different data center, with dedicated fiber for data replication between the two sites. Each GS14KX-E cluster can hold 1,680 hard drives, for a usable capacity of 10PB using 8 TByte drives. Each hard drive is a 7200RPM self-encrypting drive and can be added to the system online. If at capacity, additional GS14KX-E systems can be clustered to add performance and capacity.

By not including dedicated metadata or flash/NVMe storage, we are able to keep the cost of Locker lower than other solutions such as Turbo. As a result, Locker will not perform well with small IO operations and is built for capacity, which is why we offer both services. The GS14KX-E does support adding NVMe/flash for metadata and tiering at a later date, should the price of such devices become more reasonable.

Locker is directly connected to the Data Den archive via dedicated data movers, and to the ARC research network by two IBM Cluster Export Services (CES) nodes, or Protocol Nodes. Each CES node is connected with 100Gbps network connections, and the nodes work in an active-active high-availability configuration. Outside the ARC network, performance is limited to 40Gbps from the campus backbone.

Order Service

Locker rates are $4.53 per replicated terabyte per month ($54.37 per year), and $2.28 per un-replicated terabyte per month ($27.36 per year).

Order the Locker Research Storage service via the ITS Service Request System (SRS).

For detailed instructions on filling out the order form please visit our Ordering Storage page.

All Locker volumes (NFS and CIFS) require an Active Directory Group. If you need assistance setting one up please email us at arc-support@umich.edu.

All Locker volumes must be ordered with a minimum of 10 terabytes with increments of 1 terabyte thereafter.

Not sure if this is the right storage service for you? Try the ITS Data Storage Finder to find storage solutions to meet your needs.


OSiRIS


Open Storage Research Infrastructure (OSiRIS) is a collaboration between U-M, Wayne State University, Michigan State University, and Indiana University to build a distributed, multi-institutional storage infrastructure that allows researchers at any of the participating campuses to read, write, manage, and share large amounts of data directly from their computing facility locations on each campus.

By providing a single data infrastructure that supports computational access to data “in place,” OSiRIS meets many of the data-intensive and collaboration challenges faced by our research communities and enables them to easily undertake research collaborations beyond the borders of their own universities.

OSiRIS will use commercial off-the-shelf hardware coupled with Ceph software to build a high-performance software-defined storage system. The system is composed of a number of building-block components: storage head-nodes with SSDs plus a 60-disk SAS shelf, a RHEV cluster (2 hosts), Globus Connect servers, a perfSONAR network monitoring node, and reliable, OpenFlow-capable switches.

OSiRIS will deploy a software-defined storage service (i.e., commodity hardware with the storage logic abstracted into a software layer) for our universities, using the Ceph Storage Cluster as the primary means of organizing the storage hardware.

OSiRIS is funded by a grant from the National Science Foundation; the Principal Investigator is Shawn McKee, Research Scientist in the Department of Physics and the Director of the Center for Network and Storage-Enabled Collaborative Computational Science (CNSECCS). CNSECCS and OSiRIS are operated under the auspices of the Michigan Institute for Computational Discovery and Engineering (MICDE).

Turbo Storage Configuration


Part of the UMRCP provided by ITS. ARC Turbo Research Storage provides scalable storage to University of Michigan researchers. The service provides CIFS, NFSv3, and NFSv4 access. Performance is intended to be sufficient for both IO operations and bulk file access, allowing researchers to work with data in place and avoid excessive data staging.

Turbo can only be used for research data. It is tuned for large files (1MB or greater) but is capable of handling small files such as documents, spreadsheets, etc. Turbo in combination with Globus sharing should work well for sharing and hosting data for external collaborators and institutes.

Turbo is available 24 hours a day, 7 days a week. When needed, routine, non-disruptive maintenance will be scheduled between 11 p.m. Saturday and 7 a.m. Sunday. Users will be notified of any planned maintenance involving service disruptions 30 days in advance, through the Turbo user group email and the ARC twitter account (@umichARC), or via the contact information submitted by users.

For support, email arc-support@umich.edu.

Backup and Snapshot Capability

The Turbo service provides optional replication and snapshots. Snapshots provide customers with a read-only copy of their data that is frozen at a specific point in time. With replication, data is written to storage at a separate location for purposes of data protection and service continuity.

Snapshot capability may be requested for no additional fee, and is available with the following options:

  • Daily snapshot, retained for 1 day
  • Daily snapshot, retained for 3 days
  • Daily snapshot, retained for 7 days
  • Daily snapshot, retained for 7 days, and 1 bi-weekly snapshot.

Please be aware that snapshots take up space on your volume. If you have 10TB of storage and 5TB of data, you could potentially have 5TB of snapshots and have used up all of your storage space.

Files can be restored from the “.snapshot” hidden directory in users’ top level departmental share. Permissions for that directory are set by the Turbo account holder. For help, email arc-support@umich.edu.

Data Security

The service offers two security levels, one for regulated and/or sensitive data and one for non-sensitive data.

The more secure service includes safeguards required by HIPAA. Accordingly, you may use it to maintain Protected Health Information (PHI). To satisfy internal HIPAA requirements, consult with Information and Infrastructure Assurance (IIA), and they will work with you to document your data sets and their location. (Contact IIA via the ITS Service Center.) Complying with HIPAA’s requirements is a shared responsibility. Users sharing and storing PHI in Turbo Research Storage are responsible for complying with HIPAA safeguards, including:

  • Using and disclosing only the minimum necessary PHI for the intended purpose.
  • Obtaining all required authorizations for using and disclosing PHI.
  • Ensuring that PHI is seen only by those who are authorized to see it.
  • Following any additional steps required by your unit to comply with HIPAA.
  • Social Security numbers should only be used where required by law or where they are essential for university business processes. If you must use SSNs, it is preferred that you use institutional resources designed to house this data, such as the Data Warehouse. IIA can help you explore appropriate storage locations or work with you to appropriately encrypt the data if those alternatives will not work for you.

For more information on security policies, see the University Standard Practice Guide (PDF) and the IIA Sensitive Data Guide.

System Requirements

The following Operating Systems and mounting options are supported:

Operating System            CIFS   NFSv3   NFSv4   NFSv4 with Kerberos
Linux                       Yes    Yes     Yes     Yes
Mac OSX 10.9 (Mavericks)    Yes    Yes     Yes     Yes
Mac OSX 10.10 (Yosemite)    Yes    Yes     Yes     No
Windows 10+                 Yes    Yes     Yes     No

Order Service

Order the Turbo Research Storage service via the ITS Service Request System (SRS).

For detailed instructions on filling out the order form please visit our Ordering Storage page.