Please note: this page is still under construction.

Leased Storage Management

You've got a storage lease; now you need to know how to use it.

On Cluster Access

Your storage will be found at the following location on any cluster-attached node: /mnt/stor/${dept}{a..z}/${USER}, where ${dept} is your department's short name and ${USER} is your Forge username. Moving data in and out works the same way as it does for your home directory on the Forge; simply use this location instead of your home directory.
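
For example, pulling data in from another machine with rsync or scp might look like the sketch below. The hostname forge.example.edu and the local file names are placeholders for illustration only; the destination path reuses the ita department and blspcy username from the examples later on this page.

  # Copy a local directory up to the storage lease (hostname is a placeholder)
  rsync -av ./my_dataset/ blspcy@forge.example.edu:/mnt/stor/ita/blspcy/my_dataset/
  # Copy a single file the same way with scp
  scp ./results.tar.gz blspcy@forge.example.edu:/mnt/stor/ita/blspcy/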

Folder Management

You and a designated user group will have permission to create folders directly underneath this directory. All files created under this directory will have the same group membership but will be owned by the file creator. Additional access can be managed using the following command:

 setfacl 

There is already plenty of documentation on how to use setfacl to manage access to files in more ways than simple user and group ownership. Here is a good basic set of examples, along with some information on setfacl and getfacl.

For convenience, here is an example of creating a directory for a user and giving them access to it with setfacl:

  cd /mnt/stor/ita/blspcy
  mkdir ./weaverjon
  setfacl -m u:weaverjon:rwx ./weaverjon
  

Now Jon has full access to the directory I created for him in my storage lease. I can check that with getfacl:

  [blspcy@login-44-0 blspcy]$ getfacl ./weaverjon/
  # file: weaverjon/
  # owner: blspcy
  # group: blspcy
  user::rwx
  user:weaverjon:rwx
  group::r-x
  mask::rwx
  other::r-x
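
Building on the example above, a couple of additional setfacl operations are often useful: a default ACL so that new files created under the folder inherit the same access, and removal of an entry when access is no longer needed. This is only a sketch using the same weaverjon folder; adapt the user and path to your own lease.

  # Add a default ACL so files created under the folder inherit Jon's access
  setfacl -d -m u:weaverjon:rwx ./weaverjon
  # Remove Jon's access entry (and the matching default entry) when it is no longer needed
  setfacl -x u:weaverjon ./weaverjon
  setfacl -d -x u:weaverjon ./weaverjon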

Quota Management

You can also subdivide the space you have into folders underneath the root of your storage lease. This is done with extended attributes on the folders you create. You can set a subfolder's quota higher than the parent directory's; however, the parent directory's quota takes precedence for enforcement. In the following example I have a 3TB quota on my leased volume and I would like to give Jon 500GB of it. All quotas are specified in bytes; remember that a TB is roughly 1 followed by 12 zeros. First I double-check my own quota, then set the quota on the folder I created for Jon in the example above.

  [blspcy@login-44-0 ita]$ getfattr -n ceph.quota.max_bytes /mnt/stor/ita
  getfattr: Removing leading '/' from absolute path names
  # file: mnt/stor/ita
  ceph.quota.max_bytes="3000000000000"
  [blspcy@login-44-0 ita]$ cd blspcy/
  [blspcy@login-44-0 blspcy]$ setfattr -n ceph.quota.max_bytes -v 500000000000 ./weaverjon
  [blspcy@login-44-0 blspcy]$ getfattr -n ceph.quota.max_bytes ./weaverjon
  # file: weaverjon
  ceph.quota.max_bytes="500000000000"

I've now given Jon, who has access to the folder weaverjon in my leased volume, an upper limit of roughly 500GB of my 3TB of space within his folder.
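
If that allocation needs to change later, the same extended attribute can simply be set again. As a general note on CephFS quotas (not shown in the example above), setting the value to 0 should remove the quota from the folder entirely:

  # Raise Jon's limit to roughly 1TB by rewriting the attribute
  setfattr -n ceph.quota.max_bytes -v 1000000000000 ./weaverjon
  # Setting the value to 0 removes the quota on the folder
  setfattr -n ceph.quota.max_bytes -v 0 ./weaverjon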

Size Monitoring

One of the important things you may find yourself doing is checking how much space you are using. Since this is a shared volume, monitoring how much of it you have left with df does not work. We also strongly advise against using du to find out how much space is being used, because of how read-intensive that operation can be. Fortunately, there is a fast, easy way to monitor how much space is being used in your volume: because of the type of storage backing it, the ls utility will summarize how much space sits under each folder. For this example I will use my fictional 3TB storage lease to demonstrate.

  
  [blspcy@login-44-0 blspcy]$ pwd
  /mnt/stor/ita/blspcy
  [blspcy@login-44-0 blspcy]$ ls -alh
  total 3.0K
  drwxrwx---   6 blspcy   nic-cluster-admins 897G Apr  4 08:43 .
  drwxr-xr-x   3 root     root               897G Dec 14 14:35 ..
  drwxr-xr-x  14 root     root               586G Feb 11 08:08 mctdh84.13.1
  drwxrwxr-x+  3 rlhaffer rlhaffer           235G Mar 14 11:27 rlhaffer
  drwxrwxr-x   3 blspcy   blspcy              76G Apr  4 08:43 src
  drwxrwxr-x+  2 blspcy   blspcy              258 Jan  4 11:50 weaverjon
  

In the output of ls -alh I can see that the directory I am in, and all of the directories under it, are consuming around 897GB of space: 586GB of that is in mctdh84.13.1, 235GB is in rlhaffer, 76GB is in src, and 258 bytes are in weaverjon.

It is important to note that, while this is an easy way to get a quick breakdown of your storage space and where it is being used, the numbers do not update immediately. For instance, I could navigate down a few folder levels into mctdh84.13.1, remove several large files, and come back up to find that the output of ls has not yet changed at this directory level. The change happens asynchronously on the file server, so it may take some time to be reflected in ls at the root level of your storage lease.
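
If you need the usage number without reading it from ls output, CephFS also exposes recursive statistics as extended attributes. This is a supplementary sketch rather than something shown above, so confirm the attributes are available on your mount before relying on them:

  # Recursive byte count for everything under the lease root
  getfattr -n ceph.dir.rbytes /mnt/stor/ita/blspcy
  # Recursive count of files and directories under the same path
  getfattr -n ceph.dir.rentries /mnt/stor/ita/blspcy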
