
---
title: "Oracle Object Storage Mount"
description: "Oracle Object Storage mounting tutorial"
slug: tutorial_mount
url: /oracleobjectstorage/tutorial_mount/
---
# {{< icon "fa fa-cloud" >}} Mount Buckets and Expose via NFS Tutorial
This runbook shows how to [mount](/commands/rclone_mount/) *Oracle Object Storage* buckets as a local file system on
an OCI compute instance using the rclone tool.

You will also learn how to export the rclone mounts as NFS exports, so that other NFS clients can access them.

Usage pattern:

NFS Client --> NFS Server --> RClone Mount --> OCI Object Storage

## Step 1: Install Rclone

On Oracle Linux 8, rclone can be installed from the
[OL8_Developer](https://yum.oracle.com/repo/OracleLinux/OL8/developer/x86_64/index.html) yum repo. Please enable the
repo if it is not enabled already.

```shell
[opc@base-inst-boot ~]$ sudo yum-config-manager --enable ol8_developer
[opc@base-inst-boot ~]$ sudo yum install -y rclone
[opc@base-inst-boot ~]$ sudo yum install -y fuse
# rclone will prefer fuse3 if available
[opc@base-inst-boot ~]$ sudo yum install -y fuse3
[opc@base-inst-boot ~]$ yum info rclone
Last metadata expiration check: 0:01:58 ago on Fri 07 Apr 2023 05:53:43 PM GMT.
Installed Packages
Name                : rclone
Version             : 1.62.2
Release             : 1.0.1.el8
Architecture        : x86_64
Size                : 67 M
Source              : rclone-1.62.2-1.0.1.el8.src.rpm
Repository          : @System
From repo           : ol8_developer
Summary             : rsync for cloud storage
URL                 : http://rclone.org/
License             : MIT
Description         : Rclone is a command line program to sync files and directories to and from various cloud services.
```

To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs,
e.g. `ln -s /usr/bin/rclone /sbin/mount.rclone`. rclone will detect it and translate command-line arguments appropriately.

```shell
sudo ln -s /usr/bin/rclone /sbin/mount.rclone
```

## Step 2: Setup Rclone Configuration file

Let's assume you want to access 3 buckets from the OCI compute instance using the instance principal provider as the
means of authenticating with the Object Storage service.

- namespace-a, bucket-a,
- namespace-b, bucket-b,
- namespace-c, bucket-c

The rclone configuration file needs to have 3 remote sections, one for each of the above 3 buckets. Create the
configuration file in an accessible location that the rclone program can read.

```shell

[opc@base-inst-boot ~]$ sudo mkdir -p /etc/artpar
[opc@base-inst-boot ~]$ sudo touch /etc/artpar/artpar.conf


# add below contents to /etc/artpar/artpar.conf
[opc@base-inst-boot ~]$ cat /etc/artpar/artpar.conf


[ossa]
type = oracleobjectstorage
provider = instance_principal_auth
namespace = namespace-a
compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-a
region = us-ashburn-1

[ossb]
type = oracleobjectstorage
provider = instance_principal_auth
namespace = namespace-b
compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-b
region = us-ashburn-1


[ossc]
type = oracleobjectstorage
provider = instance_principal_auth
namespace = namespace-c
compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-c
region = us-ashburn-1

# List remotes
[opc@base-inst-boot ~]$ rclone --config /etc/artpar/artpar.conf listremotes
ossa:
ossb:
ossc:

# Now please ensure you do not see errors like the one below while listing the bucket,
# i.e. you should check that namespace, compartment and bucket name are all correct,
# and you must have a dynamic group policy to allow the instance to use object-family in the compartment.

[opc@base-inst-boot ~]$ rclone --config /etc/artpar/artpar.conf ls ossa:
2023/04/07 19:09:21 Failed to ls: Error returned by ObjectStorage Service. Http Status Code: 404. Error Code: NamespaceNotFound. Opc request id: iad-1:kVVAb0knsVXDvu9aHUGHRs3gSNBOFO2_334B6co82LrPMWo2lM5PuBKNxJOTmZsS. Message: You do not have authorization to perform this request, or the requested resource could not be found.
Operation Name: ListBuckets
Timestamp: 2023-04-07 19:09:21 +0000 GMT
Client Version: Oracle-GoSDK/65.32.0
Request Endpoint: GET https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-a/b?compartmentId=ocid1.compartment.oc1..aaaaaaaa...compartment-a
Troubleshooting Tips: See https://docs.oracle.com/iaas/Content/API/References/apierrors.htm#apierrors_404__404_namespacenotfound for more information about resolving this error.
Also see https://docs.oracle.com/iaas/api/#/en/objectstorage/20160918/Bucket/ListBuckets for details on this operation's requirements.
To get more info on the failing request, you can set OCI_GO_SDK_DEBUG env var to info or higher level to log the request/response details.
If you are unable to resolve this ObjectStorage issue, please contact Oracle support and provide them this full error message.
[opc@base-inst-boot ~]$

```
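
If the listing fails like above, it can help to double-check what compartment and region the instance itself reports before revisiting the dynamic group and policy in the next step. A quick sanity check, as a sketch assuming the standard OCI instance metadata (IMDS v2) endpoint:

```shell
# Query the instance metadata service from the instance itself;
# IMDS v2 requires the "Authorization: Bearer Oracle" header.
curl -s -H "Authorization: Bearer Oracle" \
    http://169.254.169.254/opc/v2/instance/ | grep -E '"compartmentId"|"region"'
```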

## Step 3: Setup Dynamic Group and Add IAM Policy
Just like a human user has an identity identified by its USER-PRINCIPAL, every OCI compute instance is also a robotic
user identified by its INSTANCE-PRINCIPAL. The instance principal key is automatically fetched by rclone (via the OCI
SDK) from the instance metadata to make calls to object storage.

Similar to a [user group](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managinggroups.htm),
a group of instances is known as a
[dynamic group](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingdynamicgroups.htm) in IAM.

Create a dynamic group, say rclone-dynamic-group, that the OCI compute instance becomes a member of. The matching rule
below says that all instances belonging to compartments a...c are members of this dynamic group.

```shell
any {instance.compartment.id = '<compartment_ocid_a>',
     instance.compartment.id = '<compartment_ocid_b>',
     instance.compartment.id = '<compartment_ocid_c>'
    }
```

Now that you have a dynamic group, you need to add a policy defining what permissions this dynamic group has.
In our case, we want this dynamic group to access Object Storage, so create the following policy.

```shell
allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-a
allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-b
allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-c
```
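
The dynamic group and policy can be created in the OCI console. If you prefer the OCI CLI, a rough sketch could look like the following; the group and policy names are just examples, the OCIDs are placeholders, and the policy is created in whichever compartment (or the tenancy root) you want it to live in:

```shell
# Create the dynamic group with the matching rule from above
oci iam dynamic-group create \
    --name rclone-dynamic-group \
    --description "Instances allowed to use rclone against Object Storage" \
    --matching-rule "any {instance.compartment.id = '<compartment_ocid_a>', instance.compartment.id = '<compartment_ocid_b>', instance.compartment.id = '<compartment_ocid_c>'}"

# Create the policy holding the three statements from above
oci iam policy create \
    --compartment-id <tenancy_or_compartment_ocid> \
    --name rclone-dynamic-group-policy \
    --description "Allow rclone-dynamic-group to manage object-family" \
    --statements '["allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-a","allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-b","allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-c"]'
```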

After you add the policy, ensure that rclone can list files in your bucket; if not, troubleshoot any mistakes made so
far. Please note that IAM can take up to a minute for the policy to be reflected.

## Step 4: Setup Mount Folders
Let's assume you have to mount 3 buckets, bucket-a, bucket-b, bucket-c at path /opt/mnt/bucket-a, /opt/mnt/bucket-b,
/opt/mnt/bucket-c respectively.

Create the mount folder and set its ownership to the desired user and group.
```shell
[opc@base-inst-boot ~]$ sudo mkdir /opt/mnt
[opc@base-inst-boot ~]$ sudo chown -R opc:adm /opt/mnt
```

Set chmod permissions for user, group and others as desired for each mount path.
```shell
[opc@base-inst-boot ~]$ sudo chmod 764 /opt/mnt
[opc@base-inst-boot ~]$ ls -al /opt/mnt/
total 0
drwxrw-r--. 2 opc adm 6 Apr 7 18:01 .
drwxr-xr-x. 10 root root 179 Apr 7 18:01 ..

[opc@base-inst-boot ~]$ mkdir -p /opt/mnt/bucket-a
[opc@base-inst-boot ~]$ mkdir -p /opt/mnt/bucket-b
[opc@base-inst-boot ~]$ mkdir -p /opt/mnt/bucket-c

[opc@base-inst-boot ~]$ ls -al /opt/mnt
total 0
drwxrw-r--. 5 opc adm 54 Apr 7 18:17 .
drwxr-xr-x. 10 root root 179 Apr 7 18:01 ..
drwxrwxr-x. 2 opc opc 6 Apr 7 18:17 bucket-a
drwxrwxr-x. 2 opc opc 6 Apr 7 18:17 bucket-b
drwxrwxr-x. 2 opc opc 6 Apr 7 18:17 bucket-c
```

## Step 5: Identify Rclone mount CLI configuration settings to use
Please read through the [rclone mount](https://rclone.org/commands/rclone_mount/) page completely to really
understand the mount command and its flags, what the rclone
[virtual file system](https://rclone.org/commands/rclone_mount/#vfs-virtual-file-system) mode settings are, and
how to use them effectively for the desired read/write consistency.

Local file systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable.
Object storage can throw several errors like 429, 503, 404 etc. The rclone sync/copy commands cope with this with
lots of retries. However, rclone mount can't use retries in the same way without making local copies of the uploads.
Please look at VFS File Caching for solutions to make the mount more reliable.

First let's understand the rclone mount flags and some global flags useful for troubleshooting.

```shell

rclone mount \
    ossa:bucket-a \                     # Remote:bucket-name
    /opt/mnt/bucket-a \                 # Local mount folder
    --config /etc/artpar/artpar.conf \  # Path to rclone config file
    --allow-non-empty \                 # Allow mounting over a non-empty directory
    --dir-perms 0770 \                  # Directory permissions (default 0777)
    --file-perms 0660 \                 # File permissions (default 0666)
    --allow-other \                     # Allow access to other users
    --umask 0117  \                     # sets (660) rw-rw---- as permissions for the mount using the umask
    --transfers 8 \                     # default 4, can be set to adjust the number of parallel uploads of modified files to remote from the cache
    --tpslimit 50  \                    # Limit HTTP transactions per second to this. A transaction is roughly defined as an API call;
                                        # its exact meaning will depend on the backend. For HTTP based backends it is an HTTP PUT/GET/POST/etc and its response
    --cache-dir /tmp/rclone/cache \     # Directory rclone will use for caching.
    --dir-cache-time 5m \               # Time to cache directory entries for (default 5m0s)
    --vfs-cache-mode writes \           # Cache mode off|minimal|writes|full (default off), writes gives the maximum compatibility like a local disk
    --vfs-cache-max-age 20m \           # Max age of objects in the cache (default 1h0m0s)
    --vfs-cache-max-size 10G \          # Max total size of objects in the cache (default off)
    --vfs-cache-poll-interval 1m \      # Interval to poll the cache for stale objects (default 1m0s)
    --vfs-write-back 5s   \             # Time to writeback files after last use when using cache (default 5s).
                                        # Note that files are written back to the remote only when they are closed and
                                        # if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or
                                        # dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
    --vfs-fast-fingerprint \            # Use fast (less accurate) fingerprints for change detection.
    --log-level ERROR \                            # log level, can be DEBUG, INFO, ERROR
    --log-file /var/log/rclone/oosa-bucket-a.log   # rclone application log

```
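
The block above is annotated for readability; the inline comments combined with line continuations are not valid shell. For reference, a directly runnable form of the same command for bucket-a could look like the sketch below. `--daemon` backgrounds the mount, and the log directory /var/log/rclone is assumed to exist:

```shell
# Mount bucket-a in the background using the flags explained above
rclone mount ossa:bucket-a /opt/mnt/bucket-a \
    --config /etc/artpar/artpar.conf \
    --allow-other --umask 0117 --dir-perms 0770 --file-perms 0660 \
    --transfers 8 --tpslimit 50 \
    --cache-dir /tmp/rclone/cache --dir-cache-time 5m \
    --vfs-cache-mode writes --vfs-cache-max-age 20m --vfs-cache-max-size 10G \
    --vfs-cache-poll-interval 1m --vfs-write-back 5s --vfs-fast-fingerprint \
    --log-level ERROR --log-file /var/log/rclone/oosa-bucket-a.log \
    --daemon
```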

### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are
buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be
retried at exponentially increasing intervals up to 1 minute.

VFS cache mode of writes is recommended, so that applications have maximum compatibility using the remote storage as
a local disk. When the write is finished and the file is closed, it is uploaded to the backend remote after the
vfs-write-back duration has elapsed. If rclone is quit or dies with files that haven't been uploaded, these will be
uploaded next time rclone is run with the same flags.

### --tpslimit float

Limit transactions per second to this number. The default is 0, which means unlimited transactions per second.

A transaction is roughly defined as an API call; its exact meaning will depend on the backend. For HTTP based backends
it is an HTTP PUT/GET/POST/etc and its response. For FTP/SFTP it is a round trip transaction over TCP.

For example, to limit rclone to 10 transactions per second use --tpslimit 10, or to 1 transaction every 2 seconds
use --tpslimit 0.5.

Use this when the number of transactions per second from rclone is causing a problem with the cloud storage
provider (e.g. getting you banned, rate limited or throttled).

This can be very useful for rclone mount to control the behaviour of applications using it. Let's guess and say
Object Storage allows roughly 100 tps per tenant; to be on the safe side, it is wise to set this to 50 (tune it to
actual limits per region).

### --vfs-fast-fingerprint

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This
makes the fingerprinting less accurate but much faster and will improve the opening time of cached files. If you are
running a vfs cache over local, s3, object storage or swift backends then using this flag is recommended.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file.
Fingerprints are made from:
- size
- modification time
- hash, where available on an object.

## Step 6: Mounting Options, Use Any One Option

### Step 6a: Run as a Service Daemon: Configure FSTAB entry for Rclone mount
Add this entry to /etc/fstab (it must be a single line in the file):
```shell
ossa:bucket-a /opt/mnt/bucket-a rclone rw,umask=0117,nofail,_netdev,args2env,config=/etc/artpar/artpar.conf,uid=1000,gid=4,file_perms=0760,dir_perms=0760,allow_other,vfs_cache_mode=writes,cache_dir=/tmp/rclone/cache 0 0
```
IMPORTANT: Please note that in the fstab entry, arguments are specified with underscores instead of dashes,
for example: vfs_cache_mode=writes instead of vfs-cache-mode=writes.
Rclone in the mount helper mode will split -o argument(s) by comma, replace _ by - and prepend -- to
get the command-line flags. Options containing commas or spaces can be wrapped in single or double quotes.
Any inner quotes inside outer quotes of the same type should be doubled.

Then run `sudo mount -av`:
```shell

[opc@base-inst-boot ~]$ sudo mount -av
/                    : ignored
/boot                : already mounted
/boot/efi            : already mounted
/var/oled            : already mounted
/dev/shm             : already mounted
none                 : ignored
/opt/mnt/bucket-a    : already mounted   # This is the bucket mount information; running mount -av again and again is idempotent.

```
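
You can also (re)mount or inspect a single fstab entry without touching the others, for example:

```shell
# Mount just this entry from /etc/fstab, then confirm it shows up as a fuse.rclone mount
sudo mount /opt/mnt/bucket-a
findmnt /opt/mnt/bucket-a
```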

### Step 6b: Run as a Service Daemon: Configure systemd entry for Rclone mount

If you are familiar with configuring systemd unit files, you can also configure each rclone mount as a
systemd unit file.
There are various examples in this GitHub code search: https://github.com/search?l=Shell&q=rclone+unit&type=Code
```shell
sudo tee "/etc/systemd/system/rclonebucketa.service" > /dev/null <<EOF
[Unit]
Description=RCloneMounting
After=multi-user.target
[Service]
Type=simple
User=0
Group=0
ExecStart=/bin/bash /etc/rclone/scripts/bucket-a.sh
ExecStop=/bin/fusermount -uz /opt/mnt/bucket-a
TimeoutStopSec=20
KillMode=process
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
```
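
The unit above expects a helper script at /etc/rclone/scripts/bucket-a.sh that performs the actual mount. A minimal sketch of such a script, reusing the remote and example flags from Step 5 (adjust paths and flag values to your needs), might look like:

```shell
#!/bin/bash
# Mount bucket-a in the foreground; systemd (Type=simple) manages the process lifecycle,
# so --daemon is intentionally not used here.
/usr/bin/rclone mount ossa:bucket-a /opt/mnt/bucket-a \
    --config /etc/artpar/artpar.conf \
    --allow-other --umask 0117 \
    --vfs-cache-mode writes --cache-dir /tmp/rclone/cache \
    --log-level ERROR --log-file /var/log/rclone/oosa-bucket-a.log
```

Then reload systemd and enable the unit so it starts at boot:

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now rclonebucketa.service
```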

## Step 7: Optional: Mount Nanny, for resiliency, recover from process crash
Sometimes the rclone process crashes and the mount points are left in a dangling state where they appear mounted but
the rclone mount process is gone. To clean up the mount point you can force an unmount by running this command.
```shell
sudo fusermount -uz /opt/mnt/bucket-a
```
One can also run a rclone_mount_nanny script, which detects and cleans up mount errors by unmounting and
then auto-mounting.

Content of /etc/rclone/scripts/rclone_nanny_script.sh
```shell

#!/bin/bash
# Mount paths that df reports as broken ("Transport endpoint is not connected")
erroneous_list=$(df 2>&1 | grep -i 'Transport endpoint is not connected' | awk '{print $2}' | tr -d \:)
# Mount paths that are currently registered as fuse.rclone mounts
rclone_list=$(findmnt -t fuse.rclone -n 2>&1 | awk '{print $1}' | tr -d \:)
IFS=$'\n'; set -f
# Broken paths that are rclone mounts: unmount them, then remount everything via fstab
intersection=$(comm -12 <(printf '%s\n' "$erroneous_list" | sort) <(printf '%s\n' "$rclone_list" | sort))
for directory in $intersection
do
    echo "$directory is being fixed."
    sudo fusermount -uz "$directory"
done
sudo mount -av

```
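
Make sure the nanny script is executable before wiring it into cron:

```shell
sudo chmod +x /etc/rclone/scripts/rclone_nanny_script.sh
```
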
Script to idempotently add a cron job to babysit the mount paths every 5 minutes (run it as root, e.g. via sudo, so
the job lands in root's crontab, which is what is checked below):
```shell
echo "Creating rclone nanny cron job."
croncmd="/etc/rclone/scripts/rclone_nanny_script.sh"
cronjob="*/5 * * * * $croncmd"
# idempotency - adds rclone_nanny cronjob only if absent.
( crontab -l | grep -v -F "$croncmd" || : ; echo "$cronjob" ) | crontab -
echo "Finished creating rclone nanny cron job."
```

Ensure the crontab is added, so that the above nanny script runs every 5 minutes.
```shell
[opc@base-inst-boot ~]$ sudo crontab -l
*/5 * * * * /etc/rclone/scripts/rclone_nanny_script.sh
[opc@base-inst-boot ~]$
```

## Step 8: Optional: Setup NFS server to access the mount points of rclone

Let's say you want to make the rclone mount path /opt/mnt/bucket-a available as an NFS export so that other
clients can access it using an NFS client.

### Step 8a : Setup NFS server

Install NFS Utils
```shell
sudo yum install -y nfs-utils
```
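
If the NFS server service is not already enabled and running, start it (assuming a systemd-based Oracle Linux host):

```shell
sudo systemctl enable --now nfs-server
```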

Export the desired directory via the NFS server on the same machine where rclone has mounted it, and ensure the NFS
service has the permissions needed to read the directory. If it runs as root, then it will have the permissions for
sure, but if it runs as a separate user then ensure that user has the necessary privileges.
```shell
# this gives opc user and adm (administrators group) ownership to the path, so any user belonging to adm group will be able to access the files.
[opc@tools ~]$ sudo chown -R opc:adm /opt/mnt/bucket-a/
[opc@tools ~]$ sudo chmod 764 /opt/mnt/bucket-a/

# Now export the mount path of rclone for exposing via nfs server
# There are various nfs export options that you should set per desired usage.
# Syntax is
# <path> <allowed-ipaddr>(<option>)
[opc@tools ~]$ cat /etc/exports
/opt/mnt/bucket-a *(fsid=1,rw)


# Restart NFS server
[opc@tools ~]$ sudo systemctl restart nfs-server


# Show Export paths
[opc@tools ~]$ showmount -e
Export list for tools:
/opt/mnt/bucket-a *

# Know the port NFS server is running on, in this case it's listening on port 2049
[opc@tools ~]$ sudo rpcinfo -p | grep nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl

# Allow NFS service via firewall
[opc@tools ~]$ sudo firewall-cmd --add-service=nfs --permanent
Warning: ALREADY_ENABLED: nfs
success
[opc@tools ~]$ sudo firewall-cmd --reload
success
[opc@tools ~]$

# Check status of NFS service
[opc@tools ~]$ sudo systemctl status nfs-server.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
   Active: active (exited) since Wed 2023-04-19 17:59:58 GMT; 13min ago
  Process: 2833741 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 2833740 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 2833737 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 2833766 ExecStart=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exit>
  Process: 2833756 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
  Process: 2833754 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 2833766 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 48514)
   Memory: 0B
   CGroup: /system.slice/nfs-server.service

Apr 19 17:59:58 tools systemd[1]: Starting NFS server and services...
Apr 19 17:59:58 tools systemd[1]: Started NFS server and services.
```
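
If you later edit /etc/exports, you can re-export the paths without a full service restart:

```shell
sudo exportfs -ra   # re-read /etc/exports and refresh the exports
showmount -e        # verify the export list
```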

### Step 8b : Setup NFS client

Now, to connect to the NFS server from a different client machine, ensure the client machine can reach the NFS server
machine over TCP port 2049; ensure your subnet network ACLs allow traffic from the desired source IP ranges to
destination port 2049.

On the client machine, mount the external NFS share.

```shell
# Install nfs-utils
[opc@base-inst-boot ~]$ sudo yum install -y nfs-utils

# In /etc/fstab, add the below entry
[opc@base-inst-boot ~]$ cat /etc/fstab | grep nfs
<ProvideYourIPAddress>:/opt/mnt/bucket-a /opt/mnt/bucket-a nfs rw 0 0

# remount so that the newly added path gets mounted.
[opc@base-inst-boot ~]$ sudo mount -av
/ : ignored
/boot : already mounted
/boot/efi : already mounted
/var/oled : already mounted
/dev/shm : already mounted
/home/opc/share_drive/bucketa: already mounted
/opt/mnt/bucket-a: successfully mounted # this is the NFS mount
```
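
Instead of (or before) editing /etc/fstab, you can also mount the export manually to verify connectivity; the server address below is the same placeholder used above:

```shell
# Create the local mount point on the client and mount the NFS export
sudo mkdir -p /opt/mnt/bucket-a
sudo mount -t nfs <ProvideYourIPAddress>:/opt/mnt/bucket-a /opt/mnt/bucket-a
```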

### Step 8c : Test Connection

```shell
# List files to test connection
[opc@base-inst-boot ~]$ ls -al /opt/mnt/bucket-a
total 1
drw-rw----. 1 opc adm 0 Apr 18 17:28 .
drwxrw-r--. 7 opc adm 85 Apr 18 17:36 ..
drw-rw----. 1 opc adm 0 Apr 18 17:29 FILES
-rw-rw----. 1 opc adm 15 Apr 18 18:13 nfs.txt
```