
I'm a newbie trying to create a user in Ubuntu Server 22.04 with read permissions to all existing directories and files, so it can back up everything by copying it via SFTP to the backup server (which is a Windows Server 2019). I tried to apply capabilities(7), but I guess I'm doing it wrong, because backup-user can't read directories and files that don't have "others" permissions (e.g. rwxrwx---). What am I doing wrong? Is there any other way to create a user with read-only permissions to all files and directories on the system?

I created the user backup-user with:

sudo useradd backup-user -c "User to execute backups" -d /

And defined a password with:

sudo passwd backup-user

Then edited the file /etc/security/capability.conf with:

sudo nano /etc/security/capability.conf

Adding this line at the end of the file:

cap_dac_read_search backup-user

Then logged in as backup-user and tried:

cd /var/log/apache2

Receiving:

-sh: 1: cd: can't cd to /var/log/apache2

Also tried adding at the end of /etc/security/capability.conf, instead, the line:

cap_dac_override backup-user

But got the same results.

The permissions on /var/log/apache2 directory are:

drwxr-x---  root      adm  

When logged in as backup-user, the output of capsh --print is:

Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,cap_perfmon,cap_bpf,cap_checkpoint_restore
Ambient set =
Current IAB:
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
 secure-no-ambient-raise: no (unlocked)
uid=1004(backup-apesp) euid=1004(backup-apesp)
gid=1004(backup-apesp)
groups=1004(backup-apesp)
Guessed mode: UNCERTAIN (0)

When logged in as a sudo user, the output of sudo capsh --print is:

Current: =ep
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,cap_perfmon,cap_bpf,cap_checkpoint_restore
Ambient set =
Current IAB:
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
 secure-no-ambient-raise: no (unlocked)
uid=0(root) euid=0(root)
gid=0(root)
groups=0(root)
Guessed mode: UNCERTAIN (0)

2 Answers


You can achieve this using Access Control Lists (ACLs), which allow you to grant extra file permissions to select users or groups without changing the owner or group of the file. (Credit goes to this answer.)

First of all, to get the dependency out of the way, ensure you have the setfacl command (e.g. just type it in a terminal), and if you don't, install the acl package, which contains it.
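
On Ubuntu you can install it from the standard repositories:

sudo apt install acl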

Then, you can use setfacl to give backup-user the permission to access any files and directories that you want to backup, without having to change their owner or group:

backup_paths=(
   /home/jack
   /home/mary
   /var/log
)

for path in "${backup_paths[@]}"; do
   setfacl -Rm u:backup-user:rwX "$path"
   setfacl -Rdm u:backup-user:rwX "$path"
done

A few notes about the setfacl invocations:

  • The -R switch applies the permission recursively.

  • The -m switch stands for "modify" and is required before the permission spec: it tells setfacl to add or change ACL entries (see setfacl --help).

  • Observe the capital X in :rwX, which grants execute permission only on directories and on files that already have execute permission for some user (so plain data files don't become executable).

  • We have to invoke the command twice: once without -d to change the permissions of existing files and directories, and a second time with -d to change the default ACL on directories, which causes these permissions to be applied to any files/directories created under them in the future.

See also setfacl --help and man setfacl.

If you really want to allow backup-user to access everything, you can invoke the same two commands on / instead.
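
To verify that the entries were applied, you can inspect any of the paths with getfacl (using the directory from the question as an example):

getfacl /var/log/apache2

Among the output you should see a user:backup-user:rwx entry, and on directories also the matching default:user:backup-user:... entries created by the -d invocation.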


Since your comment to the other top-level answer said that you're actually trying to mirror some server directories to another machine (with some difficulties), I'll write another solution that I think will get the job done fairly easily.

First of all, rsync might be the perfect tool for this. It can mirror entire directory trees while preserving their properties (owner, group, permissions, timestamps, etc., and timestamps are important for some servers), contains options to selectively whitelist/blacklist files within the mirrored tree, and performs the task as efficiently as possible: once a backup has been performed, it compares the source and destination trees and only transmits new changes, so if source and destination files are identical in timestamp and size, the transfer is skipped (and you can customize this behavior).

The most basic usage of rsync for backup could be as little as this:

rsync -ai SRC DEST

Where each of SRC and DEST can be a local directory or a location on a remote host. We'll get to these in a bit, but let's explain the switches first:

  • -a (which stands for --archive) is actually a shorthand for -rogptlD. The meaning of those switches is:

    • r: recursive
    • o: preserve owner
    • g: preserve group
    • p: preserve permissions
    • t: preserve timestamp
    • l: preserve symlinks as symlinks
    • D: preserve special files (devices/fifos/etc.) as special files
  • -i stands for "itemize changes": it prints a line for each transmitted/updated file or directory, beginning with a multi-column prefix that explains what is being updated. (The format of its output is out of the scope of this answer, but you can open man rsync and look for the --itemize-changes, -i section, which contains a full description of what these columns mean.)

  • The -m switch can also be used to prune (i.e. not back up) empty directories, but you might want to be careful using it if there are any empty directories that your server requires to exist.
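
Before the first real run, you can add -n (--dry-run) to preview what rsync would do without transferring anything; a minimal sketch, with hypothetical paths:

rsync -ain /path/to/src/ user@host:/path/to/dest/

(The trailing slash on the source means "copy the contents of src" rather than the src directory itself.)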

As you see, with just two switches, rsync is already performing probably 90% of the task. If you want to selectively mirror certain files under SRC, you can use one or more of these options:

  • --files-from=FILE: whitelist child paths (relative to SRC) listed in FILE. Nothing under SRC except the paths listed in FILE would be backed up. Each path is on a separate line, empty lines are ignored, and lines beginning with # are regarded as comments and ignored.

  • --exclude=PATTERN: glob pattern for excluding files, e.g. *.txt causes rsync to exclude all .txt files from the backup.

  • --exclude-from=FILE: read exclusion patterns from FILE, each on a separate line.

  • --include=PATTERN: overrides for exclusion patterns. Filter rules are matched in the order given and the first match wins, so an include has to come before the exclude it overrides: with --include=items.txt --exclude=*.txt, all .txt files would be excluded, except those named items.txt, which would be included. (Note that unlike --files-from, using --include does not automatically imply that everything else is excluded, other than what is explicitly excluded with --exclude and --exclude-from.)

  • --include-from=FILE: like --include but read patterns from FILE

rsync has a myriad more options that you can check out with man rsync, but I mentioned these because they are the most likely to be needed in the most common backup tasks.
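
To make the filters concrete, here's a sketch of both styles, with made-up paths that you'd adjust to your own trees:

# Whitelist style: only the paths listed in the file are backed up
# (paths in backup-list.txt are relative to the source argument, /srv here)
rsync -ai --files-from=/root/backup-list.txt /srv user@host:/backups/srv

# Blacklist style: everything under /srv except .log files, but keep files
# named important.log (the include precedes the exclude it overrides)
rsync -ai --include='important.log' --exclude='*.log' /srv/ user@host:/backups/srv/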

With that out of the way, now onto the "How to use rsync with a remote host" part.

If you are not fixated on using SFTP, to my understanding the preferred (and easiest) way for using rsync with remote machines is using SSH. Since SFTP is just a file transfer protocol over SSH, I'll assume that you are already able to SSH from the server to the backup machine (or vice versa). So using rsync over SSH would be as simple as:

# On the server
rsync -ai /path/to/src user@host:/path/to/dest

Or, on the backup machine:

rsync -ai user@host:/path/to/src /path/to/dest
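
If SSH on the other machine listens on a non-default port, or you want a dedicated key for backups, you can pass options to the underlying ssh with -e (the port number and key path here are hypothetical):

rsync -ai -e 'ssh -p 2222 -i ~/.ssh/backup_key' user@host:/path/to/src /path/to/dest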

With that out of the way, the only thing left is ensuring that rsync can access the files and directories you want to backup on the server. Like I said, you can run rsync as root, and you can make that easier and safer (i.e. avoid using shell scripts) by creating a systemd service for it on the server.

# /etc/systemd/system/rsync-backup.service

[Unit]
Description=rsync backup service

[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -a -e "ssh -i /root/.ssh/backup_key" SRC... user@host:DEST

Where /root/.ssh/backup_key is an SSH private key whose public half is authorized for user on the remote. (Note that rsync's --password-file option only supplies a password to an rsync daemon, not to an SSH transport, so key-based authentication is the way to go here.) Obviously keep the key in a safe place that can only be read by root, e.g. chmod go-rwx /root/.ssh/backup_key.

Then you can create a timer to run it automatically every day:

# /etc/systemd/system/rsync-backup.timer

[Unit]
Description=Run backup every day

[Timer]
OnCalendar=daily
AccuracySec=1min
Persistent=true

[Install]
WantedBy=timers.target

Then run systemctl daemon-reload to have systemd read the new unit files, and enable the timer with:

systemctl enable rsync-backup.timer
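
You can also start the timer right away and do a one-off test run of the service to make sure everything is wired up correctly (standard systemd commands):

sudo systemctl enable --now rsync-backup.timer
sudo systemctl start rsync-backup.service   # run one backup right now
journalctl -u rsync-backup.service          # inspect rsync's output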

This would perform the backup every day, with rsync running as root and having permission to access everything on the local machine, which to my understanding should be safe as long as you're not relying on wacky shell scripts invoking external commands left and right as root, which would be a nightmare to secure. And rsync only communicates over SSH to access the remote host, on which it doesn't need any special permissions, so it can SSH as a normal user. If that fails for some reason, you can SSH as root too, though I'd recommend using a strong SSH key rather than a password in that case.

Unless there is something I'm missing here, this sounds like a complete and fairly easy solution for your server backup needs.

Edit: Also, while I don't think it's going to be a solution to your server-backup-with-too-many-permissions problem, you might want to check out rclone, which is an even better solution than rsync for many use cases. It advertises itself as "The Swiss army knife of cloud storage". It supports many providers, including Google Drive, Dropbox, Amazon, Azure, and general protocols like SFTP, SMB, WebDAV, etc., and it can mount remotes from any of those providers/protocols as a normal directory, which allows you to access them just as if they were ordinary files on your filesystem, using any programs you want. Mounting does not require root (it's performed using FUSE), so any user can mount remote directories without sudo or any other special permissions.

A quick walk through its usage:

# List available backends
rclone help backends

# Start the CLI wizard which walks you through
# the creation of a new remote:
rclone config

# There are many commands that you can use,
# such as rclone {copy|move|sync} SRC DEST,
# but probably the most intuitive way to use it
# is by mounting the remote as a local directory:
mkdir -p ~/Remotes/GDrive
rclone mount gdrive:/ ~/Remotes/GDrive
# This will run in the foreground, so you should
# switch to another terminal window.

# Then you can access files on the remote just as if
# they were local files, using any program you want:
cd ~/Remotes/GDrive
ls                       # print list of files on the remote

# Copy/download files from the remote to your machine
cp -v -- *.png ~/Pictures

# Copy/upload files from your machine to the remote
cp -v -- ~/Music/*.ogg .

touch NEW-FILE           # Create a new file
vim script.py            # Edit a new/existing file

# Browse the remote with a graphical file manager
dolphin . &

# etc.

To unmount the remote, simply return to the terminal where rclone mount is running, and kill it with Ctrl-C. Or if you spawned it in the background, you can kill it with killall rclone.
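
Alternatively, you can unmount it like any other FUSE mount (using the mount point from the walkthrough above):

fusermount -u ~/Remotes/GDrive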