25

After committing the infamous mistake of deleting my entire file system via sudo rm -rf /*, recovering from the horrendous damage I had done, and coping with the fact that I had just lost six years off my lifespan, I started wondering why it is even possible to do that, and what could be done to prevent this mistake from happening.

One solution that was suggested to me is revoking root access from my account, but that is inconvenient: a lot of commands require root access, and when you have to run a few dozen of them every day, that gets annoying.

Backing up your system is the obvious way to go. But restoring a backup also requires some downtime, and depending on your system that downtime could be days or weeks, which could be unacceptable in some cases.

My question is: why not implement a confirmation when the user tries to delete their filesystem? That way, when you actually want to do it you just hit Y or Enter, and if you don't, at least you don't lose everything.

7 Answers

60

Meet safe-rm, the “wrapper around the rm command to prevent accidental deletions”:

safe-rm prevents the accidental deletion of important files by replacing rm with a wrapper which checks the given arguments against a configurable blacklist of files and directories which should never be removed.

Users who attempt to delete one of these protected files or directories will not be able to do so and will be shown a warning message instead. (man safe-rm)

If the installation link above doesn't work for you, just use sudo apt install safe-rm instead. The default configuration already contains the system directories; let's try rm /* for example:

$ rm /*
safe-rm: skipping /bin
safe-rm: skipping /boot
safe-rm: skipping /dev
safe-rm: skipping /etc
safe-rm: skipping /home
safe-rm: skipping /lib
safe-rm: skipping /proc
safe-rm: skipping /root
safe-rm: skipping /sbin
safe-rm: skipping /sys
safe-rm: skipping /usr
safe-rm: skipping /var
…

As you can see, this would prevent you from deleting /home, where I suppose your personal files are stored. However, it does not prevent you from deleting ~ or any of its subdirectories if you try deleting them directly. To add the ~/precious_photos directory, just add its absolute path (with the tilde resolved) to safe-rm's config file /etc/safe-rm.conf, e.g.:

echo /home/dessert/precious_photos | sudo tee -a /etc/safe-rm.conf
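
After adding the entry, trying to remove the directory should now be refused, much like in the listing above (the path here is just the example from before):

$ rm -r /home/dessert/precious_photos
safe-rm: skipping /home/dessert/precious_photos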

For the cases where you run rm without sudo¹ and without the -f flag, it's a good idea to add an alias for your shell that makes rm's -i flag the default. This way rm asks about every file before deleting it:

alias rm='rm -i'

A similarly useful flag is -I; the difference is that it only warns "once before removing more than three files, or when removing recursively", which is "less intrusive than -i, while still giving protection against most mistakes":

alias rm='rm -I'
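
For illustration, deleting several files at once with -I produces a single summary prompt, roughly like this (exact wording depends on the coreutils version):

$ rm -I one.txt two.txt three.txt four.txt
rm: remove 4 arguments? n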

The general danger of these aliases is that you easily get in the habit of relying on them to save you, which may backfire badly when using a different environment.


¹: sudo ignores aliases; one can work around that by defining alias sudo='sudo ', though.

dessert
  • 40,956
26

Confirmation is already there; the problem is the -f in the command, that is, --force. When a user forces an operation, it is assumed they know what they are doing (though obviously a mistake can always happen).

An example:

 rm -r ./*
 rm: remove write-protected regular file './mozilla_mvaschetto0/WEBMASTER-04.DOC'? N
 rm: cannot remove './mozilla_mvaschetto0': Directory not empty
 rm: descend into write-protected directory './pulse-PKdhtXMmr18n'? n
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-bolt.service-rZWMCb'? n
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-colord.service-4ZBnUf'? n
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-fwupd.service-vAxdbk'? n
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-minissdpd.service-9G8GrR'? 
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-ModemManager.service-s43zUX'? nn
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-rtkit-daemon.service-cfMePv'? n
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-systemd-timesyncd.service-oXT4pr'? n
 rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-upower.service-L0k9rT'? n

It is different with the --force option: I will not get any confirmation, and the files are simply deleted.

The real problem is knowing the command and its parameters: dig deeper into a command's man page, even if you found the command in a tutorial. For example, the first time I saw the command tar xzf some.tar.gz I asked myself, "what does xzf mean?"

Then I read the tar manpage and discovered it.
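
Two quick ways to make that check before running an unfamiliar command (they work for rm just as well):

$ man tar          # full manual page
$ tar --help       # short summary of the options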

SusanW
  • 185
AtomiX84
  • 1,231
21

rm is a low-level system tool. These tools are built as simply as possible, as they must be present on any system. rm is expected to have well-known behaviour, especially with regard to confirmation prompts, so that it can be used in scripts.

Adding a special case to prompt on rm /* would not be possible, as the rm command doesn't see it in this form. The * wildcard is expanded by the shell before being passed to rm, so the actual command which would need a special case is something like rm /bin /boot /dev /etc /home /initrd.img /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var /vmlinuz. Adding code to check for this case (which will probably differ between Linux distributions) would be a complex challenge as well as being prone to subtle errors. The standard Linux rm does have a default protection against system destruction: it refuses to remove / without the --no-preserve-root option.
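
You can see that expansion for yourself by substituting echo for rm; the shell hands over exactly the same argument list (shown here for the directory layout from the example above):

$ echo /*
/bin /boot /dev /etc /home /initrd.img /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var /vmlinuz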

By default there are three protections against deleting your system in this way:

  1. Permissions - regular users won't be able to remove important files. You bypassed this with sudo
  2. Directories - by default rm will not remove directories. You bypassed this with the -r flag
  3. Write protected files - by default, rm will ask for confirmation before deleting a write protected file (this would not have stopped all the damage, but may have provided a prompt before the system became unrecoverable). You bypassed this protection with the -f flag

To remove all the contents of a folder, rather than running rm /path/to/folder/*, do rm -rf /path/to/folder followed by mkdir /path/to/folder: this benefits from the --preserve-root protection and also removes any dotfiles in the folder.
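
For reference, this is roughly what the --preserve-root failsafe looks like with a current GNU coreutils rm (the exact wording varies between versions):

$ rm -rf /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe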

rhellen
  • 394
9

Running without backups means you have to be super careful to never make any mistakes. And hope your hardware never fails. (Even RAID can't save you from filesystem corruption caused by faulty RAM.) So that's your first problem. (Which I assume you've already realized and will be doing backups in the future.)


But there are things you can do to reduce the likelihood of mistakes like this:

  • alias rm='rm -I' to prompt if deleting more than 3 things.
  • alias mv and cp to mv -i and cp -i (many normal use-cases for these don't involve overwriting a destination file).
  • alias sudo='sudo ' to do alias expansion on the first argument to sudo (all three are collected in the snippet below)
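
A minimal sketch of how those lines might look in ~/.bashrc, assuming a Bash shell (adjust the flags to taste):

# ~/.bashrc
alias rm='rm -I'     # one summary prompt when removing more than 3 files or recursing
alias mv='mv -i'     # prompt before overwriting an existing destination
alias cp='cp -i'     # same for cp
alias sudo='sudo '   # trailing space makes sudo expand aliases on its first argument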

I find rm -I is a lot more useful than rm -i. It usually doesn't prompt during normal use, so getting prompted when you didn't expect it is a lot more noticeable and a better warning. With -i (before I discovered -I), I got used to typing \rm to disable alias expansion, after being sure I'd typed the command correctly.

You don't want to get in the habit of relying on rm -i or -I aliases to save you. It's your safety line that you hope never gets used. If I actually want to interactively select which matches to delete, or I'm not sure if my glob might match some extra files, I manually type rm -i .../*whatever*. (Also a good habit in case you're ever in an environment without your aliases).

Defend against fat-fingering Enter by typing ls -d /*foo* first, then pressing up-arrow and changing it to rm -r after you've finished typing. That way the command line never contains rm -rf ~/ or a similarly dangerous command at any point. You only "arm" it by changing ls to rm (control-a to go to the start of the line, alt-d to delete the word, then type rm) and adding the -r or -f after you've finished typing the ~/some/sub/dir/ part of the command.

Depending on what you're deleting, actually run the ls -d first, or not if that wouldn't add anything to what you see with tab-completion. You might start with rm (without -r or -rf) so it's just control-a / control-right (or alt+f) / space / -r.
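
A concrete run of that habit might look like this (the paths are made up for illustration):

$ ls -d ~/old-builds/*2017*
/home/you/old-builds/project-2017-a  /home/you/old-builds/project-2017-b
$ rm -r ~/old-builds/*2017*    # recalled with up-arrow, ls -d changed to rm -r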

(Get used to bash/readline's powerful editing keybindings for moving around quickly, like control-arrows or alt+f/b to move by words, and killing whole words with alt+backspace or alt+d, or control-w. And control-u to kill to the beginning of the line. And control-/ to undo an edit if you go one step too far. And of course up-arrow history that you can search with control-r / control-s.)

Avoid -rf unless you actually need it to silence prompts about removing read-only files.

Take extra time to think before pressing return on a sudo command. Especially if you don't have full backups, or now would be a bad time to have to restore from them.

Peter Cordes
  • 2,287
6

Well, the short answer is not to run such a command.

The longer story is that this is part of the system's customizability. Essentially, there are two factors at play here. One is the fact that you are free to modify all files.

The second is that the rm command offers the helpful syntactic sugar to delete all files under a folder.

Effectively this can be restated as a single simple tenet of Unix machines: everything is a file. To make matters better, there are access controls, but these are overridden by your usage of sudo.

I guess you could add an alias or a function to ensure that this can never be run.
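
A minimal sketch of such a guard, assuming a Bash shell; the list of protected paths is purely illustrative:

# ~/.bashrc: refuse to run rm against a few critical top-level paths
rm() {
    local arg
    for arg in "$@"; do
        case "$arg" in
            /|/bin|/boot|/dev|/etc|/home|/usr|/var)
                echo "rm: refusing to remove protected path: $arg" >&2
                return 1
                ;;
        esac
    done
    command rm "$@"    # fall through to the real rm for everything else
}

Like the aliases discussed in the other answers, this only helps in interactive shells where it is defined; scripts and sudo bypass it.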

HaoZeke
  • 449
4

If your system file space usage isn't immense (and these days 'immense' means 'hundreds of gigabytes or more'), create some virtual machine instances and always work inside one. Recovery would then just entail switching to a backup instance.

Or you could create a chroot jail, and work inside it. You'd still need some recovery if it got trashed, but that would be easier with a running (enclosing) system to work from.
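
One way to set up such a jail on a Debian/Ubuntu host is with debootstrap (the suite and target directory are just examples):

$ sudo apt install debootstrap
$ sudo debootstrap stable ~/jail http://deb.debian.org/debian
$ sudo chroot ~/jail /bin/bash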

3

rm is a very old Unix command and was likely not designed with user-friendliness in mind. It tries to do precisely what it's asked to do, when it has the permissions. A pitfall for many new users is that they frequently see commands with sudo and don't think much before using it. Commands that directly modify files or disks, like rm, dd, chroot, etc., require extreme care in use.

Nowadays I like to use trash (without sudo) from trash-cli. It functions like the Recycle Bin from Windows, in that you can easily retrieve accidentally deleted files. Ubuntu already has a Trash folder and move-to-trash functionality built into Files.
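
Typical usage looks like this (command names are those of the trash-cli package and may differ slightly between versions):

$ sudo apt install trash-cli
$ trash-put draft.txt    # move a file to the trash instead of deleting it
$ trash-list             # show trashed files with their original paths and dates
$ trash-restore          # interactively restore a trashed file
$ trash-empty 30         # permanently delete items trashed more than 30 days ago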

Even then you may make mistakes, so make sure to back up your entire filesystem.

qwr
  • 2,969