A painful lesson of a Linux quibble


Posted in categories: Computer Tips, Work related

It has long been debated that there is no reliable way to undelete files on a Linux file system.
For this, extundelete provides a workaround.
However, the need for undelete mostly arises from the overly powerful command “rm”.

Suppose a user runs “rm -rf /*” or “cd /; rm -rf *”. The command will plow through the entire file system, entering every subdirectory with d??????r?x permission, looking for every directory with d??????rwx permission, and removing all the files inside, no matter who owns them or what permissions the files themselves carry.

Why? Why was my file, which grants others no privileges at all (say -rw-------), still deleted by somebody I have never known, someone who could not even read a word of it?

Because Linux manages the file system in such a way that a file’s entry is a record in its directory file. Once somebody gains write access to a directory, he is allowed to create and remove files in that directory, regardless of the permissions on the files themselves.

Then why isn’t there an access control bit on the file itself to govern its right to be deleted? The well-known quibble about this goes: “as long as you grant others the write privilege on a file, they can always zero it out, which is equivalent to deleting the file. So what’s the point of having a deletion control bit?”

This is erroneous both technically and in practice.

Technically, the write privilege is controlled by the “w” access control bit of the file. The deletion privilege, however, is controlled by the access bits of the directory the file lives in. So even after you remove the write bit from the file itself, as long as others still hold write privilege on the directory it resides in, the file can still be removed by anyone.
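A quick shell sketch (using a throwaway scratch directory) makes this concrete: the file below carries no “w” bit at all, yet rm removes it because the directory grants write access.

```shell
#!/bin/sh
# Demo: deletion is governed by the directory's permissions, not the file's.
set -e
dir=$(mktemp -d)            # scratch directory for the demo
echo "secret" > "$dir/data"
chmod 400 "$dir/data"       # the file itself: read-only, no "w" bit anywhere
chmod 700 "$dir"            # the directory: writable by us
rm -f "$dir/data"           # succeeds: rm only needs "w" on the directory
ls -A "$dir"                # prints nothing; the read-only file is gone
rmdir "$dir"
```

Note that without -f, rm prompts about removing a write-protected file, but it still deletes it once you confirm.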

In practice, an erroneously issued “rm -rf .” is a very common human mistake during tedious computer work. To systematically zero out every file that can be modified, on the other hand, you would have to write a script to do it, like
#!/usr/bin/perl
use File::Find;
find(\&wanted, ".");
# Truncate every regular file under the current directory to zero bytes.
sub wanted { truncate($_, 0) if -f; }
Everybody who gains access to Linux has to learn “rm -rf”. But how many people are skilled enough to program a script that zeroes out files? “rm” is needed in routine operation; only jerks would systematically zero out files.

That is to say, without an access control bit on the file itself to help preserve its presence, a file is exposed to non-malicious human error. With such a bit, even left at its default off state, a “rm -rf” committed by yourself could still eliminate the file, yet the file would be strong enough to withstand a “rm -rf” issued by others. This does not make much difference on a single-user system like macOS or desktop Linux. However, it is critical on multiuser Linux platforms, especially ones with plenty of Linux newbies.

Currently there is no workaround for this problem. SELinux cannot distinguish one legitimate user from another. The sticky bit restricts others but restricts yourself as well. Tampering with the rm command would disrupt system operations that remove temporary files. Aliasing “rm” does not stop users from running “/bin/rm” directly.
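For reference, the sticky bit mentioned above is the mechanism behind /tmp: anyone may create files in such a directory, but only a file’s owner (or the directory’s owner, or root) may delete them. A minimal sketch of setting it on a scratch directory:

```shell
#!/bin/sh
# Demo: a /tmp-style shared directory with the sticky bit set.
set -e
dir=$(mktemp -d)
chmod 1777 "$dir"     # rwxrwxrwt: world-writable plus the sticky bit
ls -ld "$dir"         # the trailing "t" in the mode marks the sticky bit
rmdir "$dir"
```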

There are a few things you may want to do to safeguard your data at this point:
1) If you do not want to expose anything to others, run:
cd ~; chmod 700 .
This makes your files visible only to yourself, free from any random deletion;
2) If you do need a group of people to access your files, run:
cd ~; chmod 750 .
This makes your home directory visible to the users in your group.
Then you may want to have a few folders like:
chmod 755 public; chmod 750 group; chmod 700 private
so that your files are categorized by the privilege granted to others. The contents of the group and private folders will be free from anonymous deletion, while the contents of public remain vulnerable, so you had better keep a backup of them in your private folder.
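The layout suggested in step 2) can be sketched in one snippet. The demo below applies it to a scratch directory standing in for your home, so it is safe to run as-is; the folder names are only the examples from above.

```shell
#!/bin/sh
# Sketch of the suggested home-directory layout, applied to a scratch
# directory standing in for $HOME.
set -e
home=$(mktemp -d)          # substitute your real home directory here
cd "$home"
chmod 750 .                # home: full access for you, enter/list for group
mkdir -p public group private
chmod 755 public           # readable by everyone
chmod 750 group            # readable by your group only
chmod 700 private          # yours alone; keep backups of public files here
ls -l                      # shows drwxr-xr-x, drwxr-x---, drwx------
```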

If you do not want to change your current directory structure, run
find . -type d -perm /002
This lists the folders that are vulnerable to guest deletion. You may want to take action on them, either
chmod o-rwx folder_name
to close them, or
chmod -R o-w folder_name
to keep others from creating and deleting files in there.
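If you would rather fix everything the find command reports in one pass, the listing and the chmod can be combined; this strips only the others-write bit and leaves the rest of each directory’s mode alone. A sketch on a scratch tree:

```shell
#!/bin/sh
# Demo: strip the others-write bit from every world-writable directory
# in one pass, shown here on a scratch tree.
set -e
top=$(mktemp -d)
mkdir -p "$top/safe" "$top/open"
chmod 755 "$top/safe"
chmod 777 "$top/open"       # world-writable: vulnerable to guest deletion
find "$top" -type d -perm /002 -exec chmod o-w {} +
ls -ld "$top/open"          # now drwxrwxr-x: others can no longer delete
rm -r "$top"
```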
