

Bring back deleted files with lsof

By Michael Stutz on November 16, 2006 (8:00:00 AM)


There you are, happily playing around with an audio file you've spent all afternoon tweaking, and you're thinking, "Wow, doesn't it sound great? Lemme just move it over here." At that point your subconscious chimes in, "Um, you meant mv, not rm, right?" Oops. I feel your pain -- this happens to everyone. But there's a straightforward method to recover your lost file, and since it works on every standard Linux system, everyone ought to know how to do it.

Briefly, a file as it appears somewhere on a Linux filesystem is actually just a link to an inode, which contains all of the file's properties, such as permissions and ownership, as well as the addresses of the data blocks where the file's content is stored on disk. When you rm a file, you're removing the link that points to its inode, but not the inode itself; other processes (such as your audio player) might still have it open. It's only after they're through and all links are removed that an inode and the data blocks it pointed to are made available for writing.

This delay is your key to a quick and happy recovery: if a process still has the file open, the data's there somewhere, even though according to the directory listing the file already appears to be gone.

This is where the Linux process pseudo-filesystem, the /proc directory, comes into play. Every process on the system has a directory here with its name on it, inside of which lie many things -- including an fd ("file descriptor") subdirectory containing links to all files that the process has open. Even if a file has been removed from the filesystem, a copy of the data will be right here:

/proc/&lt;process id&gt;/fd/&lt;file descriptor&gt;

To know where to go, you need to get the id of the process that has the file open, and the file descriptor. These you get with lsof, whose name means "list open files." (It actually does a whole lot more than this and is so useful that almost every system has it installed. If yours isn't one of them, you can grab the latest version straight from its author.)

Once you get that information from lsof, you can just copy the data out of /proc and call it a day.

This whole thing is best demonstrated with a live example. First, create a text file that you can delete and then bring back:

$ man lsof | col -b > myfile

Then have a look at the contents of the file that you just created:

$ less myfile

You should see a plaintext version of lsof's huge man page looking out at you, courtesy of less.

Now press Ctrl-Z to suspend less. Back at a shell prompt make sure your file is still there:

$ ls -l myfile
-rw-r--r--  1 jimbo jimbo 114383 Oct 31 16:14 myfile
$ stat myfile
  File: `myfile'
  Size: 114383          Blocks: 232        IO Block: 4096   regular file
Device: 341h/833d       Inode: 1276722     Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 1010/    jimbo)   Gid: ( 1010/    jimbo)
Access: 2006-10-31 16:15:08.423715488 -0400
Modify: 2006-10-31 16:14:52.684417746 -0400
Change: 2006-10-31 16:14:52.684417746 -0400

Yup, it's there all right. OK, go ahead and oops it:

$ rm myfile
$ ls -l myfile
ls: myfile: No such file or directory
$ stat myfile
stat: cannot stat `myfile': No such file or directory

It's gone.

At this point, you must not let the process that still has the file open exit, because once it does, the file really will be gone and your troubles will intensify. Your backgrounded less process in this walkthrough isn't going anywhere (unless you kill it or exit the shell), but if this were a video or sound file you were playing, the first thing to do upon realizing you'd deleted the file would be to pause playback or otherwise freeze the process, so that it doesn't finish with the file and exit.

Now to bring the file back. First see what lsof has to say about it:

$ lsof | grep myfile
less      4158    jimbo    4r      REG       3,65   114383   1276722 /home/jimbo/myfile (deleted)

The first column gives you the name of the command associated with the process, the second column is the process id, and the number in the fourth column is the file descriptor (the "r" after it means the file is open for reading; the "REG" in the fifth column marks it as a regular file). Now you know that process 4158 still has the file open, and you know the file descriptor, 4. That's everything you have to know to copy it out of /proc.

You might think that using the -a flag with cp is the right thing to do here, since you're restoring the file -- but it's actually important that you don't. With -a, instead of copying the literal data contained in the file, you'd be copying a now-broken symbolic link to the file as it was once listed in its original directory:

$ ls -l /proc/4158/fd/4
lr-x------  1 jimbo jimbo 64 Oct 31 16:18 /proc/4158/fd/4 -> /home/jimbo/myfile (deleted)
$ cp -a /proc/4158/fd/4 myfile.wrong
$ ls -l myfile.wrong
lrwxr-xr-x  1 jimbo jimbo 24 Oct 31 16:22 myfile.wrong -> /home/jimbo/myfile (deleted)
$ file myfile.wrong
myfile.wrong: broken symbolic link to `/home/jimbo/myfile (deleted)'
$ file /proc/4158/fd/4
/proc/4158/fd/4: broken symbolic link to `/home/jimbo/myfile (deleted)'

So instead of all that, just a plain old cp will do the trick:

$ cp /proc/4158/fd/4 myfile.saved

And finally, verify that you've done good:

$ ls -l myfile.saved
-rw-r--r--  1 jimbo jimbo 114383 Oct 31 16:25 myfile.saved
$ man lsof | col -b > myfile.new
$ cmp myfile.saved myfile.new

No complaints from cmp -- your restoration is the real deal.
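For reference, the whole walkthrough can be condensed into a single self-contained sketch. This version uses the shell's own PID ($$) and a descriptor we open ourselves instead of lsof output, so it runs in one shot; the file names are illustrative.

```shell
# Minimal sketch of the recovery, using the shell's own PID ($$)
# rather than lsof. File names are made up for the demo.
echo "precious data" > /tmp/precious.txt   # the file we're about to lose
exec 3< /tmp/precious.txt                  # hold it open on descriptor 3
rm /tmp/precious.txt                       # oops
ls -l /proc/$$/fd/3                        # "... -> /tmp/precious.txt (deleted)"
cp /proc/$$/fd/3 /tmp/precious.saved       # a plain cp copies the data back
exec 3<&-                                  # now it's safe to release the fd
cat /tmp/precious.saved                    # precious data
```

In real life the descriptor belongs to some other process, which is exactly why you need lsof to find the PID and fd numbers first.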

Incidentally, there are a lot of useful things you can do with lsof in addition to rescuing lost files.



Comments on Bring back deleted files with lsof

Note: Comments are owned by the poster. We are not responsible for their content.

Even better than copying

Posted by: Anonymous Coward on November 17, 2006 02:44 AM
You can relink the file with debugfs (on ext2 and ext3). See the linked how-to (the link was lost in archiving), and don't forget to read its comments for an even better way to find the inode number.


Wikipedia article

Posted by: Anonymous Coward on November 17, 2006 06:15 AM
Wikipedia has an article about 'lsof'.


Re:Wikipedia article

Posted by: Anonymous Coward on November 19, 2006 07:48 PM
Why the fuck do we care?


moving / deleting files while using them?

Posted by: Anonymous Coward on November 17, 2006 06:52 AM
quote from TA:
"When you rm a file, you're removing the link that points to its inode, but not the inode itself; other processes (such as your audio player) might still have it open"

as per subject, this indicates it's possible to rm a file while it's in use or am i misreading this?
can s.b. explain pls?


Re:moving / deleting files while using them?

Posted by: Anonymous Coward on November 17, 2006 07:05 AM
Often, yes you can


Re:moving / deleting files while using them?

Posted by: Anonymous Coward on November 17, 2006 01:15 PM
As explained in the article, 'rm' does just that: it deletes the directory entry. No file data is deleted; it is just the directory's reference to the inode that goes away. If there are no other references to that inode, then indeed the file could be considered deleted. Any open file handles that refer to that inode will keep it alive.

So no, the file is not removed, just the directory entry to the 'real file', so to speak.

To try to make this clearer: when a process opens a file via its directory entry, it really opens the file (the inode) that the directory entry points to. There may be any number of directory entries pointing to the same inode (hard links).

That's how I understand things anyway.


via reference counting

Posted by: Administrator on November 17, 2006 02:22 PM
The directory entry just contains the filename and an inode number. The inode is where the other attributes of the file reside (file size, permissions, owner/group, etc., as in "stat" in the article).

One of the attributes is the inode reference count. All files of any type (regular, directory, socket, pipe, symlink) have a reference count, as indicated by "Links" in the "stat" output. This indicates, as the name implies, how many references there are to that inode at any one moment.

Some operations will increment the reference count, such as hard linking or opening. Many processes can have one inode open (e.g. glibc or other app libraries). Note that symlinking does not increment the ref count, because it references by name/path, not inode.

Other operations decrement the ref count, like closing or using "rm". Remember, "rm" primarily deletes the directory entry, which is itself a reference to the inode.

When the inode reference count reaches zero, then all references to that inode are gone (all file descriptors closed in processes, all directory entries gone), and the filesystem driver releases all the inode's disk blocks for re-use by other files (inodes). Note that the disk blocks are not erased; they are merely marked as available.

By way of example:

create file /tmp/test-file, placed in inode 17044
  -> inode 17044 ref count set to 1
create a hard link from /tmp/test-file to /tmp/test-file-link
  -> inode 17044 ref count incremented to 2
process 549 opens /tmp/test-file as file #5
  -> inode 17044 ref count incremented to 3
"rm /tmp/test-file-link"
  -> inode 17044 ref count decremented to 2
"rm /tmp/test-file"
  -> inode 17044 ref count decremented to 1
...(note: this is when this article would come in handy! process 549 still has inode 17044 open)...
process 549 closes file #5
  -> inode 17044 ref count decremented to 0, data blocks marked as "available"
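The on-disk side of this bookkeeping is easy to watch with stat. One caveat to the description above: the Links field that stat reports counts only directory entries (hard links); open file descriptors are tracked by a separate in-kernel count, which is why opening a file does not change the number below. A quick sketch (GNU stat assumed; paths are illustrative):

```shell
touch /tmp/rc-demo                 # new file: one directory entry
stat -c %h /tmp/rc-demo            # prints 1
ln /tmp/rc-demo /tmp/rc-demo-link  # hard link: second entry, same inode
stat -c %h /tmp/rc-demo            # prints 2
exec 6< /tmp/rc-demo               # opening does NOT change the link count...
stat -c %h /tmp/rc-demo            # still prints 2
exec 6<&-                          # ...the kernel tracks open fds separately
rm /tmp/rc-demo-link               # drop one directory entry
stat -c %h /tmp/rc-demo            # prints 1
rm /tmp/rc-demo                    # clean up
```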

For an example "real world" application of this behavior, check the man page for "tmpfile". This function returns a unique temporary file, opened and ready for use, to be deleted automatically on close. It accomplishes this by creating the file by name (refcnt++), opening it (refcnt++), then deleting the filename (refcnt--) and passing the open file descriptor back to the calling code. The refcnt is 1 on the inode, until the program closes the file descriptor; once this happens, the inode and its data blocks are released.
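The create-open-unlink pattern that tmpfile uses is easy to imitate from the shell. A minimal sketch (descriptor number and names invented): the scratch file has no name from the moment it is created, and its blocks are released automatically when the descriptor is closed or the shell exits.

```shell
scratch=$(mktemp)             # create a named temporary file
exec 5<> "$scratch"           # open it read/write on descriptor 5
rm "$scratch"                 # delete the name immediately; fd 5 keeps
                              # the inode alive
echo "work in progress" >&5   # use it via the descriptor...
readlink /proc/$$/fd/5        # ...which /proc reports as "(deleted)"
exec 5<&-                     # closing the fd releases the disk blocks
```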

Sometimes, the reference count of a file is inconsistent with the system state (think "power failed while I was using it"); fsck will fix this. Also, not all filesystems accessible to Linux are inode-based (iso9660, vfat), so reference counting is not available as part of the filesystem infrastructure.

Now, if you add journaling (ext3, jfs, xfs, ReiserFS), managing all this gets a lot more complicated...



Posted by: Anonymous Coward on November 17, 2006 07:31 AM
That's awesome. ++article;


Best Of Luck With That

Posted by: Anonymous Coward on November 17, 2006 07:57 AM
Quick! Get into the file system and relink an accidentally deleted file, while it's still held open(?) by "some" application. Best of luck with that approach. It may work from time to time but, I can guarantee you that it will let you down when you need it most.

Linux file systems SUCK! Netware has had a filesystem that allowed restoration of multiple versions of deleted files (Salvage) since 1992, at least! More recently, they came up with Novell Storage Services (NSS), which not only has "Salvage" capabilities but also Copy On Write, which basically allows you to back up open files and more. Microsoft, not to be outdone for more than a few years, released their Volume Shadow Copy, which tries to be similar to NSS. Meanwhile, back in Linuxland we still piss around with antiquated relics like Ext2/3 and oddball hacks like this article offers. Make sense, man!

Why doesn't Linux have a recoverable file system with block level support for things like backing up open files? Don't you think that Ext2/3 are getting rather stale when compared to the file systems of commercial operating systems?

Well? Don't you?


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 17, 2006 08:18 AM
You mean like LVM ?


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 18, 2006 01:53 AM
Yes! Now if only LVM was the default file system configuration and was easily accessible and useable by regular users, as it is with Novell's Salvage and Microsoft's Volume Shadow Copy.

LVM is a good start though. Thanks for having a clue.


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 18, 2006 02:59 AM
"LVM is a good start though. Thanks for having a clue."

- In short, you'll catch more help with honey than a slap in the face.

Do you really find it more productive to be arrogant and obnoxious? There has to have been a better way you could have worded that, leaving out the "SUCKS!", "Get A CLUE" and lovely ending "Thanks for having a clue", as if you were individually entitled to everyone else's time.

I'm certainly not the most Linux-knowledgeable here, but I'm here to learn. I have my indignation with some vendors (hiya ATI... love the lack of FOSS support you're providing; sure would like to one day see 100% of my video card work in more than winBlows), but the FOSS community is generally helpful and always more knowledgeable.

If you want to contrast helpful commentary, read the normal NewsForge responses (dude, try program ABC, it may help), then those at CNet (I run ABC proprietary OS and all other OS users are stupid).

It's a joy to see the normally mature commentary on NewsForge and other FOSS news sites after reading fanboy crap for pages after each major news outlet article.


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 17, 2006 08:33 AM
The only real difference between this and Windows is perspective.

If you delete a file using a file browser in Gnome or KDE, it goes to a trash can just like in Windows.

If you delete a file in DOS in Windows, it goes poof just like from a shell in linux.

It's just that linux users like their shells.


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 17, 2006 08:36 AM
Plus, if you really want, you can use libtrash[1] and have files deleted from the shell go into your trash first as well.

[1]: the libtrash project page (the link was lost in archiving)


You're Clueless!

Posted by: Anonymous Coward on November 18, 2006 01:35 AM
What happens when you delete a file from the command line?
What happens when a file is changed by you/someone else/an application?
What happens when a client workstation deletes a file from a file server over the network?

In the case of Linux file systems like Ext2/3, Reiser, JFS, and others that information is GONE! In the case of Netware and -less so- Windows, the file or a previous version of the file can be recovered almost instantly by any user with appropriate rights to the file. No need for funky and unreliable hacks like the article proposes. No need to drag out backup tapes. No need to call the help desk.

When you get a clue, try to offer a reasonable argument. Until such time, try to avoid posting!



Re:You're Clueless!

Posted by: Anonymous Coward on March 04, 2007 06:33 PM
if a client workstation or a program deletes a file under Windows, it's gone; the trash is bypassed, just as it would be under Linux.

*you* should avoid posting clueless comments...


Backing up open files works, doesn't it?

Posted by: Anonymous Coward on November 17, 2006 02:03 PM
My Debian system backs up open files from ext3 partitions every day.


Think Outside The Juice Box

Posted by: Anonymous Coward on November 18, 2006 01:47 AM
Do you really think that your Debian system is a significant indicator of the overall Linux world? While it's wonderful that your box runs daily backups of open files, do you not think that others out in the wide-wild-world may have slightly different setups that have slightly different requirements than yours???

When are people, such as yourself, going to realize that their own personal and anecdotal experience is COMPLETELY INSIGNIFICANT in the grand scheme of things?


Re:Think Outside The Juice Box

Posted by: Anonymous Coward on November 18, 2006 03:07 AM
I wonder if offering personal anecdotal responses would ever inspire someone else to think "hey, I hadn't thought of that, maybe I try that this weekend to see if it works for me."

I wonder if having more information than you had before reading the article/comment is a good or bad thing. See, I kinda think that the more information you can get and understand, the better off you are to replicate a solution or build on and develop your own solution.

But then, I'm just one lowly indavidual in a wide wild world so what the heck could my attempt to increase your knowledge possibly do to help huh.


Re:Think Outside The Juice Box

Posted by: Administrator on November 18, 2006 03:04 AM
Since the grandparent comment seemed to claim that Linux filesystems are incapable of backing up open files, this single counterexample is quite enough to disprove that universal statement. Therefore it is entirely significant.


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 17, 2006 10:14 PM
Linux users generally like their computer to do what it's told. If we tell it to delete a file, we expect it to be deleted. There's nothing more annoying than little "helpers" which say "I know you said to delete it, but you didn't *really* mean that, did you?"

"Yes, you stupid annoying thing, I did!!"

Obviously a consequence of that is that human error is so much more irreparable.

- ayteebee

Good article by the way, this is a neat little trick which is bound to come in useful sometimes!


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 17, 2006 11:43 PM
Oh come on! Write your own god damn rm alias to back up your shit.

rm -> save path of file; move file to recycle bin
un-rm -> load path of file; restore file from bin
empty -> clear out bin and paths
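The three steps the comment sketches can be written as a few shell functions. This is a minimal sketch only: the function names (trash, untrash, empty_trash) and the bin location are invented, and there is no handling of name collisions inside the bin.

```shell
# Minimal sketch of the rm / un-rm / empty idea above.
# Names are invented; no handling of duplicate basenames.
TRASH="$HOME/.shell-trash"
mkdir -p "$TRASH"

trash() {    # "rm": record each file's full path, then move it to the bin
    for f in "$@"; do
        realpath "$f" > "$TRASH/$(basename "$f").path"
        mv "$f" "$TRASH/"
    done
}

untrash() {  # "un-rm": restore one file to its recorded path
    mv "$TRASH/$1" "$(cat "$TRASH/$1.path")" && rm "$TRASH/$1.path"
}

empty_trash() {  # "empty": clear out the bin and the saved paths
    rm -rf "$TRASH" && mkdir -p "$TRASH"
}
```

An `alias rm=trash` would complete the picture, though aliasing rm itself is risky on systems where scripts expect the real thing.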


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 18, 2006 12:23 AM
IMHO, there's no need to move such a complexity into kernel space.

Handling your files with subversion or even with a stackable file system interface with fuse should do the trick.

KISS principle rocks.


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 18, 2006 01:29 AM
They suck, do they, lol? Does Windows YET have a filesystem with journaling? Chkdsk? Defrag? What are THOSE? In linux, those are not required - I haven't used either in years.

Linux not only has more advanced filesystems in general, but more to choose from. Need one that is optimized for handling thousands of very small files? They have it. Need a filesystem that can delete VERY large files VERY quickly? They have that.

Finally, what do you mean "commercial"? You mean like XFS or JFS, made by SGI and IBM, lol?

Get with the program, dude. There are filesystems that allow encryption - real encryption, not that fake-crap that NTFS gives you. There are filesystems that use compression, ideal for cd-rom based liveCDs and there are filesystems that let you stack multiple mount points together, giving you and your apps the illusion that they can write to read-only media like CDs and DVDs - very handy for many tasks.

All of the above are free, and most are included in the major distros. Now, if you want commercial as in you have to pay big bucks for it, well, they have those too, like that new one from IBM that's made for ginormous filesystems with ginormous data. Dontcha wonder why they don't use Windows machines to build supercomputers? Hmmm, lol...

As someone else said, trash cans are fine for the noob crowd. No one really undeletes in windows anymore...


You're Even More Clueless!

Posted by: Anonymous Coward on November 18, 2006 02:05 AM
Windows has had a journaling file system since NT 4 back in 1993. As you know, it's called NTFS.

Chkdsk is the same as fsck, get a clue. As for defrag, what's your point? I guarantee you that WHEN someone offers a defrag for Ext or Reiser, it will be hailed as the greatest thing since sliced bread. Yawn.

Large and small file support? Encryption and compression? Are you truly so clueless as to believe that these features are not available in NTFS and NSS?

Just because something is free does not make it inherently superior! Freeness also does not negate the fact that the feature is COMPLETELY MISSING! The ability to salvage deleted files has nothing to do with trash cans. If you had a clue, you'd have avoided posting!


Re:You're Even More Clueless!

Posted by: Anonymous Coward on November 20, 2006 05:14 PM

You're not so clued up yourself. NT4 with NTFS first saw the light of day in 1996 - before that it was NT3.51 with HPFS.

If defrag was necessary for Linux filesystems, or Novell for that matter, don't you think it would have been made available MANY years ago (the Netware volume structure is more than 15 years old and NSS is nearly nine). "The Community" is incredibly efficient at creating apparently useless small programmes in no time flat because somebody had a need. These useless appendages then begin to lead a life of their own and before you know it, they are full-blown, efficient and highly desirable applications.

The Linux world has had fifteen years to come up with dfrg, KDEfrag, Gefrag or something similar. Since it hasn't appeared yet, I for one will not be holding my breath or losing a night's sleep over it.

It is obviously not necessary!


Re:You're Even More Clueless!

Posted by: Anonymous Coward on November 20, 2006 09:57 PM
There's e2defrag, but that's now very old, always was experimental and now long unsupported.

However, ext2 and ext3 handle defragmentation in the background; the defragger is effectively built into the filesystem, and you don't have to do anything special to defrag it. It just happens when needed.


Re:You're Even More Clueless!

Posted by: Anonymous Coward on November 22, 2006 08:03 AM
It is obviously not necessary!

That's what they said about a GUI for Linux, back when it didn't have one that was worth a crap. But, by your reasoning, it's obviously not necessary for Linux to have a recoverable file system. So, if that's the case, why are we still discussing NASTY and pathetic attempts to recover files like the one this article describes? This despite the fact that "the Linux world has had fifteen years to come up with" something. Anything!

You may wish to go down a rat hole about whether or not Ext2/3 needs defrag, while I couldn't care less. My original statement stands: Ext2/3, Reiser et al. are severely lacking when compared to modern file systems from Novell and Microsoft. But, true to form, the Linux crowd here doesn't want to admit the facts and would rather argue to the death over irrelevant minutiae.

Just look at this thread, the excuses are outrageous and oh so typical. 'Linux taught me not to make such mistakes' WTF!?!?!?!? You're kidding yourselves!!!



Posted by: Anonymous Coward on December 25, 2006 01:51 AM
The fact that you are stupid enough to think Windows, let alone Novell, is better than linux means you probably are too much of a moron to use linux anyways.

Ohh, it's not point and click, I don't know how to set up apache. I want Microsoft to give me little windows to click; boo hoo. Most of the sysadmins I have met over the years think they are so smart because they can follow the Microsoft dialog boxes... but give them cli and tell them to set up a domain name server using bind and they will cry like little girls.

There are plenty of ways you can back your data up, either by writing scripts to move data to an offsite backup, putting files on CD, etc. Most people who use linux are using them on secure sites (I remember when you could go to a website and it would tell you your NT password, that's REAL security ha) and when they delete a file, they don't want people easily getting into it again! It could contain passwords or sensitive data. Why do we want 30 copies of this sensitive data floating around the computer?

Also, because all you can talk about is Ext2/3 I doubt you really know much about the _other_ filesystems Linux has to offer, even though other users have talked about them.

The NSA, that's the National Security Agency, uses Linux distributions. I cannot name one major university using Windows on its servers, but I know about 40 that use Linux. I sometimes work out of a supercomputer center, and ALL of their machines are Linux.

Let's have some fun, shall we? Go to the top 500 supercomputer list and find out how many are running Windows: NONE. Not a single one. If Windows is so SUPERIOR, it would be the obvious choice for supercomputers, right? Idiot.

I will not try to argue that Linux is the best thing for a desktop OS, however with newer distributions like Kubuntu it is very practical (I put kubuntu on my gf's laptop and she never has any problems, and she knows NOTHING about computers). When she deletes a file it goes to the recycling bin... and she never bothers to empty it (I empty it every now and then).

But for the most part, Linux is a server OS. You shouldn't be in such a hurry when you're on your server that you run around rm'ing files. It is nice though to have a backup way of getting files after they are gone, so this article is handy.

So, I conclude, either:
1. you are just stupid and can't use linux
2. you are a microsoft employee or work indirectly for microsoft
3. both 1 and 2

(you are definitely not a novell employee because novell hates microsoft)


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 19, 2006 12:19 AM
Best of luck to you!

I have no clue of what point you are trying to make therefore you suck.


Re:Best Of Luck With That

Posted by: Anonymous Coward on November 21, 2006 01:12 AM
Even though my GNU/Linux system does not have 'salvage' capabilities, I am not afraid of accidentally deleting my files. I have a good friend called 'backup'.

GNU/Linux and Un*x also trained me to be very cautious when typing commands. As I remember, the last time I accidentally deleted important files was 7 years ago :) Yes, technology can make it easier for people to do things, whether the right or wrong thing. But technology cannot prevent us from making mistakes, because NOTHING IS PERFECT.


Re:Best Of Luck With That

Posted by: Anonymous Coward on December 18, 2006 09:37 PM
Quick: How do you mount more than 26 filesystems under Windows?!

Aha! ...Made you think, right? You see, Microsoft chose the number 26 because... well, that's how many letters are in the English alphabet. See? Makes sense, right? Subtle, but clever.

Anywho... You are probably not the type that goes in for all that "thinking" mumbo jumbo, and clearly your definition of state-of-the-art will always be "whatever Novell or Microsoft spoon feeds me", so I won't waste any more of your precious time. You need to get back to whatever it is they pay you to do.


Re:Best Of Luck With That

Posted by: Anonymous Coward on March 04, 2007 07:13 PM
not that easy.

first of all, you cannot use A: and B: as "mountpoints" for anything other than floppy disks, and thus 26 is still too much :-)

on the other hand, since Windows 2000 you can choose mountpoints in the disk manager that are not drive letters but regular directories, as in POSIX OSes. and of course, since Windows is an industry-leading, on-the-edge piece of software, this config is scriptable as in POSIX OSes (well, it's scriptable in VBS and needs to walk through the registry, but it's scriptable anyway).



Posted by: Anonymous Coward on November 17, 2006 08:02 AM
Remember, everyone, to make backups of all your important data so it doesn't disappear if something happens!
I back up my important data on a USB flash drive.
But you can buy a cheap 250 GB hard disk for maybe $100...



Posted by: Anonymous Coward on November 17, 2006 08:29 AM
Not to be snarky (well, maybe a little), but to make this useful, all you have to do is use less to keep open every file you might accidentally delete.

So it's interesting, but not as general as the title of the article makes it sound.



Posted by: Anonymous Coward on November 17, 2006 04:14 PM
You didn't get the point.
less was just an example of showing that a process has to keep the file busy.
It could be just any other process.

As the author mentioned, it could be an audio player, or a document editor or a daemon.



Posted by: Anonymous Coward on November 18, 2006 01:59 AM
Lucky us: 99% of the files are not in use.
This trick only works if you are deleting a file that you also happen to have open in some other program.


vim /var/www/index.html
<type some stuff>
:wq
rm /var/www/index.html

I bet no program is using this file so indeed it's gone.

It's better to use boxbackup or something like that than to rely on the assumption that some other process might have the file open so we can try to rescue it.



Posted by: Anonymous Coward on November 18, 2006 02:59 AM
Great. So just keep every file opened with less, cat, or some other program.

You see, that doesn't make it any more helpful.


Or even better...

Posted by: Anonymous Coward on November 20, 2006 11:48 PM
don't let it happen in the first place!

As usual in Linux, there is more than one way to do it:

* libtrash is a dynamic library which, when preloaded, provides wrappers for numerous libc calls, especially unlink(), so that every program, instead of removing a file, moves it to a configurable place in the file system.

* several projects use the FUSE framework to implement versioning filesystems; have a look at the list of FUSE filesystems (the link was lost in archiving).



Small mistake

Posted by: Anonymous Coward on November 21, 2006 07:52 AM
The number in the fourth column is the file descriptor, but the "r" means it's open in read mode. The type of the file is in the fifth column: REG, a regular file.


I'm working on something better - link them back

Posted by: Anonymous on August 29, 2007 07:01 AM
I'm working on a somewhat better solution: given the file's name under the /proc file system (i.e. /proc/<pid>/fd/<fd>) and a new name for the file, my kernel module will link that inode back to a name of your choice. Of course, the new name must be on the same file system as the old one. The advantage is twofold: 1. if the program changes the file, you get the updated version (since it's the actual file, not a copy); 2. if the file is too large to copy, you don't need the extra space.

