What is the correct way for an FTP server to prevent corrupted uploads caused by a late append?


Using pure-ftpd, I uploaded about 1% of a 1,276,541,542-byte file, roughly 15 MB. Then I killed the network connection abnormally to simulate a client getting kicked off by their ISP. I waited an hour, reconnected, issued an APPE (append) command, and uploaded the rest of the file. The final size of the file on the server was 1,292,326,238 bytes, i.e. about 15 MB more than it should be: a corrupt file. What is the correct way for an FTP server to prevent corrupted uploads caused by a late append?


There are 2 answers

Steffen Ullrich

What is the correct way for an FTP server to prevent corrupted uploads caused by a late append?

There is no way for the FTP server to prevent corrupted uploads, because the server does not know what the file should contain.

But the server can help the client do a proper resume by implementing the SIZE command. With it, the client can determine the current size of the file on the server, and thus the offset at which the upload should continue. Of course, this logic has to be implemented on the client side.
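Client-side, that resume logic amounts to: ask the server with SIZE, seek the local file to the reported offset, then APPE only the remainder. A minimal sketch using Ruby's standard Net::FTP library (the function name and paths are illustrative, and an already-logged-in connection is assumed):

```ruby
require "net/ftp"

# Resume an interrupted upload over an existing Net::FTP connection.
# Assumes the server implements SIZE; names and paths are illustrative.
def resume_upload(ftp, local_path, remote_path)
  remote_size =
    begin
      ftp.size(remote_path)      # SIZE: bytes the server already has
    rescue Net::FTPPermError
      0                          # no remote file yet: start from zero
    end

  File.open(local_path, "rb") do |f|
    f.seek(remote_size)          # skip the bytes that made it through
    # APPE appends to the remote file, so only the remaining bytes
    # are transferred, not the whole file again
    ftp.storbinary("APPE #{remote_path}", f, Net::FTP::DEFAULT_BLOCKSIZE)
  end
end
```

The key point is the seek: appending the local file from offset 0, as the client in the question apparently did, is exactly what produces a file that is too large.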

Andrew Arrow

I ran some tests with pure-ftpd's upload script.

I'm running pure-uploadscript --run /home/aa/done.rb --daemonize

and my done.rb program is:

#!/usr/bin/env ruby
# pure-uploadscript runs this after each upload, passing the uploaded
# file's path as the first argument; here we just drop a marker file
puts "done"
File.open("/home/aa/ddd.txt", "w") do |f|
  f << "test"
end

and when I run pure-ftpd --uploadscript and upload a file, sure enough the done.rb program is run.

(I know it ran because a new file called ddd.txt appears.)

BUT when I upload a big file and kill the FTP client in the middle of the upload, done.rb is STILL run. (Yes, I deleted ddd.txt first.)

Therefore, the answer to the question is: even pure-ftpd can't handle this, because of the limits of the FTP protocol. The upload script fires for partial uploads too, so the server side cannot reliably tell a complete file from a truncated one.
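Since the server cannot validate the data, the decision has to live in the client: compare the local size with what SIZE reports before appending, and again after the transfer. A small sketch of that decision (the helper name is made up for illustration):

```ruby
# Decide how to continue an interrupted upload, given the local file
# size and the size the server reports via SIZE.
#   remote == local  -> upload already complete
#   remote <  local  -> append the remaining bytes from that offset
#   remote >  local  -> remote copy is corrupt; delete and start over
def next_step(local_size, remote_size)
  if remote_size == local_size
    [:done]
  elsif remote_size < local_size
    [:resume, remote_size]
  else
    [:restart]
  end
end
```

With the sizes from the question, next_step(1276541542, 1292326238) returns [:restart], flagging exactly the corrupt file the blind append produced.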