We are using s3fs to mount an S3 bucket as a local file system, and it works great. My question, though, is: when accessing files on S3 (via the local mount), how do we ensure that a file has been completely downloaded?
For example, if we were using local storage, I could access a file (in PHP) using file_get_contents('path/to/file'), and the call would return false (and emit a warning) if the file could not be read.
When using the S3 bucket as a local file system, I can still access a file using file_get_contents('path/to/file'). But how can I ensure that s3fs completely downloaded the file rather than only partially downloading it? Will file_get_contents fail in that situation, or do I need to check MD5 hashes? Is there some way to get the MD5 hashes of the remote files while using s3fs?
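To make the question concrete, this is roughly the check I have in mind. It is only a sketch: `read_verified` is my own helper, the temp file stands in for a file on the s3fs mount, and I don't yet know where the expected MD5 would come from (possibly the object's ETag, for non-multipart uploads?).

```php
<?php
// Hypothetical helper: read a file and verify it against a known MD5.
// In practice $expectedMd5 would have to come from S3 somehow.
function read_verified(string $path, string $expectedMd5): string
{
    $contents = file_get_contents($path);
    if ($contents === false) {
        // file_get_contents does not throw; it returns false and emits a warning
        throw new RuntimeException("Could not read $path");
    }
    if (md5($contents) !== $expectedMd5) {
        throw new RuntimeException("Partial or corrupt read of $path");
    }
    return $contents;
}

// Demo against a local temp file standing in for the s3fs mount:
$tmp = tempnam(sys_get_temp_dir(), 's3fs');
file_put_contents($tmp, 'hello');
$data = read_verified($tmp, md5('hello'));
unlink($tmp);
```

Is something like this necessary, or does s3fs already guarantee that a read either returns the complete object or fails?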