I am using PDFBox to process PDF files. This is running in the cloud (a Kubernetes cluster inside GCP). The problem is that some of the PDF files that need processing are very big - up to 2GB in size.
Since I am running in the cloud, the amount of memory I can use is limited, so I have opted for MemoryUsageSetting.setupTempFileOnly(). This works fine in many cases, but when the PDF files get BIG, my container crashes and is restarted.
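For context, PDFBox 2.x also lets you point the temp directory somewhere other than the default system temp via MemoryUsageSetting.setTempDir(File), so in principle any mounted filesystem could hold the scratch files. Below is a stdlib-only sketch of what I mean by redirecting temp files to a mounted scratch volume (the "scratch" path is illustrative, not my real mount point):

```java
import java.io.File;
import java.io.IOException;

public class TempDirDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical mount point for a larger scratch volume
        // (in my cluster this would be some volume mounted into the pod)
        File scratch = new File(System.getProperty("user.dir"), "scratch");
        scratch.mkdirs();

        // Temp files created with an explicit directory land on that
        // volume instead of the memory-backed /tmp
        File tmp = File.createTempFile("pdfbox-", ".tmp", scratch);
        System.out.println(tmp.getParentFile().getName());
        tmp.delete();
    }
}
```

With PDFBox itself the equivalent would be something like `MemoryUsageSetting.setupTempFileOnly().setTempDir(scratch)` passed to the document loader, but that only helps if the pod actually has a large enough filesystem mounted - which is exactly what I cannot get.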
This is because the /tmp folder inside my Docker container is actually mapped to memory on the host my image runs on (a tmpfs) - 64GB, shared among all the other containers running on that host. I have tried to get someone with access to the platform to give me a bigger /tmp area, but this seems to be impossible or at least unwanted.
So my question is: is there any way to set up PDFBox to use some form of cloud storage or an NFS drive as its temp-file location?