I'm trying to put a simple string into an AWS S3 bucket. I couldn't get this working with Alpakka (Scala), but the same request works with the AWS Java SDK.
With Alpakka my thread just hangs without processing anything, and Future.onComplete is never triggered.
I've tried specifying an Alpakka config file like this ('*' masks sensitive data):
alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "********"
      secret-access-key = "********"
    }
    region {
      provider = static
      default-region = "*****"
    }
  }
}
I do have a correct ~/.aws/credentials file on my machine; I can connect with both the AWS SDK and the AWS CLI.
As I understand it, ideally I shouldn't need to specify any alpakka.s3 credentials at all, just like with the AWS Java SDK.
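As a sanity check (a sketch, assuming the AWS SDK v2 auth module is on the classpath), the default credential chain can be resolved directly to confirm it really picks up ~/.aws/credentials:

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider

// Resolves credentials the same way the CLI does: env vars, system properties,
// ~/.aws/credentials, instance profile, etc. Throws if nothing can be resolved.
val creds = DefaultCredentialsProvider.create().resolveCredentials()
println(s"Resolved access key id prefix: ${creds.accessKeyId().take(4)}****")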
I've already checked this article https://discuss.lightbend.com/t/alpakka-s3-connection-issue/6551/2 and nothing worked.
My example is straightforward Scala code from the docs:
val file: Source[ByteString, NotUsed] =
  Source.single(ByteString(body))

val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] =
  S3.multipartUpload(bucket, bucketKey)

val result: Future[MultipartUploadResult] =
  file.runWith(s3Sink)
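For completeness, a self-contained version of the above would look roughly like this (a sketch: bucket, bucketKey and body are placeholders, and it assumes Akka 2.6+, where an implicit ActorSystem is enough to materialize the stream):

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.alpakka.s3.MultipartUploadResult
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.{Sink, Source}
import akka.util.ByteString

import scala.concurrent.Future
import scala.util.{Failure, Success}

object S3PutExample extends App {
  // On Akka 2.6+ an implicit ActorSystem is enough to run streams
  implicit val system: ActorSystem = ActorSystem("s3-upload")
  import system.dispatcher

  val bucket = "my-bucket"          // placeholder
  val bucketKey = "my-key"          // placeholder
  val body = "hello from alpakka"   // placeholder

  val file: Source[ByteString, NotUsed] = Source.single(ByteString(body))

  val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] =
    S3.multipartUpload(bucket, bucketKey)

  val result: Future[MultipartUploadResult] = file.runWith(s3Sink)

  result.onComplete {
    case Success(r)  => println(s"Uploaded to ${r.location}"); system.terminate()
    case Failure(ex) => println(s"Upload failed: $ex"); system.terminate()
  }
}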
But actually I also need my source to be an InputStream:
val source: Source[ByteString, Future[IOResult]] = StreamConverters.fromInputStream(() => is, 4096)
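Building on the sketch above (same implicit ActorSystem and s3Sink), wiring an InputStream in would look roughly like this; the ByteArrayInputStream is just a stand-in for the stream I actually get:

import java.io.{ByteArrayInputStream, InputStream}

import akka.stream.IOResult
import akka.stream.scaladsl.StreamConverters

// Stand-in InputStream; in the real code `is` comes from elsewhere.
val is: InputStream = new ByteArrayInputStream(body.getBytes("UTF-8"))

val source: Source[ByteString, Future[IOResult]] =
  StreamConverters.fromInputStream(() => is, chunkSize = 4096)

// The source's materialized value becomes Future[IOResult], but runWith keeps
// the sink's materialized value, so the result type stays the same:
val uploaded: Future[MultipartUploadResult] = source.runWith(s3Sink)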
PS: I don't actually get why I would need to specify a host like this:
endpoint-url = "http://localhost:9000"
If you leave alpakka.s3.aws empty, it will use the default AWS configuration methods, as in the CLI (e.g. you can use the AWS_REGION environment variable to set the region and the standard AWS credentials file). You can also leave alpakka.s3.aws.credentials empty to use the default AWS credential methods and set the AWS region via alpakka.s3.aws.region.

endpoint-url is only for use with alternative (non-AWS) implementations of the S3 API (e.g. MinIO). If you're setting it, you will not be able to connect to AWS S3.
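For example, a minimal application.conf along those lines would be (a sketch; spelling out the empty aws block or omitting it entirely makes no difference):

alpakka.s3 {
  # Leave aws empty (or omit it): credentials and region then fall back to
  # the default AWS provider chains (env vars, ~/.aws/credentials, etc.).
  aws {
  }
  # No endpoint-url, so requests go to the real AWS S3 endpoints.
}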