I have a deployed S3 bucket managed by the following Terraform module:
module "example_s3_bucket_v1" {
source = "../modules/s3/s3_bucket"
app_name = "${var.tag_application_name}"
aws_profile = "${var.aws_profile}"
aws_region = "${var.aws_region}"
bucket_name = "example_bucket.v1"
kms_key_arn = "${data.aws_kms_alias.example_kms.target_key_arn}"
glacier_transition_time = "1000"
retention_time = "2500"
tag_app_id = "${var.tag_application_id}"
tag_application_name = "${var.tag_application_name}"
tag_component = "${var.tag_component}"
tags = {
Name = "Example"
}
}
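Inside the module, my understanding is that glacier_transition_time feeds an aws_s3_bucket lifecycle rule roughly like the sketch below (I'm paraphrasing rather than pasting the real module source, so the resource name and the exact wiring of retention_time are my assumptions):

resource "aws_s3_bucket" "this" {
  bucket = "${var.bucket_name}"

  lifecycle_rule {
    enabled = true

    # Move objects to Glacier once they are older than glacier_transition_time days
    transition {
      days          = "${var.glacier_transition_time}"
      storage_class = "GLACIER"
    }

    # Expire objects once they are older than retention_time days (assumed wiring)
    expiration {
      days = "${var.retention_time}"
    }
  }
}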
The S3 bucket already has files stored in it, some of which have been transitioned to Glacier storage because their age exceeded the glacier_transition_time value. However, I no longer want any files moved to Glacier at all. Would restoring all of the objects currently in Glacier, removing the glacier_transition_time variable from the module call, and then redeploying the bucket's configuration via Terraform achieve this? Are there any side effects of removing this variable when the bucket is already deployed? Thank you.