We implemented the AWS TransferManager with MultipartUpload and ResumableTransfer for uploading files.
The solution was implemented following the references below:
https://aws.amazon.com/blogs/developer/pausing-and-resuming-transfers-using-transfer-manager/
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-transfermanager.html
https://aws.amazon.com/blogs/mobile/pause-and-resume-amazon-s3-transfers-using-the-aws-mobile-sdk-for-android/
The number of processes stayed completely under control when uploading a file without MultipartUpload and ResumableTransfer, but started growing exponentially after we implemented the approach described above.
Sample code below:
try {
    AmazonS3 s3client = s3ClientFactory.createClient();
    xferManager = TransferManagerBuilder.standard()
            .withS3Client(s3client)
            .withMinimumUploadPartSize(6291456L)    // 6 * 1024 * 1024 (6 MB)
            .withMultipartUploadThreshold(6291456L) // 6 * 1024 * 1024 (6 MB)
            .withExecutorFactory(() -> Executors.newFixedThreadPool(3))
            .build();

    String resumableTargetFile = "/path/to/resumableTargetFile";

    Upload upload = xferManager.upload(putRequest, new S3ProgressListener() {
        ExecutorService executor = Executors.newFixedThreadPool(1);

        @Override
        public void progressChanged(ProgressEvent progressEvent) {
            double pct = progressEvent.getBytesTransferred() * 100.0 / progressEvent.getBytes();
            LOGGER.info("Upload status for file - " + fileName + " is: " + Double.toString(pct) + "%");
            switch (progressEvent.getEventType()) {
                case TRANSFER_STARTED_EVENT:
                    LOGGER.info("Started uploading file {} to S3", fileName);
                    break;
                case TRANSFER_COMPLETED_EVENT:
                    LOGGER.info("Completed uploading file {} to S3", fileName);
                    break;
                case TRANSFER_CANCELED_EVENT:
                    LOGGER.warn("Upload of file {} to S3 was aborted", fileName);
                    break;
                case TRANSFER_FAILED_EVENT:
                    LOGGER.error("Failed uploading file {} to S3", fileName);
                    break;
                default:
                    break;
            }
        }

        @Override
        public void onPersistableTransfer(final PersistableTransfer persistableTransfer) {
            executor.submit(() -> {
                saveTransferState(persistableTransfer, resumableTargetFile);
            });
        }
    });

    UploadResult uploadResult = upload.waitForUploadResult();
    streamMD5 = uploadResult.getETag();
    if (upload.isDone()) {
        LOGGER.info("File {} uploaded successfully to S3 bucket {}", fileNameKey, bucketName);
    }
} catch (AmazonServiceException ase) {
    // The call was transmitted successfully, but Amazon S3 couldn't process
    // it, so it returned an error response.
    LOGGER.error("AmazonServiceException occurred: " + ase.getMessage());
} catch (SdkClientException sdce) {
    // Amazon S3 couldn't be contacted for a response, or the client
    // couldn't parse the response from Amazon S3.
    LOGGER.error("SdkClientException occurred: " + sdce.getMessage());
} catch (AmazonClientException ace) {
    LOGGER.error("AWS Exception occurred: " + ace.getMessage());
} catch (Exception e) {
    LOGGER.error("Exception occurred during files processing: " + e.getMessage());
} finally {
    xferManager.shutdownNow(true);
    return streamMD5;
}
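For context, saveTransferState is not shown above; it only persists the pause/resume state to the target file. A minimal sketch of it (the method name and file parameter are ours; the serialization call is the SDK's PersistableTransfer.serialize):

private void saveTransferState(PersistableTransfer persistableTransfer, String resumableTargetFile) {
    // Write the transfer state to disk so the upload can be resumed later.
    try (java.io.FileOutputStream fos = new java.io.FileOutputStream(resumableTargetFile)) {
        persistableTransfer.serialize(fos);
    } catch (java.io.IOException e) {
        LOGGER.error("Failed to persist transfer state: " + e.getMessage());
    }
}

The resume path follows the referenced blog posts: read the state file back and hand it to TransferManager.resumeUpload. Roughly (assuming the same xferManager instance):

try (java.io.FileInputStream fis = new java.io.FileInputStream(resumableTargetFile)) {
    PersistableUpload persistableUpload = PersistableTransfer.deserializeFrom(fis);
    Upload resumedUpload = xferManager.resumeUpload(persistableUpload);
    resumedUpload.waitForUploadResult();
}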
Has anyone run into a similar problem, and can you point me to any material on this issue?