
Commit 9af9f6a

Fixed OOM bug (Azure#19600)
1 parent d7bcc73 commit 9af9f6a

File tree

4 files changed: +4, -1 lines

sdk/storage/azure-storage-blob/CHANGELOG.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -2,6 +2,7 @@
 
 ## 12.11.0-beta.2 (Unreleased)
 - Fixed a bug where downloading would throw a NPE on large downloads due to a lack of eTag.
+- Fixed a bug where more data would be buffered in buffered upload than expected due to Reactor's concatMap operator.
 
 ## 12.11.0-beta.1 (2021-02-10)
 - Added support for the 2020-06-12 service version.
```

sdk/storage/azure-storage-blob/src/main/java/com/azure/storage/blob/BlobAsyncClient.java

Lines changed: 1 addition & 0 deletions

```diff
@@ -530,6 +530,7 @@ private Mono<Response<BlockBlobItem>> uploadInChunks(BlockBlobAsyncClient blockB
         Write to the pool and upload the output.
          */
         return chunkedSource.concatMap(pool::write)
+            .limitRate(parallelTransferOptions.getMaxConcurrency()) // This guarantees that concatMap will only buffer maxConcurrency * chunkSize data
             .concatWith(Flux.defer(pool::flush))
             .flatMapSequential(bufferAggregator -> {
                 // Report progress as necessary.
```
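The changelog entries describe the bug: `concatMap` on its own passes unbounded downstream demand through to its source, so it may pull and buffer far more chunks than `maxConcurrency`. Below is a minimal sketch of the mechanism the fix relies on, assuming reactor-core is on the classpath; the class and method names (`LimitRateSketch`, `firstRequest`) are illustrative, not Azure SDK code. It shows that `limitRate(n)` rewrites an unbounded downstream request into bounded batches of `n`:

```java
import reactor.core.publisher.Flux;

import java.util.ArrayList;
import java.util.List;

public class LimitRateSketch {

    /**
     * Subscribes to a 100-element Flux and returns the first demand signalled
     * upstream, with or without limitRate(8) applied. Illustrative only.
     */
    static long firstRequest(boolean limited) {
        List<Long> requests = new ArrayList<>();
        Flux<Integer> source = Flux.range(1, 100)
            .hide() // disable operator fusion so request amounts stay observable
            .doOnRequest(requests::add);
        (limited ? source.limitRate(8) : source).blockLast();
        return requests.get(0);
    }

    public static void main(String[] args) {
        // blockLast() alone signals unbounded demand (Long.MAX_VALUE), so an
        // eager operator upstream is free to produce and buffer everything.
        System.out.println(firstRequest(false)); // 9223372036854775807

        // limitRate(8) caps the demand seen upstream to batches of at most 8
        // (replenished at 75% by default), bounding what can be buffered.
        System.out.println(firstRequest(true)); // 8
    }
}
```

In the commit, `limitRate(parallelTransferOptions.getMaxConcurrency())` placed after `concatMap(pool::write)` caps demand the same way, which per the in-code comment bounds buffering to maxConcurrency * chunkSize.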

sdk/storage/azure-storage-file-datalake/CHANGELOG.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 # Release History
 
 ## 12.5.0-beta.2 (Unreleased)
-
+- Fixed a bug where more data would be buffered in buffered upload than expected due to Reactor's concatMap operator.
 
 ## 12.5.0-beta.1 (2021-02-10)
 - Added support for the 2020-06-12 service version.
```

sdk/storage/azure-storage-file-datalake/src/main/java/com/azure/storage/file/datalake/DataLakeFileAsyncClient.java

Lines changed: 1 addition & 0 deletions

```diff
@@ -362,6 +362,7 @@ private Mono<Response<PathInfo>> uploadInChunks(Flux<ByteBuffer> data, long file
         Write to the pool and upload the output.
          */
         return chunkedSource.concatMap(pool::write)
+            .limitRate(parallelTransferOptions.getMaxConcurrency()) // This guarantees that concatMap will only buffer maxConcurrency * chunkSize data
             .concatWith(Flux.defer(pool::flush))
             /* Map the data to a tuple 3, of buffer, buffer length, buffer offset */
             .map(bufferAggregator -> Tuples.of(bufferAggregator, bufferAggregator.length(), 0L))
```

0 commit comments
