package aws-s3

Streaming functions. These functions seek to limit the amount of memory used when operating on large objects by operating on streams.

val put : (?content_type:string -> ?content_encoding:string -> ?acl:string -> ?cache_control:string -> ?expect:bool -> ?meta_headers:(string * string) list -> bucket:string -> key:string -> data:string Io.Pipe.reader -> chunk_size:int -> length:int -> unit -> etag result) command

Streaming version of put.

  • parameter length

The number of bytes to copy from data

  • parameter chunk_size

The size of the chunks sent to S3. The system will have 2 × chunk_size bytes in flight

  • parameter data

The stream to be uploaded. Data will not be consumed after the result has been determined. If expect is used, the data may not have been consumed at all; it is up to the caller to test whether any data has been read from the input stream.

    see Aws_s3.S3.Make.put
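
As a usage sketch (hypothetical, not from the library's documentation): the call below assumes an instantiation S3.Make(Io), elides the extra arguments supplied by the command wrapper (credentials, endpoint, ...), and relies on Io.Pipe.create_reader and Io.Pipe.write from the library's Io signature.

  (* Upload [body] as a stream in 5 MiB chunks. *)
  let put_string ~bucket ~key body =
    let data =
      (* create_reader runs [f] to fill the pipe; the writer is assumed
         to be closed once [f] completes. *)
      Io.Pipe.create_reader ~f:(fun writer -> Io.Pipe.write writer body)
    in
    put ~bucket ~key ~data
      ~chunk_size:(5 * 1024 * 1024)   (* up to 2 x 5 MiB in flight *)
      ~length:(String.length body)
      ()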

val get : (?range:range -> bucket:string -> key:string -> data:string Io.Pipe.writer -> unit -> unit result) command

Streaming version of get. The caller must supply a data sink to which retrieved data is streamed. The result is determined after all data has been sent to the sink and the sink has been closed.

The connection to S3 is closed once the result has been determined. The caller should always examine the result of the function. If the result is Ok (), it is guaranteed that all data has been retrieved successfully and written to the data sink. In case of Error _, only part of the data may have been written to the data sink.

The rationale for using a data sink rather than returning a pipe reader from which data can be consumed is that a reader does not allow simple relay of error states during the transfer.

For other parameters see Aws_s3.S3.Make.get
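
Similarly, a hypothetical sketch of a streaming download that prints each chunk as it arrives. It makes the same assumptions as the put sketch above, plus a bind operator (>>=) for Io.Deferred and Io.Pipe.create_writer/Io.Pipe.read from the library's Io signature.

  (* Stream an object to stdout and check the final result. *)
  let get_to_stdout ~bucket ~key =
    let sink =
      Io.Pipe.create_writer ~f:(fun reader ->
        let rec loop () =
          Io.Pipe.read reader >>= function
          | Some chunk -> print_string chunk; loop ()
          | None -> Io.Deferred.return ()   (* pipe closed: transfer finished *)
        in
        loop ())
    in
    get ~bucket ~key ~data:sink () >>= function
    | Ok () ->
      (* All data is guaranteed to have reached the sink. *)
      Io.Deferred.return ()
    | Error _ ->
      (* The sink may contain only part of the object. *)
      Io.Deferred.return (prerr_endline "download failed")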