S3

tasks.fileservice.s3.S3
See the S3 companion object
class S3(val s3: S3AsyncClient)

Wrapper of AWS SDK's S3AsyncClient into fs2.Stream and cats.effect.IO

Attributes

Companion
object
Experimental
true
Graph
Supertypes
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def delete(bucket: String, key: String): IO[Unit]

Deletes a file in a single request.

Attributes
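A minimal usage sketch for delete, assuming default AWS credentials and region are available in the environment (the bucket and key shown are placeholders):

```scala
import cats.effect.{IO, IOApp}
import software.amazon.awssdk.services.s3.S3AsyncClient
import tasks.fileservice.s3.S3

object DeleteExample extends IOApp.Simple {
  // Wrap the AWS SDK async client in the S3 helper.
  val client: S3AsyncClient = S3AsyncClient.create()
  val s3 = new S3(client)

  def run: IO[Unit] =
    s3.delete("my-bucket", "path/to/file.txt")
}
```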

def getObjectMetadata(bucket: String, key: String): IO[Option[HeadObjectResponse]]
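Because getObjectMetadata returns an Option, it can double as an existence check; a brief sketch (bucket and key are placeholders):

```scala
import cats.effect.IO
import tasks.fileservice.s3.S3

// True if the object exists, false if the HEAD request found nothing.
def exists(s3: S3, bucket: String, key: String): IO[Boolean] =
  s3.getObjectMetadata(bucket, key).map(_.isDefined)
```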
def io[J, A](fut: => CompletableFuture[A]): IO[A]
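io lifts a by-name CompletableFuture from the underlying SDK client into IO. A hedged sketch of how it might be used with a raw SDK call (the request construction is an assumption based on the AWS SDK v2 builder API):

```scala
import cats.effect.IO
import software.amazon.awssdk.services.s3.model.{HeadObjectRequest, HeadObjectResponse}
import tasks.fileservice.s3.S3

// Issue a raw HEAD request through the wrapped client, lifted into IO.
def head(s3: S3, bucket: String, key: String): IO[HeadObjectResponse] =
  s3.io(
    s3.s3.headObject(HeadObjectRequest.builder.bucket(bucket).key(key).build)
  )
```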
def readFile(bucket: String, key: String): Stream[IO, Byte]

Reads a file in a single request. Suitable for small files.

For big files, consider using readFileMultipart instead.

Attributes
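For example, a small UTF-8 object can be read into a String (a sketch; bucket and key are placeholders):

```scala
import cats.effect.IO
import fs2.text
import tasks.fileservice.s3.S3

// Decode the byte stream as UTF-8 and collect it into a single String.
def readSmall(s3: S3): IO[String] =
  s3.readFile("my-bucket", "small.txt")
    .through(text.utf8.decode)
    .compile
    .string
```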

def readFileMultipart(bucket: String, key: String, partSize: Int): Stream[IO, Byte]

Reads a file in multiple parts of the specified @partSize per request. Suitable for big files.

It does so in constant memory: at any given time, only the number of bytes indicated by @partSize is loaded in memory.

For small files, consider using readFile instead.

Value parameters

partSize

in megabytes

Attributes
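A sketch of streaming a large object to a local file in 5 MB parts (bucket, key, and the target path are placeholders):

```scala
import cats.effect.IO
import fs2.io.file.{Files, Path}
import tasks.fileservice.s3.S3

// Stream the object to disk; at most partSize megabytes are in memory at once.
def download(s3: S3): IO[Unit] =
  s3.readFileMultipart("my-bucket", "big.bin", partSize = 5)
    .through(Files[IO].writeAll(Path("/tmp/big.bin")))
    .compile
    .drain
```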

def uploadFile(bucket: String, key: String, cannedAcl: List[String], serverSideEncryption: Option[String], grantFullControl: List[String]): Pipe[IO, Byte, PutObjectResponse]

Uploads a file in a single request. Suitable for small files.

For big files, consider using uploadFileMultipart instead.

Attributes
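A sketch of uploading a small in-memory string, assuming uploadFile returns an fs2 Pipe over the byte stream (bucket and key are placeholders; no ACLs or encryption are set):

```scala
import cats.effect.IO
import fs2.{Stream, text}
import tasks.fileservice.s3.S3

// Encode a string as UTF-8 bytes and upload it in one PutObject request.
def uploadSmall(s3: S3): IO[Unit] =
  Stream("hello world")
    .through(text.utf8.encode)
    .through(s3.uploadFile("my-bucket", "hello.txt", Nil, None, Nil))
    .compile
    .drain
```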

def uploadFileMultipart(bucket: String, key: String, partSize: Int, multiPartConcurrency: Int, cannedAcl: List[String], serverSideEncryption: Option[String], grantFullControl: List[String]): Pipe[IO, Byte, S3UploadResponse]

Uploads a file in multiple parts of the specified @partSize per request. Suitable for big files.

It does so in constant memory: at any given time, only the number of bytes indicated by @partSize is loaded in memory.

Note: the AWS S3 API does not support uploading empty files via multipart upload; attempting to do so fails with a 400 response and a generic error message. This function accepts a boolean uploadEmptyFiles (false by default) that determines how this scenario is handled. If set to false and no data has passed through the stream, the multipart upload request is gracefully aborted. If set to true and no data has passed through the stream, an empty file is uploaded on completion. An Option[ETag] of None is emitted on the stream if no file was uploaded, otherwise a Some(ETag) is emitted. Alternatively, if you need to create empty files, consider using uploadFile instead.

For small files, consider using uploadFile instead.

Value parameters

bucket

the bucket name

key

the target file key

multiPartConcurrency

the number of concurrent parts to upload

partSize

the part size indicated in MBs. It must be at least 5, as required by AWS.

uploadEmptyFiles

whether to create an empty file when no data has passed through the stream; defaults to false

Attributes
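A sketch of a multipart upload from a local file, assuming uploadFileMultipart returns an fs2 Pipe over the byte stream (bucket, key, and the source path are placeholders; no ACLs or encryption are set):

```scala
import cats.effect.IO
import fs2.io.file.{Files, Path}
import tasks.fileservice.s3.S3

// Upload in 5 MB parts (the AWS minimum) with two parts in flight at a time.
def uploadBig(s3: S3): IO[Unit] =
  Files[IO]
    .readAll(Path("/tmp/big.bin"))
    .through(
      s3.uploadFileMultipart(
        bucket = "my-bucket",
        key = "big.bin",
        partSize = 5,
        multiPartConcurrency = 2,
        cannedAcl = Nil,
        serverSideEncryption = None,
        grantFullControl = Nil
      )
    )
    .compile
    .drain
```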

Concrete fields

val s3: S3AsyncClient