The Azure Storage connector provides an Akka Stream Source for Azure Storage. It currently supports only the Blob and File services. For details about these services, please read the Azure docs.
@@project-info{ projectId="azure-storage" }
@@@note The Akka dependencies are available from Akka’s secure library repository. To access them you need to use a secure, tokenized URL as specified at https://account.akka.io/token. @@@
Additionally, add the dependencies as below.
@@dependency [sbt,Maven,Gradle] { group=com.lightbend.akka artifact=akka-stream-alpakka-azure-storage_$scala.binary.version$ version=$project.version$ symbol2=AkkaVersion value2=$akka.version$ group2=com.typesafe.akka artifact2=akka-stream_$scala.binary.version$ version2=AkkaVersion symbol3=AkkaHttpVersion value3=$akka-http.version$ group3=com.typesafe.akka artifact3=akka-http_$scala.binary.version$ version3=AkkaHttpVersion group4=com.typesafe.akka artifact4=akka-http-xml_$scala.binary.version$ version4=AkkaHttpVersion }
The table below shows direct dependencies of this module and the second tab shows all libraries it depends on transitively.
@@dependencies { projectId="azure-storage" }
The settings for the Azure Storage connector are read by default from the `alpakka.azure-storage` configuration section. Credentials are defined in the `credentials` section of `reference.conf`.
Scala : @@snip snip { #azure-credentials }
Java : @@snip snip { #azure-credentials }
At minimum, the following configurations need to be set:

* `authorization-type`: the type of authorization to use, as described here; possible values are `anon`, `SharedKey`, or `sas`. The environment variable `AZURE_STORAGE_AUTHORIZATION_TYPE` can be set to override this configuration.
* `account-name`: the name of the blob storage account or file share. The environment variable `AZURE_STORAGE_ACCOUNT_NAME` can be set to override this configuration.
* `account-key`: the account key used to create the authorization signature, mandatory for the `SharedKey` or `SharedKeyLite` authorization types, as described here. The environment variable `AZURE_STORAGE_ACCOUNT_KEY` can be set to override this configuration.
* `sas-token`: required if the authorization type is `sas`. The environment variable `AZURE_STORAGE_SAS_TOKEN` can be set to override this configuration.
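As a sketch, a minimal `SharedKey` configuration in `application.conf` could look like this (the account name and key below are placeholder values, not real credentials):

```hocon
alpakka.azure-storage.credentials {
  authorization-type = SharedKey
  account-name = "myaccount"
  # Placeholder; prefer supplying the real key via the
  # AZURE_STORAGE_ACCOUNT_KEY environment variable.
  account-key = "base64-encoded-account-key"
}
```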
For environments where shared keys or SAS tokens are not desirable, the connector supports OAuth2 bearer token authentication via the Azure Identity library. This enables authentication using Managed Identity (system or user-assigned), workload identity, environment credentials, and other mechanisms supported by DefaultAzureCredential.
The azure-identity library is an optional dependency. To use bearer token authentication, add it to your project:
@@dependency [sbt,Maven,Gradle] { group=com.azure artifact=azure-identity version=1.15.4 }
To use bearer token authentication via configuration, set `authorization-type` to `BearerToken`. This will automatically use `DefaultAzureCredential`, which tries multiple credential sources in order:
```hocon
alpakka.azure-storage.credentials {
  authorization-type = BearerToken
  account-name = "myaccount"
}
```
To use DefaultAzureCredential programmatically:
Scala : @@snip snip { #bearer-token-default }
Java : @@snip snip { #bearer-token-default }
For User Assigned Managed Identity (UAMI), provide the client ID of the managed identity:
Scala : @@snip snip { #bearer-token-managed-identity }
Java : @@snip snip { #bearer-token-managed-identity }
Any `com.azure.core.credential.TokenCredential` implementation can be used with `withTokenCredential`, including `ClientSecretCredential`, `ClientCertificateCredential`, `WorkloadIdentityCredential`, and others from the azure-identity library.
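For example, a service-principal credential could be wired in like this. This is a sketch: `ClientSecretCredentialBuilder` comes from the azure-identity library, while the exact settings type that carries `withTokenCredential` should be checked against the connector's API docs (the `StorageSettings(system)` call below is an assumption):

```scala
import akka.actor.ActorSystem
import com.azure.identity.ClientSecretCredentialBuilder

implicit val system: ActorSystem = ActorSystem("azure-storage-example")

// Build a service-principal credential via azure-identity.
// The tenant/client values below are placeholders.
val credential = new ClientSecretCredentialBuilder()
  .tenantId("<tenant-id>")
  .clientId("<client-id>")
  .clientSecret("<client-secret>")
  .build()

// Assumption: withTokenCredential is available on the connector's
// settings object; consult the API docs for the exact type.
val settings = StorageSettings(system).withTokenCredential(credential)
```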
Each function takes two parameters, `objectPath` and `requestBuilder`. The `objectPath` is a `/`-separated path of the blob or file, for example, `my-container/my-blob` or `my-share/my-directory/my-file`.
Each request builder is a subclass of `RequestBuilder` and knows how to construct the request for the given operation.
In this example the `GetBlob` builder is initialized without any optional fields.
Scala : @@snip snip { #simple-request-builder }
Java : @@snip snip { #simple-request-builder }
In this example the `GetBlob` builder is initialized with the given `leaseId` and `range` fields.
Scala : @@snip snip { #populate-request-builder }
Java : @@snip snip { #populate-request-builder }
In this example the `CreateFile` builder is initialized with the `maxFileSize` and `contentType` fields, which are required for the `CreateFile` operation.
Scala : @@snip snip { #request-builder-with-initial-values }
Java : @@snip snip { #request-builder-with-initial-values }
`ServerSideEncryption` can be initialized in a similar fashion.
Scala : @@snip snip { #request-builder-with-sse }
Java : @@snip snip { #request-builder-with-sse }
Some operations allow you to add additional headers. For `GetBlob` you can specify the `If-Match` header, which instructs the service to perform the operation only if the resource's ETag matches the specified value. This can be done by calling the `addHeader` function.
Scala : @@snip snip { #request-builder-with-additional-headers }
Java : @@snip snip { #request-builder-with-additional-headers }
The Create Container operation creates a new container under the specified account.
Scala : @@snip snip { #create-container }
Java : @@snip snip { #create-container }
The Delete Container operation deletes the specified container under the specified account.
Scala : @@snip snip { #delete-container }
Java : @@snip snip { #delete-container }
The Put Block Blob operation creates a new block blob or updates the content of an existing block blob.
Scala : @@snip snip { #put-block-blob }
Java : @@snip snip { #put-block-blob }
The Get Blob operation reads or downloads a blob from the system, including its metadata and properties.
Scala : @@snip snip { #get-blob }
Java : @@snip snip { #get-blob }
In order to download a range of the blob's data, you can use the overloaded method which additionally takes a `ByteRange` argument.
Scala : @@snip snip { #get-blob-range }
Java : @@snip snip { #get-blob-range }
The Get Blob Properties operation returns all user-defined metadata, standard HTTP properties, and system properties for the blob. (Note: Current implementation does not return user-defined metadata.)
Scala : @@snip snip { #get-blob-properties }
Java : @@snip snip { #get-blob-properties }
The Delete Blob operation deletes the specified blob.
Scala : @@snip snip { #delete-blob }
Java : @@snip snip { #delete-blob }
The Create File operation creates a new file or replaces a file.
Scala : @@snip snip { #create-file }
Java : @@snip snip { #create-file }
The Update Range operation writes a range of bytes to a file.
Scala : @@snip snip { #update-range }
Java : @@snip snip { #update-range }
A range can be cleared using the `ClearRange` function.
Scala : @@snip snip { #clear-range }
Java : @@snip snip { #clear-range }
The Create Directory operation creates a new directory under the specified share or parent directory.
Scala : @@snip snip { #create-directory }
Java : @@snip snip { #create-directory }
The Delete Directory operation removes the specified empty directory.
Scala : @@snip snip { #delete-directory }
Java : @@snip snip { #delete-directory }
The Get File operation reads or downloads a file from the system, including its metadata and properties.
Scala : @@snip snip { #get-file }
Java : @@snip snip { #get-file }
The Get File Properties operation returns all user-defined metadata, standard HTTP properties, and system properties for the file. (Note: Current implementation does not return user-defined metadata.)
Scala : @@snip snip { #get-file-properties }
Java : @@snip snip { #get-file-properties }
The Delete File operation immediately removes the file from the storage account.
Scala : @@snip snip { #delete-file }
Java : @@snip snip { #delete-file }