Feature flag for rust_clients, and aws elasticache example #3831

Open · wants to merge 1 commit into main
3 changes: 3 additions & 0 deletions examples/rust/aws_lambda_and_elasticache/.cargo/config.toml
@@ -0,0 +1,3 @@
[env]
GLIDE_NAME = "valkey-glide"
GLIDE_VERSION = "1.3.4"
39 changes: 39 additions & 0 deletions examples/rust/aws_lambda_and_elasticache/Cargo.toml
@@ -0,0 +1,39 @@
[package]
name = "aws_lambda_and_elasticache"
version = "0.1.0"
edition = "2024"

# Note: Cargo Lambda has a limit of ~70MB for uploads.
# Even relatively small examples like this one can easily reach hundreds of
# MB once debug symbols from the various dependencies are included.
# The options below strip the debug symbols, which is enough for both
# Release and Dev modes.
# More info here: https://github.com/johnthagen/min-sized-rust
[profile.release]
strip = true

[profile.release.package."*"]
strip = true

[profile.dev]
strip = true

[profile.dev.package."*"]
strip = true


[dependencies]
# NOTE: For external use, you can point to GitHub directly
# e.g.
# glide-core = { git = "https://github.com/valkey-io/valkey-glide.git", rev = "875088331f4fce35b4779ab37092de6399e9464e" }
# redis = { git = "https://github.com/valkey-io/valkey-glide.git", rev = "875088331f4fce35b4779ab37092de6399e9464e" }
redis = { path = "../../../glide-core/redis-rs/redis" }
glide-core = { path = "../../../glide-core", features = ["rust_client"] }

# AWS requirements
lambda_http = "0.13.0"
aws-config = "1.6.2"

tokio = { version = "1", features = ["macros"] }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
127 changes: 127 additions & 0 deletions examples/rust/aws_lambda_and_elasticache/README.md
@@ -0,0 +1,127 @@
# Prereqs:
*This crate was built and tested on Windows 11 using Rust 1.86, and run on x86_64 Linux-based Lambdas.*

- Make sure you have rust / cargo installed. This package was tested with version 1.86.
- Install `cargo lambda`: https://www.cargo-lambda.info/
- Make sure you have the appropriate cross-compile toolchains
- You will need a version of Zig for compiling some of the C dependencies.
- Version 0.15.0 was used
- https://ziglang.org/
- The `ring` crate in particular has issues with cross compilation.
- However, this example compiled fine from Windows to Linux using the same cross-compile toolchain that `Strawberry Perl` uses.
- https://strawberryperl.com/releases.html


## Other Notes
When creating new projects based on this, make sure to do the following:
- Make sure you have a .cargo/config.toml file with Glide environment variables (a sketch of how these are consumed follows below):
```
[env]
GLIDE_NAME = "valkey-glide"
GLIDE_VERSION = "1.3.4" # Or the current version you're using
```
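
Cargo's `[env]` table sets these variables for rustc invocations and build scripts, so they can be read at compile time. A minimal sketch of the mechanism, assuming glide-core picks them up via Rust's `env!` macro (its exact usage may differ):

```
// Values from [env] in .cargo/config.toml are visible to rustc,
// so `env!` can resolve them at compile time. A missing entry
// would fail the build rather than fail at runtime.
const GLIDE_NAME: &str = env!("GLIDE_NAME");
const GLIDE_VERSION: &str = env!("GLIDE_VERSION");

fn main() {
    println!("{GLIDE_NAME} v{GLIDE_VERSION}");
}
```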

# AWS Resources
*The examples below use the `aws` CLI and assume you have the proper permissions.*
*!!! These examples don't necessarily follow proper security / "least privilege" principles. You will need to determine what is right for you and your organization. !!!*

## Elasticache
https://aws.amazon.com/elasticache/

### Create a Valkey Cache
You can create a cache using the following command:
```aws elasticache create-serverless-cache --serverless-cache-name my-awesome-cache --engine valkey```

### Get Cache Connection Info
It can take tens of seconds (potentially minutes) for the cache to spin up.
You can check the status of a cache using this command:
```aws elasticache describe-serverless-caches --serverless-cache-name my-awesome-cache```

Once the cache is ready, the response from the previous command will also give you the connection information.
You can set the host and port using the `GLIDE_HOST_IP` and `GLIDE_HOST_PORT` environment variables in your lambda.

https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#:~:text=To%20set%20environment%20variables%20in,Under%20Environment%20variables%2C%20choose%20Edit.
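
On the Rust side, this example reads those variables at runtime; the sketch below is condensed from this example's `src/main.rs` (see that file for the full connection setup):

```
use std::env;

// Mirrors how src/main.rs consumes the variables configured on the Lambda.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let host = env::var("GLIDE_HOST_IP")?;
    let port: u16 = env::var("GLIDE_HOST_PORT")?.parse()?;
    println!("cache endpoint: {host}:{port}");
    Ok(())
}
```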


### Delete a Valkey Cache
Once your cache is no longer needed, you can delete it with the following command:
```aws elasticache delete-serverless-cache --serverless-cache-name my-awesome-cache```


### Accessing the Cache
Elasticache resources are only available from inside an AWS VPC.
This means they _ARE NOT_ accessible from the public internet.
They are not even accessible to other AWS resources (like Lambdas) by default, unless those resources are in the same VPC.

## Lambda
https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
https://aws.amazon.com/lambda
https://www.cargo-lambda.info/

As mentioned earlier, elasticache resources are only accessible from inside an AWS VPC.
We must make sure our Lambda is in the same VPC as the cache.
In the example below, we'll deploy the lambda and set up all the necessary config.

*NOTE: This will remove your Lambda's ability to communicate with the public Internet! You'll need additional setup to restore that functionality if you need it. Your lambda can still be invoked from the public internet.*

```
# Step 0: Create a cache if you haven't already.
aws elasticache create-serverless-cache --serverless-cache-name <ELASTICACHE_NAME> --engine valkey


# Step 1: Build and Deploy the lambda.
# In your output, note the arn value; we'll use it below.
cargo lambda build --release
cargo lambda deploy --binary-name <RUST_CRATE_NAME>


# Step 2: Get the cache information.
# In your output, look for the following:
# - Endpoint: The address / port where your cache will be accessible. Only
#   included once your cache is "available". We'll use this below.
# - SubnetIds: The IDs of the VPC Subnets that the cache is on. We'll use these below.
# - SecurityGroupIds: The security group that the cache belongs to. We'll use this below.
aws elasticache describe-serverless-caches --serverless-cache-name <ELASTICACHE_NAME>


# Step 3: Get the lambda information
# In your output, look for the following:
# - Role: This is the IAM Role given to the Lambda by `cargo lambda`. We'll use this below.
aws lambda get-function --function-name <LAMBDA_ARN_FROM_STEP_1>


# Step 4: Give your lambda the correct permissions.
# https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#configuration-vpc-permissions
# The `AWSLambdaENIManagementAccess` policy is managed by Amazon, and gives permission to configure the VPC connection.
# The "lambda:InvokeFunctionUrl" permission is necessary to allow public access. Otherwise, requests will bounce back with a 403 FORBIDDEN
aws iam attach-role-policy --role-name <LAMBDA_ROLE_FROM_STEP_3> --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaENIManagementAccess
aws lambda add-permission --function-name <LAMBDA_ARN_FROM_STEP_1> --statement-id FunctionURLAllowPublicAccess --principal "*" --function-url-auth-type NONE --action lambda:InvokeFunctionUrl


# Step 5: Add the VPC Config and Environment Variables that will allow the Lambda to connect to the cache.
aws lambda update-function-configuration \
--function-name <LAMBDA_ARN_FROM_STEP_1> \
--vpc-config SubnetIds=<COMMA_SEPARATED_SUBNET_IDS_FROM_STEP_2>,SecurityGroupIds=<SECURITY_GROUP_IDS_FROM_STEP_2> \
--environment "Variables={GLIDE_HOST_IP=<ENDPOINT_ADDRESS_FROM_STEP_2>,GLIDE_HOST_PORT=<ENDPOINT_PORT_FROM_STEP_2>}"


# Step 6: Set up your lambda so it can be called via HTTP
# Note: Using AuthType = NONE means ANYONE will be able to access the lambda if they have the URL.
# For demo purposes that's fine, but you should consider either requiring AWS_IAM or setting up
# your own form of authentication.
# In your output, look for the following:
# - FunctionUrl: This is the URL that you will be able to send requests to.
aws lambda create-function-url-config --function-name <LAMBDA_ARN_FROM_STEP_1> --auth-type NONE
```


# Querying your Lambda
The lambda should now be accessible via HTTP. You can query it with `curl` or any other HTTP client.

```
curl "<FUNCTION_URL_FROM_STEP_6>" -H "content-type: application/json" -d '{"SetValue": {"key": "SomeKey", "value": {"field1" : 0} }}'
# Returns: {"SetValue": {}}

curl "<FUNCTION_URL_FROM_STEP_6>" -H "content-type: application/json" -d '{"GetValue": {"key": "SomeKey" } }'
# Returns: {"GetValue":{"value":{"field1":0}}}
```
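
If you'd rather hit the endpoint from Rust, here is a minimal sketch using the `reqwest` crate. This is an assumed extra dependency (e.g. `reqwest = { version = "0.12", features = ["blocking", "json"] }`), not part of this example's Cargo.toml:

```
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder: paste the FunctionUrl from Step 6 here.
    let function_url = "https://<FUNCTION_URL_FROM_STEP_6>";
    let client = reqwest::blocking::Client::new();

    // Equivalent of the first curl call above.
    let set_response: serde_json::Value = client
        .post(function_url)
        .json(&json!({ "SetValue": { "key": "SomeKey", "value": { "field1": 0 } } }))
        .send()?
        .json()?;
    println!("{set_response}"); // {"SetValue":{}}

    // Equivalent of the second curl call above.
    let get_response: serde_json::Value = client
        .post(function_url)
        .json(&json!({ "GetValue": { "key": "SomeKey" } }))
        .send()?
        .json()?;
    println!("{get_response}"); // {"GetValue":{"value":{"field1":0}}}
    Ok(())
}
```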
220 changes: 220 additions & 0 deletions examples/rust/aws_lambda_and_elasticache/src/main.rs
@@ -0,0 +1,220 @@
use lambda_http::{run, service_fn, Error as LambdaError, Response};

use serde::{Deserialize, Serialize};


///////////////////////////////////////////////////////////////////////////////
/// Utilities
///////////////////////////////////////////////////////////////////////////////
type LambdaResult<T> = Result<T, LambdaError>;

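// A simple string-backed error type so ad-hoc error messages can be
// returned through LambdaResult.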
struct StringErr {
value: String
}

impl StringErr {
pub fn boxed<S: ToString>(value: S) -> Box<StringErr> {
Box::new(StringErr {
value: value.to_string()
})
}
}

impl std::fmt::Debug for StringErr {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.value)
}
}

impl std::fmt::Display for StringErr {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.value)
}
}

impl std::error::Error for StringErr {}

///////////////////////////////////////////////////////////////////////////////
/// Request / Result types
///////////////////////////////////////////////////////////////////////////////

/// These are the different types of requests that can come in.
/// The JSON format will be something like this:
/// { "<REQUEST_TYPE_NAME>": { ... }}
///
/// E.G.
/// {
/// "SetValue": {
/// "key": "SomeKey",
/// "value": "SomeValue"
/// }
/// }
#[derive(Deserialize, Serialize, Debug)]
enum RequestType {
SetValue(SetValueRequest),
GetValue(GetValueRequest)
}

/// These are the different types of responses that we may return
/// The JSON format will be something like this:
/// { "<REQUEST_TYPE_NAME>": { ... }}
///
/// E.G.
/// {
/// "GetValue": {
/// "value": "SomeValue"
/// }
/// }
#[derive(Serialize, Deserialize)]
enum ResultType {
SetValue(SetValueResponse),
GetValue(GetValueResponse)
}

#[derive(Deserialize, Serialize, Debug)]
struct SetValueRequest {
key: String,
value: serde_json::Value
}

#[derive(Deserialize, Serialize, Debug)]
struct SetValueResponse {}

#[derive(Deserialize, Serialize, Debug)]
struct GetValueRequest {
key: String
}

#[derive(Deserialize, Serialize, Debug)]
struct GetValueResponse {
value: serde_json::Value
}
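
// Illustrative addition (not part of the original PR): serde's default
// externally-tagged enum representation is what produces the
// { "<REQUEST_TYPE_NAME>": { ... } } shape documented above.
#[cfg(test)]
mod wire_format_tests {
use super::*;

#[test]
fn request_parses_from_documented_shape() {
let json = r#"{ "SetValue": { "key": "SomeKey", "value": "SomeValue" } }"#;
let request: RequestType = serde_json::from_str(json).expect("documented shape parses");
match request {
RequestType::SetValue(r) => assert_eq!(r.key, "SomeKey"),
other => panic!("unexpected variant: {other:?}"),
}
}
}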

///////////////////////////////////////////////////////////////////////////////
/// Handler Trait + Implementations
///////////////////////////////////////////////////////////////////////////////
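// Note: `async fn` in traits has been stable since Rust 1.75; it's fine here
// because this private trait is never used as a trait object.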
trait RequestHandler {
async fn handle(self, shared_resources: &SharedResources) -> LambdaResult<ResultType>;
}

impl RequestHandler for RequestType {
async fn handle(self, shared_resources: &SharedResources) -> LambdaResult<ResultType> {
match self {
Self::SetValue(request) => request.handle(shared_resources).await,
Self::GetValue(request) => request.handle(shared_resources).await
}
}
}

impl RequestHandler for SetValueRequest {
async fn handle(self, shared_resources: &SharedResources) -> LambdaResult<ResultType> {
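// Build a raw SET command with the redis-rs `Cmd` builder; the JSON value
// is stored in its serialized string form.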
let mut cmd = redis::Cmd::new();
cmd.arg("SET")
.arg(self.key)
.arg(serde_json::to_string(&self.value).map_err(Box::new)?);

let _value = shared_resources.glide_client
.clone()
.send_command(&cmd, None)
.await
.map_err(Box::new)?;

Ok(ResultType::SetValue(SetValueResponse{}))
}
}

impl RequestHandler for GetValueRequest {
async fn handle(self, shared_resources: &SharedResources) -> LambdaResult<ResultType> {
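// Build a raw GET command; the stored string is parsed back into JSON below.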
let mut cmd = redis::Cmd::new();
cmd.arg("GET")
.arg(self.key);

let value = shared_resources.glide_client
.clone()
.send_command(&cmd, None)
.await
.map_err(Box::new)?;

let value: serde_json::Value = match value {
redis::Value::SimpleString(value) => serde_json::from_str(&value).map_err(Box::new)?,
redis::Value::BulkString(value) => serde_json::from_slice(&value).map_err(Box::new)?,
val => Err(StringErr::boxed(format!("Invalid value type returned from valkey! {val:?}")))?
};

Ok(ResultType::GetValue(GetValueResponse{value}))
}
}

///////////////////////////////////////////////////////////////////////////////
/// Main
///////////////////////////////////////////////////////////////////////////////

/// A single Lambda instance can process multiple requests over its lifetime.
/// We can save some compute time by sharing resources between invocations.
/// You can put other shared resources here as well, like other AWS SDK types.
struct SharedResources {
_sdk_config: aws_config::SdkConfig,
glide_client: glide_core::client::Client
}

#[tokio::main]
async fn main() -> LambdaResult<()> {
lambda_http::tracing::init_default_subscriber();

let glide_client = {
// These variables can be set with the lambda deployments.
// https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#:~:text=To%20set%20environment%20variables%20in,Under%20Environment%20variables%2C%20choose%20Edit.
let address_info = glide_core::client::NodeAddress {
host: std::env::var("GLIDE_HOST_IP").map_err(Box::new)?,
port: std::env::var("GLIDE_HOST_PORT").map_err(Box::new)?.parse().map_err(Box::new)?
};

// elasticache uses Clusters and TLS by default.
let connection_request = glide_core::client::ConnectionRequest {
addresses: vec![address_info],
cluster_mode_enabled: true,
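// Optional timeout override; glide-core falls back to its default when unset.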
request_timeout: std::env::var("GLIDE_REQUEST_TIMEOUT").ok().and_then(|v| v.parse::<u32>().ok()),
tls_mode: Some(glide_core::client::TlsMode::SecureTls),
..Default::default()
};

glide_core::client::Client::new(connection_request, None)
.await
.map_err(Box::new)?
};

let sdk_config = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
let shared_resources = SharedResources{
_sdk_config: sdk_config,
glide_client
};

// If we tried to use "shared_resources" directly in the closure we'd
// get a compile error because "shared_resources" would be moved.
// Putting it into an explicit ref variable is fine, however, as the
// reference "shared_resources_ref" is captured, not "shared_resources".
let shared_resources_ref = &shared_resources;

let handler = move |event: lambda_http::Request| async move {
handle(shared_resources_ref, event).await
};

run(service_fn(handler)).await
}

async fn handle(shared_resources: &SharedResources, event: lambda_http::Request) -> LambdaResult<Response<String>> {
let request: RequestType = match event.body() {
lambda_http::Body::Empty => Err(StringErr::boxed("Requests cannot be empty!"))?,
lambda_http::Body::Text(val) => serde_json::from_str(&val)?,
lambda_http::Body::Binary(val) => serde_json::from_slice(&val)?
};

let response = request.handle(shared_resources).await?;
let response_body: String = serde_json::to_string(&response).map_err(Box::new)?;

Ok(Response::builder()
.status(200)
.header("content-type", "application/json")
.body(response_body)
.map_err(Box::new)?)
}