How to configure Laravel Vapor with Localstack
Learn how to replicate your Laravel AWS environment locally using Localstack, bringing AWS services like S3, SQS, and DynamoDB to your development machine.
Embracing Laravel Vapor with Localstack
I've been in love with Laravel Vapor since the first time I tried it. Over the last year I've been steadily migrating all my applications to this service. The days of Terraform configurations, managing Elastic Container Service and Registry, and juggling load balancers are long gone. However, to fully commit to Laravel Vapor, I needed a way to run my applications in an environment that closely mimics the production setup. I'll show you how I achieved this using Localstack.
The Challenge: Local vs. Production Parity
When deploying with Laravel Vapor, your application leverages various AWS services like S3 for file storage, SES for emails, SQS for queues, and DynamoDB for caching. However, replicating this setup locally without Localstack would be quite complicated. You'd likely end up with a quasi-local solution, where your local services would need to interact with test buckets, SES configurations, and SQS queues in the cloud. This approach not only complicates the development process but also introduces potential security risks and unwanted costs.
Enter Localstack: a game-changing tool that emulates AWS services locally. In this post, I'll guide you through my process of using Localstack in Laravel projects to create a local environment that's virtually indistinguishable from a Vapor-deployed production setup.
The Solution: Docker + Localstack
I use Docker to orchestrate my local development environment, including Localstack. Here's a breakdown of my `docker-compose.yml` file:
```yaml
services:
    # ... other services like nginx, php-fpm, node, pgsql ...
    localstack:
        image: localstack/localstack
        ports:
            - '4566:4566' # Localstack edge port
            # - "127.0.0.1:4510-4559:4510-4559" # Uncomment to expose all services
        volumes:
            - './docker/localstack/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh'
            - 'localstack:/var/lib/localstack'
            - '/var/run/docker.sock:/var/run/docker.sock'
        environment:
            DEBUG: '${APP_DEBUG:-1}'
            DYNAMODB_TABLE: '${DYNAMODB_CACHE_TABLE:-default}'
            HOSTNAME_EXTERNAL: localstack
            HOST_TMP_FOLDER: ${TMPDIR}
            LOCALSTACK_HOST: localstack
            PERSISTENCE: '${PERSISTENCE:-1}'
            S3_BUCKET: '${AWS_BUCKET:-default}'
            SERVICES: dynamodb,ses,sqs,s3
            SES_EMAIL: '${MAIL_FROM_ADDRESS:-hello@example.com}'
            SQS_ENDPOINT_STRATEGY: path
            SQS_QUEUE: '${SQS_QUEUE:-default}'
        networks:
            - default-network

networks:
    default-network:
        driver: bridge

volumes:
    # ... other volumes like pgsql etc...
    localstack:
        driver: local
```
Let's break down what's happening here:
- Image: We're using the official Localstack image.
- Volumes: We're mounting setup scripts and persisting data.
- Environment: We're configuring which AWS services to emulate (DynamoDB, SES, SQS, S3) and setting up initial configurations.
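Once the stack is up, you can sanity-check that Localstack booted the right services. A quick sketch, assuming the compose stack above is running and port `4566` is published on `localhost`:

```shell
# Check the health of the emulated services (endpoint provided by Localstack itself).
curl -s http://localhost:4566/_localstack/health

# List resources through awslocal, the AWS CLI wrapper bundled with the image.
docker compose exec localstack awslocal s3 ls
docker compose exec localstack awslocal sqs list-queues
```

The health endpoint reports each service as `available` or `running`, which is an easy first stop when something misbehaves.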
Configuring Laravel
To make Laravel work with our Localstack setup, we need to configure our `.env` file. You can leave the shared AWS config variables in the example below unchanged: Localstack accepts any access key and secret, so dummy values work fine.
```ini
# Shared AWS config
AWS_ACCESS_KEY_ID=your-access-key
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT=http://localstack:4566
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_USE_PATH_STYLE_ENDPOINT=true

# Cache config
CACHE_STORE=dynamodb
DYNAMODB_CACHE_TABLE=default

# Filesystem config
AWS_BUCKET=default
FILESYSTEM_DISK=s3

# Queue config
QUEUE_CONNECTION=sqs
SQS_PREFIX="http://localstack:4566/queue/us-east-1/000000000000/"
SQS_QUEUE=default

# Mail config
MAIL_MAILER=ses
MAIL_FROM_ADDRESS="hello@example.com"
MAIL_FROM_NAME="Laravel App"
```
The key here is setting `AWS_ENDPOINT` to point to our Localstack instance and configuring our Laravel services to use the appropriate drivers.
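As an aside on where the `SQS_PREFIX` value comes from: with `SQS_ENDPOINT_STRATEGY: path`, Localstack serves queue URLs of the form `endpoint/queue/region/account-id/queue-name`, using `000000000000` as its default dummy account id. A minimal sketch with the values from the `.env` above:

```shell
# Build the path-style SQS queue URL Localstack will serve for our queue.
ENDPOINT="http://localstack:4566"
REGION="us-east-1"
ACCOUNT_ID="000000000000" # Localstack's default dummy account id
QUEUE="default"

QUEUE_URL="${ENDPOINT}/queue/${REGION}/${ACCOUNT_ID}/${QUEUE}"
echo "$QUEUE_URL" # http://localstack:4566/queue/us-east-1/000000000000/default
```

Everything up to the final `/` is exactly the `SQS_PREFIX` value, and the queue name is appended by Laravel's SQS driver.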
Next we update `config/cache.php`: we change the `endpoint` key to use the `AWS_ENDPOINT` environment variable.
```php
'default' => env('CACHE_STORE', 'dynamodb'),

'stores' => [

    // ...

    'dynamodb' => [
        'driver' => 'dynamodb',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
        'table' => env('DYNAMODB_CACHE_TABLE', 'cache'),
        'endpoint' => env('AWS_ENDPOINT'), // Required for Localstack to work
    ],
],
```
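With the cache store wired up, a quick way to confirm that reads and writes hit the emulated DynamoDB table is a tinker one-liner. A sketch, assuming your PHP container is named `php-fpm` in the compose file and the stack is running:

```shell
# Write and read back a value through the dynamodb cache store.
docker compose exec php-fpm php artisan tinker --execute="
    Cache::put('greeting', 'hello from Localstack', 60);
    dump(Cache::get('greeting'));
"
```

If this prints the stored string, Laravel is talking to Localstack's DynamoDB rather than the real AWS.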
Next we update `config/queue.php`: we add the `endpoint` key and set it to use the `AWS_ENDPOINT` environment variable.
```php
'default' => env('QUEUE_CONNECTION', 'sqs'),

'connections' => [

    // ...

    'sqs' => [
        'driver' => 'sqs',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
        'queue' => env('SQS_QUEUE', 'default'),
        'suffix' => env('SQS_SUFFIX'),
        'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
        'endpoint' => env('AWS_ENDPOINT'), // Required for Localstack to work
        'after_commit' => false,
    ],
],
```
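To check that jobs actually flow through the emulated SQS queue, dispatch something and run a one-off worker. Again a sketch, assuming a `php-fpm` service name and a running stack:

```shell
# Push a queued closure onto the default SQS queue...
docker compose exec php-fpm php artisan tinker --execute="
    dispatch(function () { logger('processed via Localstack SQS'); });
"

# ...then process exactly one job from it.
docker compose exec php-fpm php artisan queue:work sqs --once
```

The worker output (and the log line) confirms the round trip through Localstack's SQS.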
Now we'll configure Laravel to send mail with SES. The SES config lives in `config/services.php`; as we did for the other services, we add the `endpoint` key to the config. This way we can send emails using the SES driver against Localstack.
```php
'ses' => [
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    'endpoint' => env('AWS_ENDPOINT'), // Required for Localstack to work
],
```
If you exposed port `4566` in the `docker-compose.yml` file, you can view the captured emails at `localhost:4566/_aws/ses`. You can find more information in the Localstack docs: Simple Email Service (SES).
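For a quick end-to-end check, send a raw mail and fetch the captured messages from that endpoint. A sketch under the same assumptions (running stack, `php-fpm` service name):

```shell
# Send a plain-text mail through the SES driver...
docker compose exec php-fpm php artisan tinker --execute="
    Mail::raw('Testing SES via Localstack', function (\$message) {
        \$message->to('hello@example.com')->subject('Hello from Localstack');
    });
"

# ...then list what Localstack captured.
curl -s http://localhost:4566/_aws/ses
```

The JSON response includes the sender, recipients, and body of every message Localstack has intercepted.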
Setting Up AWS Services
To initialize our AWS services, I use a script, `docker/localstack/init-aws.sh`, that runs when Localstack starts:
```bash
#!/bin/bash

# Create SQS queue
echo "Creating SQS queue"
awslocal sqs create-queue --queue-name "$SQS_QUEUE"

# Create S3 bucket
echo "Creating S3 bucket"
awslocal s3 mb s3://"$S3_BUCKET"

# Verify SES email
echo "Creating SES email"
awslocal ses verify-email-identity --email-address "$SES_EMAIL"

# Create DynamoDB table
if ! awslocal dynamodb describe-table --table-name "$DYNAMODB_TABLE" > /dev/null 2>&1; then
    echo "Creating DynamoDB table"
    awslocal dynamodb create-table \
        --table-name "$DYNAMODB_TABLE" \
        --attribute-definitions \
            AttributeName=key,AttributeType=S \
        --key-schema \
            AttributeName=key,KeyType=HASH \
        --provisioned-throughput \
            ReadCapacityUnits=10,WriteCapacityUnits=5
else
    echo "DynamoDB table already exists"
fi
```
This script sets up our SQS queue and S3 bucket, verifies an email identity for SES, and creates a DynamoDB table if it doesn't exist. Notice the mapping of the environment variables between `docker-compose.yml` and Laravel's `.env` file. This setup allows us to use the same configuration for both Localstack and Laravel. The Localstack variables `DYNAMODB_TABLE`, `S3_BUCKET`, `SES_EMAIL` and `SQS_QUEUE` in our script come from the mapped variables in the `docker-compose.yml` file.
Using the Services
Now you can start your Laravel application and interact with the AWS services as you would in production.
```shell
docker compose -f docker-compose.yml up -d
```
Benefits of This Setup
- Consistency: Your local environment now closely mirrors your Vapor production environment.
- Isolation: All AWS services are contained within your local Docker setup, avoiding conflicts with other projects.
- Speed: Local development and testing are much faster without relying on actual AWS services.
- Cost-Effective: No need to worry about accidental AWS charges during development.
Conclusion
By leveraging Localstack in our Docker-based local development environment, we've created a setup that closely mirrors our Laravel Vapor production deployment. This approach not only improves our development process but also helps catch potential issues early, before they make it to production.
Remember, while this setup is powerful, it's not a perfect replication of AWS. Always thoroughly test your applications in a staging environment before deploying to production.
Here are some resources that you might find useful:
- GitHub - My Laravel Localstack Project.
- GitHub - Implementation Pull Request.
- Localstack Documentation.
Happy coding!