---
title: S3 Providers
icon: Package
---

This guide explains how to configure Palmr. with various S3-compatible storage providers. Whether you're using a cloud service like AWS S3 or a self-hosted solution like MinIO, it will help you set up reliable object storage for your files.

> **Overview:** Palmr. supports any S3-compatible storage provider, giving you flexibility to choose the solution that best fits your needs and budget.

> **Note:** Some configuration options (like presigned URL expiration) apply to **all storage types**, including filesystem storage. These are marked accordingly in the documentation.

## When to use S3-compatible storage

Consider using S3-compatible storage when you need:

- **Cloud storage** for distributed deployments
- **Scalability** beyond local filesystem limitations
- **Redundancy** and backup capabilities
- **CDN integration** for faster file delivery
- **Multi-region** file distribution

## Environment variables

### General configuration (applies to all storage types)

| Variable                   | Description                                      | Required | Default         |
| -------------------------- | ------------------------------------------------ | -------- | --------------- |
| `PRESIGNED_URL_EXPIRATION` | Duration in seconds for presigned URL expiration | No       | `3600` (1 hour) |

### S3-specific configuration

To enable S3-compatible storage, set `ENABLE_S3=true` and configure the following environment variables:

| Variable                 | Description                              | Required | Default           |
| ------------------------ | ---------------------------------------- | -------- | ----------------- |
| `S3_ENDPOINT`            | S3 provider endpoint URL                 | Yes      | -                 |
| `S3_PORT`                | Connection port                          | No       | Based on protocol |
| `S3_USE_SSL`             | Enable SSL/TLS encryption                | No       | `true`            |
| `S3_ACCESS_KEY`          | Access key for authentication            | Yes      | -                 |
| `S3_SECRET_KEY`          | Secret key for authentication            | Yes      | -                 |
| `S3_REGION`              | Storage region                           | Yes      | -                 |
| `S3_BUCKET_NAME`         | Bucket/container name                    | Yes      | -                 |
| `S3_FORCE_PATH_STYLE`    | Use path-style URLs                      | No       | `false`           |
| `S3_REJECT_UNAUTHORIZED` | Enable strict SSL certificate validation | No       | `true`            |

## Provider configurations

Below are tested configurations for popular S3-compatible providers. Replace the example values with your actual credentials and settings.

> **Security Note:** Never commit real credentials to version control. Use environment files or secure secret management systems.

### AWS S3

Amazon S3 is the original S3 service, offering global availability and extensive features.

```bash
ENABLE_S3=true
S3_ENDPOINT=s3.amazonaws.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-access-key-id
S3_SECRET_KEY=your-secret-access-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
# PRESIGNED_URL_EXPIRATION=3600  # Optional: 1 hour (default)
```

**Getting credentials:**

1. Go to AWS IAM Console
2. Create a new user with S3 permissions
3. Generate access keys for programmatic access
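
As a sketch of step 2, a least-privilege inline policy scoped to a single bucket might look like the following. The user name, policy name, and bucket name are illustrative placeholders, not values Palmr. requires:

```bash
# Write a least-privilege policy granting only the operations Palmr. needs
# on one bucket (replace your-bucket-name)
cat > palmr-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
EOF

# Attach it as an inline policy to the IAM user created in step 2
aws iam put-user-policy \
  --user-name palmr-s3-user \
  --policy-name palmr-s3-access \
  --policy-document file://palmr-s3-policy.json
```

Scoping the policy to one bucket limits the blast radius if the access keys ever leak.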

### MinIO (Self-hosted)

MinIO is perfect for self-hosted deployments and development environments.

```bash
ENABLE_S3=true
S3_ENDPOINT=localhost
S3_PORT=9000
S3_USE_SSL=false
S3_ACCESS_KEY=your-minio-access-key
S3_SECRET_KEY=your-minio-secret-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=true
```

**Setup notes:**

- Use `S3_FORCE_PATH_STYLE=true` for MinIO
- Default MinIO port is 9000
- SSL can be disabled for local development
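
If you use the MinIO client (`mc`), you can create the bucket before pointing Palmr. at it. The alias and bucket names below are illustrative:

```bash
# Register the local MinIO server under an alias, using the same
# credentials configured above
mc alias set local http://localhost:9000 your-minio-access-key your-minio-secret-key

# Create the bucket Palmr. will use, then confirm it exists
mc mb local/your-bucket-name
mc ls local
```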

**For MinIO with self-signed SSL certificates:**

```bash
ENABLE_S3=true
S3_ENDPOINT=your-minio-domain.com
S3_PORT=9000
S3_USE_SSL=true
S3_ACCESS_KEY=your-minio-access-key
S3_SECRET_KEY=your-minio-secret-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=true
S3_REJECT_UNAUTHORIZED=false  # Allows self-signed certificates
```

### Google Cloud Storage

Google Cloud Storage offers competitive pricing and global infrastructure.

```bash
ENABLE_S3=true
S3_ENDPOINT=storage.googleapis.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-hmac-access-id
S3_SECRET_KEY=your-hmac-secret
S3_REGION=us-central1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
```

**Getting credentials:**

1. Enable Cloud Storage API in Google Cloud Console
2. Create HMAC keys for S3 compatibility
3. Use the generated access ID and secret
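
Step 2 can be done with `gsutil`, assuming you have a service account to bind the keys to (the email below is a placeholder):

```bash
# Create an HMAC key pair for S3 interoperability; the command prints
# the Access ID and Secret to use as S3_ACCESS_KEY / S3_SECRET_KEY
gsutil hmac create your-service-account@your-project.iam.gserviceaccount.com
```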

### DigitalOcean Spaces

DigitalOcean Spaces provides simple, scalable object storage.

```bash
ENABLE_S3=true
S3_ENDPOINT=nyc3.digitaloceanspaces.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-spaces-access-key
S3_SECRET_KEY=your-spaces-secret-key
S3_REGION=nyc3
S3_BUCKET_NAME=your-space-name
S3_FORCE_PATH_STYLE=false
```

**Available regions:**

- `nyc3` (New York)
- `ams3` (Amsterdam)
- `sgp1` (Singapore)
- `fra1` (Frankfurt)

### Backblaze B2

Backblaze B2 offers cost-effective storage with S3-compatible API.

```bash
ENABLE_S3=true
S3_ENDPOINT=s3.us-west-002.backblazeb2.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-key-id
S3_SECRET_KEY=your-application-key
S3_REGION=us-west-002
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
# PRESIGNED_URL_EXPIRATION=7200  # Optional: 2 hours for large files
```

**Cost advantage:**

- Significantly lower storage costs
- Free egress to Cloudflare CDN
- Pay-as-you-go pricing model

### Wasabi

Wasabi provides hot cloud storage with predictable pricing.

```bash
ENABLE_S3=true
S3_ENDPOINT=s3.wasabisys.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
```

**Benefits:**

- No egress fees
- Fast performance
- Simple pricing structure

### Azure Blob Storage

Azure Blob Storage can be accessed through an S3-compatible API for Microsoft ecosystem integration.

```bash
ENABLE_S3=true
S3_ENDPOINT=your-account.blob.core.windows.net
S3_USE_SSL=true
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=eastus
S3_BUCKET_NAME=your-container-name
S3_FORCE_PATH_STYLE=false
```

**Setup requirements:**

- Enable S3-compatible API in Azure
- Use container name as bucket name
- Configure appropriate access policies

## Presigned URL configuration

Palmr. uses presigned URLs to provide secure, temporary access to files stored in **both S3-compatible storage and filesystem storage**. These URLs have a configurable expiration time to balance security and usability.

> **Note:** This configuration applies to **all storage types** (S3, filesystem, etc.), not just S3-compatible storage.

### Understanding presigned URLs

Presigned URLs are temporary URLs that allow direct access to files without exposing storage credentials or requiring authentication. They automatically expire after a specified time period, enhancing security by limiting access duration.

**How it works:**

- **S3 Storage:** URLs are signed with your S3-compatible provider's credentials
- **Filesystem Storage:** URLs carry temporary tokens that the Palmr. server validates

**Default behavior:**

- Upload URLs: 1 hour (3600 seconds)
- Download URLs: 1 hour (3600 seconds)
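
To see what a presigned URL looks like independent of Palmr., you can generate one with the AWS CLI against any S3-compatible provider (the bucket and key below are placeholders; `--expires-in` plays the same role as `PRESIGNED_URL_EXPIRATION`):

```bash
# Generate a temporary download URL valid for 1 hour (3600 seconds);
# the command prints a URL containing the signature and expiry
aws s3 presign s3://your-bucket-name/path/to/file.txt --expires-in 3600
```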

### Configuring expiration time

You can customize the expiration time using the `PRESIGNED_URL_EXPIRATION` environment variable:

```bash
# Set URLs to expire after 2 hours (7200 seconds)
PRESIGNED_URL_EXPIRATION=7200

# Set URLs to expire after 30 minutes (1800 seconds)
PRESIGNED_URL_EXPIRATION=1800

# Set URLs to expire after 6 hours (21600 seconds)
PRESIGNED_URL_EXPIRATION=21600
```

### Choosing the right expiration time

**Shorter expiration (15-30 minutes):**

- [+] Higher security
- [+] Reduced risk of unauthorized access
- [-] May interrupt long uploads/downloads
- [-] Users may need to refresh links more often

**Longer expiration (2-6 hours):**

- [+] Better user experience for large files
- [+] Fewer interruptions during transfers
- [-] Longer exposure window if URLs are compromised
- [-] Potential for increased storage costs from abandoned uploads that never complete

**Recommended settings:**

- **High security environments:** 1800 seconds (30 minutes)
- **Standard usage:** 3600 seconds (1 hour) - default
- **Large file transfers:** 7200-21600 seconds (2-6 hours)

### Example configurations

**For Backblaze B2 with extended expiration:**

```bash
ENABLE_S3=true
S3_ENDPOINT=s3.us-west-002.backblazeb2.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-key-id
S3_SECRET_KEY=your-application-key
S3_REGION=us-west-002
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
PRESIGNED_URL_EXPIRATION=7200  # 2 hours for large file transfers
```

**For high-security environments:**

```bash
ENABLE_S3=true
S3_ENDPOINT=s3.amazonaws.com
S3_USE_SSL=true
S3_ACCESS_KEY=your-access-key-id
S3_SECRET_KEY=your-secret-access-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
S3_FORCE_PATH_STYLE=false
PRESIGNED_URL_EXPIRATION=1800  # 30 minutes for enhanced security
```

## Configuration best practices

### Security considerations

- **Use IAM policies** to limit access to specific buckets and operations
- **Enable encryption** at rest and in transit
- **Rotate credentials** regularly
- **Monitor access logs** for unusual activity
- **Use HTTPS** for all connections (`S3_USE_SSL=true`)

### Performance optimization

- **Choose regions** close to your users or server location
- **Configure CDN** for faster file delivery
- **Use appropriate bucket policies** for public file access
- **Monitor bandwidth** usage and costs

### Troubleshooting common issues

**Connection errors:**

- Verify endpoint URL and port settings
- Check firewall and network connectivity
- Ensure SSL/TLS settings match provider requirements

**SSL certificate errors (self-signed certificates):**

If you encounter errors like `unable to verify the first certificate` or `UNABLE_TO_VERIFY_LEAF_SIGNATURE`, you're likely using self-signed SSL certificates. This is common with self-hosted MinIO or other S3-compatible services.

**Solution:**
Set `S3_REJECT_UNAUTHORIZED=false` in your environment variables to allow self-signed certificates:

```bash
S3_REJECT_UNAUTHORIZED=false
```

**Note:** SSL certificate validation is enabled by default (`true`) for security. Set it to `false` only when using self-hosted S3 services with self-signed certificates.

**Authentication failures:**

- Confirm access key and secret key are correct
- Verify IAM permissions for bucket operations
- Check if credentials have expired

**Bucket access issues:**

- Ensure bucket exists and is accessible
- Verify region settings match bucket location
- Check bucket policies and ACL settings

## Testing your configuration

After configuring your S3 provider, test the connection by:

1. **Upload a test file** through the Palmr. interface
2. **Verify file appears** in your S3 bucket
3. **Download the file** to confirm retrieval works
4. **Check file permissions** and access controls

> **Tip:** Start with a development bucket to test your configuration before using production storage.
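
You can also sanity-check the credentials outside Palmr. with the AWS CLI, which works against most S3-compatible endpoints. This is a sketch; the endpoint URL and bucket name are placeholders:

```bash
# List the bucket to verify credentials and connectivity
aws s3 ls s3://your-bucket-name --endpoint-url https://your-s3-endpoint

# Round-trip a small object to verify read/write access, then clean up
echo "palmr test" > /tmp/palmr-test.txt
aws s3 cp /tmp/palmr-test.txt s3://your-bucket-name/palmr-test.txt --endpoint-url https://your-s3-endpoint
aws s3 cp s3://your-bucket-name/palmr-test.txt /tmp/palmr-test-copy.txt --endpoint-url https://your-s3-endpoint
aws s3 rm s3://your-bucket-name/palmr-test.txt --endpoint-url https://your-s3-endpoint
```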

## Switching between providers

To switch from filesystem to S3 storage or between S3 providers:

1. **Backup existing files** if switching from filesystem storage
2. **Update environment variables** with new provider settings
3. **Restart the Palmr. container** to apply changes
4. **Test file operations** to ensure everything works correctly

Remember that existing files won't be automatically migrated when switching storage providers.
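
If you do need to migrate, a one-off copy with the AWS CLI is one option. This is a sketch, assuming your old files are available on local disk and the new provider accepts standard S3 API calls; the paths and endpoint are placeholders:

```bash
# Copy local files into the new bucket; sync is idempotent, so it can
# be re-run to pick up anything missed on the first pass
aws s3 sync /path/to/palmr-uploads s3://your-bucket-name \
  --endpoint-url https://your-s3-endpoint
```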
