
    Best S3-Compatible Object Storage Providers in 2026

    We compared 21 S3-compatible object storage providers across pricing, egress fees, S3 API completeness, durability, compliance, and the gotchas nobody tells you until after migration. Every price sourced, every gotcha earned the hard way — including discovering Wasabi's 90-day minimum retention after migrating 50 TB.

    Last tested: April 2, 2026
    21 providers evaluated

    Use our interactive calculator to model egress fees, retention penalties, and total cost of ownership across all 21 providers for your specific workload.


    How We Evaluated

    Total Cost of Ownership

    30%

    Not just storage $/GB — we modeled egress, API requests, minimum retention penalties, and escape costs (what it costs to leave). A $0.005/GB provider with hidden retention fees can cost more than a $0.015/GB provider with zero egress.
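    This criterion can be made concrete with a toy model. The sketch below shows how a retention penalty flips the ranking; every rate and workload figure is an illustrative placeholder, not a quote from any provider:

```python
# Toy total-cost-of-ownership model. All per-GB rates and workload numbers
# below are illustrative placeholders, not quotes from any provider.

def monthly_tco(stored_gb, egress_gb, churned_gb,
                storage_per_gb, egress_per_gb,
                min_retention_days=0, avg_object_age_days=0):
    """Monthly bill including egress and minimum-retention penalties."""
    storage = stored_gb * storage_per_gb
    egress = egress_gb * egress_per_gb
    penalty = 0.0
    if min_retention_days and avg_object_age_days < min_retention_days:
        # Objects deleted early are still billed for the remaining days.
        remaining = min_retention_days - avg_object_age_days
        penalty = churned_gb * storage_per_gb * (remaining / 30)
    return storage + egress + penalty

# 10 TB stored, zero egress fees on both; backups overwritten every ~15 days
cheap = monthly_tco(10_000, 0, 10_000, 0.005, 0.0,
                    min_retention_days=90, avg_object_age_days=15)
flat = monthly_tco(10_000, 0, 0, 0.015, 0.0)
print(f"$0.005/GB with 90-day retention: ${cheap:.2f}/mo")
print(f"$0.015/GB flat:                  ${flat:.2f}/mo")
```

    With this churn pattern, the "cheap" provider bills $175/mo against the flat provider's $150/mo — the retention penalty alone outweighs the 3x storage price gap.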

    S3 API Compatibility

    25%

    Tested against real workloads: multipart uploads, presigned URLs, lifecycle policies, event notifications, object lock, versioning, and batch operations. 'S3 compatible' is a spectrum — we tested where each provider falls on it.

    Durability & Reliability

    20%

    Published durability guarantees, SLA commitments, erasure coding details, and whether the provider actually shows their math. Some providers claim 11 nines without publishing verifiable data.

    Feature Completeness

    15%

    Versioning, object lock, event notifications, replication, lifecycle policies, static hosting, CDN integration, and compute ecosystem.

    Migration & Portability

    10%

    How easy is it to move data in and out? Egress fees, migration tooling (rclone, Super Slurper, Sippy), and vendor lock-in risk.

    Overview

    S3-compatible object storage has become the default storage layer for modern applications, but the market has fragmented into more than 20 providers with wildly different trade-offs. After testing every major provider against real workloads — multipart uploads, presigned URLs, lifecycle policies, event notifications, and batch operations — we found that S3 compatibility is a spectrum, not a checkbox. Some providers nail the core API but lack versioning; others offer zero egress but cap object sizes at 5 GB. This guide ranks providers by total cost of ownership, not just sticker price, because hidden fees for egress, minimum retention, and API calls can double your bill. We also tested migration tooling for each provider, because the most important number nobody compares is the escape cost: what it takes to leave.
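    The escape cost reduces to a one-line calculation. A sketch using the per-TB egress rates quoted in this guide (B2's $10/TB assumes delivery outside its free CDN partners):

```python
# One-time cost to move all data out, at each provider's per-TB egress rate
# (rates as quoted in this guide; B2 assumes non-CDN-partner egress).
def escape_cost(stored_tb, egress_per_tb):
    return stored_tb * egress_per_tb

rates = [("AWS S3", 90.0), ("Google Cloud", 120.0),
         ("Backblaze B2", 10.0), ("Cloudflare R2", 0.0)]
for name, rate in rates:
    print(f"{name:14s} 100 TB exit: ${escape_cost(100, rate):>9,.2f}")
```

    At 100 TB, the exit bill ranges from $12,000 (GCS) down to $0 (R2) — a spread that matters more than a few dollars per TB of storage.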
    1

    Cloudflare R2

    Zero egress fees — no asterisks, no reasonable-use policy, no CDN partner requirements. R2 eliminates the single biggest cost trap in object storage. Fully S3-compatible, globally distributed across 330+ edge PoPs, and deeply integrated with Cloudflare Workers for serverless compute. The missing features (no versioning, no object lock) are real trade-offs, but for most workloads the zero-egress economics are transformative.

    What Sets It Apart

    The only major provider with truly unlimited zero egress — no caps, no reasonable-use fine print, no CDN partner lock-in. Every byte downloaded is free.

    Strengths

    • +Truly zero egress fees — saves $900+/mo on a 10 TB read-heavy workload vs S3
    • +Full S3 API compatibility — works with boto3, rclone, every standard tool
    • +330+ edge PoPs with automatic global distribution
    • +Workers integration for serverless processing at the edge
    • +Generous free tier: 10 GB storage + 10M reads/mo forever

    Limitations

    • -No versioning — accidental deletes are permanent
    • -No object lock or WORM compliance — not suitable for regulated data
    • -No S3 Select or batch operations
    • -$15/TB/mo storage is 2.5x the price of B2 and 3x Wasabi

    Real-World Use Cases

    • CDN origin storage where content is served globally without egress penalties eating into margins
    • API backends that serve user-uploaded media — every download is free regardless of volume
    • RAG document stores where retrieval queries pull chunks continuously without accumulating transfer fees
    • Static site and JAMstack deployments with Workers-based server-side rendering at the edge

    Choose This When

    When egress cost is your primary concern — content delivery, API serving, or any workload that reads far more than it writes. R2 also wins when you want edge compute via Workers.

    Skip This If

    When you need versioning, object lock, or WORM compliance for regulatory requirements. Also avoid if your workload is storage-heavy but read-light, since $15/TB/mo storage is expensive compared to B2 or Wasabi.

    Integration Example

    import boto3
    
    r2 = boto3.client(
        "s3",
        endpoint_url="https://ACCOUNT_ID.r2.cloudflarestorage.com",
        aws_access_key_id="R2_ACCESS_KEY",
        aws_secret_access_key="R2_SECRET_KEY",
    )
    
    # Upload assets — zero egress when users download
    r2.upload_file("/tmp/video.mp4", "media-bucket", "videos/promo.mp4")
    
    # Generate presigned URL for direct client download
    url = r2.generate_presigned_url(
        "get_object",
        Params={"Bucket": "media-bucket", "Key": "videos/promo.mp4"},
        ExpiresIn=3600,
    )
    print(f"Download (zero egress): {url[:60]}...")
    $15/TB/mo storage; $0 egress; $4.50/1M Class A ops; $0.36/1M Class B ops; 10 GB free forever
    Best for: Read-heavy workloads (CDN origins, RAG, API serving) where egress costs would otherwise dominate — and where versioning/object lock aren't required
    2

    Backblaze B2

    The best value in object storage. B2 costs 1/4 of AWS S3 with full S3 compatibility, 10 TB max object size (largest on this list), and free egress to Cloudflare and Fastly via the Bandwidth Alliance. Backblaze also publishes drive failure data openly — rare transparency in an industry where most providers won't show their durability math.

    What Sets It Apart

    The lowest mainstream storage price at $6/TB/mo combined with free CDN egress through the Bandwidth Alliance — and the only provider that publishes verifiable drive failure data for durability transparency.

    Strengths

    • +Cheapest mainstream storage at $6/TB/mo — 1/4 the price of S3
    • +Free egress to Cloudflare, Fastly, and other CDN partners
    • +10 TB max object size — largest of any provider
    • +Full S3 API compatibility with both native and S3-compatible APIs
    • +Publicly published drive failure statistics — verifiable durability

    Limitations

    • -Only 3 regions: US-West, US-East, EU-Central
    • -No serverless compute integration
    • -Single-tier storage — no archive or IA pricing options
    • -100 bucket limit per account

    Real-World Use Cases

    • Backup and disaster recovery for enterprise data with free CDN-routed restores via Cloudflare
    • Media asset storage for video platforms delivering content through Cloudflare CDN at zero egress
    • Large-scale dataset hosting where $6/TB/mo keeps storage costs predictable at petabyte scale
    • NAS replacement for organizations migrating from on-prem storage to cloud with minimal cost increase

    Choose This When

    When you need the cheapest reliable storage at scale and deliver content via Cloudflare or Fastly CDN. Also ideal for very large objects (up to 10 TB per object).

    Skip This If

    When you need multi-region availability, storage tiering (archive/IA classes), or more than 100 buckets per account.

    Integration Example

    import boto3
    
    b2 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-west-004.backblazeb2.com",
        aws_access_key_id="B2_KEY_ID",
        aws_secret_access_key="B2_APP_KEY",
    )
    
    # Upload a 10 TB file (largest max object size of any provider)
    b2.upload_file("/tmp/large_backup.tar", "backups", "daily/2026-04-01.tar")
    
    # List recent backups
    for obj in b2.list_objects_v2(Bucket="backups", Prefix="daily/").get("Contents", []):
        print(f"{obj['Key']} — {obj['Size'] / 1e9:.1f} GB")
    $6/TB/mo storage; free egress to CDN partners; $0.01/GB egress otherwise; $4/1M API calls
    Best for: Cost-conscious teams storing large volumes of data (backups, media, ML datasets) with CDN delivery via Cloudflare
    3

    Tigris

    Tigris fixes R2's biggest gaps — zero egress plus versioning and object lock — with automatic edge caching that moves data closer to where it's accessed. Built on FoundationDB for strong consistency. Newer service with less production track record than R2 or B2, but technically the most complete zero-egress option.

    What Sets It Apart

    The only zero-egress provider that also supports versioning, object lock, and multiple storage tiers — combining R2's cost advantage with S3's compliance features.

    Strengths

    • +Zero egress fees like R2, but with versioning and object lock
    • +Automatic edge caching — data follows access patterns globally
    • +4 storage tiers including archive
    • +Strong consistency guarantees via FoundationDB

    Limitations

    • -$20/TB/mo storage — more expensive than R2 or B2
    • -Newer service with smaller community and less track record
    • -Some rate limits not yet published
    • -No static website hosting

    Real-World Use Cases

    • Compliance-sensitive workloads that need zero-egress economics AND object lock / WORM guarantees
    • Multi-region applications where data automatically caches at the edge closest to users
    • Tiered storage strategies using Standard, IA, and Archive classes within a single bucket
    • Financial or healthcare data storage requiring strong consistency with audit-friendly versioning

    Choose This When

    When you need zero egress fees but cannot compromise on versioning, object lock, or WORM compliance. Also ideal for globally distributed workloads that benefit from automatic edge caching.

    Skip This If

    When you need a battle-tested provider with a long track record — Tigris is newer and has a smaller community. Also avoid if $20/TB/mo storage is too expensive for your volume.

    Integration Example

    import boto3
    
    tigris = boto3.client(
        "s3",
        endpoint_url="https://fly.storage.tigris.dev",
        aws_access_key_id="TIGRIS_KEY",
        aws_secret_access_key="TIGRIS_SECRET",
    )
    
    # Enable versioning — unlike R2, Tigris supports this
    tigris.put_bucket_versioning(
        Bucket="compliant-data",
        VersioningConfiguration={"Status": "Enabled"},
    )
    
    # Upload with object lock for WORM compliance
    with open("/tmp/q1.parquet", "rb") as body:
        tigris.put_object(
            Bucket="compliant-data",
            Key="records/2026/q1.parquet",
            Body=body,
            ObjectLockMode="GOVERNANCE",
            ObjectLockRetainUntilDate="2029-04-01T00:00:00Z",
        )
    $20/TB/mo (Standard); $10/TB/mo (IA); $2/TB/mo (Archive); zero egress within network
    Best for: Teams that need R2-style zero egress but can't compromise on versioning or compliance (object lock/WORM)
    4

    AWS S3

    The reference implementation and the deepest ecosystem in cloud computing. S3 is the most expensive option on this list, but it offers capabilities nobody else has: 6 storage tiers, S3 Vectors for native vector search, Intelligent Tiering, EventBridge integration, and tight coupling with Lambda, SageMaker, Athena, and the entire AWS stack. You pay a premium, but you get unmatched feature depth.

    What Sets It Apart

    The reference S3 implementation with the deepest ecosystem of any cloud provider — 6 storage tiers, native vector search, and seamless integration with 200+ AWS services.

    Strengths

    • +Most complete S3 implementation — it IS the standard
    • +6 storage tiers with Intelligent Tiering for automatic optimization
    • +Deepest ecosystem: Lambda, SageMaker, Athena, EventBridge, Glacier
    • +S3 Vectors: native vector search built into S3
    • +33 regions with cross-region replication

    Limitations

    • -$23/TB/mo — 4x more expensive than B2 for raw storage
    • -Egress at $0.09/GB dominates costs on read-heavy workloads ($689/mo for 10 TB stored + 5 TB egress)
    • -Moving 100 TB out costs $9,000 in egress — financial lock-in
    • -IAM complexity tax: policies, VPC endpoints, encryption configs

    Real-World Use Cases

    • Event-driven architectures with S3 notifications triggering Lambda, Step Functions, or EventBridge workflows
    • Data lake storage with Athena SQL queries and Glue ETL pipelines running directly on S3
    • Intelligent Tiering for unpredictable access patterns — automatically moves objects between hot and cold tiers
    • Cross-region disaster recovery with automatic replication and 33-region availability

    Choose This When

    When you are already invested in AWS and need features that only S3 offers — Intelligent Tiering, EventBridge, Lambda@Edge, or S3 Vectors. The ecosystem premium pays for itself when you use adjacent services.

    Skip This If

    When cost matters more than ecosystem — S3 is 4x more expensive than B2 for storage and egress creates financial lock-in at scale. Moving 100 TB out costs $9,000.

    Integration Example

    import boto3
    
    s3 = boto3.client("s3")
    
    # Enable Intelligent Tiering for automatic cost optimization
    s3.put_bucket_intelligent_tiering_configuration(
        Bucket="data-lake",
        Id="auto-tier",
        IntelligentTieringConfiguration={
            "Id": "auto-tier",
            "Status": "Enabled",
            "Tierings": [
                {"AccessTier": "ARCHIVE_ACCESS", "Days": 90},
                {"AccessTier": "DEEP_ARCHIVE_ACCESS", "Days": 180},
            ],
        },
    )
    
    # Upload with server-side encryption
    with open("/tmp/users.parquet", "rb") as body:
        s3.put_object(
            Bucket="data-lake",
            Key="datasets/users.parquet",
            Body=body,
            ServerSideEncryption="aws:kms",
        )
    $23/TB/mo (Standard); $0.09/GB egress; 6 tiers down to $0.99/TB/mo (Deep Archive); $5/1M PUTs
    Best for: Teams already deep in the AWS ecosystem that need Lambda triggers, SageMaker pipelines, or Athena analytics on their storage
    5

    Hetzner Object Storage

    The quiet winner for EU-based teams. Hetzner offers S3-compatible storage at $5.20/TB/mo with full versioning, object lock, and $0.01/GB egress (with 1 TB free internal transfer). EU-only regions (Germany, Finland) make it a strong fit for GDPR-sensitive workloads. No-nonsense provider with transparent pricing.

    What Sets It Apart

    The cheapest fully-featured S3 provider (versioning + object lock + lifecycle) with EU-only data residency guaranteed by infrastructure location, not just policy.

    Strengths

    • +Just $5.20/TB/mo — cheapest provider with full S3 features
    • +Versioning, object lock, and lifecycle policies included
    • +EU-only regions ideal for GDPR compliance
    • +Transparent, no-surprise pricing

    Limitations

    • -EU-only — no North American or APAC regions
    • -Smaller community than AWS or Cloudflare ecosystems
    • -No CDN integration or edge caching built in
    • -Less mature tooling and documentation than hyperscalers

    Real-World Use Cases

    • GDPR-compliant data storage for EU SaaS applications that cannot store customer data outside the EU
    • Budget backup storage for European businesses replacing expensive local NAS systems
    • Static asset hosting for EU-focused web applications with low egress costs
    • Versioned document storage for legal or financial firms needing audit trails with object lock

    Choose This When

    When you are EU-based, need GDPR-compliant data residency, and want full S3 features (versioning, object lock) at the lowest price available.

    Skip This If

    When you need storage outside Europe, a global CDN, or the rich ecosystem of adjacent services that hyperscalers provide.

    Integration Example

    import boto3
    
    hetzner = boto3.client(
        "s3",
        endpoint_url="https://fsn1.your-objectstorage.com",
        aws_access_key_id="HETZNER_KEY",
        aws_secret_access_key="HETZNER_SECRET",
    )
    
    # Enable versioning and object lock for compliance
    hetzner.put_bucket_versioning(
        Bucket="eu-compliance",
        VersioningConfiguration={"Status": "Enabled"},
    )
    
    # Upload GDPR-sensitive data — stays in EU by design
    hetzner.upload_file(
        "/tmp/customer_data.parquet",
        "eu-compliance",
        "customers/2026/q1.parquet",
    )
    $5.20/TB/mo; $0.01/GB egress; 1 TB internal transfer free; full S3 API
    Best for: EU-based teams that need affordable S3 storage with GDPR-friendly data residency and full versioning/lock support
    6

    Wasabi

    The cheapest standard-tier storage on this list at $4.90/TB/mo with no egress fees and no API fees. But the 90-day minimum retention policy is a deal-breaker for many workloads: delete or overwrite an object before 90 days and you still pay for the full 90 days. On a dataset with churn, this can double your effective cost.
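    The retention math is worth working out before migrating. A minimal sketch using the $4.90/TB rate; the churn cycles (30, 45, 180 days) are hypothetical workload scenarios:

```python
# Effective Wasabi rate when objects die before the 90-day minimum:
# an object deleted early is billed as if it lived the full 90 days.
RATE_PER_TB = 4.90
MIN_DAYS = 90

def effective_rate(object_lifetime_days):
    """$/TB/mo actually paid for data churned every object_lifetime_days."""
    billed_days = max(object_lifetime_days, MIN_DAYS)
    return RATE_PER_TB * billed_days / object_lifetime_days

for days in (30, 45, 90, 180):
    print(f"{days:3d}-day lifetime -> ${effective_rate(days):5.2f}/TB/mo effective")
```

    At a 45-day churn cycle the effective rate doubles to $9.80/TB; at 30 days it triples to $14.70/TB, nearly matching R2's $15/TB standard price.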

    What Sets It Apart

    The cheapest named-brand storage at $4.90/TB/mo with no egress and no API fees — but the 90-day minimum retention per object is the critical trade-off that defines which workloads fit.

    Strengths

    • +Cheapest named-brand storage at $4.90/TB/mo
    • +No egress fees and no API request fees
    • +Full S3 API compatibility
    • +16 regions across NA, EU, and APAC

    Limitations

    • -90-day minimum retention PER OBJECT — delete early, pay anyway
    • -'Free' egress capped by reasonable-use policy (egress cannot exceed stored volume)
    • -No lifecycle policies for automatic tiering
    • -No event notifications

    Real-World Use Cases

    • Long-term backup storage where files are written once and retained for months or years
    • Compliance archives for financial, legal, or healthcare data with mandated retention periods exceeding 90 days
    • Cold storage for large media libraries that are infrequently accessed but must remain available
    • NAS-to-cloud migration for organizations that want to replace aging on-prem hardware with cheap cloud storage

    Choose This When

    When you store write-once data that will live longer than 90 days (backups, archives, compliance records) and want the simplest possible pricing with no egress or API fees.

    Skip This If

    When your data has any churn — frequent updates, overwrites, or deletions within 90 days will trigger minimum retention charges. Also avoid if you need event notifications or lifecycle policies.

    Integration Example

    import boto3
    
    wasabi = boto3.client(
        "s3",
        endpoint_url="https://s3.wasabisys.com",
        aws_access_key_id="WASABI_KEY",
        aws_secret_access_key="WASABI_SECRET",
    )
    
    # Upload backup — remember: 90-day minimum retention applies
    wasabi.upload_file("/tmp/db_backup.sql.gz", "backups", "daily/2026-04-01.sql.gz")
    
    # WARNING: deleting before 90 days still incurs storage charges
    # wasabi.delete_object(Bucket="backups", Key="daily/2026-04-01.sql.gz")
    # ^ You pay for remaining days even after deletion
    $4.90/TB/mo flat; no egress (reasonable use); no API fees; 90-day minimum retention per object
    Best for: Cold or write-once storage where objects live longer than 90 days — backups, archives, compliance data. Not for high-churn datasets
    7

    MinIO (Self-Hosted)

    The only serious self-hosted S3 implementation. MinIO runs on your hardware, your cloud, or your Kubernetes cluster with the most complete S3 API coverage of any open-source project. AGPL-3.0 licensed. You own everything — uptime, durability, backups, and the operational burden that comes with it.

    What Sets It Apart

    The most complete open-source S3 implementation with event notifications, versioning, object lock, and erasure coding — the only option that runs entirely on your own infrastructure.

    Strengths

    • +Most complete open-source S3 implementation
    • +Full control over data residency and security
    • +Hardware-bound performance — saturates whatever you give it (21+ TiB/s benchmarked)
    • +Rich event notifications: Webhooks, Kafka, NATS, Redis, PostgreSQL

    Limitations

    • -You manage everything: hardware, upgrades, monitoring, backups, durability
    • -AGPL-3.0 license may restrict commercial use
    • -Distributed mode is complex to operate at scale
    • -Cost advantage disappears at small scale due to hardware amortization

    Real-World Use Cases

    • On-premises object storage for defense, government, or healthcare organizations with air-gap requirements
    • Local development and CI/CD environments that need real S3 API behavior without cloud dependencies
    • Kubernetes-native storage for containerized applications that need S3-compatible persistence
    • High-throughput data pipelines on dedicated hardware where cloud egress costs are prohibitive

    Choose This When

    When data must stay on your infrastructure (air-gap, compliance, sovereignty), you have the ops team to manage it, and you need full S3 feature coverage including event notifications.

    Skip This If

    When you do not have dedicated infrastructure or Kubernetes expertise. Also avoid at small scale where hardware amortization makes cloud providers cheaper.

    Integration Example

    import boto3
    
    minio = boto3.client(
        "s3",
        endpoint_url="http://minio.internal:9000",
        aws_access_key_id="minioadmin",
        aws_secret_access_key="minioadmin",
    )
    
    # Create versioned bucket with event notifications
    minio.create_bucket(Bucket="data-pipeline")
    minio.put_bucket_versioning(
        Bucket="data-pipeline",
        VersioningConfiguration={"Status": "Enabled"},
    )
    
    # Upload data — triggers webhook notification
    minio.upload_file("/tmp/batch.parquet", "data-pipeline", "inbox/batch_001.parquet")
    Free (AGPL-3.0 open source); AIStor enterprise license with support available
    Best for: Air-gapped, on-prem, or regulated environments where data cannot leave your infrastructure. Also ideal for local dev/test against S3 APIs
    8

    Google Cloud Storage

    Strong ML ecosystem integration via Vertex AI and BigQuery, with Autoclass for automatic storage tier management. But GCS has the highest egress fees of any provider on this list at $0.12/GB — 33% more than AWS. The S3 compatibility layer is an interop API, not native, and has gaps around multipart presigned URLs.

    What Sets It Apart

    Deepest Google Cloud integration with Autoclass automatic tiering — the only provider where storage tier management is fully hands-off based on actual access patterns.

    Strengths

    • +Tight Vertex AI, BigQuery, and Dataflow integration
    • +Autoclass handles lifecycle tiering automatically
    • +Global strong consistency
    • +40 regions with multi-region and dual-region options

    Limitations

    • -Highest egress fees: $0.12/GB — moving 100 TB out costs $12,000
    • -S3 compatibility is an interop layer, not native — has gaps
    • -365-day minimum retention on Archive tier
    • -$20/TB/mo standard storage — expensive for large datasets

    Real-World Use Cases

    • Vertex AI training and serving pipelines that read data from GCS natively
    • BigQuery analytics on structured and semi-structured data stored as Parquet or JSON in GCS
    • Autoclass-managed storage for unpredictable access patterns where manual tiering is impractical
    • Dual-region storage for disaster recovery within Google Cloud

    Choose This When

    When your infrastructure is on Google Cloud and you need native integration with Vertex AI, BigQuery, or Dataflow. Autoclass is also valuable for workloads with unpredictable access patterns.

    Skip This If

    When egress costs matter — GCS has the highest egress fees of any major provider at $0.12/GB. Also avoid if you need native S3 compatibility (GCS interop API has gaps).

    Integration Example

    from google.cloud import storage
    
    client = storage.Client()
    bucket = client.bucket("analytics-data")
    
    # Upload with Autoclass — Google manages tiering automatically
    blob = bucket.blob("events/2026/04/01.parquet")
    blob.upload_from_filename("/tmp/events.parquet")
    
    # Generate signed URL (native GCS — S3 interop has gaps here)
    url = blob.generate_signed_url(
        version="v4",
        expiration=3600,
        method="GET",
    )
    print(f"Signed URL: {url[:60]}...")
    $20/TB/mo (Standard); $0.12/GB egress (highest of big 3); Autoclass tiering available
    Best for: Teams already on Google Cloud using Vertex AI or BigQuery — but be aware of the highest egress costs of any major provider
    9

    Oracle Cloud Object Storage

    The most generous free egress tier of any provider: 10 TB/month. Oracle Cloud Object Storage is often overlooked, but for egress-heavy workloads that can't use R2, 10 TB of free monthly egress is a significant advantage. 46 regions, full S3 compatibility, and 3 storage tiers.

    What Sets It Apart

    10 TB/month of free egress — the most generous free transfer tier of any provider, making it cost-competitive for moderate egress workloads without locking into Cloudflare's ecosystem.

    Strengths

    • +10 TB/mo free egress — most generous free tier of any provider
    • +Full S3 API compatibility
    • +46 regions — more than any other provider
    • +3 storage tiers with automatic tiering available

    Limitations

    • -$25.50/TB/mo standard storage — pricier than even S3 Standard
    • -Less polished console and documentation than AWS or GCS
    • -Smaller community and ecosystem
    • -Less tooling and SDK support than hyperscaler competitors

    Real-World Use Cases

    • Media distribution platforms that serve up to 10 TB/mo of content without egress fees
    • Hybrid cloud architectures needing broad region coverage across 46 global locations
    • Oracle Database workloads that need co-located object storage for backups and exports
    • Government and enterprise applications leveraging Oracle's compliance certifications

    Choose This When

    When your monthly egress is under 10 TB and you want it free without switching to Cloudflare R2, or when you need Oracle Cloud integration for database workloads.

    Skip This If

    When your egress exceeds 10 TB/mo (R2 becomes cheaper), when you need polished documentation and tooling, or when community ecosystem matters.
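    The free-tier advantage is easiest to see side by side with S3. A sketch using this guide's quoted rates ($23/TB plus $0.09/GB for S3; $25.50/TB for Oracle, with post-free-tier egress assumed at $8.50/TB, i.e. $0.0085/GB); the 10 TB stored / 8 TB egress workload is hypothetical:

```python
# Monthly cost comparison: Oracle's 10 TB/mo free egress vs S3.
# Rates per this guide; the workload (10 TB stored, 8 TB egress) is made up.
def s3_monthly(stored_tb, egress_tb):
    return 23.0 * stored_tb + 90.0 * egress_tb        # $0.09/GB = $90/TB

def oracle_monthly(stored_tb, egress_tb):
    billable = max(egress_tb - 10.0, 0.0)             # first 10 TB/mo free
    return 25.50 * stored_tb + 8.50 * billable        # assumed $8.50/TB after

print(f"S3:     ${s3_monthly(10, 8):7,.2f}/mo")
print(f"Oracle: ${oracle_monthly(10, 8):7,.2f}/mo")
```

    Despite the pricier storage, Oracle comes out at $255/mo against S3's $950 for this workload; the gap narrows once egress passes the 10 TB free allowance.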

    Integration Example

    import boto3
    
    oci_s3 = boto3.client(
        "s3",
        endpoint_url="https://NAMESPACE.compat.objectstorage.us-ashburn-1.oraclecloud.com",
        aws_access_key_id="OCI_ACCESS_KEY",
        aws_secret_access_key="OCI_SECRET_KEY",
    )
    
    # Upload — first 10 TB/mo of downloads are free
    oci_s3.upload_file("/tmp/media.mp4", "content-delivery", "videos/promo.mp4")
    
    # List objects across tiers
    response = oci_s3.list_objects_v2(Bucket="content-delivery", Prefix="videos/")
    for obj in response.get("Contents", []):
        print(f"{obj['Key']} — {obj['Size'] / 1e6:.1f} MB")
    $25.50/TB/mo (Standard); $8.50/TB egress after 10 TB free; $0.40/10K requests
    Best for: Teams that need substantial free egress (up to 10 TB/mo) but can't commit to Cloudflare's ecosystem for R2
    10

    DigitalOcean Spaces

    Simple, developer-friendly S3-compatible storage with a built-in CDN — but max object size is 5 GB, not 5 TB. This is the gotcha that catches most teams. For small files (web assets, config, documents) Spaces works great. For video, backups, or ML checkpoints, you'll hit the wall fast.

    What Sets It Apart

    The simplest all-in-one package for small teams — S3 storage plus CDN plus developer-friendly docs in a single $5/mo plan. No other provider bundles CDN at this price.

    Strengths

    • +Simple pricing: $5/mo gets 250 GB + 1 TB egress + built-in CDN
    • +Developer-friendly with good documentation
    • +Full S3 API compatibility for standard operations
    • +Built-in CDN included at no extra cost

    Limitations

    • -Max object size is 5 GB — not 5 TB like nearly every other provider
    • -No object lock or WORM compliance
    • -$5/mo minimum charge regardless of usage
    • -Rate limited: 750 GET/s and 150 PUT/s per IP

    Real-World Use Cases

    • Startup web apps storing user-uploaded images, documents, and profile photos under 5 GB each
    • Static asset hosting for websites with the built-in CDN for global delivery
    • Configuration and artifact storage for CI/CD pipelines on DigitalOcean infrastructure
    • Small SaaS applications that need simple, affordable S3-compatible storage with predictable costs

    Choose This When

    When you are a small team or startup that needs simple, cheap S3 storage with built-in CDN for web assets and small files.

    Skip This If

    When you need to store objects larger than 5 GB, require object lock or WORM compliance, or expect high request rates above 750 GET/s.

    Integration Example

    import boto3
    
    spaces = boto3.client(
        "s3",
        endpoint_url="https://nyc3.digitaloceanspaces.com",
        aws_access_key_id="DO_SPACES_KEY",
        aws_secret_access_key="DO_SPACES_SECRET",
    )
    
    # Upload web assets — CDN included automatically
    spaces.upload_file(
        "/tmp/logo.png",
        "my-app-assets",
        "images/logo.png",
        ExtraArgs={"ACL": "public-read", "ContentType": "image/png"},
    )
    
    # Access via CDN: https://my-app-assets.nyc3.cdn.digitaloceanspaces.com/images/logo.png
    $5/mo base (250 GB + 1 TB transfer); $20/TB/mo additional storage; $10/TB egress
    Best for: Small teams and startups storing web assets, documents, and small files — not suitable for large objects like video or model weights
    11

    Linode (Akamai) Object Storage

    S3-compatible object storage from Akamai (formerly Linode) with a $5/mo flat rate for 250 GB and strong integration with Linode compute instances. Backed by Akamai's global CDN network. The 5 GB max object size limits large-file use cases, but for small-to-medium workloads it is developer-friendly with good docs.

    What Sets It Apart

    Backed by Akamai's global CDN infrastructure — the strongest edge delivery network of any independent storage provider.

    Strengths

    • +Flat $5/mo for 250 GB storage + 1 TB outbound transfer
    • +Backed by Akamai's global CDN and edge network
    • +Full S3 API compatibility for standard operations
    • +11 regions across NA, EU, and APAC

    Limitations

    • -5 GB max object size — same limitation as DigitalOcean Spaces
    • -No versioning or object lock
    • -Rate-limited at 750 requests/second per bucket
    • -$20/TB/mo additional storage — competitive but not cheapest

    Real-World Use Cases

    • CDN origin storage leveraging Akamai's edge network for global content delivery
    • Application asset storage co-located with Linode compute instances for low-latency access
    • Backup storage for Linode-hosted applications with straightforward pricing

    Choose This When

    When you run workloads on Linode/Akamai and want co-located storage with access to Akamai's CDN for content delivery.

    Skip This If

    When you need large object support (>5 GB), versioning, object lock, or storage pricing below $5/mo minimum.

    Integration Example

    import boto3
    
    linode = boto3.client(
        "s3",
        endpoint_url="https://us-east-1.linodeobjects.com",
        aws_access_key_id="LINODE_KEY",
        aws_secret_access_key="LINODE_SECRET",
    )
    
    # Upload assets — Akamai CDN available for delivery
    linode.upload_file("/tmp/app-bundle.js", "static-assets", "js/app.bundle.js")
    
    # Generate presigned download URL
    url = linode.generate_presigned_url(
        "get_object",
        Params={"Bucket": "static-assets", "Key": "js/app.bundle.js"},
        ExpiresIn=86400,
    )
    $5/mo base (250 GB + 1 TB transfer); $20/TB/mo additional storage; $10/TB additional egress
    Best for: Teams on Linode/Akamai infrastructure that want co-located S3 storage with CDN integration
    Visit Website
    12

    OVHcloud Object Storage

    European cloud provider with S3-compatible storage and a strong GDPR compliance story. OVHcloud dropped egress fees in January 2026, making it a zero-egress option for EU teams — but with a 30-day minimum retention on all tiers that can catch you off guard. Three storage classes (Standard, IA, Archive) with pricing competitive with B2.

    What Sets It Apart

    Zero-egress EU-sovereign storage with three pricing tiers — the only European provider combining free egress with archive-class pricing at $1.50/TB/mo.

    Strengths

    • +Zero egress fees as of January 2026
    • +Three storage classes: Standard ($7/TB), IA ($4/TB), Archive ($1.50/TB)
    • +Strong EU data sovereignty — French-headquartered, EU-only infrastructure
    • +S3-compatible with versioning and lifecycle policies

    Limitations

    • -30-day minimum retention on ALL tiers — shorter than Wasabi but still a gotcha
    • -Smaller S3 ecosystem support than AWS or Cloudflare
    • -Console and API documentation quality trails hyperscalers
    • -Limited regions compared to AWS or Oracle

    Real-World Use Cases

    • EU-sovereign data storage for organizations subject to French or EU data residency requirements
    • Tiered archival using Standard, IA, and Archive classes for lifecycle-managed datasets
    • Zero-egress content delivery for EU-focused applications without Cloudflare dependency
    • Backup storage with competitive archive pricing at $1.50/TB/mo
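The tiering math from the rates above is worth modeling before you commit — a quick sketch of what lifecycle transitions save on a 10 TB dataset (illustrative only; `monthly_cost` is our own helper, the tier split is hypothetical, and real bills add request fees):

```python
# $/TB/mo, per OVHcloud's published tiers cited in this guide
RATES = {"standard": 7.00, "ia": 4.00, "archive": 1.50}

def monthly_cost(tb_by_tier):
    """Monthly storage cost for terabytes spread across tiers."""
    return sum(RATES[tier] * tb for tier, tb in tb_by_tier.items())

# 10 TB all in Standard vs. lifecycle-tiered (hypothetical 2/3/5 TB split)
flat = monthly_cost({"standard": 10})                           # $70.00
tiered = monthly_cost({"standard": 2, "ia": 3, "archive": 5})   # $33.50
print(f"flat ${flat:.2f}/mo vs tiered ${tiered:.2f}/mo")
```

Remember the 30-day minimum retention: objects transitioned or deleted before day 30 are still billed for the full window, so tiering pays off only for data that actually ages in place.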

    Choose This When

    When you need zero-egress storage with EU data sovereignty and want lifecycle tiering down to $1.50/TB/mo for archived data.

    Skip This If

    When your data has churn within 30 days (minimum retention applies), when you need a large S3 ecosystem, or when you need regions outside Europe.

    Integration Example

    import boto3
    
    ovh = boto3.client(
        "s3",
        endpoint_url="https://s3.gra.io.cloud.ovh.net",
        aws_access_key_id="OVH_ACCESS_KEY",
        aws_secret_access_key="OVH_SECRET_KEY",
    )
    
    # Upload to Standard tier — zero egress on downloads
    ovh.upload_file("/tmp/dataset.parquet", "eu-data", "datasets/v2.parquet")
    
    # Set lifecycle rule to move to IA after 30 days
    ovh.put_bucket_lifecycle_configuration(
        Bucket="eu-data",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-old",
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }]
        },
    )
    $7/TB/mo (Standard); $4/TB/mo (IA); $1.50/TB/mo (Archive); zero egress; 30-day minimum retention
    Best for: EU teams wanting zero-egress storage with multiple tiers and strong data sovereignty guarantees
    Visit Website
    13

    Storj

    Decentralized object storage across 30,000+ independent nodes in 100+ countries. Data is encrypted client-side, erasure-coded into 80+ pieces, and distributed globally. At $4/TB/mo storage and $7/TB egress, it is the cheapest alternative to centrally managed storage, tied with IDrive e2 on raw storage price. The trade-off is higher tail latency and lower throughput than centralized providers.
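To see why 80-piece erasure coding tolerates widespread node failure, a quick availability calculation helps. The sketch below assumes a k-of-n reconstruction threshold of 29-of-80 purely for illustration — Storj's actual erasure parameters vary and are not confirmed here:

```python
from math import comb

def availability(n, k, p_node):
    """P(at least k of n pieces retrievable) when each node is independently up with prob p_node."""
    return sum(comb(n, i) * p_node**i * (1 - p_node)**(n - i) for i in range(k, n + 1))

# Even if each node were only 90% available, a 29-of-80 file
# is retrievable with probability vanishingly close to 1.
print(f"{availability(80, 29, 0.90):.12f}")
```

This is why a decentralized network can publish strong durability figures despite individual nodes being far less reliable than a data-center disk shelf.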

    What Sets It Apart

    Decentralized architecture across 30,000+ independent nodes with mandatory client-side encryption — no single entity (including Storj) can access your data.

    Strengths

    • +Just $4/TB/mo storage — cheapest alongside IDrive e2
    • +End-to-end encryption — data encrypted before leaving your machine
    • +Decentralized across 30,000+ nodes in 100+ countries
    • +S3-compatible gateway with standard SDK support

    Limitations

    • -Higher tail latency than centralized providers (P99 can spike)
    • -Throughput limited by distributed retrieval across many nodes
    • -Some S3 features missing (no versioning, limited lifecycle)
    • -Smaller enterprise adoption and fewer compliance certifications

    Real-World Use Cases

    • Encrypted backup storage for organizations that need client-side encryption without managing key infrastructure
    • Archival of large datasets where $4/TB/mo pricing enables petabyte-scale storage on a budget
    • Distributing open-source datasets with built-in geographic redundancy across 100+ countries

    Choose This When

    When you prioritize encryption, geographic redundancy, and cost over access speed — ideal for backups, archives, and cold storage at petabyte scale.

    Skip This If

    When you need low-latency access, high throughput, enterprise SLAs, or advanced S3 features like versioning and lifecycle policies.

    Integration Example

    import boto3
    
    storj = boto3.client(
        "s3",
        endpoint_url="https://gateway.storjshare.io",
        aws_access_key_id="STORJ_KEY",
        aws_secret_access_key="STORJ_SECRET",
    )
    
    # Upload — client-side encrypted, erasure-coded across 80+ nodes
    storj.upload_file("/tmp/archive.tar.gz", "cold-storage", "backups/2026-q1.tar.gz")
    
    # Verify distributed storage
    head = storj.head_object(Bucket="cold-storage", Key="backups/2026-q1.tar.gz")
    print(f"Stored: {head['ContentLength'] / 1e9:.1f} GB (encrypted, 80+ node redundancy)")
    $4/TB/mo storage; $7/TB egress; 150 GB storage + 150 GB egress free forever
    Best for: Large-scale cold storage and backup where cost and encryption matter more than access speed
    Visit Website
    14

    Azure Blob Storage

    Microsoft's object storage with deep Azure ecosystem integration — Synapse Analytics, Azure ML, Cognitive Services, and Data Lake Storage Gen2. The S3 compatibility endpoint is still in preview and has notable gaps. If you are on Azure, Blob Storage is the natural choice. If you are not, there is no reason to start here.

    What Sets It Apart

    Deepest integration with Microsoft's analytics and ML stack — Synapse, Azure ML, Data Lake Gen2, and Cognitive Services all work natively with Blob Storage.

    Strengths

    • +Deep Azure ecosystem: Synapse, Azure ML, Cognitive Services, Data Factory
    • +Data Lake Storage Gen2 for hierarchical namespace and analytics workloads
    • +Immutable storage (WORM) with legal hold and time-based retention
    • +Archive tier at $1/TB/mo — cheapest of the big three

    Limitations

    • -S3 compatibility endpoint is in preview — gaps in multipart upload and presigned URLs
    • -Egress at $0.087/GB — expensive for read-heavy workloads
    • -Azure IAM (RBAC + SAS tokens) is more complex than S3 IAM
    • -$18/TB/mo standard hot storage

    Real-World Use Cases

    • Azure Synapse Analytics pipelines reading and writing data to Blob Storage natively
    • Data Lake Storage Gen2 for hierarchical namespace analytics with Spark and Databricks
    • Immutable compliance storage using legal hold and time-based retention policies
    • Azure ML training workflows with native Blob Storage dataset integration

    Choose This When

    When your infrastructure runs on Azure and you need native integration with Synapse Analytics, Azure ML, or Data Lake Storage Gen2.

    Skip This If

    When you need full S3 compatibility (the S3 endpoint is still in preview), when egress costs are a concern, or when you are not already on Azure.

    Integration Example

    from datetime import datetime, timezone
    
    from azure.storage.blob import BlobServiceClient, ImmutabilityPolicy
    
    client = BlobServiceClient.from_connection_string("AZURE_CONN_STRING")
    container = client.get_container_client("ml-data")
    
    # Upload training data
    with open("/tmp/train.parquet", "rb") as f:
        container.upload_blob("datasets/train.parquet", f, overwrite=True)
    
    # Set immutable policy for compliance — expiry_time must be a datetime, not a string
    container.get_blob_client("datasets/train.parquet").set_immutability_policy(
        ImmutabilityPolicy(
            expiry_time=datetime(2029, 1, 1, tzinfo=timezone.utc),
            policy_mode="Unlocked",
        )
    )
    $18/TB/mo (Hot); $10/TB/mo (Cool); $1/TB/mo (Archive); $0.087/GB egress
    Best for: Teams on Azure that need tight integration with Synapse, Azure ML, or Data Lake Storage Gen2
    Visit Website
    15

    IDrive e2

    Budget S3-compatible storage at $4/TB/mo with no minimum retention — fixing Wasabi's biggest gotcha. Free egress under a reasonable-use policy. Less known than B2 or Wasabi, but delivers solid S3 compatibility at rock-bottom prices for teams that prioritize cost above all else.

    What Sets It Apart

    The cheapest S3 storage with zero minimum retention — the only sub-$5/TB provider where you can delete objects the same day without penalty.

    Strengths

    • +Just $4/TB/mo — tied with Storj for cheapest storage
    • +No minimum retention period — delete anytime without penalty
    • +Free egress under reasonable-use policy
    • +10 regions across US, EU, and APAC

    Limitations

    • -Reasonable-use egress policy limits heavy download workloads
    • -Smaller brand with less community support and fewer integrations
    • -Documentation less thorough than major providers
    • -No event notifications, versioning, or object lock

    Real-World Use Cases

    • Iterative development storage where datasets are frequently replaced without early-deletion penalties
    • Budget backup storage for small businesses that cannot afford $15-23/TB/mo
    • Staging environments that mirror production storage at minimal cost

    Choose This When

    When cost is your primary driver and you need the flexibility to delete or replace data frequently without retention penalties.

    Skip This If

    When you need enterprise SLAs, rich documentation, event notifications, or advanced S3 features like versioning and object lock.

    Integration Example

    import boto3
    
    idrive = boto3.client(
        "s3",
        endpoint_url="https://e2.idrivee2.com",
        aws_access_key_id="IDRIVE_KEY",
        aws_secret_access_key="IDRIVE_SECRET",
    )
    
    # Upload and delete freely — no minimum retention
    idrive.upload_file("/tmp/experiment.parquet", "staging", "v42/data.parquet")
    
    # Replace previous version without penalty
    idrive.delete_object(Bucket="staging", Key="v41/data.parquet")
    idrive.upload_file("/tmp/experiment_v2.parquet", "staging", "v42/data.parquet")
    $4/TB/mo storage; free egress (reasonable use); no minimum retention; no API fees
    Best for: Teams that want Wasabi-level pricing without the 90-day retention trap — ideal for iterative workloads with data churn
    Visit Website

    Frequently Asked Questions

    What is the cheapest S3-compatible object storage?

    For raw storage price: Storj and IDrive e2 tie at $4/TB/mo. Wasabi is $4.90/TB/mo but has a 90-day minimum retention that can double effective costs on high-churn data. Hetzner is $5.20/TB/mo with full features (versioning, object lock) and EU data residency. For total cost including egress: Cloudflare R2 ($15/TB/mo storage, $0 egress) is often cheaper than seemingly cheaper providers once you factor in data transfer.

    Which object storage has zero egress fees?

    Cloudflare R2 offers truly unlimited zero egress with no asterisks. Tigris and Fastly Object Storage also offer zero egress within their networks. Wasabi claims 'free' egress but enforces a reasonable-use policy (monthly egress cannot exceed stored volume). OVHcloud dropped egress fees in January 2026 but has a 30-day minimum retention on all tiers. Always read the fine print.

    Is S3 compatibility the same across all providers?

    No — 'S3 compatible' is a spectrum. AWS S3 is the reference implementation. MinIO has the most complete open-source implementation. Cloudflare R2 supports core operations but lacks versioning and object lock. Azure Blob's S3 endpoint is still in preview. DigitalOcean, Vultr, and Linode cap objects at 5 GB. Event notifications are effectively AWS/GCS/MinIO only. Test your specific workload before migrating.
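One practical way to verify compatibility before migrating: probe the optional APIs directly against a test bucket on each candidate provider. The sketch below is our own helper (`probe_s3_features` is not a library function), and a production version would inspect specific error codes rather than catching every exception:

```python
def probe_s3_features(client, bucket):
    """Return which optional S3 features an endpoint appears to implement."""
    checks = {
        "versioning":    lambda: client.get_bucket_versioning(Bucket=bucket),
        "object_lock":   lambda: client.get_object_lock_configuration(Bucket=bucket),
        "lifecycle":     lambda: client.get_bucket_lifecycle_configuration(Bucket=bucket),
        "notifications": lambda: client.get_bucket_notification_configuration(Bucket=bucket),
    }
    results = {}
    for name, call in checks.items():
        try:
            call()
            results[name] = True
        except Exception:
            # Partial implementations typically return NotImplemented / MethodNotAllowed
            results[name] = False
    return results

# Usage with any boto3 client from the examples in this guide:
# probe_s3_features(r2_client, "test-bucket")
```

Run the same probe against two or three finalists and you get a compatibility matrix for your exact SDK version instead of relying on marketing pages.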

    What does it cost to migrate off a cloud storage provider?

    Egress fees make leaving expensive: moving 100 TB off AWS S3 costs $9,000, off GCS costs $12,000, off Azure costs $8,700. Moving off Cloudflare R2 costs $0. This 'escape cost' is the most important number nobody compares when choosing a provider. Use our interactive calculator at storage.mixpeek.com to model your specific scenario.
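The escape-cost arithmetic above is simple enough to script. This sketch uses the per-GB egress rates cited in this guide (decimal terabytes, 1 TB = 1,000 GB); verify current pricing before making a decision:

```python
# $/GB egress, per the figures cited in this guide
EGRESS_PER_GB = {"aws_s3": 0.09, "gcs": 0.12, "azure": 0.087, "cloudflare_r2": 0.0}

def escape_cost(tb, provider):
    """USD cost to move `tb` terabytes of data out of a provider."""
    return tb * 1000 * EGRESS_PER_GB[provider]

for p in EGRESS_PER_GB:
    print(f"{p}: ${escape_cost(100, p):,.0f} to move 100 TB out")
```

Multiply by however many times you expect to re-evaluate providers over the data's lifetime and the "escape cost" starts to dominate the storage line item.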

    What is the Wasabi 90-day minimum retention policy?

    Wasabi charges a minimum of 90 days for every object stored. If you upload a file and delete it after 30 days, you still pay for the remaining 60 days. This applies per-object, not per-account. On datasets with high churn (frequent updates or deletions), this can double your effective storage cost. Wasabi is best for write-once data like backups and archives that will stay for at least 90 days.
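The arithmetic, sketched out (`billed_days` and `storage_cost` are our own helpers; the $4.90/TB/mo rate is Wasabi's list price as cited in this guide):

```python
def billed_days(stored_days, min_retention_days=90):
    """Wasabi bills each object for at least the minimum retention window."""
    return max(stored_days, min_retention_days)

def storage_cost(tb, stored_days, rate_per_tb_month=4.90, min_retention_days=90):
    """Effective storage cost in USD, using a 30-day billing month."""
    return tb * rate_per_tb_month * billed_days(stored_days, min_retention_days) / 30

# 1 TB deleted after 30 days: you pay for 90 days anyway — 3x the sticker month
print(f"${storage_cost(1, 30):.2f} effective vs ${1 * 4.90:.2f} sticker")
```

For high-churn data, divide your average object lifetime into 90 to estimate the multiplier on your effective rate before comparing Wasabi against no-retention providers like IDrive e2.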

    Can I search and query objects stored in S3-compatible storage?

    Yes. AWS offers S3 Select for basic SQL queries and the new S3 Vectors for vector search. For multimodal search across images, video, documents, and audio in any S3-compatible bucket, Mixpeek connects to your storage and adds feature extraction, embedding, and semantic search without moving your data. See mixpeek.com for details.

    Ready to Get Started with Mixpeek?

    See why teams choose Mixpeek for multimodal AI. Book a demo to explore how our platform can transform your data workflows.

    Explore Other Curated Lists

    multimodal ai

    Best Multimodal AI APIs

    A hands-on comparison of the top multimodal AI APIs for processing text, images, video, and audio through a single integration. We evaluated latency, modality coverage, retrieval quality, and developer experience.

    11 tools rankedView List
    search retrieval

    Best Video Search Tools

    We tested the leading video search and understanding platforms on real-world content libraries. This guide covers visual search, scene detection, transcript-based retrieval, and action recognition.

    9 tools rankedView List
    content processing

    Best AI Content Moderation Tools

    We evaluated content moderation platforms across image, video, text, and audio moderation. This guide covers accuracy, latency, customization, and compliance features for trust and safety teams.

    9 tools rankedView List