Endowus Tech

From Thousands of Files to Just Seven: A Data Lakehouse Optimization Story

November 26, 2025  ·  By Jafet Kim de Veyra  ·  Filed under data

At Endowus, data is at the heart of everything we do, from personalizing client experiences to powering our investment insights. As our platform grows, so does our data lakehouse, and with that growth comes the inevitable engineering challenges of managing scale, cost, and performance. Recently, our Data Platform team embarked on a mission to tackle two critical issues that were impacting our data pipelines: the notorious “small file problem” and the crucial challenge of data freshness.

This post shares our journey of diagnosing these challenges, implementing a multi-faceted solution using Delta Lake, and the dramatic improvements we achieved in performance, cost, and reliability. It’s a story about how rethinking the physical layout of our data unlocked efficiencies across the entire platform.

Our Data Platform: A Quick Overview

Before diving into the challenges, it’s helpful to understand the key technologies powering our data platform: a Delta Lake lakehouse stored on Amazon S3 (with envelope encryption via AWS KMS), Spark workloads running on EMR Serverless, and Apache Airflow for orchestration. These are the foundations that enable our team to deliver reliable, performant data products at scale across Singapore and Hong Kong.

With this technical foundation in place, let’s explore the challenges that emerged as our platform scaled.

The Challenge: Small Files, Big Problems

Our data platform was struggling under the weight of its own success. The core of the problem lay in one of our largest and most frequently accessed tables: the transaction table. It was fragmented into over 1,800 small files, a classic symptom of the “small file problem” in big data systems, where data is stored in many small, inefficient chunks instead of fewer, larger ones.

This fragmentation triggered a cascade of issues that radiated outward, affecting everything from query speed to our monthly cloud bill:

  1. Degraded Performance: Queries against the table were painfully slow. Before a query engine can even read the data, it must perform costly “list” operations to discover all the files and then open each one to read its metadata. With thousands of files, this I/O overhead alone created a significant bottleneck, delaying critical analytics and reporting.
  2. Increased Costs: We were at risk of hitting the request limit for AWS Key Management Service (KMS), which we use for envelope encryption on our S3 data. Each encrypted file requires a separate API call to KMS to decrypt its unique data key. This led to a peak of over 87,000 KMS requests for a single query. This not only incurred direct KMS costs, but also consumed a large portion of our account’s KMS request quota.
  3. Pipeline Instability: The sheer volume of KMS requests created a constant risk of API throttling, which threatened the stability of our entire data pipeline. A single complex query could trigger a cascade of failures in downstream jobs, leading to operational firefighting and delays in data availability for the rest of the business.
[Illustration: the data explosion caused by thousands of small files]

The root cause was a combination of factors common in evolving data architectures: rigid, Hive-style daily partitioning that sliced the data into many thin partitions, and frequent incremental loads written by parallel workers, each producing its own set of small files.

Compounding this, we faced a data freshness and reliability problem. Our original implementation dated back to the earliest days of Endowus, when we took the simplest, most pragmatic approach to loading data into the warehouse: refresh the last six months of data every six hours. This blunt approach was perfectly reasonable while volumes were low. But as the business scaled, it not only became computationally expensive, it also meant that any update to a record older than six months was silently missed. The result was data staleness, manual backfills, and eroding trust in the data.
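For illustration, the pattern looked roughly like the following PySpark sketch. The table name, the created_date column, and the replaceWhere-based overwrite are assumptions made for the example, not our actual job:

```python
from datetime import date, timedelta

# Hypothetical illustration of the original pattern: every six hours,
# re-extract and overwrite the most recent six months of the table.
cutoff = date.today() - timedelta(days=180)

(
    recent_df  # the last six months of source data, re-extracted each run
    .write.format("delta")
    .mode("overwrite")
    # Only rows matching this predicate are replaced in the target table.
    .option("replaceWhere", f"created_date >= '{cutoff.isoformat()}'")
    .saveAsTable("transactions")
)
```

Every run rewrites six months of mostly unchanged data, and anything older than the cutoff never gets touched again.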

Our Solution: A Two-Pronged Approach with Delta Lake

To solve these interconnected problems, we turned to the advanced capabilities of Delta Lake 3.1+. Our solution focused on two key areas: fundamentally optimizing the physical data layout and completely redesigning our data ingestion process for efficiency and accuracy.

Part 1: Solving the Small File Proliferation

Our first goal was to drastically reduce the number of files. We moved away from rigid, time-based partitioning and adopted a more dynamic, query-aware strategy using Delta Lake’s built-in features.

  1. Departitioning & Liquid Clustering: The first step was bold: we removed the Hive-style daily partitioning entirely. In its place, we implemented Delta Lake’s Liquid Clustering. Unlike traditional partitioning, which is limited to a few low-cardinality columns, liquid clustering allows for flexible, multi-dimensional clustering of data. It reorganizes the data layout based on the columns most frequently used in query WHERE clauses. For our transaction table, we clustered on transaction_type and created_date. This ensures that related data is physically co-located within the same files, allowing Delta Lake’s engine to perform highly effective data skipping and dramatically speeding up queries.
[Visual: impact of liquid clustering on the data layout]
  2. Optimized Writes: Traditional writes in distributed systems can be chaotic, with multiple parallel workers each creating their own small files. Delta Lake’s Optimized Writes feature solves this by adding an extra shuffle step before the write operation. This shuffle intelligently repartitions and sorts the data in memory across the cluster, ensuring that a minimal number of larger, well-packed files are written to storage from the outset.
[Visual: traditional writes vs. optimized writes across partitions]
  3. Scheduled Compaction: To maintain this optimized state over time as new data is ingested, we implemented a scheduled job in Airflow that runs the OPTIMIZE command. This command intelligently identifies small files and compacts them into larger files of around 1 GB. This periodic housekeeping ensures our table remains performant without manual intervention. A sketch of all three steps follows below.
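To make these three steps concrete, here is a minimal PySpark sketch against open-source Delta Lake 3.1+. The table name, schema, and configuration are illustrative rather than our production code, and the exact keys and DDL can vary between Delta Lake versions (liquid clustering was still a preview feature in 3.1 and may need an extra flag), so treat this as a sketch of the shape rather than a drop-in job:

```python
from pyspark.sql import SparkSession

# Illustrative session setup for open-source Delta Lake.
spark = (
    SparkSession.builder
    .appName("transaction-table-optimization")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    # 2. Optimized Writes: shuffle data before writing so each commit produces
    #    a few well-packed files instead of one small file per task
    #    (config key as of Delta Lake 3.1; check your version's docs).
    .config("spark.databricks.delta.optimizeWrite.enabled", "true")
    .getOrCreate()
)

# 1. Departitioning & Liquid Clustering: no Hive-style daily partitions;
#    instead, cluster on the columns most often used in WHERE clauses.
spark.sql("""
    CREATE TABLE IF NOT EXISTS transactions (
        transaction_id   STRING,
        transaction_type STRING,
        created_date     DATE,
        amount           DECIMAL(18, 2)
    )
    USING DELTA
    CLUSTER BY (transaction_type, created_date)
""")

# 3. Scheduled Compaction: run periodically (e.g. from an Airflow task) to
#    compact the small files written since the last run.
spark.sql("OPTIMIZE transactions")
```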

Part 2: Ensuring Data Freshness with MERGE

To solve the data reliability issue, we replaced our brute-force six-month refresh with Delta Lake’s powerful MERGE command, effectively creating an intelligent “upsert” logic.

The MERGE command allows us to atomically update, delete, or insert data in a target table based on a source table and a specified key. We use transaction_id as our merge key. When a new batch of data arrives, the MERGE operation efficiently performs the equivalent of two joins in a single, ACID-compliant transaction: records whose transaction_id already exists in the table are updated in place, and records with new transaction_ids are inserted.

This surgical approach ensures that any update, even to a record from years ago, is accurately captured and reflected in the table. We no longer waste compute cycles re-processing unchanged data, and we’ve eliminated the risk of missing historical updates.
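A minimal sketch of this upsert using the Delta Lake Python API is shown below. It reuses the illustrative Spark session and transactions table from the earlier sketch, and the staging_transactions source is a hypothetical stand-in for the incoming batch, not our actual pipeline code:

```python
from delta.tables import DeltaTable

# Hypothetical source: the latest batch of transaction records to upsert.
updates_df = spark.table("staging_transactions")

target = DeltaTable.forName(spark, "transactions")

(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.transaction_id = s.transaction_id")
    .whenMatchedUpdateAll()      # existing transaction_id: apply latest values
    .whenNotMatchedInsertAll()   # new transaction_id: insert the record
    .execute()
)
```

Because MERGE commits as a single Delta transaction, readers never see a half-applied batch, and updates to records of any age are captured in one pass.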

The Results: Dramatic Improvements Across the Board

The impact of these changes was immediate and profound, validating our approach and delivering value across performance, cost, and reliability.

On the Small File & KMS Front

Compacting and re-clustering the transaction table took it from more than 1,800 files down to just seven. And because every encrypted file requires its own KMS call to decrypt its data key, per-query KMS request volume fell along with the file count, taking the direct KMS costs and the throttling risk with it.

[Chart: drop in KMS request costs after the optimization]

On the Data Freshness & Reliability Front

By switching from a full refresh to the surgical MERGE operation, we saw a 79% reduction in overall resource consumption for our data ingestion pipeline in EMR Serverless.

[Chart: improved resource utilization after the optimization]

Key Learnings and What’s Next

Our biggest takeaway from this initiative was a powerful reminder: optimizing the underlying physical data layout can solve multiple, seemingly unrelated problems simultaneously. By tackling the small file problem, we not only improved query performance but also slashed operational costs and fortified our pipeline’s stability. It’s a foundational concern, not just an implementation detail.

This project is a blueprint for future optimizations across our entire data lakehouse. Our next steps include:

  1. Expanding Coverage: Systematically applying these clustering and compaction techniques to other high-query, high-volume tables like orders, redemptions, and investments.
  2. Integrating into ETL: Building optimization directly into our data pipelines, triggering compaction automatically after ingestion jobs, rather than relying on separate maintenance schedules.
  3. Continuous Monitoring: Actively tracking key metrics like file counts, average file size, and query latency patterns to proactively adapt our clustering strategies as our data and access patterns evolve (a small sketch of such a check follows this list).
  4. Evaluating Emerging Tech: Keeping a close eye on new features in Delta Lake, Apache Iceberg, and other technologies to continuously enhance our data platform’s capabilities.
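On the monitoring front, Delta’s DESCRIBE DETAIL already exposes file counts and total size, so a lightweight check can flag tables whose average file size has drifted too low. The sketch below reuses the illustrative Spark session and transactions table from earlier; the 128 MiB threshold is a made-up example, not a recommendation:

```python
# Pull file-level health metrics for a single Delta table.
detail = spark.sql("DESCRIBE DETAIL transactions").first()

num_files = detail["numFiles"]
total_bytes = detail["sizeInBytes"]
avg_file_mb = (total_bytes / num_files) / (1024 ** 2) if num_files else 0.0

print(f"files={num_files}, total={total_bytes / 1024 ** 3:.1f} GiB, "
      f"avg file={avg_file_mb:.0f} MiB")

# Hypothetical threshold: flag the table for compaction if files are too small.
if num_files > 1 and avg_file_mb < 128:
    print("transactions table looks fragmented; schedule an OPTIMIZE run")
```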

We’re excited by these results and are committed to building a data platform that is not only powerful but also efficient, reliable, and scalable.