Optimizing Flow Performance: Replacing Per-Record Lookups with Lookup Cache

The Challenge: The "One-to-Many" Bottleneck

We recently optimized a flow for a client moving Orders from NetSuite to a 3rd-party system. While the SKUs matched between both systems, the destination required an additional internal foreign key ID for every line item.

Initially, we implemented a standard lookup step within the order flow to fetch this ID for each item. However, this created a significant performance drag:

  • One-to-Many overhead: For an order with 50 items, the flow performed 50 individual API calls.

  • Increased Latency: The flow duration scaled linearly with the number of items, leading to timeouts and slow sync speeds (a rough sketch of this pattern follows the list).
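
Outside of Celigo, this per-record pattern amounts to one network round trip per line item. A minimal Python sketch, with the 3rd-party call stubbed out as a hypothetical function, shows why the duration scales linearly:

```python
import time

# Hypothetical stand-in for a single call to the 3rd-party item endpoint.
def fetch_foreign_key_for_sku(sku: str) -> str:
    time.sleep(0.5)  # simulate ~500 ms of network latency per call
    return f"fk-{sku}"

def enrich_order_per_record(order: dict) -> dict:
    # One API round trip per line item: 50 items -> 50 sequential calls.
    for line in order["lines"]:
        line["foreign_key_id"] = fetch_foreign_key_for_sku(line["sku"])
    return order

order = {"lines": [{"sku": f"SKU-{i:03d}"} for i in range(50)]}
start = time.perf_counter()
enrich_order_per_record(order)
print(f"50 per-record lookups took {time.perf_counter() - start:.1f}s")  # ~25s
```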

The Solution: Decoupled Cache Refresh

Since the mapping between SKUs and their foreign keys changed infrequently, we shifted from a "just-in-time" lookup to a "pre-fetched" cache strategy.

Step 1: Build the Cache Update Flow

We created a separate, scheduled flow that:

  1. Exports all items/keys from the 3rd-party system.

  2. Imports them into a Celigo Lookup Cache.

  3. Stores each entry as Key: SKU | Value: Foreign Key ID (see the sketch after this list).
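
Conceptually, the output of this flow is just a SKU-to-ID map. A minimal sketch, assuming a hypothetical export function in place of the scheduled 3rd-party export:

```python
# Hypothetical export from the 3rd-party system; in Celigo this is the
# scheduled export step, stubbed here so the sketch is self-contained.
def export_all_items() -> list[dict]:
    return [
        {"sku": "SKU-001", "internal_id": "FK-98765"},
        {"sku": "SKU-002", "internal_id": "FK-98766"},
    ]

# Build the cache: Key = SKU, Value = Foreign Key ID.
lookup_cache = {item["sku"]: item["internal_id"] for item in export_all_items()}
print(lookup_cache)  # {'SKU-001': 'FK-98765', 'SKU-002': 'FK-98766'}
```

Because the SKU-to-key mapping changes infrequently, this refresh can run on a relaxed schedule without the main order flow ever noticing.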

Step 2: Streamlined Order Mapping

With the cache populated, we removed the lookup step from the main Order flow entirely. Instead, we used a Lookup Cache directly in the field mapping of the Order Import step.
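
In Celigo this is configured declaratively in the Import step's field mapping rather than written as code, but the equivalent logic is a plain local dictionary lookup per line item, with no external calls (names below are illustrative):

```python
# Pre-fetched cache, as built by the scheduled flow from Step 1.
lookup_cache = {"SKU-001": "FK-98765", "SKU-002": "FK-98766"}

def enrich_order_from_cache(order: dict, cache: dict[str, str]) -> dict:
    # Every line item resolves from the local map; no per-item API calls.
    for line in order["lines"]:
        line["foreign_key_id"] = cache[line["sku"]]
    return order

order = {"lines": [{"sku": "SKU-001"}, {"sku": "SKU-002"}]}
print(enrich_order_from_cache(order, lookup_cache))
```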


The Results

By moving the data retrieval to a background process, we achieved:

  • Drastic Speed Increase: The main flow no longer waits for external API responses for every line item.

  • Reduced API Consumption: We eliminated thousands of redundant API calls to the 3rd-party system.

  • Stability: The flow is now less susceptible to 3rd-party API rate limits or intermittent downtime.

Handling "Cache Misses"

What happens if a new SKU is added to NetSuite but hasn't been synced to your cache yet?

  • The Risk: The mapping will return a null/empty value, which might cause the 3rd-party system to reject the order.

  • The Process: Use the mapping settings to raise an error when there’s no match in the lookup cache. This prevents Celigo from sending the API call to the 3rd-party system, and the order can easily be retried after the cache update flow has run (see the sketch below).
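
In Celigo this is a mapping setting rather than code, but the intent is the same as failing loudly on a missing key so nothing incomplete is sent downstream. A rough sketch (the error class and function names are illustrative):

```python
class CacheMissError(Exception):
    """Raised when a SKU has no entry in the lookup cache yet."""

def resolve_foreign_key(sku: str, cache: dict[str, str]) -> str:
    # Fail instead of passing a null/empty ID downstream; the record errors
    # and can be retried once the cache update flow has run again.
    if sku not in cache:
        raise CacheMissError(f"No foreign key cached for SKU {sku!r}")
    return cache[sku]

# Example: a SKU created in NetSuite but not yet synced into the cache.
try:
    resolve_foreign_key("SKU-NEW", {"SKU-001": "FK-98765"})
except CacheMissError as err:
    print(err)  # No foreign key cached for SKU 'SKU-NEW'
```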

Key Takeaway

If you have a data point that is required for mapping but doesn't change frequently, don't look it up during the mission-critical flow. Cache it separately and reference it locally within Celigo to keep your integrations fast and lean.


Curious if you had any cold hard numbers for how much faster this was?

It basically depends on how many items/orders, but roughly 1 minute of run time was saved per 40 line items.
