Ever hit an error like `response stream exceeded limit of 26214400 bytes.`?
That usually happens when an API returns too much data in a single response. Most APIs offer pagination or filters to keep payloads small, but some don’t. I ran into this with Pendo data extracts—their API can return up to ~4 GB in one shot with no paging or chunking—so I put together a workaround that might help others.
The idea
Treat the response as a file, not as records.
Instead of using a normal export or Lookup additional record (per record), use Lookup additional files (per record). That lets you capture the raw response as a blob and hand it off to file storage, where downstream steps can safely stream and chunk it.
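Outside of any specific platform, the principle looks like this: stream the response body straight to disk instead of buffering it in memory. Here's a minimal Python sketch, assuming the `requests` library; the endpoint URL and API key are hypothetical stand-ins for your real extract API:

```python
import requests

# Hypothetical endpoint and auth header, stand-ins for your real extract API.
EXTRACT_URL = "https://api.example.com/v1/data-extract"
HEADERS = {"x-api-key": "YOUR_KEY"}

def download_extract(dest_path: str) -> None:
    # stream=True keeps the body out of memory; we copy it to disk in chunks,
    # so a multi-gigabyte response never has to fit in RAM at once.
    with requests.get(EXTRACT_URL, headers=HEADERS, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1024 * 1024):  # 1 MiB chunks
                f.write(chunk)

download_extract("extract.json")
```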
Steps
- Configure the HTTP step as usual (same request, headers, etc.), but choose Lookup additional files (per record).
- The step returns a `blobKey` instead of records.
- Write the blob to storage: use the `blobKey` to push the response to a transient storage location (e.g., NetSuite File Cabinet, S3, FTP).
- Kick off a second flow when the file lands. Use file-based exports/imports to read and process the content. File steps can stream and chunk large payloads, so the size limit won't bite you anymore. (A plain-code sketch of this hand-off follows the list.)
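For comparison, here is that hand-off sketched outside the platform: the downloaded file gets pushed to transient storage (S3 in this sketch), and a second process reacts when the object lands. The bucket and key names are assumptions for illustration, and `boto3` is used only as one concrete way to do the push:

```python
import boto3

# Assumed names for illustration only; use your own bucket/key scheme.
BUCKET = "my-transient-extracts"
KEY = "pendo/extract.json"

s3 = boto3.client("s3")

# upload_file runs a managed (multipart) upload for large files,
# streaming from disk rather than loading the whole blob into memory.
s3.upload_file("extract.json", BUCKET, KEY)

# A second flow (e.g., triggered by an S3 event notification) then reads
# and processes the object in chunks, mirroring the file-based
# export/import step described above.
```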
Why this works
The 25 MB streaming limit applies to record-style responses. By treating the API output as a file, you bypass that limit and let the platform’s file pipeline do what it’s good at: chunking, streaming, and paging large files.
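To make "chunking and streaming" concrete, here is a hedged sketch of what a downstream file step effectively does, assuming the stored extract is newline-delimited JSON (if yours is one large JSON document, you'd reach for a streaming parser such as ijson instead). The `handle` function is a hypothetical per-record processor:

```python
import json

def handle(record: dict) -> None:
    ...  # hypothetical: import, transform, or forward one record

def process_extract(path: str) -> None:
    # Iterating the file object yields one line at a time, so memory
    # usage stays flat regardless of how large the file on disk is.
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                handle(json.loads(line))

process_extract("extract.json")
```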