Quick question on Salesforce integration capabilities in integrator.io:
Does the Salesforce connector support the Salesforce Pub/Sub API (gRPC-based)? If not, is it on the roadmap?
For subscribing to Platform Events today, does the Salesforce connector use the older CometD Streaming API under the hood? Any known limitations?
Can the HTTP connector be configured to speak gRPC / Protocol Buffers, or is it HTTP/REST only?
Has anyone built a middleware bridge (e.g., a small service that subscribes to Pub/Sub API via gRPC and forwards events to a Celigo HTTP Listener)? Curious if there are reference patterns.
1. Salesforce gRPC Pub/Sub API — support and roadmap
No native support today. We have it on the roadmap, but no timeline.
2. How real-time / Platform Event subscription works today
The Salesforce connector doesn't subscribe to Platform Events, and it doesn't use the CometD Streaming API under the hood. Real-time exports are powered by a Celigo-provided managed package — Integrator Distributed Adaptor (namespace integrator_da) — that gets installed into the Salesforce org. Mechanically:
When you build a real-time Salesforce export in integrator.io, IO writes config records into custom SObjects in your org. The package ships four:
Connection — endpoint URL + auth (OAuth for IO, SSO/user for NetSuite); password and token fields are encrypted.
Real Time Sync — the export config (child of Connection): Qualifier (where-clause filter), BatchSize, ReferencedFields (parent fields to include), SkipExportFieldId (per-record bypass), Disabled.
Related SObject Sync — master-detail child of Real Time Sync, used when you need to send related-list child records alongside the parent.
Celigo Queued Message — backs the auto-retry pipeline (exponential backoff if IO returns 5xx).
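To make the Real Time Sync fields concrete, here is what one record might look like for an Opportunity export. The field labels come from the list above; every value is invented for illustration:

```
Real Time Sync (illustrative values only)
  Parent Connection:   <your integrator.io Connection record>
  Qualifier:           StageName = 'Closed Won'
  BatchSize:           200
  ReferencedFields:    Account.Name, Account.Industry
  SkipExportFieldId:   <custom checkbox field on Opportunity>
  Disabled:            false
```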
The package itself ships no active triggers — it's an Apex library of public entry points. The trigger code is generated by integrator.io on the export page; you deploy it into your org yourself (one trigger per sObject, and Salesforce requires 75% Apex test coverage to deploy to production).
On insert/update/delete the trigger calls into the library, which reads the metadata from those SObjects, builds a payload, and makes an outbound HTTP callout to integrator.io. The callout is dispatched through Salesforce's Apex job queue, so it runs async to the originating transaction.
So it's an Apex-trigger + async-callout model, not a streaming subscription. Practical implications:
You're bound by Salesforce's Apex callout governor limits, not Streaming Event limits.
Bulk DML fans out into batched callouts — BatchSize on the Real Time Sync record controls how many records per callout.
Works for both standard and custom SObjects.
5xx failures from IO are retried automatically with exponential backoff via the Queued Message mechanism.
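The implications above can be sketched end to end in a few lines of Python. This is a language-neutral illustration of the mechanics, not Celigo's actual code; the function names and the backoff base are assumptions:

```python
def chunk(records, batch_size):
    """Fan a bulk DML's record set out into BatchSize-sized callout payloads."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

def backoff_delays(max_attempts, base_seconds=2):
    """Exponential backoff schedule for retrying 5xx responses (base is illustrative)."""
    return [base_seconds * (2 ** attempt) for attempt in range(max_attempts)]

def dispatch(records, qualifier, batch_size, post):
    """Filter by the Qualifier predicate, then make one async 'callout' per batch.

    `post` stands in for the HTTP callout to integrator.io; a 5xx response
    would be re-queued (Celigo Queued Message) and retried per backoff_delays.
    """
    eligible = [r for r in records if qualifier(r)]
    return [post(batch) for batch in chunk(eligible, batch_size)]
```

For example, a bulk update of 450 qualifying records with BatchSize = 200 fans out into three callouts of 200, 200, and 50 records.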
3. HTTP connector + gRPC / Protocol Buffers
HTTP/REST only — the HTTP connector does not natively speak gRPC or Protocol Buffers.
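On the bridge question from the original post: the usual pattern is a small standalone service that holds the gRPC subscription (using stubs generated from Salesforce's pubsub_api.proto) and relays each decoded event to a Celigo HTTP Listener. The gRPC subscribe loop and Avro decoding are elided here; this sketch shows only the forwarding step, and the listener URL and bearer-token auth are hypothetical placeholders you'd replace with your listener's actual endpoint and auth settings:

```python
import json
import urllib.request

def build_forward_request(event: dict, listener_url: str, api_token: str):
    """Build the HTTP POST that relays one decoded Platform Event to a
    Celigo HTTP Listener. The URL and auth header shape are placeholders."""
    body = json.dumps(event).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_token}",  # hypothetical auth scheme
    }
    return urllib.request.Request(listener_url, data=body, headers=headers,
                                  method="POST")

def forward(event: dict, listener_url: str, api_token: str) -> int:
    """Send the event; the caller's subscribe loop would retry on 5xx."""
    req = build_forward_request(event, listener_url, api_token)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

A nice property of this split is that the bridge owns the replay ID bookkeeping on the Pub/Sub side, while the integrator.io flow stays a plain HTTP Listener export.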
Thank you, Tyler, for the detailed explanation—this is very helpful.
One follow-up question: our Salesforce source for events is a Platform Event object (rtms__AccountingOutboundEvent__e), not a regular SObject. Does the Integrator Distributed Adaptor managed package support generating triggers on Platform Event objects? If not, what's the recommended pattern for subscribing to Platform Events?