Shared drive in Google Drive

Is there any way to select a shared drive instead of the default "My Drive" on the "Directory path" field?

I'm using the Google Drive "Transfer files into destination application" import.

@juliensauvageau Moved this to the Troubleshoot custom flows topic for better visibility. To answer your question, you can set the Directory path to the Google Drive folder that contains the files to be transferred. integrator.io will transfer all files from the folder path specified and delete them from the folder once the transfer completes. You can optionally configure integrator.io to leave the files in the folder, or to transfer only files whose names match a "starts with" or "ends with" pattern. Hope that answers your question.

Is there an example where I can specify the Shared Drive name in the Directory path?

@juliensauvageau it's not currently possible to specify a shared drive in the directory path. What I've done before is make an "integration staging folder" within your own drive, place the file there, and then have a step after placing the file that moves it to the shared drive. To add the second step, choose HTTP from the drop-down connection list, then select your Google Drive connection.

/drive/v3/files/{{record.newFile.id}}?addParents={{settings.flowGrouping.officeHoursRecordingDriveFolderId}}&removeParents={{settings.flowGrouping.integrationStagingFolderId}}&supportsAllDrives=true

The removeParents parameter is the ID of the staging folder in your own drive, and addParents is the ID of the folder within the shared drive you want to move the file to. You can find these folder IDs in the URL of your web browser when you open each folder.
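For illustration only (these folder IDs are made up, not real ones), the resolved relative URI would look something like this, sent as a PATCH request against the Drive v3 API:

/drive/v3/files/1aBcDeFgHiJkLm?addParents=0SharedDriveFolderId&removeParents=0StagingFolderId&supportsAllDrives=true

Note that supportsAllDrives=true is required whenever the request touches items in a shared drive.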

@tylerlamparter, sorry to hijack this thread but I have a question closely related to the answer you gave.

Thanks for the assist on how to save to a shared folder. I have my flow set up nearly identically to what you have defined. However, in my scenario I'm retrieving data and then pushing it to Google Drive as a CSV. This works great, but when I add the response mapping to my Google Drive step I get the following error:

I have no idea what this error means, and I can't find it anywhere when I search. Could you offer assistance here?

Thanks!

@bryancarroll can you screenshot your flow? This seems to be working fine for me.

@tylerlamparter, thanks for the quick reply! This is my flow, fairly simple:

It simply exports from an API and adds a transformation field map to the results, converts those results into a CSV, and then uploads the file to Google Drive. The file upload step works great; the config and mapping for the "Import file" step are below:

However, when I add the response mapping to the import file step as you outlined I get the error message. This is the response map:

Totally baffled here. If it looks like this is going to be a lengthy discussion, we can certainly move it to a different thread and save the OP the grief of receiving notifications for a thread they may no longer care about.

Thanks for your attention! Hopefully this is just something I'm missing.

@bryancarroll I would suggest rebuilding the Google Drive step, keeping it simple with simple mappings and simple file names. If it's still not working then, I would create a support ticket, as I'm not able to reproduce your issue.

Will do, thanks @tylerlamparter

Hello, I got the flow built based on Tyler’s example of moving a file from Google My Drive to a shared drive, and the file is created on the shared drive successfully (after help from Chris Tran). BUT the flow keeps running after the file is created, and I eventually have to cancel it manually because it won’t stop on its own. The errors are below. I’ll probably have to open a support ticket…

Message:

{
    "error": {
        "code": 500,
        "message": "Internal Error",
        "errors": [
            {
                "message": "Internal Error",
                "domain": "global",
                "reason": "internalError"
            }
        ]
    }
}

Code: 500
Source: Google Drive
Timestamp: 2025-09-24T14:32:04.369Z
Error ID: 3845595604
Classification: Intermittent
Trace key: 5020

Hi Chris, if I remember correctly from the call, I think this has to do with you generating the file directly in Google Drive from a report, which aggregates multiple rows (records) into one file. Each row from your report is seen as a record and is processed individually.

I can see two options you can try here.

  1. Group the results from your report using a large enough page size so that the export becomes a single object with an array containing each row. Then use many-to-one on the import step where you generate the file, pointing to that array. This allows the following step to look at just that single record to move the file to the shared folder, instead of running the step on every row.
  2. Do the move to the shared folder in a secondary flow, instead of as the last step of this flow. The new flow would first fetch the file ID in Google Drive, and the next step would be the move that Tyler outlined above (see the sketch after this list).
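As a rough sketch of the lookup in option 2 (the record.fileName field is an assumption here, not something from your actual flow), the first step of the new flow could query the Drive v3 files endpoint by name:

/drive/v3/files?q=name='{{record.fileName}}'&fields=files(id,name)&supportsAllDrives=true&includeItemsFromAllDrives=true

The id returned in the files array is what the next step would then plug into the addParents/removeParents move request.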

Thanks, I tried number 1), and it did speed things up so that the last step only took 1 minute. I will think about how to do number 2) to see how that works.

Here's what I did; is that what you were suggesting for number 1)?

You shouldn't need to use HTTP batch. The flow should look something like this:

  1. You export all data from a source application, and on your export, you set a high page size because this makes the aggregation of data faster. Additionally, you add a preSavePage script to your export that adds the pageIndex and recordIndex to the data.
  2. You use a Google Drive step to "Transfer files into the destination application," where all pages of data will aggregate to create a single file.
  3. You then have a final import step to move the file to the shared drive. Here, you only want this to run once, so you apply an input filter for pageIndex = 0 and recordIndex = 0 (a script version of this filter is sketched after the sample script below). Since the prior step is aggregating, all data first aggregates there, then uploads, and then the pages of data proceed to this last import step. You could additionally have a second flow run after this one to move the file, but then you would need two flows.

Here is a sample script:

function pageIndexAdder (options) {

  // preSavePage hook: stamp every record with the page it arrived on
  // and its position within that page, so a later input filter can
  // match pageIndex = 0 and recordIndex = 0 to run a step only once.
  for (let [index, d] of options.data.entries()) {
    d.pageIndex = options.pageIndex;
    d.recordIndex = index;
  }

  return {
    data: options.data,
    errors: options.errors,
    abort: false,
    newErrorsAndRetryData: []
  };
}
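And if you would rather express the step 3 input filter as a script instead of UI rules, a minimal sketch would be something like this (assuming the standard filter signature, where the current record is available on options and the function returns true or false):

function onlyFirstRecord (options) {
  // Let only the first record of the first page through, so the
  // move-to-shared-drive step runs exactly once per flow run.
  return options.record.pageIndex === 0 && options.record.recordIndex === 0;
}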

Thank you! I made some tweaks based on your suggestions. I will look into the others, but it is performing fast enough now. I will get into some large extracts and will look at implementing all of this once I see how that performs.