Would someone please help me understand how to deal with JSON responses in the import mapper?
JSON has a variable schema, so one record/JSON Object might have 3 key/value pairs, and the next Object might only have 2.
If I map all 3 keys to database columns, then when that second object is inserted into the table, my 3rd column's value shifts into my 2nd column (assuming the first or second key was missing from that second JSON Object).
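For example, two records might look something like this (made-up values, field names simplified from the Amazon order data I am working with):

```json
[
  {
    "OrderTotal": { "CurrencyCode": "USD", "Amount": "25.00" },
    "NumberOfItemsShipped": 1
  },
  {
    "NumberOfItemsShipped": 2
  }
]
```

The second record has no OrderTotal at all, and that is when the values end up in the wrong columns.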
Please help me understand how to deal with these types of changes that are common in JSON Responses.
It is difficult to help without seeing your flow configs. Can you share screenshots of your flow, the import settings drawer, the mappings drawer, etc...? Please also share screenshots of the resulting errors, or malformed data if possible?
At a high level, it is super common to map variable JSON to database tables, and it is very common for JSON fields to be missing, and it should just work; but maybe you are doing something unique and that is where pics will help set context.
Initially this problem did cause some unexpected errors complaining about incorrect data types when inserting into Snowflake. After reducing the number of columns I was able to get the flow working, but the resulting data in the table shows that the missing keys in the JSON response caused a shift in the table data:
Here in Snowflake we can see that for some records there was no ordertotal_currencycode or ordertotal_amount, so the numberofitemsshipped value was shifted over to the left.
This shifting that you are reporting here is likely what caused the error in your other post. i.e. "Numeric value 'Other' is not recognized"
I will ask my team to check whether there is a bug in our product when mapped JSON fields are missing, and whether it is possible that we allow columns to shift.
In the meantime, you can work around this with any of the following solutions.
1. See pic below. You can use one of the mapping options for when a value is missing.
2. Instead of #1, I would highly recommend mapping the entire JSON record into a single variant column, and then using Snowflake views to break the data into columns. This is way more flexible/powerful.
3. You can use a preSavePage JavaScript hook on your source bubble in your flow to make sure that all fields always have a value: check each field, set a default if it is missing, etc... (see the rough sketch after this list).
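For #3, something along these lines is a starting point. This is only a rough sketch: the field names are assumptions based on your screenshot, and you should check the exact options/return shape against the hook stub shown in the script editor for your account.

```javascript
function preSavePage(options) {
  // options.data is assumed to be the page of records coming out of the export.
  // Field names below (OrderTotal, NumberOfItemsShipped) are assumptions based
  // on your Amazon order data; adjust them to match your actual records.
  options.data.forEach(function (record) {
    if (!record.OrderTotal) {
      record.OrderTotal = {};
    }
    if (record.OrderTotal.CurrencyCode === undefined) {
      record.OrderTotal.CurrencyCode = null;
    }
    if (record.OrderTotal.Amount === undefined) {
      record.OrderTotal.Amount = null;
    }
    if (record.NumberOfItemsShipped === undefined) {
      record.NumberOfItemsShipped = 0;
    }
  });

  // Return the (possibly modified) data and errors back to the flow.
  return {
    data: options.data,
    errors: options.errors,
    abort: false
  };
}
```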
I'm not sure how to tell the mapping to select the whole record (maybe the word "this")?
Also, I think I now understand what you were saying in the other post: jsonserialize is a helper, and what you pass it is either the single column to serialize or the entire record. Right?
Also, not sure if this will help, but instead of doing "one to many" in the Snowflake import bubble, you can also specify the path to the individual records in your Amazon source export bubble, and then there is no need to do "one to many" or deal with arrays of orders in the imports, etc...
i.e. Open your source export and look for this field