Importing / exporting a 'structured csv' file
I am using SmartConnect to integrate csv files to and from GP. As you'll see below, the header ('H') rows and line ('T') rows contain different numbers of fields. The files were described as a 'structured csv'.
Obviously this causes an issue when mapping for importing and exporting. Have you come across this, and if so, do you have any advice on how to handle it?
When reading a file, every row should have the same number of columns. SmartConnect will still read a file whose rows differ in column count and data type, but sending that data to the destination will have unintended consequences unless you add a lot of calculated fields. The best option is to use a pre-map .NET task to transform the data into a uniform structure.
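As a language-neutral sketch of what that pre-map transformation does (SmartConnect tasks are actually written in .NET; Python is used here only to illustrate the idea, and the H/T layout shown is hypothetical), one approach is to pad every short row with empty fields until all rows match the widest row:

```python
import csv
import io

def normalize_rows(raw_text):
    """Pad every CSV row with empty strings so all rows share
    the same column count (the widest row's length)."""
    rows = list(csv.reader(io.StringIO(raw_text)))
    width = max(len(r) for r in rows)
    return [r + [""] * (width - len(r)) for r in rows]

# Hypothetical structured file: header 'H' rows have 3 fields,
# transaction 'T' rows have 5.
sample = "H,INV001,2024-01-31\nT,INV001,ITEM-A,2,10.00\nT,INV001,ITEM-B,1,5.50\n"
for row in normalize_rows(sample):
    print(row)
```

Once every row is the same width, the file maps cleanly in SmartConnect, and the padding columns can simply be left unmapped for header rows.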
As for writing a CSV file, SmartConnect can only create output where every row has the same number of columns. If you need different row lengths, a Post Document task/Post Map task would be needed to modify the exported file.
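A post-export step along these lines could rewrite the uniform-width file so header rows come out shorter. Again this is only a sketch in Python rather than a SmartConnect .NET task, and the column layout is assumed for illustration:

```python
import csv
import io

def strip_trailing_blanks(rows):
    """Drop trailing empty fields from each row so header and
    line rows can have different lengths in the output file."""
    trimmed = []
    for row in rows:
        end = len(row)
        while end > 0 and row[end - 1] == "":
            end -= 1
        trimmed.append(row[:end])
    return trimmed

# SmartConnect wrote every row with 5 columns; a post-document
# task could rewrite the file like this (layout is hypothetical).
uniform = [
    ["H", "INV001", "2024-01-31", "", ""],
    ["T", "INV001", "ITEM-A", "2", "10.00"],
]
out = io.StringIO()
csv.writer(out, lineterminator="\n").writerows(strip_trailing_blanks(uniform))
print(out.getvalue())
```

In practice the post task would read the file SmartConnect just exported, trim each row, and write the result back to the same path.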
Thanks Lorren. I suspected that some pre- and post-tasks would be required, but I wanted confirmation.
I would agree with Lorren that they _should_ be the same.
However, in my limited experience, they don't have to be.
The ODBC driver looks at the file and fills in empty column data for the "short" rows.
I just re-tested this and it seems to read it fine.