Approach 1 :

Export Operation at TD level :

1) Write a Teradata TPT script that exports the table data into text files.
The shell script can be automated to pull the list of tables to export from a control table inside Teradata and produce the text files for each of them (see the shell sketch below).
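
A minimal shell sketch of the export driver, assuming a Teradata system td_prod, a control table ETL_CTL.EXPORT_TABLES that lists the tables to export, and a TPT job script export_table.tpt that accepts TargetTable and OutputFile job variables; all names, paths, and credentials below are placeholders.

#!/bin/bash
# Pull the list of tables to export from a control table via BTEQ,
# then run one TPT export job per table. All names here are placeholders.
OUT_BASE="/export/td"

bteq <<EOF > /tmp/table_list_raw.txt
.LOGON td_prod/etl_user,change_me
.SET TITLEDASHES OFF
SELECT TRIM(TableName) FROM ETL_CTL.EXPORT_TABLES;
.LOGOFF
.QUIT
EOF

# BTEQ output carries banners and headings; keep only identifier-like lines.
grep -E '^[A-Za-z0-9_]+$' /tmp/table_list_raw.txt | grep -vx 'TableName' > /tmp/table_list.txt

while read -r TABLE; do
    mkdir -p "${OUT_BASE}/${TABLE}"
    # export_table.tpt is assumed to read the @TargetTable and @OutputFile job variables.
    tbuild -f export_table.tpt \
           -u "TargetTable='${TABLE}', OutputFile='${OUT_BASE}/${TABLE}/${TABLE}.txt'" \
           -j "export_${TABLE}"
done < /tmp/table_list.txt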

Operation in Unix :

2) Text files are written into a directory named after the table.
3) The generated text files are split inside the directory into a number of parts of roughly equal size.
4) The whole directory needs to be compressed after the splitting is done (see the sketch below).
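
A sketch of the Unix steps for a single table directory, assuming GNU coreutils split is available; the table name, part count, and paths are placeholders.

# Split the exported file into 8 parts of roughly equal size without breaking lines,
# then compress the whole table directory.
TABLE="CUSTOMER"
cd "/export/td/${TABLE}"
split -n l/8 -d "${TABLE}.txt" "${TABLE}_part_"
rm "${TABLE}.txt"
cd /export/td
tar -czf "${TABLE}.tar.gz" "${TABLE}"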

Operation to Move to GCP :

5) Use a GCP command (gcloud storage cp or gsutil) to move the compressed text files to a Cloud Storage bucket (sketched below).
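
A sketch of the upload step, assuming an authenticated gsutil / gcloud storage installation and a placeholder bucket name.

# Parallel upload of the compressed archives to a Cloud Storage bucket.
gsutil -m cp /export/td/*.tar.gz gs://td-migration-bucket/teradata_export/
# Equivalent with the newer gcloud storage CLI:
# gcloud storage cp /export/td/*.tar.gz gs://td-migration-bucket/teradata_export/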

Approach 2 :

Export to on-premise HDFS :

1) Write a Sqoop script per table to land its data in HDFS as files and as an external table in Hive.
2) Each table's output can be compressed in ORC format while importing into HDFS.
3) Using the Google Cloud Storage connector for Hadoop, move the compressed ORC files to GCS.
4) A Hive external table can then be constructed on top of the compressed ORC files (see the sketch below).
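
A sketch of steps 1-4 for one table, assuming the Teradata JDBC driver is on the Sqoop classpath, a Hive database named staging, and the placeholder bucket from above; connection details, paths, and columns are illustrative only. Plain Sqoop has no direct ORC output option, so the ORC target is reached here through its HCatalog integration.

# 1-2) Import a table from Teradata into a compressed-ORC Hive table via Sqoop + HCatalog.
sqoop import \
  --connect "jdbc:teradata://td-host/DATABASE=SALES_DB" \
  --driver com.teradata.jdbc.TeraDriver \
  --username etl_user -P \
  --table CUSTOMER \
  --hcatalog-database staging \
  --hcatalog-table customer \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "STORED AS ORC TBLPROPERTIES ('orc.compress'='ZLIB')" \
  -m 4

# 3) Copy the ORC files to GCS using the Cloud Storage connector for Hadoop
#    (the HDFS warehouse path is a placeholder).
hadoop distcp /warehouse/tablespace/managed/hive/staging.db/customer \
              gs://td-migration-bucket/staging/customer

# 4) External Hive table on top of the ORC files now sitting in the bucket
#    (columns are illustrative only).
hive -e "
CREATE EXTERNAL TABLE IF NOT EXISTS staging.customer_gcs (
  customer_id   BIGINT,
  customer_name STRING,
  created_dt    DATE
)
STORED AS ORC
LOCATION 'gs://td-migration-bucket/staging/customer';
"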

Approach 3 :
TD migration to Cloud :

1) TD data is to be migrated to GCP.
2) Sqoop, which is already available on GCP, pulls the data from TD to HDFS (see the Dataproc sketch below).
3) Sqoop writes the pulled data in compressed ORC format.
4) Hive constructs external tables on top of the compressed ORC files.
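
When Sqoop is not pre-installed on the GCP cluster, one common pattern is to submit it as a Hadoop job to Dataproc; the cluster name, region, jar locations, and connection details below are placeholders, extra Hive/HCatalog jars may be needed for the ORC output, and the import options mirror the Approach 2 sketch. The external-table step is the same as in Approach 2.

# Submit the Sqoop import as a Hadoop job on an existing Dataproc cluster (names are placeholders).
gcloud dataproc jobs submit hadoop \
  --cluster=td-migration-cluster \
  --region=us-central1 \
  --class=org.apache.sqoop.Sqoop \
  --jars=gs://td-migration-bucket/jars/sqoop-1.4.7.jar,gs://td-migration-bucket/jars/terajdbc4.jar \
  -- import \
     --connect "jdbc:teradata://td-host/DATABASE=SALES_DB" \
     --driver com.teradata.jdbc.TeraDriver \
     --username etl_user --password-file /user/etl/td.pass \
     --table CUSTOMER \
     --hcatalog-database staging \
     --hcatalog-table customer \
     --create-hcatalog-table \
     --hcatalog-storage-stanza "STORED AS ORC" \
     -m 4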


Approach 4 :

Import from on-premise TD to GCP :

1) Sqoop on GCP pulls the on-premise Teradata data.
2) The pulled data lands on GCP as an external table in compressed ORC format.
3) gsutil moves the data to a bucket.
4) Construct external tables in Hive on top of those files (see the sketch below).
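
A sketch of steps 3-4, reusing the placeholder bucket and columns from above. gsutil reads local paths, so the ORC files are staged out of HDFS first; hadoop distcp straight to a gs:// path is a common alternative.

# 3) Stage the pulled ORC files locally and move them to the bucket with gsutil
#    (the HDFS path is a placeholder).
hadoop fs -get /data/staging/customer /tmp/customer_orc
gsutil -m cp /tmp/customer_orc/* gs://td-migration-bucket/staging/customer/

# 4) External Hive table on the files in the bucket (columns are illustrative only).
hive -e "
CREATE EXTERNAL TABLE IF NOT EXISTS staging.customer_ext (
  customer_id   BIGINT,
  customer_name STRING,
  created_dt    DATE
)
STORED AS ORC
LOCATION 'gs://td-migration-bucket/staging/customer';
"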