Design, implement, and maintain scalable and efficient data pipelines across platforms such as Netezza, Hadoop, Snowflake, DBT, and DataStage to process and transform large volumes of data. Consolidate and integrate data from multiple source systems, including enterprise platforms such as SAP, Oracle, and Kafka, on-premises and cloud database systems, and messaging queues, ensuring data consistency and availability. Design and implement hybrid data solutions using AWS, Azure, and GCP, leveraging their services for scalable storage, processing, and analytics. Create and optimize Extract, Transform, Load (ETL) workflows using tools such as DBT and DataStage, ensuring efficiency and reliability in data processing pipelines. Implement rigorous data quality checks and validation mechanisms using Python, SQL, and DBT to ensure the accuracy and reliability of integrated datasets. Work under supervision. Travel and/or relocation to unanticipated client sites throughout the USA is required.