
Senior Data Engineer (Informatica and MS Fabric)

$ 6,500 - $ 7,500 / month



Job Description

Job Requirement:

  • Data Engineers with experience in Informatica and Microsoft Fabric typically have responsibilities centered around designing, implementing, and managing data pipelines and architectures that support data integration, transformation, and analytics within a cloud-based or hybrid environment.
  • Design and implement ETL (Extract, Transform, Load) workflows using Informatica PowerCenter, Informatica Cloud, or Informatica Intelligent Cloud Services (IICS).
  • Build and manage ETL and ELT data pipelines using Azure Data Factory (ADF) to orchestrate data movement and transformation across on-premises and cloud environments.
  • Create and optimize Spark-based data pipelines in Databricks, using notebooks or jobs to ingest, clean, and transform large datasets for analytics and reporting.
  • Work with Microsoft Fabric to integrate and manage dataflows, datasets, and transformation processes across its unified data platform, helping build scalable data pipelines.
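The responsibilities above all revolve around the extract-transform-load pattern. As a minimal illustration of that pattern (plain Python with hypothetical field names, not tied to any specific Informatica, ADF, Databricks, or Fabric API):

```python
# Minimal ETL sketch. The source data, field names ("name", "amount"),
# and in-memory "warehouse" are hypothetical stand-ins for real systems.

def extract():
    # In practice: read from a source system (database, files, API).
    return [
        {"name": " Alice ", "amount": "100"},
        {"name": "Bob", "amount": None},      # incomplete record
        {"name": " Carol", "amount": "250"},
    ]

def transform(records):
    # Clean: drop incomplete rows, trim whitespace, cast amounts to int.
    return [
        {"name": r["name"].strip(), "amount": int(r["amount"])}
        for r in records
        if r["amount"] is not None
    ]

def load(records, target):
    # In practice: write to a warehouse table; here, append to a list.
    target.extend(records)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)
# [{'name': 'Alice', 'amount': 100}, {'name': 'Carol', 'amount': 250}]
```

In tools like Informatica, ADF, or Fabric Dataflows, each of these three steps corresponds to a configured source, transformation, and sink rather than hand-written code, but the flow is the same.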
