
Data Engineer (Contract)

Salary undisclosed

Key Responsibilities:

  • Develop, maintain, and optimize ETL processes using AWS Glue, PySpark, and SQL to support data-driven initiatives and data warehousing needs (see the sketch after this list).
  • Work with various database engines and data stores (Oracle, PostgreSQL, Amazon S3, and Athena) and interact with external APIs to extract and load data.
  • Leverage AWS services including AWS Glue, CloudFormation, and Lambda to manage, monitor, and scale data workflows effectively.
  • Write modular, testable, and well-documented code following Python, AWS Glue, and Lambda best practices.
  • Manage CI/CD pipelines and ensure strong testing, validation, and code quality assurance.
  • Collaborate with cross-functional teams to define data requirements, troubleshoot issues, and support business objectives.
  • Implement data governance, metadata management, and data quality practices.
  • Participate in Agile-Scrum teams, contributing to sprints and iterative development processes.
  • Demonstrate excellent teamwork and communication skills, working effectively in a collaborative environment.
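
The first responsibility above centers on Glue-based ETL in PySpark and SQL. Purely as an illustration, a minimal sketch of such a job is shown below; the catalog database, table name, and S3 path are hypothetical placeholders, not details from this posting.

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue job bootstrap
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    sc = SparkContext()
    glue_context = GlueContext(sc)
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a source table from the Glue Data Catalog (hypothetical names)
    orders = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders"
    )

    # Transform with Spark SQL: cast types and drop rows with missing amounts
    orders.toDF().createOrReplaceTempView("orders")
    cleaned = spark.sql("""
        SELECT order_id, customer_id, CAST(amount AS DECIMAL(10, 2)) AS amount
        FROM orders
        WHERE amount IS NOT NULL
    """)

    # Write curated output to S3 as Parquet (hypothetical bucket/path)
    cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

    job.commit()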

Qualifications:

  • Bachelor’s degree in Computer Science, Data Analytics, or a related field.
  • Minimum of 3 years of hands-on experience with AWS services, including extensive experience with AWS Glue, CloudFormation, and Lambda.
  • Minimum of 3 years of experience in Python/PySpark and SQL for data processing, transformation, and querying.
  • Proven experience in pulling and integrating data via APIs, with GraphQL experience preferred (see the sketch after this list).
  • Strong knowledge of Git, GitLab, and Bitbucket for code versioning and maintainability.
  • Excellent analytical and problem-solving skills with a keen attention to detail.
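
The API qualification above mentions GraphQL. As an illustrative sketch only, this shows one common way to pull data from a GraphQL endpoint with Python's requests library; the endpoint, query, fields, and token are hypothetical.

    import requests

    # Hypothetical endpoint and query; real values depend on the target API.
    GRAPHQL_URL = "https://api.example.com/graphql"
    QUERY = """
    query Orders($since: String!) {
      orders(createdAfter: $since) {
        id
        customerId
        totalAmount
      }
    }
    """

    response = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"since": "2024-01-01"}},
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=30,
    )
    response.raise_for_status()
    orders = response.json()["data"]["orders"]
    print(f"Fetched {len(orders)} orders")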