
Azure Data Engineer (Databricks)

$ 7,500 - $ 10,000 / month



The impact you will have:

  • Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionizing customer use cases
  • Work with engagement managers to scope a variety of professional services work with input from the customer
  • Guide strategic customers as they implement transformational big data projects and third-party migrations, including the end-to-end design, build, and deployment of industry-leading big data and AI applications
  • Consult on architecture and design; bootstrap or implement customer projects, leading to the customer's successful understanding, evaluation, and adoption of Databricks
  • Provide an escalated level of support for customer operational issues
  • Ensure that the technical components of the engagement meet the customer's needs by working with the project manager, architect, and customer teams
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
  • Mentor and provide guidance to junior data engineers and team members.

What would help make your case:

  • 5+ years’ experience in data engineering, data architecture, data platforms & analytics
  • 3+ years' experience with Azure Databricks, Informatica, PySpark, Python, and SQL
  • Consulting / customer-facing experience working with external clients across a variety of industries
  • Comfortable writing code in both Python and SQL
  • Proficiency in SQL and experience with data warehousing solutions
  • Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one
  • Strong understanding of data modelling, ETL processes, and data architecture principles
  • Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals
  • Familiarity with CI/CD for production deployments (GitHub, Azure DevOps, Azure Pipelines)
  • Working knowledge of MLOps methodologies
  • Design and deployment of performant end-to-end data architectures
  • Experience with technical project delivery, including managing scope and timeline
  • Experience working with clients and managing conflicts
  • Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions on customer projects
  • Databricks certifications are a plus
  • Strong communication and collaboration skills
  • Ability to travel up to 30% when needed
