Description
Responsibilities:
- Design, build, and maintain scalable data pipelines to enable real-time and batch data processing.
- Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
- Optimize and enhance data storage solutions, including data lakes and data warehouses.
- Ensure data quality and integrity by implementing appropriate testing and monitoring strategies.
- Conduct data profiling and analysis to improve data structure and accessibility.
- Stay abreast of industry trends and evaluate new data technologies and architectures.
- Participate in architectural reviews and provide input on system design.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related role.
- Proficiency in SQL and databases (e.g., MySQL, PostgreSQL, or NoSQL solutions like MongoDB).
- Experience with data processing frameworks such as Apache Spark or Hadoop.
- Solid understanding of ETL processes and tools (e.g., Airflow, Talend).
- Familiarity with cloud platforms (AWS, GCP, or Azure) for data management.
- Strong programming skills in Python, Java, or Scala.
- Exceptional analytical and problem-solving abilities.
- Excellent communication skills and a collaborative mindset.
(EA Reg No: 20C0312)
Please email a copy of your detailed resume to [email protected] for immediate processing.
Only shortlisted candidates will be notified.
Benefits
- Permanent role