
Senior Data Development Engineer
$9,000 - $13,000 / month
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for structured and unstructured data.
- Optimize and manage ETL/ELT processes for efficient data integration and transformation.
- Develop and maintain data warehouses, lakes, and real-time streaming solutions.
- Ensure data quality, consistency, and governance across platforms.
- Work with SQL and NoSQL databases to support business analytics.
- Implement cloud-based data architectures (AWS, Azure, GCP) and automation.
- Optimize big data processing frameworks (Spark, Hadoop, Flink) for high performance.
- Collaborate with cross-functional teams to support data-driven decision-making.
- Ensure security, compliance, and performance tuning of data systems.
- Troubleshoot and resolve data-related issues in production environments.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 5+ years of experience in data development, engineering, or analytics.
- Proficiency in SQL, Python, Java, or Scala for data processing.
- Experience with ETL tools (Apache NiFi, Airflow, Talend, dbt, etc.).
- Hands-on experience with big data frameworks (Spark, Hadoop, Flink).
- Strong knowledge of data modeling and warehousing (Snowflake, Redshift, BigQuery).
- Experience with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- Familiarity with real-time data processing (Kafka, Pulsar, or similar).
- Ability to troubleshoot, optimize, and scale data systems efficiently.
- Strong analytical and problem-solving skills.