
Data Engineer
$5,000 - $7,500 per month
Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines for collecting, transforming, and loading data (ETL processes); a brief illustrative sketch follows this list.
- Integrate data from various sources, including APIs, databases, and third-party services, into a centralized data warehouse.
- Ensure the data is stored efficiently and securely in databases or cloud-based storage solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Ensure the data pipelines are optimized for performance and scalability.
- Monitor data quality, resolve issues, and implement data validation processes to ensure consistency and accuracy.
- Work with data scientists, analysts, and business stakeholders to ensure data is accessible and valuable for analysis.
- Automate repetitive tasks and improve data workflow efficiency.
- Implement and ensure adherence to data security practices, compliance, and governance protocols.
- Document processes, pipelines, and data workflows for transparency and knowledge sharing.
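For orientation only, the sketch below shows a minimal version of the extract-transform-load flow referenced in the first responsibility, assuming a hypothetical REST API source, a pandas-based transform step, and a PostgreSQL warehouse table; the URL, connection string, table, and column names are placeholders, not details from this posting.

```python
import requests
import pandas as pd
from sqlalchemy import create_engine

# Placeholder source API and warehouse connection (hypothetical values).
API_URL = "https://api.example.com/v1/orders"
WAREHOUSE_URI = "postgresql://user:password@localhost:5432/warehouse"

def extract() -> pd.DataFrame:
    """Collect raw records from a third-party API."""
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Validate and reshape the data before loading."""
    cleaned = raw.dropna(subset=["order_id"]).drop_duplicates("order_id")
    cleaned["order_date"] = pd.to_datetime(cleaned["order_date"])
    return cleaned

def load(df: pd.DataFrame) -> None:
    """Append the transformed data to a warehouse table."""
    engine = create_engine(WAREHOUSE_URI)
    df.to_sql("orders", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract()))
```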
Requirements:
- Degree in Computer Science, Information Technology, Data Science, or a related field, or equivalent practical experience.
- 2-5 years of experience in a data engineering role or similar.
- Strong SQL skills for querying and managing relational databases.
- Experience with programming languages such as Python, Java, or Scala to build and maintain data pipelines.
- Experience with ETL tools like Apache Airflow, Talend, or custom-built solutions (a minimal Airflow example follows this list).
- Familiarity with big data tools and frameworks such as Hadoop, Spark, Kafka, or similar.
- Knowledge of cloud-based data storage and services such as AWS, Google Cloud, or Azure.
- Experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Proficiency with Git or similar version control systems.
- Understanding of data modeling principles for organizing and structuring data in databases.
- Strong troubleshooting and debugging skills to identify and resolve issues with data pipelines and architecture.
- Ability to communicate effectively with technical and non-technical teams.
- Understanding of how to integrate data pipelines with machine learning models preferred.
- Familiarity with data privacy, security policies, and compliance requirements (e.g., GDPR, HIPAA) preferred.
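As a further illustration of the orchestration tooling mentioned above, here is a minimal sketch of an Apache Airflow DAG that wires extract, transform, and load tasks in sequence; it assumes Airflow 2.4 or later, and the DAG id, schedule, and task bodies are placeholders rather than anything specified in this posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull raw records from the source system (placeholder).
    pass

def transform():
    # Clean and validate the extracted records (placeholder).
    pass

def load():
    # Write the validated records to the warehouse (placeholder).
    pass

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the tasks in ETL order.
    extract_task >> transform_task >> load_task
```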
Please send your detailed resume in MS Word format, including the following details:
- Education Level
- Working experience
- Background of each previous employment
- Reason for leaving each employment
- Last drawn salary
- Expected salary
- Date of availability