
Data Engineer | Python | SQL
$7,000 - $7,000 / month
- Build, test, and maintain data architectures, including databases, data warehouses, and large-scale data processing systems.
- Create data pipelines and systems for data modelling, mining, and production that meet the needs of data analytics teams, stakeholders, and the business.
- Ensure that the data architecture supports both routine and ad-hoc data analytics needs across various teams.
- Use a range of programming languages and data processing tools to clean and prepare raw data, making it available for descriptive and predictive modelling.
- Recommend and implement improvements to enhance data quality, reliability, flexibility, and efficiency.
- Organize and store data assets and catalogues efficiently to ensure easy access and retrieval of information.
- Perform SQL and PL/SQL tuning and optimize both new and existing applications for better performance.
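The clean-and-load work in the duties above can be sketched in miniature. This is a hypothetical example, not part of the role's actual stack: raw records with inconsistent casing and missing values are cleaned, then loaded into an in-memory SQLite table standing in for a warehouse.

```python
import sqlite3

# Hypothetical raw feed: mixed casing, stray whitespace, a missing value.
RAW = [
    {"name": " Alice ", "spend": "120.50"},
    {"name": "BOB", "spend": None},   # incomplete record: dropped in cleaning
    {"name": "carol", "spend": "80"},
]

def transform(rows):
    """Normalise names and parse spend, dropping incomplete records."""
    cleaned = []
    for r in rows:
        if r["spend"] is None:
            continue
        cleaned.append((r["name"].strip().title(), float(r["spend"])))
    return cleaned

def load(rows, conn):
    """Create the target table, bulk-insert cleaned rows, return row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS spend (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO spend VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM spend").fetchone()[0]

conn = sqlite3.connect(":memory:")
count = load(transform(RAW), conn)
print(count)  # 2 rows survive cleaning
```

In a production pipeline the same transform/load split would typically live inside an orchestrator such as Apache Airflow, with each step as a task.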
- Minimum of 3 years' working experience in data architecture, data warehousing, data processing, data modelling, and ETL/ELT, plus familiarity with real-time streaming solutions.
- Experience with Kubernetes-based DevOps practices, including container orchestration, CI/CD pipelines, and microservices deployment.
- Working experience in database development (Oracle SQL/PL/SQL).
- Working experience in an AWS cloud environment; familiar with services such as EC2, S3, EMR, Redshift, Athena, and Kinesis.
- Programming knowledge of Python, R, and SQL for data cleaning, processing, and aggregation.
- Proficiency in one or more of the following: Java, Hadoop, HDFS, Apache Airflow, Apache Spark, Scala, Hive, Pig.
- Basic knowledge of Oracle database architecture.
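The SQL tuning expectation above can be illustrated with a toy example. Oracle's tools differ (EXPLAIN PLAN, AWR reports), but the workflow of checking a query plan before and after adding an index is the same; SQLite is used here only because it ships with Python.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

def plan(sql):
    """Return the plan detail string for a statement via EXPLAIN QUERY PLAN."""
    # Rows are (id, parent, notused, detail); detail describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # index search
print(before)
print(after)
```

The "before" plan reports a scan of the table; after the index is created, the plan switches to an index search, which is the kind of change a tuning pass looks for.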
- We may contact you about potential opportunities.
- We will delete personal data that is not required at this application stage.
- To withdraw consent, email [email protected].
- All applications will be processed with strict confidence. Only shortlisted candidates will be contacted.