
Data Engineer
$4,000 - $7,000 / month
Responsibilities:
- Design, develop, and optimise large-scale data pipelines to facilitate efficient data collection, processing, and storage.
- Maintain and monitor existing data pipelines, ensuring high data availability and consistency.
- Develop and refine data processing algorithms capable of handling and analysing vast amounts of data.
- Design and optimise vector databases (e.g. Milvus, Postgres) and RAG-based LLM applications.
- Collaborate with data scientists, analysts, and other stakeholders to accurately understand data requirements and deliver effective solutions.
- Communicate complex data concepts and insights clearly to non-technical stakeholders.
- Stay informed on the latest trends and advancements in big data technologies.
- Continuously evaluate and enhance the existing data infrastructure and processes to improve performance and scalability.
Requirements:
- Bachelor's degree or higher in Computer Science, Data Science, or a related field.
- Proven experience in designing and maintaining data pipelines and storage solutions.
- Strong technical proficiency with vector databases, RAG, and Docker.