Software Developer (Computer Science/Data Science/AI)
Established in 2010, the Energy Research Institute @ NTU (ERI@N) is a pan-university research institute that focuses on systems-level research for tropical megacities. It performs translational research that covers the energy value chain from generation to innovative end-use solutions, motivated by industrialisation and deployment. ERI@N runs multiple Interdisciplinary Research Programmes for translational Research, Development & Deployment, each focusing on a specific area of the energy value chain, as well as a number of Living Labs and Testbeds that facilitate large-scale technology deployment, enabling validation and demonstration of real-world applications.
For more details, please visit the ERI@N website.
A software developer is crucial in a research institute because they manage and analyze large data sets, automate repetitive tasks, and enhance collaboration among researchers. They develop custom tools and platforms that streamline workflows, improve accuracy, and facilitate the sharing of data and findings. Additionally, developers drive innovation by prototyping new ideas and integrating advanced technologies like AI and big data. They also support the visualization and dissemination of research outputs, making complex data more accessible and impactful. Overall, their expertise significantly boosts the efficiency, accuracy, and innovation of research activities.
Key Responsibilities:
Design, develop, and implement AI and ML models to support research initiatives.
Collaborate with researchers to understand project requirements and translate them into technical solutions.
Develop and maintain data pipelines for efficient data processing and analysis.
Perform data analysis and visualization to derive insights and support decision-making.
Integrate AI and ML models into existing and new applications, ensuring seamless functionality.
Conduct experiments and tests to validate AI and ML models and refine them based on results.
Stay updated with the latest advancements in AI, ML, and data science, and apply them to research projects.
Document development processes, models, and algorithms for future reference and reproducibility.
Ensure compliance with data privacy and security regulations in all development activities.
Act as an R&D team lead to define and achieve project goals
Work closely with researchers, developers, partners and/or external vendors to ensure timely project delivery
Undertake any other ad-hoc duties or responsibilities assigned by the project supervisor, e.g., project presentations, progress report preparation, procurement, and research collaborations
Job Requirements:
Bachelor’s or Master’s degree in Computer Science, Data Science, AI, or a related field
Min. 8 years of experience in data engineering, software development, or similar roles
Prior exposure to ML/data products is good to have
Work on tasks at the intersection of software development, data engineering, and data science/artificial intelligence/machine learning
Solid knowledge of and strong expertise in:
Experience with open-source SQL/NoSQL databases, e.g., Postgres, TimescaleDB, Cassandra
Experienced in secure RESTful API development
Experienced in Unix/Linux shell scripting
Experienced in containerized Microservice development using Docker, Kubernetes
Familiar with modern data ETL stack components (Apache Airflow, streaming/events via Kinesis/Kafka, etc.)
Familiar with Python data science tools/packages (Pandas, NumPy)
Experience of MLOps is a plus
Code integration (merge requests and resolving conflicting code)
Experience fixing software build issues
Code review techniques and code debugging skills
Writing clean code and taking a structured approach to building software systems
Experience working with Agile methodology
Interested in learning AI/ML workflows
Willing to learn new technologies based on the project needs
Proficiency:
Proficient in JavaScript, Python, Go and similar programming languages for backend development
Proficient in SQL
Strong skills in current state-of-the-art machine learning frameworks and libraries such as scikit-learn, H2O, Keras, Pandas, NumPy, TensorFlow, and Spark
Experience building and maintaining big data systems using open-source technologies such as Hadoop and Cassandra
Proficient in React, Angular or similar frontend technologies
Proficient in deploying and maintaining CI/CD pipelines, e.g., GitLab CI/CD
We regret to inform you that only shortlisted candidates will be notified.
Hiring Institution: NTU