about the company
Our client is a multinational IT company.
about the job
• Build and deploy AI/ML solutions (including LLMs): Translate models into scalable, production-ready software and services.
• Develop and maintain infrastructure for LLMs: Design pipelines for data ingestion, preprocessing, model training (including fine-tuning), deployment, and monitoring.
• Optimize AI models for performance and efficiency: Address speed, scalability, and resource constraints, especially for LLM inference.
• Integrate AI models into systems: Build and manage APIs to deliver AI and LLM capabilities across products.
• Implement MLOps best practices: Manage CI/CD pipelines, testing, deployment, and monitoring, tailored for both ML and LLM workflows.
• Monitor production systems: Identify and resolve performance issues, with a focus on LLM stability and behavior.
• Collaborate across teams: Provide engineering support throughout the model development cycle, especially on LLM deployment feasibility.
• Stay ahead of the curve: Keep up with the latest in AI/ML and LLM research and tools.
• Document systems and architectures: Clearly capture designs, workflows, and deployment strategies, especially for LLM integrations.
• Maintain code quality: Follow software engineering best practices with attention to maintainability and testing of LLM-based systems.
about the manager/team
This role reports to the Application Director, AI/ML.
skills and experience required
• 3+ years of experience as a Software Engineer with a demonstrable focus on AI/ML projects.
• Strong proficiency in Python and experience with relevant AI/ML libraries and frameworks such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Experience in deploying and scaling machine learning models, including Large Language Models, in a production environment.
• Solid understanding of cloud computing platforms (e.g., AWS, Azure, GCP) and their AI/ML services, including services relevant to LLM deployment (e.g., managed inference endpoints).
• Hands-on experience with containerization technologies (e.g., Docker, Kubernetes).
• Familiarity with CI/CD pipelines and MLOps principles, with specific understanding of how they apply to LLMs.
• Experience with API development and integration, including building APIs for interacting with LLMs.
• Strong understanding of software development principles, data structures, and algorithms.
• Excellent problem-solving, analytical, and debugging skills, including the ability to troubleshoot issues specific to LLM behavior.
• Strong communication and collaboration skills, with the ability to work effectively in a team environment, including discussing the nuances of LLM capabilities and limitations.
• Experience working with Large Language Models (LLMs) and Transformer architectures (e.g., GPT, BERT, Llama, DeepSeek), including practical experience in prompt engineering, fine-tuning, evaluation, and deployment of LLMs.
To apply online, please use the 'apply' function.
(EA: 94C3609 / R1324990)
skills
Strong proficiency in Python and experience with relevant AI/ML libraries and frameworks such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
qualifications
Diploma or bachelor's degree in IT, data science & machine learning, engineering, accounting, or any other relevant field.
education
Associate Degree/Diploma