Job Description
Senior Machine Learning Engineer - GenAI
San Mateo, CA / Remote
The Role:
We are seeking a highly skilled and experienced Senior Machine Learning Engineer with a strong focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and Fine-Tuning, plus a solid general software development background. This role is pivotal in supporting the development and enhancement of our AI-powered solutions, particularly in cloud environments and with technologies such as Kubernetes, AWS SageMaker, and LangChain/LlamaIndex.
Our ideal candidate will have a proven track record of building and leveraging Gen-AI solutions to solve complex enterprise use-cases, while demonstrating both expertise and a deep passion for innovation in the field.
What you will do:
• RAG and Agent Optimization: Design, develop, and optimize standard RAG solutions (including variants such as Corrective-RAG or GraphRAG) and agentic solutions (e.g., LangGraph) to solve enterprise use cases and enhance our existing products with Gen-AI.
• Model Deployment and Inference: Leverage ML-specific Kubernetes workloads (e.g., Kubeflow) or cloud-managed solutions (e.g., AWS SageMaker) to manage LLM deployment and serving workloads. Continually optimize the LLM infrastructure and inference engine to improve inference quality and speed.
• Model Fine-Tuning: Lead data preparation and fine-tuning of LLMs via one or more of the following techniques: Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Direct Preference Optimization (DPO).
• Prompt Engineering: Develop and fine-tune prompts to optimize the performance and accuracy of LLM integrations in various applications.
• Monitoring and Logging: Set up robust monitoring and logging for LLM inference and training infrastructure to ensure performance and reliability.
• LLM and RAG Evaluation: Design and wire up appropriate evaluation frameworks to compare and evaluate different LLMs, Embedding Models and RAG configurations.
• Security: Implement best practices for security in LLM and Embedding Model deployments and infrastructure management.
• Productize and Iterate: Work closely with Developer-Experience Engineers to test, iterate on, and productize new LLM capabilities to power different products and features (e.g., chat, search, recommendation, agentic workflows, and so on).
• Research and Innovation: Stay abreast of the latest advancements in the LLM and RAG space and continually explore opportunities to enhance our products and services.
What you will bring:
• Expert programming skills in Python, plus good working knowledge of at least one additional language, preferably Go, Rust, or JavaScript. Candidates should be able to write clean, efficient, and well-documented code.
• Substantial experience in designing and integrating RESTful APIs with various front-end and back-end systems, ensuring seamless communication between components and services.
• Proficient in using version control systems, implementing comprehensive testing strategies, debugging complex issues, and conducting effective code reviews to ensure quality and maintainability.
• Proficiency in ML frameworks and libraries commonly used in NLP and Generative AI, such as PyTorch, Transformers and Sentence Transformers.
• Deep conceptual and working knowledge of at least one RAG framework (e.g., LangChain, LlamaIndex, Haystack).
• Substantial experience leveraging advanced RAG concepts such as Re-Ranking, RAPTOR, Corrective-RAG, and GraphRAG to productionize a standard RAG implementation.
• Experience designing and wiring up agentic workflows (via LangGraph or a similar framework) to intelligently handle user queries or general tasks end-to-end.
• Expertise in prompt engineering and optimizing prompts for LLMs.
• Excellent problem-solving skills and attention to detail.
• Exceptional communication and teamwork skills.
Good-to-have:
• Proficiency with operating in cloud and container-orchestration platforms (AWS and Kubernetes).
• Experience in working with automated evaluation frameworks (e.g., DeepEval, RAGAS) to assess LLMs and advanced RAG techniques.
• Expertise in aligning models with fine-tuning techniques such as RLHF and DPO.
• Experience in productionizing LLM-based solutions and monitoring KPIs to assess performance and quality.
Please refer to our Candidate Privacy Notice for more information about how we process your personal information, and your data protection rights.
At SIE, we consider several factors when setting each role's base pay range, including the competitive benchmarking data for the market and geographic location.
Please note that the base pay range may vary in line with our hybrid working policy and individual base pay will be determined based on job-related factors which may include knowledge, skills, experience, and location.
In addition, this role is eligible for SIE's top-tier benefits package that includes medical, dental, vision, matching 401(k), paid time off, wellness program and coveted employee discounts for Sony products. This role also may be eligible for a bonus package.
This is a flexible role that can be remote, with varying pay ranges based on geographic location. For example, if you are based out of Seattle, the estimated base pay range for this role is listed below.
$187,700 - $281,500 USD
Equal Opportunity Statement:
Sony is an Equal Opportunity Employer. All persons will receive consideration for employment without regard to gender (including gender identity, gender expression and gender reassignment), race (including colour, nationality, ethnic or national origin), religion or belief, marital or civil partnership status, disability, age, sexual orientation, pregnancy, maternity or parental status, trade union membership or membership in any other legally protected category.
We strive to create an inclusive environment, empower employees and embrace diversity. We encourage everyone to apply.
PlayStation is a Fair Chance employer and qualified applicants with arrest and conviction records will be considered for employment.
Jobcode: Reference SBJ-gp9jyo-18-188-219-131-42 in your application.