Neon
Principal Research Scientist - AI Scaling & Optimization
$270,000/year - $350,000/year
San Francisco, California
Hybrid
Description

Principal Research Scientist - AI Scaling & Optimization (P-1227)
Software Engineering, Data Science
San Francisco, CA, USA | Mountain View, CA, USA
USD 270k-350k / year + Equity
Posted on May 1, 2026

About Databricks AI

At Databricks, we are obsessed with enabling data teams to solve the world's toughest problems, from security threat detection to cancer drug development, by building and running the world's best data and AI platform. The Databricks AI Research organization enables companies to develop AI models and systems using their own data, from pre-training LLMs from scratch to state-of-the-art retrieval-augmented generation, by producing novel science and putting it into production. We believe a company's AI models are a core part of their IP, and that high-quality AI models should be available to all.

About the Scaling Research Team

The Databricks AI Scaling team focuses on pushing the boundaries of large language model (LLM) training and inference efficiency beyond what is required to support existing models. The team explores novel avenues for scaling and efficiency improvements across algorithms, systems, and infrastructure, requiring researchers who can both drive independent research agendas and dive deep into low-level implementation details with engineering partners.

Role Summary

As a Principal Research Scientist - Scaling, you will lead a team of world-class researchers and engineers to advance the state of the art in large-scale machine learning, focusing on post-training, RL and inference efficiency, optimization, and scaling. You will define and execute a research roadmap that advances the Databricks AI platform and delivers tangible improvements to how customers train, serve, and adapt LLMs at scale, working closely with product, data, and engineering leaders to bring cutting-edge methods into production.
The Impact You Will Have

- Lead and grow a multidisciplinary research team focused on foundational and applied AI problems, with a particular emphasis on LLM scaling, efficiency, and systems performance.
- Define the scaling research roadmap in alignment with Databricks' strategic objectives, prioritizing advances in foundation model efficiency and large-scale training and inference.
- Drive algorithmic innovations for large-scale neural network training and inference, including novel optimizers, low-precision techniques, and model adaptation methods, and guide your team in rigorous empirical validation against state-of-the-art approaches.
- Optimize end-to-end ML systems for distributed training and RL, memory efficiency, and compute efficiency through close collaboration with core systems and platform teams, ensuring that research ideas translate into performant, reliable infrastructure.
- Partner with product and engineering to translate research breakthroughs, especially around scaling and efficiency, into customer-impacting capabilities in the Databricks AI platform.
- Foster a culture of scientific excellence and openness, including high-quality research practices, reproducible experimentation, and effective internal knowledge sharing across Databricks AI.
- Represent Databricks AI research externally through top-tier publications, conference talks, and collaborations with academia and the open-source community, with a focus on optimization and efficiency for large-scale models.
- Mentor and develop talent, providing both technical guidance (research agendas, experimentation, implementation) and career development support for research scientists and engineers.

What You Will Do

- Define and lead independent research programs on foundation model efficiency, covering topics such as optimizer design, low-precision training/inference, scalable model architectures, and efficient adaptation methods.
- Oversee the design and execution of large-scale experiments, including benchmarking against state-of-the-art methods and evaluating trade-offs in quality, latency, throughput, and cost.
- Work hands-on with your team on high-quality, efficient code in Python and PyTorch for research implementation, rapid prototyping, and integration with Databricks' production systems.
- Collaborate with distributed systems and infrastructure teams to push the limits of distributed training, parallelism strategies, memory management, and hardware utilization for LLMs and other large models.
- Establish metrics, evaluation protocols, and best practices for scaling-focused research (e.g., training efficiency, inference cost, energy usage) and drive their adoption across Databricks AI.
- Champion responsible and robust deployment of scaling innovations, ensuring that model behavior, reliability, and safety remain first-class considerations.

What We Look For

- Proven ability to lead a research team to develop novel techniques for foundation model efficiency and related topics, with a strong track record of industry impact.
- Deep expertise in at least one of: generative AI, LLMs, distributed ML systems, model optimization, or responsible AI, with a strong emphasis on scaling and efficiency for large-scale neural networks.
- Hands-on leadership: strong programming skills and a demonstrated ability to write high-quality, efficient code in Python and PyTorch for research implementation and experimentation.
- Demonstrated ability to translate research innovation into scalable product capabilities in partnership with product and engineering teams.
- Excellent communication, leadership, and stakeholder management skills, with experience influencing cross-functional roadmaps and aligning research with business impact.

Nice to Have

- Prior work at the intersection of systems and ML, such as distributed training frameworks, compiler and kernel optimization for deep learning workloads, or memory/compute-efficient model design.
- Strong industry and academic network in large-scale ML, with ongoing collaborations or service (e.g., PC/area chair) at top conferences in ML and systems.
- A strong record of research impact, such as first-author publications at top ML/systems conferences (e.g., ICLR, ICML, NeurIPS, MLSys), influential open-source contributions, or widely used deployed systems, especially in optimization or efficiency.

Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.

Local Pay Range: $270,000 - $350,000 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits

At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.
Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
