Python / PyTorch Developer, Frontend Inference Compiler
Cerebras Systems
Posted 30+ days ago
Nationality: Any
Gender: Not Mentioned
Vacancies: 1
Job Description
Roles & Responsibilities
Would you like to help build the fastest generative-model inference in the world? Join the Cerebras Inference Team to develop the unique software and hardware combination that delivers the best inference performance on the market while running the largest models available.
The Cerebras wafer-scale inference platform runs generative models at unprecedented speed thanks to a unique hardware architecture that provides extremely fast access to local memory, an ultra-fast interconnect, and a huge amount of available compute.
You will be part of the team that optimizes the latest open and closed generative AI models for the Cerebras inference platform. Your responsibilities will include working on the model representation, optimization, and compilation stack to produce the best results on current and future Cerebras platforms.
Responsibilities:
- Analyze new generative AI models and understand their impact on the compilation stack.
- Develop and maintain a model definition framework of building blocks that represent large language models in PyTorch and Cerebras dialects, ready to be deployed on Cerebras hardware.
- Develop and maintain the frontend compiler infrastructure that ingests PyTorch models and produces an intermediate representation (IR).
- Extend and optimize PyTorch FX / TorchScript / TorchDynamo-based tooling for graph capture, transformation, and analysis (see the sketch after this list).
- Collaborate with other teams throughout feature implementation.
- Research new model optimization methods to improve Cerebras inference.
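To make the graph-capture item above concrete, here is a minimal sketch of how torch.fx traces a PyTorch module into a graph IR whose nodes can be inspected and transformed. The TinyBlock module is a made-up stand-in for a real transformer building block, not code from Cerebras.

import torch
import torch.fx

class TinyBlock(torch.nn.Module):
    # Illustrative module: one linear projection plus a residual connection.
    def __init__(self, dim: int = 16):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(x)) + x

# Symbolic tracing captures the forward pass as a torch.fx.Graph.
gm = torch.fx.symbolic_trace(TinyBlock())

# Each node is one IR operation: placeholder, call_module,
# call_function, or output. A frontend compiler walks nodes like
# these to transform the graph and lower it to a backend dialect.
for node in gm.graph.nodes:
    print(node.op, node.name, node.target)

# The captured GraphModule is still runnable PyTorch.
out = gm(torch.randn(2, 16))

The same kind of captured graph is what transformation passes rewrite before lowering; TorchDynamo captures comparable FX graphs from unmodified Python code at runtime.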
Desired Candidate Profile
Qualifications:
- Degree in Engineering, Computer Science, or equivalent experience and evidence of exceptional ability
- Strong Python programming skills and in-depth experience with PyTorch internals (e.g., TorchScript, FX, or Dynamo).
- Solid understanding of computational graphs, tensor operations, and model tracing.
- Experience building or extending compilers, interpreters, or ML graph optimization frameworks.
- Experience working with PyTorch and the HuggingFace Transformers library.
- Knowledge of and experience with large language models (Transformer architecture variations, the generation cycle, etc.); a generation-cycle sketch follows this list.
- Strong C++ programming skills.
- Knowledge of MLIR-based compilation stacks.
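The "generation cycle" mentioned above, written out as a minimal greedy-decoding sketch using the HuggingFace Transformers API. The checkpoint name sshleifer/tiny-gpt2 is an arbitrary small public model chosen only to keep the example light, and the eight-token budget is likewise illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "sshleifer/tiny-gpt2"  # tiny public checkpoint, for illustration only
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

ids = tok("Wafer-scale inference is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(8):                       # fixed token budget for the sketch
        logits = model(ids).logits           # shape: [batch, seq_len, vocab]
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)  # append and re-feed

print(tok.decode(ids[0]))

Production inference replaces this naive loop with KV caching, batching, and sampling strategies, which is exactly where compiler-level optimization matters.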
Preferred Qualifications
- Prior experience contributing to PyTorch, TensorFlow XLA, TVM, ONNX RT, or similar compiler stacks.
- Knowledge of hardware accelerators, quantization, or runtime scheduling.
- Experience with multi-target inference compilation (e.g., CPU, GPU, custom ASICs).
- Understanding of numerical precision trade-offs and operator lowering.
- Contributions to open-source ML compiler projects.
Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference. Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
https://job-boards.greenhouse.io/cerebrassystems/jobs/7513711003