
OpenXLA project

An open ecosystem of portable and extensible ML infrastructure projects that simplify ML development by defragmenting the tools between frontend frameworks and hardware backends. Built by industry leaders in ML software and hardware.

StableHLO

An operation set specification that provides a shared abstraction for ML programs between ML frameworks and compilers.

  • Portable: All major ML frameworks (JAX, PyTorch, TensorFlow) can produce models in StableHLO.

  • Stable: StableHLO programs can be serialized into MLIR bytecode, which provides long-term stability and backward-compatibility guarantees (a small JAX export sketch follows this list).
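
As a concrete illustration of both points, here is a minimal sketch of producing and serializing StableHLO from JAX. It assumes a recent JAX release that ships the jax.export API; the function f and the shapes used are illustrative only, not part of the StableHLO spec.

  import jax
  import jax.numpy as jnp

  def f(x):
      return jnp.sin(jnp.cos(x))

  x = jnp.ones((8,), dtype=jnp.float32)

  # Lowering a jitted function yields StableHLO that any OpenXLA-compatible
  # compiler can consume; as_text() prints the human-readable MLIR form.
  print(jax.jit(f).lower(x).as_text())

  # jax.export serializes the program to MLIR bytecode, the format that
  # carries StableHLO's compatibility guarantees.
  exported = jax.export.export(jax.jit(f))(x)
  blob = exported.serialize()              # portable bytes, safe to archive
  restored = jax.export.deserialize(blob)
  print(restored.call(x))                  # run the deserialized program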

XLA Compiler

An ML compiler that optimizes models for high-performance execution across hardware platforms including GPUs, CPUs, and ML accelerators.

  • Builds anywhere: Build and compile your models optimally across leading ML frameworks such as TensorFlow, PyTorch, and JAX (a small JAX example follows this list).

  • Scales your performance: Maximize and scale performance through a wide range of production-tested optimization passes and automated partitioning for model parallelism.

  • Runs anywhere: Run your models anywhere with support for all leading ML backends, including GPUs, CPUs, and ML accelerators.

  • Simplifies your tools: Eliminate the complexity of managing diverse domain-specific compilers. OpenXLA leverages the power of MLIR to bring the best capabilities into a single compiler toolchain.
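
The following minimal sketch shows this flow through JAX, which uses XLA as its compiler; the model function, parameter shapes, and names here are illustrative assumptions, not part of XLA itself.

  import jax
  import jax.numpy as jnp

  def predict(params, x):
      w, b = params
      # The matmul, bias add, and ReLU are fused by XLA's optimization passes.
      return jax.nn.relu(x @ w + b)

  params = (jnp.ones((128, 64)), jnp.zeros((64,)))
  x = jnp.ones((32, 128))

  # jax.jit stages the function out to XLA, which compiles it before the
  # first call and caches the executable for subsequent calls.
  fast_predict = jax.jit(predict)
  print(fast_predict(params, x).shape)  # (32, 64)

  # The same program runs unchanged on whichever backend is available.
  print(jax.devices())  # CPU, GPU, or TPU devices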

IREE

A next-generation compiler technology focused on low-overhead, latency-sensitive ML compilation and serving.

  • Modular: Provides a reusable and extensible architecture, built from the ground up in MLIR (a small compile-and-run sketch follows this list).

  • Scalable: Scales up to meet the needs of datacenters and down to meet the constraints of mobile and embedded systems.
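
The following is a minimal sketch of IREE's Python flow: compiling a tiny MLIR module and running it on the portable VM backend. It assumes the iree.compiler and iree.runtime Python packages and mirrors IREE's published sample flow; treat the backend ("vmvx"), driver ("local-task"), and API details as assumptions that may vary between releases.

  import numpy as np
  import iree.compiler
  import iree.runtime

  SIMPLE_MUL = """
  func.func @simple_mul(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
    %0 = arith.mulf %a, %b : tensor<4xf32>
    return %0 : tensor<4xf32>
  }
  """

  # Compile to an IREE VM FlatBuffer targeting the reference "vmvx" backend.
  vmfb = iree.compiler.compile_str(SIMPLE_MUL, target_backends=["vmvx"])

  # Load the compiled module into the lightweight runtime and invoke it.
  config = iree.runtime.Config("local-task")
  ctx = iree.runtime.SystemContext(config=config)
  ctx.add_vm_module(iree.runtime.VmModule.copy_buffer(ctx.instance, vmfb))
  simple_mul = ctx.modules.module["simple_mul"]
  result = simple_mul(np.full(4, 2.0, np.float32), np.full(4, 3.0, np.float32))
  print(np.asarray(result))  # [6. 6. 6. 6.]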

Contributors

The OpenXLA project is developed collaboratively by leading ML hardware and software organizations.

  • Alibaba
  • Amazon Web Services
  • AMD
  • Apple
  • Arm
  • Google
  • Intel
  • Meta
  • NVIDIA

We welcome contributions to any of our projects on GitHub. If you'd like to contribute, check out our OpenXLA community resources.