# Multi-Level Intermediate Representation Overview

The MLIR project aims to define a common intermediate representation (IR) that
will unify the infrastructure required to execute high-performance machine
learning models in TensorFlow and similar ML frameworks. The project includes
the application of HPC techniques, along with the integration of search
algorithms such as reinforcement learning. It also aims to reduce the cost of
bringing up new hardware and to improve usability for existing TensorFlow
users.

Note that this repository contains the core of the MLIR framework. The
TensorFlow compilers we are building on top of MLIR will be part of the
main TensorFlow repository soon.
    13  
# How to Contribute

Thank you for your interest in contributing to MLIR! If you would like to
contribute, please review the [contribution guidelines](CONTRIBUTING.md).
    18  
## More resources

For more information on MLIR, please see:

*   [The MLIR draft specification](g3doc/LangRef.md), which describes the IR
    itself.
*   [The MLIR rationale document](g3doc/Rationale.md), covering motivation
    behind some decisions.
*   Previous external [talks](#mlir-talks).

Join the [MLIR mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/mlir)
to hear about announcements and discussions.
Please be mindful of the [TensorFlow Code of Conduct](https://github.com/tensorflow/tensorflow/blob/master/CODE_OF_CONDUCT.md),
which pledges to foster an open and welcoming environment.
    33  
## What is MLIR for?

MLIR is intended to be a hybrid IR which can support multiple different
requirements in a unified infrastructure. For example, this includes (a small
IR sketch follows the list):

*   The ability to represent all TensorFlow graphs, including dynamic shapes,
    the user-extensible op ecosystem, TensorFlow variables, etc.
*   Optimizations and transformations typically done on a TensorFlow graph,
    e.g. in Grappler.
*   Quantization and other graph transformations done on a TensorFlow graph or
    the TF Lite representation.
*   Representation of kernels for ML operations in a form suitable for
    optimization.
*   Ability to host high-performance-computing-style loop optimizations across
    kernels (fusion, loop interchange, tiling, etc.) and to transform memory
    layouts of data.
*   Code generation "lowering" transformations such as DMA insertion, explicit
    cache management, memory tiling, and vectorization for 1D and 2D register
    architectures.
*   Ability to represent target-specific operations, e.g. the MXU on TPUs.
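
As a concrete illustration of the first and loop-optimization bullets, here is
a minimal sketch in the textual form described by the draft LangRef. The op
and function names are illustrative, not definitions from this repository: a
TensorFlow-level op captured in MLIR's generic operation form, and an affine
loop nest of the kind the HPC-style optimizations target.

```mlir
// Hypothetical sketch: a TensorFlow-level op in generic operation form.
func @tf_level(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.Add"(%arg0, %arg1) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}

// Hypothetical sketch: an affine loop nest amenable to fusion and tiling.
func @loop_level(%buf: memref<16xf32>) {
  %cst = constant 1.0 : f32
  affine.for %i = 0 to 16 {
    affine.store %cst, %buf[%i] : memref<16xf32>
  }
  return
}
```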

MLIR is a common IR that also supports hardware-specific operations. Thus,
any investment into the infrastructure surrounding MLIR (e.g. the compiler
passes that work on it) should yield good returns; many targets can use that
infrastructure and will benefit from it.

MLIR is a powerful representation, but it also has non-goals. We do not try to
support low-level machine code generation algorithms (like register allocation
and instruction scheduling). They are a better fit for lower-level optimizers
(such as LLVM). Also, we do not intend MLIR to be a source language that
end-users would themselves write kernels in (analogous to CUDA C++). While we
would love to see a kernel language happen someday, that will be an independent
project that compiles down to MLIR.
    67  
## Compiler infrastructure

In building MLIR, we benefited from the experience gained building other IRs
(HLO, LLVM, and SIL). We will directly adopt existing best practices, e.g.
writing and maintaining an IR spec, building an IR verifier, providing the
ability to dump and parse MLIR files to text, writing extensive unit tests
with the [FileCheck](https://llvm.org/docs/CommandGuide/FileCheck.html) tool,
and building the infrastructure as a set of modular libraries that can be
combined in new ways. We plan to use the infrastructure developed by the XLA
team for performance analysis and benchmarking.
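
For instance, a typical MLIR unit test is a `.mlir` file whose `RUN` line
pipes the file through a tool and into FileCheck. Below is a hedged sketch of
that idiom; the function and check lines are illustrative, not an actual test
from this repository.

```mlir
// RUN: mlir-opt %s | FileCheck %s

// FileCheck matches the printed output against the CHECK lines below.
// CHECK-LABEL: func @identity
func @identity(%arg0: f32) -> f32 {
  // CHECK: return %{{.*}} : f32
  return %arg0 : f32
}
```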

Other lessons have been incorporated into the design in more subtle ways. For
example, LLVM has non-obvious design mistakes that prevent a multithreaded
compiler from working on multiple functions in an LLVM module at the same
time. MLIR solves these problems by having per-function constant pools and by
making references explicit with `function_ref`.
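
To illustrate the second point with a hedged sketch (the functions below are
made up): one function names another by symbol rather than by a pointer into a
shared module-level use-list, so a pass can rewrite `@callee` without touching
any data structure shared with `@caller`, which is what makes per-function
parallelism safe.

```mlir
// @caller refers to its callee symbolically; no cross-function pointers.
func @caller() -> f32 {
  %0 = call @callee() : () -> f32
  return %0 : f32
}

func @callee() -> f32 {
  %cst = constant 4.2 : f32
  return %cst : f32
}
```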

# Getting started with MLIR

The following instructions for compiling and testing MLIR assume that you have
`git`, [`ninja`](https://ninja-build.org/), and a working C++ toolchain. In the
future, we aim to align on the same level of platform support as
[LLVM](https://llvm.org/docs/GettingStarted.html#requirements). For now, MLIR
has been tested on Linux and macOS, with recent versions of clang and with
gcc 7.

```sh
git clone https://github.com/llvm/llvm-project.git
git clone https://github.com/tensorflow/mlir llvm-project/llvm/projects/mlir
mkdir llvm-project/build
cd llvm-project/build
cmake -G Ninja ../llvm -DLLVM_BUILD_EXAMPLES=ON -DLLVM_TARGETS_TO_BUILD="host"
cmake --build . --target check-mlir
```
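
As a quick smoke test after the build, you can round-trip a trivial file
through `mlir-opt`, which should appear under `bin/` in the build directory
(assuming the default layout; the file name below is hypothetical): save the
snippet as `roundtrip.mlir` and run `./bin/mlir-opt roundtrip.mlir`. The tool
should parse, verify, and print the function back unchanged.

```mlir
// Contents of a hypothetical roundtrip.mlir.
func @id(%arg0: i32) -> i32 {
  return %arg0 : i32
}
```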

To compile and test on Windows using Visual Studio 2017:

```bat
REM In a shell with the Visual Studio environment set up, e.g., by invoking
REM   "$visual-studio-install\Auxiliary\Build\vcvarsall.bat" x64
git clone https://github.com/llvm/llvm-project.git
git clone https://github.com/tensorflow/mlir llvm-project\llvm\projects\mlir
mkdir llvm-project\build
cd llvm-project\build
cmake ..\llvm -G "Visual Studio 15 2017 Win64" -DLLVM_BUILD_EXAMPLES=ON -DLLVM_TARGETS_TO_BUILD="host" -DCMAKE_BUILD_TYPE=Release -Thost=x64
cmake --build . --target check-mlir
```

As a starter, you may try [the tutorial](g3doc/Tutorials/Toy/Ch-1.md) on
building a compiler for a Toy language.

# MLIR talks

* "[MLIR Primer: A Compiler Infrastructure for the End of Moore’s Law](https://ai.google/research/pubs/pub48035.pdf)"
  * Chris Lattner & Jacques Pienaar, Google, at the
    [Compilers for Machine Learning](https://www.c4ml.org/) workshop at
    [CGO 2019](http://cgo.org/cgo2019/)
* "[MLIR: Multi-Level Intermediate Representation for Compiler Infrastructure](https://llvm.org/devmtg/2019-04/talks.html#Keynote_1)"
  * Tatiana Shpeisman & Chris Lattner, Google, at
    [EuroLLVM 2019](https://llvm.org/devmtg/2019-04)
* "[Tutorial: Building a Compiler with MLIR](https://llvm.org/devmtg/2019-04/talks.html#Tutorial_1)"
  * Mehdi Amini, Jacques Pienaar, Nicolas Vasilache, Google, at
    [EuroLLVM 2019](https://llvm.org/devmtg/2019-04)