
Commit fc212d4
Update build documentation (pytorch#2935)
Summary:
X-link: facebookresearch/FBGEMM#39

- Update build documentation for folly (followup to D60430228)

Pull Request resolved: pytorch#2935

Reviewed By: spcyppt

Differential Revision: D60776475

Pulled By: q10

fbshipit-source-id: 673cc9891e736dd7aa6634f7a909f56ef9b6a14c
q10 authored and facebook-github-bot committed Aug 5, 2024
1 parent b0c16d7 commit fc212d4
Showing 5 changed files with 47 additions and 18 deletions.
4 changes: 2 additions & 2 deletions .github/scripts/nova_prescript.bash
@@ -27,8 +27,8 @@ print_conda_info
# Display GPU Info
print_gpu_info

# Install C/C++ Compilers
install_cxx_compiler "${BUILD_ENV_NAME}" clang
# Install C/C++ Compilers and Set libstdc++ Preload Option
SET_GLIBCXX_PRELOAD=1 install_cxx_compiler "${BUILD_ENV_NAME}"

# Install Build Tools
install_build_tools "${BUILD_ENV_NAME}"
2 changes: 1 addition & 1 deletion .github/workflows/fbgemm_gpu_docs.yml
@@ -62,7 +62,7 @@ jobs:
- name: Create Conda Environment
run: . $PRELUDE; create_conda_environment $BUILD_ENV ${{ matrix.python-version }}

- name: Install C/C++ Compilers and Set libstdc++ Preload Options
- name: Install C/C++ Compilers and Set libstdc++ Preload Option
run: . $PRELUDE; SET_GLIBCXX_PRELOAD=1 install_cxx_compiler $BUILD_ENV

- name: Install Build Tools
8 changes: 6 additions & 2 deletions fbgemm_gpu/docs/src/fbgemm-development/BuildInstructions.rst
@@ -75,9 +75,13 @@ C/C++ Compiler

For Linux and macOS platforms, follow the instructions in
:ref:`fbgemm-gpu.build.setup.tools.install.compiler.gcc` to install the GCC
toolchain. For Clang-based builds, follow the instructions in
toolchain. The recommended version of GCC is **10.4.0** or
higher. Note, however, that newer versions of GCC have been observed to run
into build issues when building through Bazel.

For Clang-based builds, follow the instructions in
:ref:`fbgemm-gpu.build.setup.tools.install.compiler.clang` to install the Clang
toolchain.
toolchain. The recommended version of Clang is **15 or higher**.
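A quick way to confirm that an installed toolchain meets these minimums is to compare dotted version strings with GNU ``sort -V``. The following is a minimal sketch; the ``version_ge`` helper and the use of ``gcc -dumpversion`` are illustrative assumptions, not part of the FBGEMM tooling:

```shell
# version_ge MIN VER: succeed when VER >= MIN, comparing dotted
# version strings using GNU sort's version ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Example: check the active GCC against the recommended 10.4.0 minimum.
gcc_version=$(gcc -dumpversion 2>/dev/null || echo 0)
if version_ge 10.4.0 "${gcc_version}"; then
  echo "GCC ${gcc_version} meets the 10.4.0 minimum"
else
  echo "GCC ${gcc_version} is older than 10.4.0"
fi
```

The same helper applies to the Clang minimum, e.g. ``version_ge 15 "$(clang -dumpversion)"``.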

For builds on Windows machines, Microsoft Visual Studio 2019 or newer is
recommended. Follow the installation instructions provided by Microsoft
43 changes: 30 additions & 13 deletions fbgemm_gpu/docs/src/fbgemm_gpu-development/BuildInstructions.rst
@@ -245,11 +245,16 @@ symbols with ``GLIBCXX`` when compiling FBGEMM_CPU:

.. code:: sh
# Set GCC to 10.4.0 to keep compatibility with older versions of GLIBCXX
# Set GCC to 11.4, as the packaged libstdc++ version is the minimum version
# that references GLIBCXX_3.4.30, which is required for libfolly runtime
# compatibility.
#
# A newer version of GCC also works, but will need to be accompanied by an
# For compatibility with older versions of GLIBCXX, 10.4.0 will be needed, but
# libfolly-based code will need to be disabled.
#
# A newer version of GCC also works, but may need to be accompanied by an
# appropriate updated version of the sysroot_linux package.
gcc_version=10.4.0
gcc_version=11.4.0
conda install -n ${env_name} -c conda-forge -y gxx_linux-64=${gcc_version} sysroot_linux-64=2.17
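To confirm that the libstdc++ installed this way actually exports the symbol version folly links against, the library can be scanned for the ``GLIBCXX_3.4.30`` version string. A minimal sketch, assuming a typical conda-forge ``gxx_linux-64`` layout; the ``has_glibcxx_3_4_30`` helper name and the ``$CONDA_PREFIX/lib`` path are illustrative, not part of the FBGEMM tooling:

```shell
# has_glibcxx_3_4_30 LIB: succeed when LIB contains the GLIBCXX_3.4.30
# symbol version string required for libfolly runtime compatibility.
# grep -a scans the shared object as text.
has_glibcxx_3_4_30() {
  grep -aq 'GLIBCXX_3\.4\.30' "$1" 2>/dev/null
}

# Example: inspect the Conda-packaged libstdc++ (path is an assumption).
if has_glibcxx_3_4_30 "${CONDA_PREFIX}/lib/libstdc++.so.6"; then
  echo "libstdc++ provides GLIBCXX_3.4.30"
else
  echo "libstdc++ is missing GLIBCXX_3.4.30; folly-based code may fail to load"
fi
```

If the symbol version is missing, FBGEMM_CPU will build but fail at import time with an unresolved ``GLIBCXX`` version error.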
@@ -280,7 +285,7 @@ toolchain **that supports C++20**:

.. code:: sh
# Use a recent version of LLVM+Clang
# Minimum LLVM+Clang version required for FBGEMM_GPU
llvm_version=16.0.6
# NOTE: libcxx from conda-forge is outdated for linux-aarch64, so we cannot
@@ -335,6 +340,7 @@ Install the other necessary build tools such as ``ninja``, ``cmake``, etc:
conda install -n ${env_name} -y \
click \
cmake \
folly \
hypothesis \
jinja2 \
make \
@@ -525,15 +531,19 @@ For CPU-only builds, the ``--cpu_only`` flag needs to be specified.
# !! Run in fbgemm_gpu/ directory inside the Conda environment !!
FOLLY_LIB_PATH=/path/to/libfolly.so
# Build the wheel artifact only
python setup.py bdist_wheel \
--package_variant=cpu \
--python-tag="${python_tag}" \
--plat-name="${python_plat_name}"
--plat-name="${python_plat_name}" \
--folly_lib_path=${FOLLY_LIB_PATH}
# Build and install the library into the Conda environment (GCC)
python setup.py install \
--package_variant=cpu
--package_variant=cpu \
--folly_lib_path=${FOLLY_LIB_PATH}
To build using Clang + ``libstdc++`` instead of GCC, simply append the
``--cxxprefix`` flag:
@@ -547,17 +557,22 @@ To build using Clang + ``libstdc++`` instead of GCC, simply append the
--package_variant=cpu \
--python-tag="${python_tag}" \
--plat-name="${python_plat_name}" \
--cxxprefix=$CONDA_PREFIX
--cxxprefix=$CONDA_PREFIX \
--folly_lib_path=${FOLLY_LIB_PATH}
# Build and install the library into the Conda environment (Clang)
python setup.py install \
--package_variant=cpu
--cxxprefix=$CONDA_PREFIX
--package_variant=cpu \
--cxxprefix=$CONDA_PREFIX \
--folly_lib_path=${FOLLY_LIB_PATH}
Note that this presumes the Clang toolchain is properly installed along with the
GCC toolchain, and is made available as ``${cxxprefix}/bin/cc`` and
``${cxxprefix}/bin/c++``.
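A pre-build sanity check for this assumption can be sketched as follows; the ``check_toolchain`` helper is a hypothetical name for illustration, not part of the build scripts:

```shell
# check_toolchain PREFIX: succeed only when PREFIX/bin/cc and
# PREFIX/bin/c++ both exist and are executable, which is what the
# --cxxprefix-based build expects.
check_toolchain() {
  [ -x "$1/bin/cc" ] && [ -x "$1/bin/c++" ]
}

# Example: verify the Conda environment before passing --cxxprefix.
if check_toolchain "${CONDA_PREFIX}"; then
  echo "toolchain found under ${CONDA_PREFIX}/bin"
else
  echo "cc/c++ not found under ${CONDA_PREFIX}/bin; reinstall the compilers"
fi
```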

To enable runtime debug features, such as device-side assertions in CUDA and
HIP, simply append the ``--debug`` flag when invoking ``setup.py``.

.. _fbgemm-gpu.build.process.cuda:

CUDA Build
@@ -609,13 +624,15 @@ toolchains have been properly installed.
--plat-name="${python_plat_name}" \
--nvml_lib_path=${NVML_LIB_PATH} \
--nccl_lib_path=${NCCL_LIB_PATH} \
--folly_lib_path=${FOLLY_LIB_PATH} \
-DTORCH_CUDA_ARCH_LIST="${cuda_arch_list}"
# Build and install the library into the Conda environment
python setup.py install \
--package_variant=cuda \
--nvml_lib_path=${NVML_LIB_PATH} \
--nccl_lib_path=${NCCL_LIB_PATH} \
--folly_lib_path=${FOLLY_LIB_PATH} \
-DTORCH_CUDA_ARCH_LIST="${cuda_arch_list}"
.. _fbgemm-gpu.build.process.genai:
@@ -637,13 +654,15 @@ experimental modules are the same as those for a CUDA build, but with specifying
--plat-name="${python_plat_name}" \
--nvml_lib_path=${NVML_LIB_PATH} \
--nccl_lib_path=${NCCL_LIB_PATH} \
--folly_lib_path=${FOLLY_LIB_PATH} \
-DTORCH_CUDA_ARCH_LIST="${cuda_arch_list}"
# Build and install the library into the Conda environment
python setup.py install \
--package_variant=genai \
--nvml_lib_path=${NVML_LIB_PATH} \
--nccl_lib_path=${NCCL_LIB_PATH} \
--folly_lib_path=${FOLLY_LIB_PATH} \
-DTORCH_CUDA_ARCH_LIST="${cuda_arch_list}"
Note that currently, only CUDA is supported for the experimental modules.
@@ -677,15 +696,13 @@ presuming the toolchains have been properly installed.
--python-tag="${python_tag}" \
--plat-name="${python_plat_name}" \
-DHIP_ROOT_DIR="${ROCM_PATH}" \
-DCMAKE_C_FLAGS="-DTORCH_USE_HIP_DSA" \
-DCMAKE_CXX_FLAGS="-DTORCH_USE_HIP_DSA"
--folly_lib_path=${FOLLY_LIB_PATH}
# Build and install the library into the Conda environment
python setup.py install \
--package_variant=rocm \
-DHIP_ROOT_DIR="${ROCM_PATH}" \
-DCMAKE_C_FLAGS="-DTORCH_USE_HIP_DSA" \
-DCMAKE_CXX_FLAGS="-DTORCH_USE_HIP_DSA"
--folly_lib_path=${FOLLY_LIB_PATH}
Post-Build Checks (For Developers)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
8 changes: 8 additions & 0 deletions fbgemm_gpu/docs/src/general/documentation/Overview.rst
@@ -73,6 +73,14 @@ correctly. Follow the instructions in
:ref:`fbgemm-gpu.build.setup.tools.install`, followed by
:ref:`fbgemm-gpu.build.process.cpu`, to build FBGEMM_GPU (CPU variant).

After installing the C/C++ compiler, ``LD_PRELOAD`` will need to be updated with
the path to the version of ``libstdc++.so`` packaged with the compiler, so that
FBGEMM_GPU can be loaded correctly under Sphinx:

.. code:: sh
export LD_PRELOAD=/path/to/libstdc++.so:$LD_PRELOAD
Set Up the Documentation Toolchain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

