Optimize the fp-dequantizer to get high memory-BW utilization #12387

Workflow file for this run

name: Formatting
on:
  workflow_dispatch:
  pull_request:
    branches:
      '**'
  merge_group:
    branches: [ master ]
  schedule:
    - cron: "0 0 * * *"
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  # formatting and basic install on cpu-only machine
  unit-tests:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v3
      - name: environment
        run: |
          which python
          python --version
      - name: Install dependencies
        run: |
          # Previously we would do pip install .[dev], but this was causing out-of-space
          # errors starting with the torch 2.1.0 release
          grep -E "clang-format|pre-commit" requirements/requirements-dev.txt | xargs pip install
      - name: Formatting checks
        run: |
          pip show pre-commit clang-format
          pre-commit run --all-files
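
The same checks can be reproduced locally before pushing. This is a minimal sketch using only commands that appear in the workflow above, and it assumes a repository checkout containing requirements/requirements-dev.txt:

# Install only the formatting tools, mirroring the workflow's grep-based install
grep -E "clang-format|pre-commit" requirements/requirements-dev.txt | xargs pip install

# Run the same checks as the CI job
pip show pre-commit clang-format
pre-commit run --all-files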