πŸš€πŸš€πŸš€ Transformers.js V3 πŸš€πŸš€πŸš€ #545

Merged: 498 commits merged on Oct 18, 2024
Commits
0dba266
Early dereferencing for performance boosts
xenova Jul 2, 2024
5e4e20f
cleanup
xenova Jul 2, 2024
dd6af93
Move quantization logic to `quantize.py`
xenova Jul 3, 2024
04af3d5
update deps
xenova Jul 3, 2024
9128651
Fix q4 quantization
xenova Jul 3, 2024
83cbb21
save q4 quantization
xenova Jul 4, 2024
eb61344
Add decode ASR test
xenova Jul 4, 2024
cec2400
Do not process last chunk unnecessarily
xenova Jul 4, 2024
c835b54
fp16 disable_shape_infer if model is too large
xenova Jul 4, 2024
45cd8d4
Use `check_and_save_model` for saving fp16 model
xenova Jul 4, 2024
88f3e44
Reorder functions
xenova Jul 4, 2024
23440f0
formatting
xenova Jul 4, 2024
b411e9f
Remove debug log
xenova Jul 4, 2024
04a334a
Fix q8 quantization for models > 2GB
xenova Jul 4, 2024
cd1ea69
correct attribute
xenova Jul 4, 2024
a167f6e
Fix `TextGenerationPipeline`
xenova Jul 4, 2024
ea73289
Fix pauses in whisper word-level timestamps
xenova Jul 4, 2024
344af32
Formatting
xenova Jul 4, 2024
c305c38
Sort added tokens by length to avoid early partial matches
xenova Jul 5, 2024
d6f6fd4
Add new tokenizer test
xenova Jul 8, 2024
1557b8d
Only finish with newline if running in Node.js
xenova Jul 8, 2024
9ac7ceb
Consider token timestamps when selecting longest common sequence
xenova Jul 9, 2024
79ed46e
Create whisper word-level timestamps demo
xenova Jul 10, 2024
8da6886
cleanup
xenova Jul 10, 2024
d709bd0
Fallback to WASM if WebGPU not supported
xenova Jul 10, 2024
9ef3a6d
Reload model for each quantization mode
xenova Jul 12, 2024
9787b75
Update conversion script requirements
xenova Jul 12, 2024
974f086
Separate IO and Quantization args
xenova Jul 12, 2024
d042868
Use `const` where possible
xenova Jul 16, 2024
1b4d242
Add `InterruptableStoppingCriteria`
xenova Jul 16, 2024
31101c8
`@xenova/transformers` -> `@huggingface/transformers`
xenova Jul 17, 2024
e84322b
Override semver version
xenova Jul 17, 2024
bd94334
Add support for pyannote models
xenova Jul 17, 2024
3dbc633
Update README.md
xenova Jul 17, 2024
858e55d
Add listed support for pyannote
xenova Jul 17, 2024
8bf0349
Add pyannote example code
xenova Jul 17, 2024
c52618c
Support specifying `min_num_frames`
xenova Jul 17, 2024
96f19b0
Support simultaneous instantiation of multiple inference sessions
xenova Jul 20, 2024
4ad43e2
Support broadcasting encoder outputs over decoder inputs
xenova Jul 22, 2024
c6aeb4b
Fix test
xenova Jul 22, 2024
6d3ea4b
fix bundler config for latest ORT
fs-eire Jul 25, 2024
38a3bf6
Only check fp16 support for webgpu device
xenova Jul 29, 2024
9df84c4
Remove default chat templates
xenova Aug 7, 2024
fc3d860
Add support for gemma2
xenova Aug 7, 2024
939920d
Add gemma2 generation test
xenova Aug 7, 2024
5bb93a0
Update gemma2 config mapping
xenova Aug 7, 2024
72ec168
Prioritize high-performance adapter when possible
xenova Aug 7, 2024
9068a53
Set defaults for `tools` and `documents` in `apply_chat_template`
xenova Aug 7, 2024
824538b
bump `@huggingface/jinja` -> 0.3.0
xenova Aug 7, 2024
836c0af
Add `apply_chat_template` default parameters unit test
xenova Aug 7, 2024
487d8b2
Merge branch 'v3' into @huggingface/transformers
xenova Aug 7, 2024
1f6e0e1
Add prettier
xenova Aug 7, 2024
55494d1
prettier format config files
xenova Aug 7, 2024
5a68461
remove incorrect comment
xenova Aug 7, 2024
437cb34
Merge branch 'pr/864' into @huggingface/transformers
xenova Aug 7, 2024
5a6c926
Update onnxruntime-web version
xenova Aug 7, 2024
b19251b
Update webpack.config.js
xenova Aug 7, 2024
820c1e2
Fix copy path
xenova Aug 7, 2024
b0dab91
Run `npm ci`
xenova Aug 7, 2024
86b9b62
Fix bundling
xenova Aug 7, 2024
222b94e
Do not set `preferredOutputLocation` if we are proxying
xenova Aug 7, 2024
b326cc9
Merge branch 'v3' into @huggingface/transformers
xenova Aug 7, 2024
ca67092
Update `@webgpu/types`
xenova Aug 7, 2024
42076fd
Update SAM example
xenova Aug 7, 2024
48d3142
Use `??=` operator where possible
xenova Aug 7, 2024
3b1a4fd
Fix commonjs usage
xenova Aug 8, 2024
9a73b5e
Mark `onnxruntime-node` and `sharp` as externals
xenova Aug 8, 2024
9951aa5
Move `externals` into config
xenova Aug 8, 2024
c04d37e
Downgrade to onnxruntime 1.18.0
xenova Aug 8, 2024
d32fe2b
Finalize module/commonjs build
xenova Aug 8, 2024
1530d50
Separate web and node builds
xenova Aug 8, 2024
b4df0e2
[version] Update to 3.0.0-alpha.1
xenova Aug 8, 2024
ab59c51
Default to CDN-hosted .wasm files
xenova Aug 8, 2024
866b219
[version] Update to 3.0.0-alpha.2
xenova Aug 8, 2024
4a3398d
bump versions
xenova Aug 8, 2024
8891a14
[version] Update to 3.0.0-alpha.3
xenova Aug 8, 2024
a315933
Merge branch 'improve-conversion-script' into v3
xenova Aug 8, 2024
12569b8
Consolidate conversion and quantization script
xenova Aug 9, 2024
83f5718
Downgrade `onnxconverter-common`
xenova Aug 9, 2024
6fa5fa6
Link to types in exports
xenova Aug 9, 2024
2f1b210
Update list of supported tasks
xenova Aug 10, 2024
27bc55d
Fixed unit tests
xenova Aug 10, 2024
23d1150
Update imports
xenova Aug 10, 2024
f9070dc
Bump versions to `3.0.0-alpha.4`
xenova Aug 10, 2024
c3494e1
[version] Update to 3.0.0-alpha.4
xenova Aug 10, 2024
973fb0d
Fix "Default condition should be last one"
xenova Aug 12, 2024
7376ecf
Bump versions
xenova Aug 12, 2024
0a04bc0
[version] Update to 3.0.0-alpha.5
xenova Aug 12, 2024
e4603cd
Update next.js client-side demo
xenova Aug 12, 2024
ff1853c
Initial WebNN Support
ibelem Aug 14, 2024
15574bc
Mark fs, path and url as external packages for node build
xenova Aug 15, 2024
7282862
Move content type map outside of `FileResponse` object
xenova Aug 15, 2024
22f7ced
Add GPU support for Node.js
xenova Aug 15, 2024
1e319a4
Bump versions
xenova Aug 15, 2024
d278891
[version] Update to 3.0.0-alpha.6
xenova Aug 15, 2024
3fefa17
Fix conflicts
ibelem Aug 16, 2024
fa6cc70
bump dependency versions
xenova Aug 16, 2024
7fa5326
Add support for device auto-detection
xenova Aug 16, 2024
4ec77c1
Fix default device selection
xenova Aug 16, 2024
5799e30
Merge branch 'pr/ibelem/890-1' into v3
xenova Aug 16, 2024
5b2cac2
Improve WebNN selection
xenova Aug 17, 2024
ad23c50
Skip token callback if `skip_prompt` is set
xenova Aug 17, 2024
5b84b62
Bump versions
xenova Aug 19, 2024
bcf6a86
[version] Update to 3.0.0-alpha.7
xenova Aug 19, 2024
b97ed0d
bump versions
xenova Aug 21, 2024
c5b7083
[version] Update to 3.0.0-alpha.8
xenova Aug 21, 2024
cbeefde
bump versions
xenova Aug 23, 2024
59600f2
[version] Update to 3.0.0-alpha.9
xenova Aug 23, 2024
b2e025a
Add support for Sapiens
xenova Aug 27, 2024
8661d95
Update default ONNX env
xenova Aug 27, 2024
57db34d
Fix types
xenova Aug 27, 2024
1b7f978
Topologically sort fp16 nodes
xenova Aug 27, 2024
45d1526
Add marian unit test
xenova Aug 27, 2024
b903757
Re-order imports
xenova Aug 27, 2024
633976f
Fix `NoBadWordsLogitsProcessor`
xenova Aug 27, 2024
24d8787
Update package.json
xenova Aug 27, 2024
9412ec4
[jest] Disable coverage
xenova Aug 27, 2024
08e7388
Bump versions
xenova Aug 27, 2024
d5a8f87
[version] Update to 3.0.0-alpha.10
xenova Aug 27, 2024
7843ad0
Improve node/web interoperability
xenova Aug 28, 2024
bf093ae
Fix scripts/requirements.txt
xenova Aug 28, 2024
9a5ee42
Bump versions
xenova Aug 28, 2024
535cdfe
[version] Update to 3.0.0-alpha.11
xenova Aug 28, 2024
4e1acf0
Add support for JAIS models (#906)
xenova Aug 28, 2024
488548d
Add JAIS to README
xenova Aug 28, 2024
13aed41
Fix node/web interop (again)
xenova Aug 28, 2024
7655f81
Bump versions
xenova Aug 28, 2024
1c7e226
[version] Update to 3.0.0-alpha.12
xenova Aug 28, 2024
ab6b28b
Set `SapiensForNormalEstimation` to encoder-only
xenova Aug 28, 2024
66c05d5
Implement `sub` tensor operation
xenova Aug 28, 2024
31e8b2a
Bump versions
xenova Aug 28, 2024
bf3f7d5
[version] Update to 3.0.0-alpha.13
xenova Aug 28, 2024
c025356
Improve typing for `wrap` helper function
xenova Aug 28, 2024
7ebdaf2
Update `preferredOutputLocation` type
xenova Aug 28, 2024
3b8ddcb
Make `wrap` type more generic
xenova Aug 28, 2024
a385c6e
Re-use `segmentation_data`
xenova Aug 28, 2024
537e958
Fix `min` type
xenova Aug 28, 2024
bcb28b3
Add support for Hiera models
xenova Aug 29, 2024
d21c87c
Fix reused loop variable (closes #910)
xenova Aug 30, 2024
1d281f6
Add logits processor test file
xenova Aug 30, 2024
ba0427f
Fix test imports
xenova Aug 30, 2024
3bc3e86
Bump versions
xenova Aug 30, 2024
0518960
[version] Update to 3.0.0-alpha.14
xenova Aug 30, 2024
552cdea
Add another `bad_words` logits processor test (closes #913)
xenova Aug 30, 2024
3422a8b
Add support for GroupViT
xenova Aug 30, 2024
3599902
Add zero-shot-image-classification unit test
xenova Aug 30, 2024
5892ee8
Add maskformer model definitions
xenova Aug 30, 2024
c4dac77
Support universal image segmentation in `image-segmentation` pipeline
xenova Aug 30, 2024
f0c47be
Add support for PVT models
xenova Aug 30, 2024
d80d3a4
Add `post_process_instance_segmentation` function template
xenova Aug 30, 2024
844099d
Add `library_name` option to convert.py
xenova Sep 2, 2024
ba5d725
Wrap onnxslim with try block
xenova Sep 2, 2024
b3691c8
Use const where possible
xenova Sep 2, 2024
dcf117f
Use const where possible (again)
xenova Sep 2, 2024
9af026c
Create `MaskFormerFeatureExtractor`
xenova Sep 2, 2024
0f8200c
Add support for MaskFormer
xenova Sep 2, 2024
e278c5e
Improve tool-use chat template detection
xenova Sep 2, 2024
83fa58f
Add object detection pipeline unit test
xenova Sep 2, 2024
86d6da4
Add support for ViTMSN and VitMAE
xenova Sep 2, 2024
93b25fb
Bump ORT versions
xenova Sep 7, 2024
2f680ee
Create `get_chat_template` helper function
xenova Sep 7, 2024
2f9b2ed
Fix CI
xenova Sep 9, 2024
deec350
Run prettier on `tests/**`
xenova Sep 9, 2024
48fa226
move certain tests to utils subfolder
xenova Sep 9, 2024
a10828f
Bump onnxruntime-web version
xenova Sep 9, 2024
ba58ea2
Bump `onnxruntime==1.19.2` in scripts/requirements.txt
xenova Sep 9, 2024
4f17e95
Merge branch 'main' into v3
xenova Sep 9, 2024
c40a151
Merge branch 'main' into v3
xenova Sep 9, 2024
30315b2
Sort `this.added_tokens` before creating regex (`.toSorted` is not av…
xenova Sep 9, 2024
d7df575
Rather make a copy of `this.added_tokens`
xenova Sep 9, 2024
a519379
Fix `.tokenize` with `fuse_unk=true`
xenova Sep 9, 2024
89ddccf
Add blenderbot tokenizer tests
xenova Sep 9, 2024
36ad144
Add t5 tokenizer tests
xenova Sep 9, 2024
4765dd6
Add falcon tokenizer tests
xenova Sep 10, 2024
fd8b9a2
Run prettier
xenova Sep 10, 2024
710816e
Add ESM tokenizer tests
xenova Sep 10, 2024
0d3cd30
Run unit tests in parallel
xenova Sep 10, 2024
cc258c2
Fix `fuse_unk` for tokenizers with `byte_fallback=true` but no byte f…
xenova Sep 10, 2024
4798755
Add llama tokenizer unit tests
xenova Sep 10, 2024
c6c3ae1
Update emoji test string names
xenova Sep 10, 2024
79a7409
Move whisper-specific unit tests to subfolder
xenova Sep 10, 2024
1a38804
Code formatting
xenova Sep 10, 2024
dabe6ae
Bump versions
xenova Sep 10, 2024
54f1f21
[version] Update to 3.0.0-alpha.15
xenova Sep 10, 2024
a912d79
Add emoji tokenizer test cases for LlamaTokenizer
xenova Sep 12, 2024
969d10e
Attempt to fix encoder-decoder memory leak
xenova Sep 17, 2024
072cbbc
Remove unused code
xenova Sep 17, 2024
14b4bd4
Fix BertNormalizer (strip `Mn` unicode characters)
xenova Sep 17, 2024
6797771
Handle ZERO WIDTH JOINER (U+200D) characters
xenova Sep 17, 2024
f148afd
Add more spm normalization characters
xenova Sep 17, 2024
ca4b5b9
Add emoji unit tests for bert/t5
xenova Sep 17, 2024
113c81e
[WebNN] Add support for specifying `free_dimension_overrides` in config
xenova Sep 18, 2024
9005acc
Log warning if webnn is selected but `free_dimension_overrides` is not…
xenova Sep 18, 2024
682c7d0
Fix unigram for multi-byte tokens
xenova Sep 18, 2024
4a31e54
Add gemma tokenizer tests
xenova Sep 22, 2024
7a16065
Allow user to specify device and dtype in config.json
xenova Sep 23, 2024
4c1d21b
Update dependency versions
xenova Sep 23, 2024
3c6a95a
Bump versions
xenova Sep 23, 2024
ac391d2
[version] Update to 3.0.0-alpha.16
xenova Sep 23, 2024
d30d3b7
Add CLIP tokenizer unit tests
xenova Sep 23, 2024
e089ef4
Add more tokenizer tests
xenova Sep 23, 2024
2c9e271
Bump onnxruntime-web version
xenova Sep 27, 2024
ee1e32a
Bump versions
xenova Sep 27, 2024
f41e995
[version] Update to 3.0.0-alpha.17
xenova Sep 27, 2024
9a42cf3
Add support for new `tokenizers>=0.2.0` BPE serialization format
xenova Sep 27, 2024
f534b35
Bump onnxruntime-web version
xenova Sep 29, 2024
0c8b1af
Bump versions
xenova Sep 29, 2024
2ca4178
[version] Update to 3.0.0-alpha.18
xenova Sep 30, 2024
a82e7ef
Keep encoder outputs on GPU
xenova Sep 30, 2024
c37a38c
Update whisper-webgpu demo dependencies
xenova Sep 30, 2024
e1c4fc6
Bump versions
xenova Sep 30, 2024
fe51609
[version] Update to 3.0.0-alpha.19
xenova Sep 30, 2024
b518866
Support to load ONNX APIs based on JS runtime (#947)
kallebysantos Sep 30, 2024
95c8cc5
Allow specification of `use_external_data_format` in custom config
xenova Oct 3, 2024
03eb77b
Update deberta unit tests
xenova Oct 3, 2024
c61a76b
Update roberta tokenizer tests
xenova Oct 3, 2024
32d8df4
Support inferring unigram tokenizer type
xenova Oct 4, 2024
6505abb
Reuse tokenizer tests for original t5-small
xenova Oct 4, 2024
9619218
Remove redundant null coalesce
xenova Oct 4, 2024
52c4ce7
Enable unit test coverage reports
xenova Oct 7, 2024
12edaf0
Use `PROBLEMATIC_REGEX_MAP` for bloom tokenizer
xenova Oct 7, 2024
5e7e82b
Improve tokenizer unit tests
xenova Oct 7, 2024
795a61a
Update tokenizer unit tests
xenova Oct 8, 2024
77ebe0d
Remove unused code
xenova Oct 8, 2024
56eda3b
Add m2m_100 tokenizer unit tests
xenova Oct 8, 2024
2040ad5
Add m2m translation pipeline unit test
xenova Oct 8, 2024
8718c17
Add support for Depth Pro models
xenova Oct 9, 2024
a32efa3
Add whisper turbo alignment heads
xenova Oct 9, 2024
8b0d330
Remove in-library list of supported models
xenova Oct 9, 2024
cf3f5c3
Bump versions
xenova Oct 9, 2024
86fe175
[version] Update to 3.0.0-alpha.20
xenova Oct 9, 2024
1c78278
Add function to map tensor data array.
BritishWerewolf Oct 9, 2024
a5e0210
Merge branch 'main' into v3
xenova Oct 9, 2024
9f8fac0
Optimise loop to reduce calls to `this`
BritishWerewolf Oct 9, 2024
1c43e3f
Merge branch 'pr/966' into v3
xenova Oct 10, 2024
7a0f77c
Add back tensor map test
xenova Oct 10, 2024
da03a0a
Add support for granite models
xenova Oct 12, 2024
37effa3
Allow multiple optional configs to be passed (+ reduce code duplication)
xenova Oct 12, 2024
f21b36e
Bump dependencies
xenova Oct 14, 2024
d26a663
Bump versions
xenova Oct 14, 2024
c337c3b
[version] Update to 3.0.0-alpha.21
xenova Oct 14, 2024
92d0dc6
Add support for per-dtype `kv_cache_dtype`
xenova Oct 17, 2024
ea03bf5
Add text streamer unit test
xenova Oct 17, 2024
27a033f
Bump ORT web version
xenova Oct 17, 2024
19277ea
Bump versions
xenova Oct 17, 2024
90a7490
[version] Update to 3.0.0-alpha.22
xenova Oct 17, 2024
38773ea
Update repo name to `@huggingface/transformers.js`
xenova Oct 18, 2024
832b5b7
Update tested node versions
xenova Oct 18, 2024
b871c08
Bump versions
xenova Oct 18, 2024
7a58d6e
[version] Update to 3.0.0
xenova Oct 18, 2024
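
Taken together, these commits rename the package from `@xenova/transformers` to `@huggingface/transformers` and introduce WebGPU execution, device auto-detection, and per-model quantization (`dtype`) selection. A minimal sketch of how the new options are typically combined in v3 is shown below; the model id is an illustrative placeholder, and exact option support varies by model and environment.

```javascript
import { pipeline } from '@huggingface/transformers';

// Sketch only: the model id below is an illustrative placeholder.
// `device` and `dtype` are the new v3 options; WebGPU falls back to WASM
// when unavailable (see the "Fallback to WASM if WebGPU not supported" commit).
const generator = await pipeline('text-generation', 'onnx-community/Qwen2.5-0.5B-Instruct', {
  device: 'webgpu', // or 'wasm' / 'auto'
  dtype: 'q4',      // or 'fp32' / 'fp16' / 'q8'
});

const out = await generator('Transformers.js v3 adds', { max_new_tokens: 16 });
console.log(out);
```

In Node.js the same call runs through `onnxruntime-node` (see the "Add GPU support for Node.js" commit), while browsers use `onnxruntime-web`.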
1 change: 0 additions & 1 deletion .github/workflows/documentation.yml
@@ -10,7 +10,6 @@ jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
with:
repo_owner: xenova
commit_sha: ${{ github.sha }}
package: transformers.js
path_to_docs: transformers.js/docs/source
1 change: 0 additions & 1 deletion .github/workflows/pr-documentation.yml
@@ -11,7 +11,6 @@ jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
with:
repo_owner: xenova
commit_sha: ${{ github.sha }}
pr_number: ${{ github.event.number }}
package: transformers.js
15 changes: 8 additions & 7 deletions .github/workflows/tests.yml
@@ -7,17 +7,20 @@ on:
pull_request:
branches:
- main

env:
TESTING_REMOTELY: true
types:
- opened
- reopened
- synchronize
- ready_for_review

jobs:
build:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest

strategy:
matrix:
node-version: [18.x, latest, node]
node-version: [18, 20, 22]

steps:
- uses: actions/checkout@v4
@@ -27,11 +30,9 @@ jobs:
node-version: ${{ matrix.node-version }}
- run: npm ci
- run: npm run build
- run: pip install -r tests/requirements.txt

# Setup the testing environment
- run: npm run generate-tests
- run: git lfs install && GIT_CLONE_PROTECTION_ACTIVE=false git clone https://huggingface.co/Xenova/t5-small ./models/t5-small
- run: git lfs install && GIT_CLONE_PROTECTION_ACTIVE=false git clone https://huggingface.co/hf-internal-testing/tiny-random-T5ForConditionalGeneration ./models/hf-internal-testing/tiny-random-T5ForConditionalGeneration

# Actually run tests
- run: npm run test
8 changes: 8 additions & 0 deletions .prettierignore
@@ -0,0 +1,8 @@
# Ignore artifacts:
.github
dist
docs
examples
scripts
types
*.md
10 changes: 10 additions & 0 deletions .prettierrc
@@ -0,0 +1,10 @@
{
"overrides": [
{
"files": ["tests/**/*.js"],
"options": {
"printWidth": 10000000
}
}
]
}
99 changes: 67 additions & 32 deletions README.md

Large diffs are not rendered by default.

28 changes: 19 additions & 9 deletions docs/scripts/build_readme.py
@@ -5,19 +5,29 @@
<p align="center">
<br/>
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/xenova/transformers.js/assets/26504141/bd047e0f-aca9-4ff7-ba07-c7ca55442bc4" width="500" style="max-width: 100%;">
<source media="(prefers-color-scheme: light)" srcset="https://github.com/xenova/transformers.js/assets/26504141/84a5dc78-f4ea-43f4-96f2-b8c791f30a8e" width="500" style="max-width: 100%;">
<img alt="transformers.js javascript library logo" src="https://github.com/xenova/transformers.js/assets/26504141/84a5dc78-f4ea-43f4-96f2-b8c791f30a8e" width="500" style="max-width: 100%;">
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-dark.svg" width="500" style="max-width: 100%;">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
<img alt="transformers.js javascript library logo" src="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
</picture>
<br/>
</p>

<p align="center">
<a href="https://www.npmjs.com/package/@xenova/transformers"><img alt="NPM" src="https://img.shields.io/npm/v/@xenova/transformers"></a>
<a href="https://www.npmjs.com/package/@xenova/transformers"><img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@xenova/transformers"></a>
<a href="https://www.jsdelivr.com/package/npm/@xenova/transformers"><img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@xenova/transformers"></a>
<a href="https://github.com/xenova/transformers.js/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/xenova/transformers.js?color=blue"></a>
<a href="https://huggingface.co/docs/transformers.js/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers.js/index.svg?down_color=red&down_message=offline&up_message=online"></a>
<a href="https://www.npmjs.com/package/@huggingface/transformers">
<img alt="NPM" src="https://img.shields.io/npm/v/@huggingface/transformers">
</a>
<a href="https://www.npmjs.com/package/@huggingface/transformers">
<img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@huggingface/transformers">
</a>
<a href="https://www.jsdelivr.com/package/npm/@huggingface/transformers">
<img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@huggingface/transformers">
</a>
<a href="https://github.com/huggingface/transformers.js/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/huggingface/transformers.js?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers.js/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers.js/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
</p>

{intro}
@@ -42,7 +52,7 @@

Here is the list of all tasks and architectures currently supported by Transformers.js.
If you don't see your task/model listed here or it is not yet supported, feel free
to open up a feature request [here](https://github.com/xenova/transformers.js/issues/new/choose).
to open up a feature request [here](https://github.com/huggingface/transformers.js/issues/new/choose).

To find compatible models on the Hub, select the "transformers.js" library tag in the filter menu (or visit [this link](https://huggingface.co/models?library=transformers.js)).
You can refine your search by selecting the task you're interested in (e.g., [text-classification](https://huggingface.co/models?pipeline_tag=text-classification&library=transformers.js)).
6 changes: 3 additions & 3 deletions docs/snippets/0_introduction.snippet
@@ -3,9 +3,9 @@ State-of-the-art Machine Learning for the web. Run πŸ€— Transformers directly in

Transformers.js is designed to be functionally equivalent to Hugging Face's [transformers](https://github.com/huggingface/transformers) python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:
- πŸ“ **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
- πŸ–ΌοΈ **Computer Vision**: image classification, object detection, and segmentation.
- πŸ—£οΈ **Audio**: automatic speech recognition and audio classification.
- πŸ™ **Multimodal**: zero-shot image classification.
- πŸ–ΌοΈ **Computer Vision**: image classification, object detection, segmentation, and depth estimation.
- πŸ—£οΈ **Audio**: automatic speech recognition, audio classification, and text-to-speech.
- πŸ™ **Multimodal**: embeddings, zero-shot audio classification, zero-shot image classification, and zero-shot object detection.

Transformers.js uses [ONNX Runtime](https://onnxruntime.ai/) to run models in the browser. The best part about it, is that you can easily [convert](#convert-your-models-to-onnx) your pretrained PyTorch, TensorFlow, or JAX models to ONNX using [πŸ€— Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).

2 changes: 1 addition & 1 deletion docs/snippets/1_quick-tour.snippet
@@ -23,7 +23,7 @@ out = pipe('I love transformers!')
<td>

```javascript
import { pipeline } from '@xenova/transformers';
import { pipeline } from '@huggingface/transformers';

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
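
For context, a sketch of how the updated quick-tour snippet reads end-to-end after the rename; the final call mirrors the Python line `out = pipe('I love transformers!')` shown in the hunk header above, and the sample output shape is illustrative.

```javascript
import { pipeline } from '@huggingface/transformers';

// Allocate a pipeline for sentiment-analysis
const pipe = await pipeline('sentiment-analysis');

// Mirrors the Python call shown above
const out = await pipe('I love transformers!');
// e.g. [{ label: 'POSITIVE', score: 0.99 }] (illustrative)
```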
6 changes: 3 additions & 3 deletions docs/snippets/2_installation.snippet
@@ -1,12 +1,12 @@

To install via [NPM](https://www.npmjs.com/package/@xenova/transformers), run:
To install via [NPM](https://www.npmjs.com/package/@huggingface/transformers), run:
```bash
npm i @xenova/transformers
npm i @huggingface/transformers
```

Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0';
</script>
```
24 changes: 12 additions & 12 deletions docs/snippets/3_examples.snippet
@@ -4,17 +4,17 @@ Want to jump straight in? Get started with one of our sample applications/templa
|-------------------|----------------------------------|-------------------------------|
| Whisper Web | Speech recognition w/ Whisper | [code](https://github.com/xenova/whisper-web), [demo](https://huggingface.co/spaces/Xenova/whisper-web) |
| Doodle Dash | Real-time sketch-recognition game | [blog](https://huggingface.co/blog/ml-web-games), [code](https://github.com/xenova/doodle-dash), [demo](https://huggingface.co/spaces/Xenova/doodle-dash) |
| Code Playground | In-browser code completion website | [code](https://github.com/xenova/transformers.js/tree/main/examples/code-completion/), [demo](https://huggingface.co/spaces/Xenova/ai-code-playground) |
| Semantic Image Search (client-side) | Search for images with text | [code](https://github.com/xenova/transformers.js/tree/main/examples/semantic-image-search-client/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search-client) |
| Semantic Image Search (server-side) | Search for images with text (Supabase) | [code](https://github.com/xenova/transformers.js/tree/main/examples/semantic-image-search/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search) |
| Vanilla JavaScript | In-browser object detection | [video](https://scrimba.com/scrim/cKm9bDAg), [code](https://github.com/xenova/transformers.js/tree/main/examples/vanilla-js/), [demo](https://huggingface.co/spaces/Scrimba/vanilla-js-object-detector) |
| React | Multilingual translation website | [code](https://github.com/xenova/transformers.js/tree/main/examples/react-translator/), [demo](https://huggingface.co/spaces/Xenova/react-translator) |
| Text to speech (client-side) | In-browser speech synthesis | [code](https://github.com/xenova/transformers.js/tree/main/examples/text-to-speech-client/), [demo](https://huggingface.co/spaces/Xenova/text-to-speech-client) |
| Browser extension | Text classification extension | [code](https://github.com/xenova/transformers.js/tree/main/examples/extension/) |
| Electron | Text classification application | [code](https://github.com/xenova/transformers.js/tree/main/examples/electron/) |
| Next.js (client-side) | Sentiment analysis (in-browser inference) | [code](https://github.com/xenova/transformers.js/tree/main/examples/next-client/), [demo](https://huggingface.co/spaces/Xenova/next-example-app) |
| Next.js (server-side) | Sentiment analysis (Node.js inference) | [code](https://github.com/xenova/transformers.js/tree/main/examples/next-server/), [demo](https://huggingface.co/spaces/Xenova/next-server-example-app) |
| Node.js | Sentiment analysis API | [code](https://github.com/xenova/transformers.js/tree/main/examples/node/) |
| Demo site | A collection of demos | [code](https://github.com/xenova/transformers.js/tree/main/examples/demo-site/), [demo](https://xenova.github.io/transformers.js/) |
| Code Playground | In-browser code completion website | [code](https://github.com/huggingface/transformers.js/tree/main/examples/code-completion/), [demo](https://huggingface.co/spaces/Xenova/ai-code-playground) |
| Semantic Image Search (client-side) | Search for images with text | [code](https://github.com/huggingface/transformers.js/tree/main/examples/semantic-image-search-client/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search-client) |
| Semantic Image Search (server-side) | Search for images with text (Supabase) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/semantic-image-search/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search) |
| Vanilla JavaScript | In-browser object detection | [video](https://scrimba.com/scrim/cKm9bDAg), [code](https://github.com/huggingface/transformers.js/tree/main/examples/vanilla-js/), [demo](https://huggingface.co/spaces/Scrimba/vanilla-js-object-detector) |
| React | Multilingual translation website | [code](https://github.com/huggingface/transformers.js/tree/main/examples/react-translator/), [demo](https://huggingface.co/spaces/Xenova/react-translator) |
| Text to speech (client-side) | In-browser speech synthesis | [code](https://github.com/huggingface/transformers.js/tree/main/examples/text-to-speech-client/), [demo](https://huggingface.co/spaces/Xenova/text-to-speech-client) |
| Browser extension | Text classification extension | [code](https://github.com/huggingface/transformers.js/tree/main/examples/extension/) |
| Electron | Text classification application | [code](https://github.com/huggingface/transformers.js/tree/main/examples/electron/) |
| Next.js (client-side) | Sentiment analysis (in-browser inference) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/next-client/), [demo](https://huggingface.co/spaces/Xenova/next-example-app) |
| Next.js (server-side) | Sentiment analysis (Node.js inference) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/next-server/), [demo](https://huggingface.co/spaces/Xenova/next-server-example-app) |
| Node.js | Sentiment analysis API | [code](https://github.com/huggingface/transformers.js/tree/main/examples/node/) |
| Demo site | A collection of demos | [code](https://github.com/huggingface/transformers.js/tree/main/examples/demo-site/), [demo](https://xenova.github.io/transformers.js/) |

Check out the Transformers.js [template](https://huggingface.co/new-space?template=static-templates%2Ftransformers.js) on Hugging Face to get started in one click!
7 changes: 3 additions & 4 deletions docs/snippets/4_custom-usage.snippet
@@ -1,12 +1,11 @@


By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/[email protected]/dist/), which should work out-of-the-box. You can customize this as follows:

By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/dist/), which should work out-of-the-box. You can customize this as follows:

### Settings

```javascript
import { env } from '@xenova/transformers';
import { env } from '@huggingface/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
@@ -22,7 +21,7 @@ For a full list of available settings, check out the [API Reference](./api/env).

### Convert your models to ONNX

We recommend using our [conversion script](https://github.com/xenova/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [πŸ€— Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.
We recommend using our [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [πŸ€— Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.

```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
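
After converting a model with the command above, loading the resulting ONNX files locally might look roughly like the sketch below; the path and the `my-model` folder name are illustrative assumptions, and `env.localModelPath` is the setting shown earlier in this snippet.

```javascript
import { pipeline, env } from '@huggingface/transformers';

// Point the library at the output folder of scripts/convert.py
// (path and folder name are illustrative assumptions).
env.localModelPath = '/path/to/models/';
env.allowRemoteModels = false; // only resolve models from the local path

const classifier = await pipeline('text-classification', 'my-model');
console.log(await classifier('Transformers.js v3 ships WebGPU support!'));
```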