
Bump huggingface-hub from 0.12.1 to 0.16.4 #336

Closed
wants to merge 1 commit from dependabot/pip/huggingface-hub-0.16.4

Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub on Jul 10, 2023

Bumps huggingface-hub from 0.12.1 to 0.16.4.

Release notes

Sourced from huggingface-hub's releases.

v0.16.3: Hotfix - More verbose ConnectionError

Full Changelog: huggingface/huggingface_hub@v0.16.2...v0.16.3

Hotfix to print the request ID if any RequestException happens. This is useful to help the team debug users' problems. The request ID is a generated UUID, unique to each HTTP call made to the Hub.

Check out these release notes to learn more about the v0.16 release.

v0.16.2: Inference, CommitScheduler and Tensorboard

Inference

Introduced in the v0.15 release, the InferenceClient received a big update in this one. The client is now reaching a stable point in terms of features. The next updates will focus on continuing to add support for new tasks.

Async client

Asyncio calls are supported thanks to AsyncInferenceClient. Based on asyncio and aiohttp, it allows you to make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by InferenceClient is also supported in its async version. Method inputs, outputs, and logic are strictly the same, except that you must await the coroutines.

>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
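Outside of an async REPL, the coroutine has to be driven by an event loop. A minimal sketch of the same call wrapped in asyncio.run (the script structure and the saved filename are illustrative, not part of the release notes):

import asyncio

from huggingface_hub import AsyncInferenceClient


async def main() -> None:
    # Same API as InferenceClient, but every task method is a coroutine.
    client = AsyncInferenceClient()
    image = await client.text_to_image("An astronaut riding a horse on the moon.")
    image.save("astronaut.png")  # text_to_image returns a PIL image


asyncio.run(main())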

Text-generation

Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference framework. In fact, the code is heavily inspired by TGI's Python client initially implemented by @OlivierDehaene.

Text generation has 4 modes depending on the details (bool) and stream (bool) values. By default, a raw string is returned. If details=True, more information about the generated tokens is returned. If stream=True, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation. Two of the four modes are shown in the excerpt below; a sketch of the other two follows it.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
id=25,
text='.',
logprob=-0.5703125,
special=False),
generated_text='100% open source and built to be easy to use.',

... (truncated)
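The excerpt above covers two of the four combinations. As a rough sketch of the remaining two, assuming the huggingface_hub 0.16 InferenceClient API described above (the outputs shown are illustrative):

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# stream=False, details=True: a single response object with token-level details
>>> output = client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
>>> output.generated_text
'100% open source and built to be easy to use.'
>>> output.details.generated_tokens
12

# stream=True, details=False: an iterator of raw token strings
>>> for token in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True):
...     print(token, end="")
100% open source and built to be easy to use.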

Commits

Dependabot compatibility score

You can trigger a rebase of this PR by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Note
Automatic rebases have been disabled on this pull request as it has been open for over 30 days.

Bumps [huggingface-hub](https://github.com/huggingface/huggingface_hub) from 0.12.1 to 0.16.4.
- [Release notes](https://github.com/huggingface/huggingface_hub/releases)
- [Commits](huggingface/huggingface_hub@v0.12.1...v0.16.4)

---
updated-dependencies:
- dependency-name: huggingface-hub
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies and python labels on Jul 10, 2023
dependabot[bot] (Contributor, Author) commented on behalf of GitHub on Oct 23, 2023

Superseded by #350.

dependabot[bot] closed this on Oct 23, 2023
dependabot[bot] deleted the dependabot/pip/huggingface-hub-0.16.4 branch on October 23, 2023 19:29