Update readme #401

Merged · 10 commits · Jan 5, 2024
15 changes: 12 additions & 3 deletions README.md
```diff
@@ -1,6 +1,11 @@
-<a href="https://explosion.ai"><img src="https://explosion.ai/assets/img/logo.svg" width="125" height="125" align="right" /></a>
+<a href="https://explosion.ai"><img src="assets/logo.png" width="125" height="125" align="left" style="margin-right:30px" /></a>
 
-# spacy-llm: Integrating LLMs into structured NLP pipelines
+<h1 align="center">
+<span style="font: bold 38pt'Courier New';">spacy-llm</span>
+<br>Structured NLP with LLMs
+</h1>
+<br><br>
 
 [![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/explosion/spacy-llm/test.yml?branch=main)](https://github.com/explosion/spacy-llm/actions/workflows/test.yml)
 [![pypi Version](https://img.shields.io/pypi/v/spacy-llm.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/spacy-llm/)
@@ -16,7 +21,8 @@
 - **[OpenAI](https://platform.openai.com/docs/api-reference/)**
 - **[Cohere](https://docs.cohere.com/reference/generate)**
 - **[Anthropic](https://docs.anthropic.com/claude/reference/)**
-- **[PaLM](https://ai.google/discover/palm2/)**
+- **[Google PaLM](https://ai.google/discover/palm2/)**
+- **[Microsoft Azure AI](https://azure.microsoft.com/en-us/solutions/ai)**
 - Supports open-source LLMs hosted on Hugging Face 🤗:
   - **[Falcon](https://huggingface.co/tiiuae)**
   - **[Dolly](https://huggingface.co/databricks)**
@@ -33,10 +39,13 @@
   - Sentiment analysis
   - Span categorization
   - Summarization
+  - Entity linking
+  - Translation
+  - Raw prompt execution for maximum flexibility
 - Soon:
-  - Entity linking
   - Semantic role labeling
 - Easy implementation of **your own functions** via [spaCy's registry](https://spacy.io/api/top-level#registry) for custom prompting, parsing and model integrations
+- Map-reduce approach for splitting prompts too long for LLM's context window and fusing the results back together
 
 ## 🧠 Motivation
```
Binary file added assets/logo.png
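The task and model support advertised in the README changes above is driven by spaCy's config system: a pipeline declares an `llm` component whose task and model are picked from the package's registries. A minimal sketch of such a config follows; the registry names shown (`spacy.NER.v3`, `spacy.GPT-4.v2`) and the label set are illustrative and should be checked against the spacy-llm documentation for the installed version.

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

# The task defines the prompt and how responses are parsed back into Doc objects.
[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORG", "LOCATION"]

# The model defines which API (or Hugging Face model) serves the prompts.
[components.llm.model]
@llm_models = "spacy.GPT-4.v2"
```

A config like this is typically loaded with `spacy_llm.util.assemble("config.cfg")`, per the package README, with API credentials supplied via environment variables.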
2 changes: 1 addition & 1 deletion spacy_llm/models/rest/azure/model.py
```diff
@@ -35,7 +35,7 @@ def __init__(
         self._deployment_name = deployment_name
         super().__init__(
             name=name,
-            endpoint=endpoint,
+            endpoint=endpoint or endpoint,
             config=config,
             strict=strict,
             max_tries=max_tries,
```
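The Azure change above concerns how the model resolves its endpoint at construction time. The usual idiom for an optional endpoint parameter is a fallback via `or`; note that `endpoint or endpoint` as written in the diff is a no-op, since a fallback only has an effect when the right-hand side differs from the left. The sketch below is a standalone illustration of the idiom, not spacy-llm's actual code: the function name and default URL are invented (real Azure OpenAI endpoints are per-resource).

```python
# Hypothetical default for illustration only; Azure OpenAI endpoints
# are specific to each deployed resource in practice.
DEFAULT_ENDPOINT = "https://example-resource.openai.azure.com/openai/deployments"


def resolve_endpoint(endpoint=None):
    """Return the caller-supplied endpoint, falling back to the default.

    `endpoint or DEFAULT_ENDPOINT` treats both None and the empty
    string as "not supplied".
    """
    return endpoint or DEFAULT_ENDPOINT
```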
4 changes: 2 additions & 2 deletions spacy_llm/tasks/textcat/task.py
```diff
@@ -83,8 +83,8 @@ def __init__(
         self._scorer = scorer
 
         if self._use_binary and not self._exclusive_classes:
-            msg.warn(
-                "Binary classification should always be exclusive. Setting "
+            msg.info(
+                "Detected binary classification: setting "
                 "the `exclusive_classes` parameter to True."
             )
             self._exclusive_classes = True
```
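The textcat change downgrades the message from a warning to an informational note: coercing `exclusive_classes` to `True` under binary classification is expected behaviour, not a misconfiguration. A standalone sketch of that coercion logic (the function name is illustrative; the real check lives in the task's `__init__` and logs via wasabi's `msg`):

```python
def resolve_exclusive_classes(use_binary: bool, exclusive_classes: bool) -> bool:
    """Binary classification implies mutually exclusive classes:
    with a single positive/negative decision, labels cannot co-occur."""
    if use_binary and not exclusive_classes:
        # Informational rather than a warning, mirroring the PR's change.
        print("Detected binary classification: setting "
              "the `exclusive_classes` parameter to True.")
        return True
    return exclusive_classes
```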
2 changes: 1 addition & 1 deletion spacy_llm/tests/models/test_anthropic.py
```diff
@@ -13,7 +13,7 @@
 def test_anthropic_api_response_is_correct():
     """Check if we're getting the expected response and we're parsing it properly"""
     anthropic = Anthropic(
-        name="claude-instant-1",
+        name="claude-2.1",
         endpoint=Endpoints.COMPLETIONS.value,
         config={"max_tokens_to_sample": 10},
         strict=False,
```
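The test above pins the model name used against Anthropic's legacy completions endpoint, whose responses carry the generated text in a `completion` field alongside `stop_reason` and `model`. A hedged sketch of the parsing step such a test exercises; the helper function and the sample payload are illustrative, not spacy-llm's actual implementation.

```python
def parse_completion(response: dict) -> str:
    """Pull the generated text out of an Anthropic completions-style payload."""
    return response["completion"].strip()


# Payload shape based on Anthropic's documented legacy completions response.
sample = {
    "completion": " Hello! How can I help you today?",
    "stop_reason": "stop_sequence",
    "model": "claude-2.1",
}
print(parse_completion(sample))  # -> "Hello! How can I help you today?"
```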