
Releases: langgenius/dify

v0.4.1

03 Jan 02:07
4de27d0

What's Changed

Full Changelog: 0.4.0...0.4.1

v0.4.0

02 Jan 16:20
d70d61b

🎉🎉 Dify's Version 0.4 is out now.

We've made some serious under-the-hood changes to how the Model Runtime works, making it more straightforward for our specific needs, and paving the way for smoother model expansions and more robust production use.

What's Changed

  • Model Runtime Rework: We've moved away from LangChain, simplifying the model layer. Now, expanding models is as easy as setting up the model provider in the backend with a bit of YAML.

    For more details, see: https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md
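
    As a rough illustration of the YAML-driven approach, a model definition might look like the following (field names are illustrative, drawn from the style of the linked README, and may not match the current schema exactly):

    # Illustrative model definition for a provider (field names may differ)
    model: my-fine-tuned-gpt
    label:
      en_US: My Fine-tuned GPT
    model_type: llm
    model_properties:
      mode: chat
      context_size: 16384
    parameter_rules:
      - name: temperature
        type: float
        default: 1.0
        min: 0.0
        max: 2.0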

  • App Generation Update: Replacing the old Redis Pubsub queue with threading.Queue for a more reliable, performant, and straightforward workflow.
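    The queue swap above can be sketched with Python's standard library: a worker thread streams generated chunks into a threading.Queue while the request handler consumes them. This is a minimal illustration under assumed names, not Dify's actual implementation.

    ```python
    import threading
    import queue

    def generate_chunks(q: queue.Queue) -> None:
        # Worker thread: push generated chunks, then a sentinel to signal completion.
        for chunk in ("Hello", ", ", "world"):
            q.put(chunk)
        q.put(None)  # sentinel: generation finished

    def stream_response() -> str:
        q: queue.Queue = queue.Queue()
        worker = threading.Thread(target=generate_chunks, args=(q,), daemon=True)
        worker.start()
        parts = []
        while True:
            chunk = q.get()  # blocks until the worker produces the next chunk
            if chunk is None:
                break
            parts.append(chunk)
        worker.join()
        return "".join(parts)
    ```

    Unlike a Redis Pubsub channel, an in-process queue cannot drop messages between publisher and subscriber, which is what makes this approach more reliable for single-process streaming.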

  • Model Providers Upgraded: Support for both preset and custom models, ideal for adding OpenAI fine-tuned models or fitting into various MaaS platforms. Plus, you can now check out supported models without any initial configuration.

  • Context Size Definition: Introduced distinct context size settings, separate from Max Tokens, to handle the different limits and sizes in models like OpenAI's GPT-4 Turbo.
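    The distinction matters because the prompt budget is what remains of the context window after reserving Max Tokens for the completion. A small sketch of that arithmetic, using hypothetical numbers in the style of GPT-4 Turbo's limits:

    ```python
    def prompt_token_budget(context_size: int, max_tokens: int) -> int:
        """Tokens left for the prompt after reserving room for the completion."""
        if max_tokens >= context_size:
            raise ValueError("max_tokens must be smaller than the context size")
        return context_size - max_tokens

    # e.g. a 128k-context model capped at 4,096 completion tokens
    budget = prompt_token_budget(context_size=128_000, max_tokens=4_096)
    ```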

  • Flexible Model Parameters: Customize your model's behavior with easily adjustable parameters through YAML.

  • GPT-2 Tokenizer Files: Now cached within Dify's codebase, making builds quicker and solving issues related to acquiring tokenizer files in offline source deployments.

  • Model List Display: The App now displays all supported preset models, including details on any that aren't available and how to configure them.

  • New Model Additions: Including Google's Gemini Pro and Gemini Pro Vision models (Vision requires an image input), Azure OpenAI's GPT-4V, and support for OpenAI-API-compatible providers.

  • Expanded Inference Support: Xorbits Inference now includes chat mode models, and a wider range of models support Agent inference.

  • Updates & Fixes: We've updated other model providers to be in sync with the latest version APIs and features, and squashed a series of minor bugs for a smoother experience.

Catch you in the code,

The Dify Team 🛠️

Change Log

New Contributors

Full Changelog: 0.3.34...0.4.0

v0.3.34

19 Dec 06:22
43741ad

Features

  • Annotation Reply, see details: Link
  • Dify Knowledge supports unstructured.io as the file extraction solution.
  • Azure OpenAI adds support for the gpt-4-1106-preview and gpt-4-vision-preview models.
  • SaaS plans now support replacing the WebApp logo after subscribing.

Important Upgrade Notice

  • Annotation Reply

    The annotation feature now supports direct replies to related questions, so values must be backfilled for previously unstored questions in the message_annotations table.

    Run the following commands in your API Docker container:

    docker exec -it docker-api-1 bash
    flask add-annotation-question-field-value
    

    Or, if you launched from source code, run the following commands directly:

    cd api
    flask add-annotation-question-field-value
    
  • Unstructured.io Support

    With this feature, we have added four new text parsing formats (msg, eml, ppt, pptx) and optimized two existing formats (text, markdown) in our SaaS environment.

    For self-hosted deployments, take the following steps to enable unstructured.io support:

    Unstructured Document

    1. Pull the image from unstructured's image repository:
    docker pull downloads.unstructured.io/unstructured-io/unstructured-api:latest
    
    2. Once pulled, launch the container:
    docker run -d --rm --name unstructured-api downloads.unstructured.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0
    
    3. In docker-compose.yaml, add two new environment variables to the api and worker services:
    ETL_TYPE=Unstructured
    UNSTRUCTURED_API_URL=http://unstructured:8000/general/v0/general
    
    4. Restart Dify's services:
    docker-compose up -d
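
    To illustrate what the two environment variables enable, here is a hedged sketch of routing a file to the Unstructured API by extension; the helper and its routing logic are hypothetical, not Dify's code:

    ```python
    # Formats handled by the Unstructured API per the notes above (illustrative set)
    UNSTRUCTURED_FORMATS = {"msg", "eml", "ppt", "pptx"}

    def pick_extractor(filename: str, etl_type: str = "dify") -> str:
        """Return which extraction backend a file would be routed to."""
        ext = filename.rsplit(".", 1)[-1].lower()
        if etl_type == "Unstructured" and ext in UNSTRUCTURED_FORMATS:
            # Such a file would be POSTed to UNSTRUCTURED_API_URL,
            # e.g. http://unstructured:8000/general/v0/general
            return "unstructured"
        return "builtin"
    ```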
    

What's Changed

New Contributors

Full Changelog: 0.3.33...0.3.34

v0.3.33

08 Dec 05:30
bee0d12

What's Changed

New Contributors

Full Changelog: 0.3.32...0.3.33

v0.3.32

25 Nov 08:46
3cc6978

New Features

  • Added support for Xinference rerank models for local deployment, such as bge-reranker-large and bge-reranker-base.
  • ChatGLM2/3 support

⚠️ Breaking Change

We've recently switched the ChatGLM provider to the OpenAI API protocol, so from now on we'll only support ChatGLM2 and ChatGLM3. Support for ChatGLM v1 has been deprecated.

What's Changed

New Contributors

Full Changelog: 0.3.31...0.3.32

v0.3.31-fix3

22 Nov 11:46
ea35f1d

Major Fixes

  • Fix: Hybrid search recall fails when the chosen dataset contains no valid documents.

Important Notice

Before upgrading to 0.3.31, please ensure that all steps outlined in the Important Upgrade Notice for the 0.3.31 release are completed: https://github.com/langgenius/dify/releases/tag/0.3.31

What's Changed

New Contributors

Full Changelog: 0.3.31-fix2...0.3.31-fix3

v0.3.31-fix2

21 Nov 17:54
caa330c

New Features

  • Add Anthropic claude-2.1 support.

Major Fixes

  • fix: white page problem in Safari.

Important Notice

Before upgrading to 0.3.31, please ensure that all steps outlined in the Important Upgrade Notice for the 0.3.31 release are completed: https://github.com/langgenius/dify/releases/tag/0.3.31

What's Changed

Full Changelog: 0.3.31-fix1...0.3.31-fix2

v0.3.31-fix1

21 Nov 09:36
778cfb3

Releasing a new version due to some issues that made the previous one unavailable.

Important Notice

Before upgrading to 0.3.31, please ensure that all steps outlined in the Important Upgrade Notice for the 0.3.31 release are completed: https://github.com/langgenius/dify/releases/tag/0.3.31

What's Changed

Full Changelog: 0.3.31...0.3.31-fix1

v0.3.31

21 Nov 07:59
d5acfaa

New Features

Important Upgrade Notice

The current version adds full-text search and hybrid search for datasets, significantly improving retrieval quality. To use this feature, you need to make the following configurations:

  • If the vector database you are using is Weaviate
    Please update your version to v1.19.0 or later in /dify/docker/docker-compose.yaml.
    For more details, please refer to the Weaviate release list.
  weaviate:
    image: semitechnologies/weaviate:1.19.0
    restart: always
    volumes:
      # Mount the Weaviate data directory to the container.
      - ./volumes/weaviate:/var/lib/weaviate
    environment:
      # The Weaviate configuration
      # See the Weaviate docs (https://weaviate.io/developers/weaviate/config-refs/env-vars) for more information.
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'none'
      CLUSTER_HOSTNAME: 'node1'
      AUTHENTICATION_APIKEY_ENABLED: 'true'
      AUTHENTICATION_APIKEY_ALLOWED_KEYS: 'WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih'
      AUTHENTICATION_APIKEY_USERS: '[email protected]'
      AUTHORIZATION_ADMINLIST_ENABLED: 'true'
      AUTHORIZATION_ADMINLIST_USERS: '[email protected]'
  • If the vector database you are using is Qdrant
    1. Dify's full-text index relies on the multilingual tokenizer in Qdrant's full-text index; by default, Qdrant does not support Chinese, Japanese, or Korean tokenization.
    2. We therefore provide a Qdrant image with full-language tokenizers, langgenius/qdrant:latest, which can be deployed and used directly. You can also build your own Qdrant image using the following command:
    docker buildx build . --build-arg FEATURES="multiling-chinese,multiling-japanese,multiling-korean" --tag=langgenius/qdrant
    
    3. Because previously stored Qdrant data lacks a full-text index, execute the following commands in the api container to complete the process:
      1. Start a bash shell inside the running Docker container:
        docker exec -it docker-api-1 bash
      
      2. Create the full-text index:
        flask add-qdrant-full-text-index
      
  • Milvus/Zilliz does not yet support hybrid search.
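
As background on what hybrid search combines, here is a minimal sketch of fusing keyword (full-text) and vector (semantic) scores with a weighted sum. This is illustrative only: the actual recall pipeline lives in Dify and the vector database, and the weighting scheme and names are hypothetical.

```python
def hybrid_score(keyword_score: float, vector_score: float, alpha: float = 0.5) -> float:
    """Weighted fusion of full-text (keyword) and semantic (vector) relevance."""
    return alpha * keyword_score + (1 - alpha) * vector_score

def rank(docs):
    # docs: list of (doc_id, keyword_score, vector_score); highest fused score first
    return sorted(docs, key=lambda d: hybrid_score(d[1], d[2]), reverse=True)
```

A document that matches the query terms exactly but is semantically distant can still surface, which is why hybrid search improves recall over either method alone.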

What's Changed

New Contributors

Full Changelog: 0.3.30...0.3.31

v0.3.30

13 Nov 15:20
8835435

New Features

  • Apps now support image uploads and can call OpenAI's GPT-4V (gpt-4-vision-preview), which allows the model to take in images and answer questions about them.
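
A vision request differs from a plain text request in the shape of the message content: a list mixing text and image_url parts. A minimal sketch of building that payload follows; the content-part field names follow OpenAI's chat API, while the helper itself is hypothetical:

```python
def build_vision_message(question: str, image_url: str) -> dict:
    """Build a chat message mixing text and an image for a vision-capable model."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# Such a message would be sent with model="gpt-4-vision-preview"
msg = build_vision_message("What is in this picture?", "https://example.com/cat.png")
```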

What's Changed

Full Changelog: 0.3.29...0.3.30