
LocalAI API endpoint not working #129

Open
userbox020 opened this issue Sep 20, 2024 · 8 comments
Labels
bug Something isn't working

Comments

@userbox020

Which version of assistant are you using?

1.0

Which version of Nextcloud are you using?

v30

Which browser are you using? In case you are using the phone App, specify the Android or iOS version and device please.

Firefox and Google Latest

Describe the Bug

I have set up the LocalAI API endpoint and it works fine; Nextcloud even detects LocalAI's model list. But when I use any of the features to talk to the LLM, nothing is fetched and I get the following error:

GET /apps/assistant/chat/check_generation?taskId=20&sessionId=21 HTTP/1.1
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate, br, zstd
Accept-Language: en-US,en;q=0.9
Connection: keep-alive
Cookie: oc_sessionPassphrase=bgHjp8Tyr9s29ZBaTrH7iPAFHR1qe67PWMhdXBo9DKozrPQgxWP2lZyVfWrmlaIuyLddLrkLtacrstoEGQfwx55Sd%2BouuDeo5JZFoeezhAQpd6rLG4tFPCbiDbCQkMRy; nc_sameSiteCookielax=true; nc_sameSiteCookiestrict=true; ocj2thp9x9dg=a39aba2bb49eec84a8c772bb1b874c6f; nc_username=admin; nc_token=5H%2FNfg4NPPo%2BtewCKTl4%2FAtTRQQRviM9; nc_session_id=a39aba2bb49eec84a8c772bb1b874c6f
Host: localhost:8070
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36
X-Requested-With: XMLHttpRequest, XMLHttpRequest
requesttoken: OUgglgKesBwaxlFwTA2Fq4dbST6P5ssrktPuoLYErE4=:SQNNo3DJ6SR9nDQfCUPun98sE2n+sK9Pxb6jxsB1gw0=
sec-ch-ua: "Chromium";v="128", "Not;A=Brand";v="24", "Google Chrome";v="128"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"


HTTP/1.1 417 Expectation Failed
Date: Fri, 20 Sep 2024 09:07:03 GMT
Server: Apache/2.4.62 (Debian)
Referrer-Policy: no-referrer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Permitted-Cross-Domain-Policies: none
X-Robots-Tag: noindex, nofollow
X-XSS-Protection: 1; mode=block
X-Powered-By: PHP/8.2.23
Content-Security-Policy: default-src 'none';base-uri 'none';manifest-src 'self';frame-ancestors 'none'
X-Request-Id: 1CXMQ5oeDFUkY6qEm2b4
Cache-Control: no-cache, no-store, must-revalidate
Feature-Policy: autoplay 'none';camera 'none';fullscreen 'none';geolocation 'none';microphone 'none';payment 'none'
Content-Length: 17
Keep-Alive: timeout=5, max=66
Connection: Keep-Alive
Content-Type: application/json; charset=utf-8

{"task_status":1}

Expected Behavior

Interact with the LLM normally

To Reproduce

Setup LocalAI, input the URL in AI settings and interact with the AI

@userbox020 userbox020 added the bug Something isn't working label Sep 20, 2024
@userbox020
Author

[screenshot]

@userbox020
Author

[screenshot]

@userbox020
Author

 ChattyLLMInputForm.vue:574

{
  "stack": "X@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:33123\njt@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:44544\ng@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:49220\nEventHandlerNonNull*70715/Vt</<@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:49479\n70715/Vt<@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:48808\nhe@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:55714\n_request@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:58541\nrequest@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:57068\n70715/</we.prototype[t]@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:58836\n70715/o/<@http://localhost:8080/custom_apps/assistant/js/assistant-axios-lazy.js?v=fc9be637c540e816beb7:2:27421\n22974/pollGenerationTask/</this.pollMessageGenerationTimerId<@http://localhost:8080/custom_apps/assistant/js/assistant-assistant-modal-lazy.js?v=5858c70ce6cd3b1bf029:1:75703\nsetInterval 
handler*22974/pollGenerationTask/<@http://localhost:8080/custom_apps/assistant/js/assistant-assistant-modal-lazy.js?v=5858c70ce6cd3b1bf029:1:75674\npollGenerationTask@http://localhost:8080/custom_apps/assistant/js/assistant-assistant-modal-lazy.js?v=5858c70ce6cd3b1bf029:1:75619\nrunGenerationTask@http://localhost:8080/custom_apps/assistant/js/assistant-assistant-modal-lazy.js?v=5858c70ce6cd3b1bf029:1:74766\nasync*newMessage@http://localhost:8080/custom_apps/assistant/js/assistant-assistant-modal-lazy.js?v=5858c70ce6cd3b1bf029:1:73843\nasync*handleSubmit@http://localhost:8080/custom_apps/assistant/js/assistant-assistant-modal-lazy.js?v=5858c70ce6cd3b1bf029:1:70457\nasync*va@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:221685\na@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:214375\nva@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:221685\n85471/e.prototype.$emit@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:243546\nsubmit@http://localhost:8080/custom_apps/assistant/js/assistant-assistant-modal-lazy.js?v=5858c70ce6cd3b1bf029:1:67271\nva@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:221685\na@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:214375\nva@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:221685\n85471/e.prototype.$emit@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:243546\nonEnter@http://localhost:8080/custom_apps/assistant/js/assistant-vendors-node_modules_nextcloud_vue_dist_Components_NcAppNavigationNew_mjs-node_modules_nextcl-28eca6.js?v=8fe271131d67e837ec65:2:567606\n42796/R/<.on.keydown<@http://localhost:8080/custom_apps/assistant/js/assistant-vendors-node_modules_nextcloud_vue_dist_Components_NcAppNavigationNew_mjs-node_modules_nextcl-28eca6.js?v=8fe271131d67e837ec65:2:571115\nva@http://local
host:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:221685\na@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:214454\n85471/Do/s._wrapper@http://localhost:8080/custom_apps/assistant/js/assistant-main.js?v=06ec47aa-0:2:254591\n",
  "message": "Request failed with status code 417",
  "name": "AxiosError",
  "code": "ERR_BAD_REQUEST",
  "config": {
    "transitional": {
      "silentJSONParsing": true,
      "forcedJSONParsing": true,
      "clarifyTimeoutError": false
    },
    "adapter": [
      "xhr",
      "http",
      "fetch"
    ],
    "transformRequest": [
      null
    ],
    "transformResponse": [
      null
    ],
    "timeout": 0,
    "xsrfCookieName": "XSRF-TOKEN",
    "xsrfHeaderName": "X-XSRF-TOKEN",
    "maxContentLength": -1,
    "maxBodyLength": -1,
    "env": {},
    "headers": {
      "Accept": "application/json, text/plain, */*",
      "requesttoken": "tSslCxlJ4AS/CVIa0wZOUpo1uRefeiiG48WlInf+J0g=:mnFuOkMmrUjUXhhVvEAHBMIA+CX6CRDljbbXVAKWYhk=",
      "X-Requested-With": "XMLHttpRequest"
    },
    "params": {
      "taskId": 1,
      "sessionId": 1
    },
    "method": "get",
    "url": "/apps/assistant/chat/check_generation"
  },
  "request": {},
  "response": {
    "data": {
      "task_status": 1
    },
    "status": 417,
    "statusText": "Expectation Failed",
    "headers": {
      "cache-control": "no-cache, no-store, must-revalidate",
      "connection": "Keep-Alive",
      "content-length": "17",
      "content-security-policy": "default-src 'none';base-uri 'none';manifest-src 'self';frame-ancestors 'none'",
      "content-type": "application/json; charset=utf-8",
      "date": "Fri, 20 Sep 2024 21:04:42 GMT",
      "feature-policy": "autoplay 'none';camera 'none';fullscreen 'none';geolocation 'none';microphone 'none';payment 'none'",
      "keep-alive": "timeout=5, max=35",
      "referrer-policy": "no-referrer",
      "server": "Apache/2.4.62 (Debian)",
      "x-content-type-options": "nosniff",
      "x-frame-options": "SAMEORIGIN",
      "x-permitted-cross-domain-policies": "none",
      "x-powered-by": "PHP/8.2.23",
      "x-request-id": "uUTJzffketuPRiXStzqx",
      "x-robots-tag": "noindex, nofollow",
      "x-xss-protection": "1; mode=block"
    },
    "config": {
      "transitional": {
        "silentJSONParsing": true,
        "forcedJSONParsing": true,
        "clarifyTimeoutError": false
      },
      "adapter": [
        "xhr",
        "http",
        "fetch"
      ],
      "transformRequest": [
        null
      ],
      "transformResponse": [
        null
      ],
      "timeout": 0,
      "xsrfCookieName": "XSRF-TOKEN",
      "xsrfHeaderName": "X-XSRF-TOKEN",
      "maxContentLength": -1,
      "maxBodyLength": -1,
      "env": {},
      "headers": {
        "Accept": "application/json, text/plain, */*",
        "requesttoken": "tSslCxlJ4AS/CVIa0wZOUpo1uRefeiiG48WlInf+J0g=:mnFuOkMmrUjUXhhVvEAHBMIA+CX6CRDljbbXVAKWYhk=",
        "X-Requested-With": "XMLHttpRequest"
      },
      "params": {
        "taskId": 1,
        "sessionId": 1
      },
      "method": "get",
      "url": "/apps/assistant/chat/check_generation"
    },
    "request": {}
  },
  "status": 417
}
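The stack trace above comes from `pollGenerationTask` in the Assistant frontend, which polls `check_generation` on a timer. A minimal sketch of that polling pattern is below. The status constants and the `check()` callable are assumptions for illustration (the app's actual status mapping is not confirmed here); the key observed behavior is that a 417 response carrying `task_status` means "still scheduled", not a hard failure.

```python
import time

# Hypothetical task_status value, mirroring the numeric status seen in the
# responses above; the exact mapping is an assumption, not from the app.
TASK_SCHEDULED = 1

def poll_generation(check, interval=2.0, max_attempts=5):
    """Poll check() until the task leaves the 'scheduled' state.

    check() stands in for GET /apps/assistant/chat/check_generation and
    returns (http_status, payload). A 417 with a task_status payload is
    treated as 'not finished yet' rather than an error, matching the
    behavior reported in this thread.
    """
    for _ in range(max_attempts):
        status, payload = check()
        if status == 200:
            return payload            # generation finished
        if status == 417 and "task_status" in payload:
            time.sleep(interval)      # task still queued: keep polling
            continue
        raise RuntimeError(f"unexpected response: {status} {payload}")
    raise TimeoutError("task never left the scheduled state")
```

With the default 5-minute background cron (see the comments below), the task can sit in the scheduled state far longer than any reasonable client-side polling window, which matches the multi-minute delays reported here.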

@ctft1

ctft1 commented Sep 21, 2024

I can confirm something is also wrong with my LocalAI setup and Nextcloud. Everything works fine when using the LocalAI GUI, or even a curl API command: results come back really fast (1-2 seconds; I have 2x GPUs with CUDA enabled).

But when querying through the Assistant, the answer takes a very long time (usually at least 2-3 minutes). I watched LocalAI with debug enabled, and in the meantime LocalAI does... nothing. It just waits... and at some point returns the answer really quickly.

I also looked at the Nextcloud Assistant page, and I can see this 417 response doing nothing until it finally gets a 200 response (the top "new_message" is when the chat button is pressed to send the message).

[Screenshot 2024-09-21]

@ctft1

ctft1 commented Sep 21, 2024

I forgot to mention my versions:

  • Nextcloud v30.0.0 from AIO Docker on Ubuntu (fresh install)
  • Nextcloud Hub 9
  • Nextcloud Assistant v2.0.4
  • OpenAI and LocalAI integration v3.1.0

@ctft1

ctft1 commented Sep 21, 2024

I found more information about my issue.

The "Chat with AI" queries (and those from the other AI sections) are only processed once every 5 minutes (at 9:00 / 9:05 / 9:10 / ...), which explains the delay before getting an answer; this is of course not the expected behavior.

Can someone have a look and try to reproduce? Thank you.

@userbox020
Author

@ctft1 it's definitely a Nextcloud or Assistant app problem, because LocalAI works perfectly and integrates with any other framework through its API without any problem.
I hope we can get some help with this from the Nextcloud or Assistant developers before they release Assistant v2.

@kinimodmeyer

kinimodmeyer commented Oct 7, 2024

@ctft1 for tasks to be processed faster than every 5 minutes (which is your default cron job schedule), you need to keep the following running at all times:

occ background-job:worker 'OC\TaskProcessing\SynchronousBackgroundJob'

But I can confirm the Expectation Failed issue.
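One common way to keep that `occ` worker running permanently is a systemd service. The sketch below is an untested example: the unit name, file path, PHP binary path, Nextcloud install path, and `www-data` user are all assumptions that must be adapted to your installation (an AIO Docker setup in particular runs `occ` inside the container instead).

```ini
# /etc/systemd/system/nextcloud-taskprocessing.service  (path is an assumption)
[Unit]
Description=Nextcloud TaskProcessing background worker
After=network.target

[Service]
# Adjust the user and paths for your installation; these are assumptions.
User=www-data
ExecStart=/usr/bin/php /var/www/nextcloud/occ background-job:worker 'OC\TaskProcessing\SynchronousBackgroundJob'
Restart=always

[Install]
WantedBy=multi-user.target
```

After creating the unit, enable it with `systemctl enable --now nextcloud-taskprocessing.service` so AI tasks are picked up as soon as they are scheduled rather than on the next 5-minute cron run.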
