Unable to Upload Data in Neo4j LLM Builder (Read-Only Mode) #996

Status: Closed
ALKASERAM opened this issue Jan 8, 2025 · 4 comments
Labels: duplicate (This issue or pull request already exists)
ALKASERAM commented Jan 8, 2025

Hi there,

I'm trying to use Neo4j LLM Builder with my local Ollama (llama3) setup and Neo4j Community Edition, both running in Docker containers. Here's what I've done so far:

  1. I started up the frontend and backend of the LLM Builder, as well as the Neo4j instance. Everything is up and running.
  2. I connected to my local Neo4j instance from the frontend, and the connection is successful.

However, after connecting, I noticed the following issues:

  • At the top left, there's a header that says:
    Neo4j connection (Read-only Mode)
    bolt://neo4j_empty:7687
    No Graph Schema configured
    
  • The left panel for uploading data doesn't appear. Instead, I see this message:
    This user account does not have permission to access or manage data sources.
    

I tested the same Neo4j user account in the Neo4j browser, and it works perfectly—I can create nodes, run queries, and perform other operations without any issues.

I've been troubleshooting this for two days and can't seem to resolve it. Below, I've included my .env and docker-compose files for both LLM Builder and Neo4j.

Any help would be greatly appreciated! 🙏

(Screenshot attached: 2025-01-08 16-48-09)

env file:
# Optional Backend
EMBEDDING_MODEL="all-MiniLM-L6-v2"
IS_EMBEDDING="true"
KNN_MIN_SCORE="0.94"
# Enable Gemini (default is False) | Can be False or True
GEMINI_ENABLED=False
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434"

# Enable Google Cloud logs (default is False) | Can be False or True
GCP_LOG_METRICS_ENABLED=False
NUMBER_OF_CHUNKS_TO_COMBINE=6
UPDATE_GRAPH_CHUNKS_PROCESSED=20
NEO4J_URI="neo4j://neo4j_empty:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="devpassword"
LANGCHAIN_API_KEY=""
LANGCHAIN_PROJECT=""
LANGCHAIN_TRACING_V2="true"
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
GCS_FILE_CACHE=False
ENTITY_EMBEDDING=True

# Optional Frontend
VITE_BACKEND_API_URL="http://localhost:8000"
VITE_BLOOM_URL="https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph&featureGenAISuggestions=true&featureGenAISuggestionsInternal=true"
VITE_REACT_APP_SOURCES="local,youtube,wiki,s3,web"
VITE_ENV="DEV"
VITE_TIME_PER_PAGE=50
VITE_CHUNK_SIZE=5242880
VITE_GOOGLE_CLIENT_ID=""
VITE_CHAT_MODES=""
VITE_BATCH_SIZE=2
VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"

neo4j compose file:
services:
  neo4j_empty:
    image: neo4j:latest
    volumes:
      - ./neo4j_empty/logs:/logs
      - ./neo4j_empty/config:/config
      - ./neo4j_empty/data:/data
      - ./neo4j_empty/plugins:/plugins
    environment:
      - NEO4J_AUTH=neo4j/devpassword
      - NEO4JLABS_PLUGINS=["apoc"]
      - NEO4J_dbms_security_procedures_unrestricted=apoc.*
      - NEO4J_dbms_security_procedures_allowlist=apoc.*
      - NEO4J_dbms_mode=SINGLE
      - NEO4J_dbms_default_database=neo4j

    ports:
      - "7479:7474"
      - "7689:7687"
    restart: always
    networks:
      - neo4j_net

networks:
  neo4j_net:
    external: true
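Since the two compose files share the external `neo4j_net` network, the backend should reach Neo4j at the service name `neo4j_empty` on the in-network Bolt port 7687 (the `7689:7687` mapping only applies on the host). A minimal sketch for checking that from inside the backend container, using only the standard library (the host and port here are taken from the compose files above; this is a plain TCP reachability probe, not a Bolt handshake):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, DNS failure, and timeouts.
        return False

# Run inside the backend container, where Docker DNS resolves the
# service name on the shared neo4j_net network:
#   can_reach("neo4j_empty", 7687)
```

If this returns False from inside the container, the problem is networking rather than permissions.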

llm builder:
version: "3"

services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    volumes:
      - ./backend:/code
    environment:
      - NEO4J_URI=${NEO4J_URI-neo4j://database:7687}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD-password}
      - NEO4J_USERNAME=${NEO4J_USERNAME-neo4j}
      - OPENAI_API_KEY=${OPENAI_API_KEY-}
      - DIFFBOT_API_KEY=${DIFFBOT_API_KEY-}
      - EMBEDDING_MODEL=${EMBEDDING_MODEL-all-MiniLM-L6-v2}
      - LANGCHAIN_ENDPOINT=${LANGCHAIN_ENDPOINT-}
      - LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2-}
      - LANGCHAIN_PROJECT=${LANGCHAIN_PROJECT-}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY-}
      - KNN_MIN_SCORE=${KNN_MIN_SCORE-0.94}
      - IS_EMBEDDING=${IS_EMBEDDING-true}
      - GEMINI_ENABLED=${GEMINI_ENABLED-False}
      - GCP_LOG_METRICS_ENABLED=${GCP_LOG_METRICS_ENABLED-False}
      - UPDATE_GRAPH_CHUNKS_PROCESSED=${UPDATE_GRAPH_CHUNKS_PROCESSED-20}
      - NUMBER_OF_CHUNKS_TO_COMBINE=${NUMBER_OF_CHUNKS_TO_COMBINE-6}
      - ENTITY_EMBEDDING=${ENTITY_EMBEDDING-False}
      - GCS_FILE_CACHE=${GCS_FILE_CACHE-False}
#      - LLM_MODEL_CONFIG_anthropic_claude_35_sonnet=${LLM_MODEL_CONFIG_anthropic_claude_35_sonnet-}
#      - LLM_MODEL_CONFIG_fireworks_llama_v3_70b=${LLM_MODEL_CONFIG_fireworks_llama_v3_70b-}
#      - LLM_MODEL_CONFIG_azure_ai_gpt_4o=${LLM_MODEL_CONFIG_azure_ai_gpt_4o-}
#      - LLM_MODEL_CONFIG_azure_ai_gpt_35=${LLM_MODEL_CONFIG_azure_ai_gpt_35-}
#      - LLM_MODEL_CONFIG_groq_llama3_70b=${LLM_MODEL_CONFIG_groq_llama3_70b-}
#      - LLM_MODEL_CONFIG_bedrock_claude_3_5_sonnet=${LLM_MODEL_CONFIG_bedrock_claude_3_5_sonnet-}
#     - LLM_MODEL_CONFIG_fireworks_qwen_72b=${LLM_MODEL_CONFIG_fireworks_qwen_72b-}
      - LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-}
    # env_file:
    #   - ./backend/.env
    container_name: backend
    extra_hosts:
      - host.docker.internal:host-gateway
    ports:
      - "8000:8000"
    networks:
      - net
      - neo4j_net
  frontend:
    depends_on:
      - backend
    build:
      context: ./frontend
      dockerfile: Dockerfile
      args:
        - VITE_BACKEND_API_URL=${VITE_BACKEND_API_URL-http://localhost:8000}
        - VITE_REACT_APP_SOURCES=${VITE_REACT_APP_SOURCES-local,wiki,s3}
        - VITE_GOOGLE_CLIENT_ID=${VITE_GOOGLE_CLIENT_ID-}
        - VITE_BLOOM_URL=${VITE_BLOOM_URL-https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph&featureGenAISuggestions=true&featureGenAISuggestionsInternal=true}
        - VITE_TIME_PER_PAGE=${VITE_TIME_PER_PAGE-50}
        - VITE_CHUNK_SIZE=${VITE_CHUNK_SIZE-5242880}
        - VITE_LARGE_FILE_SIZE=${VITE_LARGE_FILE_SIZE-5242880}
        - VITE_ENV=${VITE_ENV-DEV}
        - VITE_CHAT_MODES=${VITE_CHAT_MODES-}
        - VITE_BATCH_SIZE=${VITE_BATCH_SIZE-2}
        - VITE_LLM_MODELS=${VITE_LLM_MODELS-}
        - VITE_LLM_MODELS_PROD=${VITE_LLM_MODELS_PROD-openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash}
        - DEPLOYMENT_ENV=local
    volumes:
      - ./frontend:/app
      - /app/node_modules
    #env_file:
    #  - ./frontend/.env
    container_name: frontend
    ports:
      - "8080:8080"
    networks:
      - net
      - neo4j_net

networks:
  net:
  neo4j_net:
    external: true
kartikpersistent (Collaborator) commented:

Hi @ALKASERAM, you can refer to issue #839 for context; we will be removing the role check for the Community Edition soon.

kartikpersistent added the "duplicate" label Jan 9, 2025
ALKASERAM (Author) commented:

Thanks @kartikpersistent. I tried the workaround (by returning false in the exception), and it no longer shows read-only mode.

kartikpersistent (Collaborator) commented:

You're welcome. If there are no remaining issues, you can close this issue.

ALKASERAM (Author) commented:

At the moment I'm using a workaround: returning true in the check_account_access exception handler at backend/src/graphDB_dataAccess.py.
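The workaround described above can be sketched as follows. This is a hypothetical simplification, not the actual code from backend/src/graphDB_dataAccess.py: `run_privilege_query` is a stand-in for whatever role/privilege query the real check runs, which fails on Community Edition because those procedures are Enterprise-only. The change is simply that the exception handler returns True (write access) instead of falling back to read-only mode:

```python
def check_account_access(run_privilege_query):
    """Return True if the account may manage data sources.

    run_privilege_query stands in for the real role/privilege check;
    on Neo4j Community Edition it raises because the underlying
    procedures do not exist.
    """
    try:
        return bool(run_privilege_query())
    except Exception:
        # Workaround: when the privilege check itself is unsupported
        # (Community Edition), assume write access instead of
        # dropping into read-only mode.
        return True
```

This trades safety for convenience: a genuinely read-only Enterprise user whose check errors out would also be treated as writable, which is why the maintainers plan to remove the role check for Community Edition properly.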

ALKASERAM closed this as not planned (duplicate) Jan 9, 2025