=======================================
Starting LightRAG Production System
=======================================
Checking for existing server on port 3015...
No existing server found on port 3015. Starting LightRAG server...
Starting LightRAG server with fixed encoding...
Command: C:\Program Files\Python311\python.exe -m lightrag.api.lightrag_server --port 3015 --host 0.0.0.0 --working-dir rag_storage --input-dir ../inputs --key jleu1212 --auto-scan-at-startup --llm-binding openai --embedding-binding ollama --rerank-binding jina
Starting server on http://0.0.0.0:3015
Server output:
--------------------------------------------------
WARNING:root:>> Forcing workers=1 in uvicorn mode (Ignoring workers=2)
DEBUG: Authentication disabled - using guest access only
DEBUG: Final accounts (disabled): {}
LightRAG log file: C:\aaWORK\railseek6\LightRAG-main\logs\lightrag.log

┌──────────────────────────────────────────────┐
│ LightRAG Server v1.4.8.1/0222                │
│ Fast, Lightweight RAG Server Implementation  │
└──────────────────────────────────────────────┘

Server Configuration:
  - Host: 0.0.0.0
  - Port: 3015
  - Workers: 1
  - Timeout: 300
  - CORS Origins: *
  - SSL Enabled: False
  - Ollama Emulating Model: lightrag:latest
  - Log Level: INFO
  - Verbose Debug: False
  - History Turns: 0
  - API Key: Set
  - JWT Auth: Disabled

Directory Configuration:
  - Working Directory: C:\aaWORK\railseek6\LightRAG-main\rag_storage
  - Input Directory: C:\aaWORK\railseek6\inputs

LLM Configuration:
  - Binding: openai
  - Host: https://api.openai.com/v1
  - Model: deepseek-chat
  - Max Async for LLM: 4
  - Summary Context Size: 12000
  - LLM Cache Enabled: True
  - LLM Cache for Extraction Enabled: True

Embedding Configuration:
  - Binding: ollama
  - Host: http://localhost:11434
  - Model: bge-m3:latest
  - Dimensions: 1024

RAG Configuration:
  - Summary Language: English
  - Entity Types: ['Person', 'Organization', 'Location', 'Event', 'Concept', 'Method', 'Content', 'Data', 'Artifact', 'NaturalObject']
  - Max Parallel Insert: 2
  - Chunk Size: 1200
  - Chunk Overlap Size: 100
  - Cosine Threshold: 0.2
  - Top-K: 40
  - Force LLM Summary on Merge: 8

Storage Configuration:
  - KV Storage: JsonKVStorage
  - Vector Storage: NanoVectorDBStorage
  - Graph Storage: NetworkXStorage
  - Document Status Storage: JsonDocStatusStorage
  - Workspace: -

Server starting up...

Server Access Information:
  - WebUI (local): http://localhost:3015
  - Remote Access: http://:3015
  - API Documentation (local): http://localhost:3015/docs
  - Alternative Documentation (local): http://localhost:3015/redoc

Note: Since the server is running on 0.0.0.0:
  - Use 'localhost' or '127.0.0.1' for local access
  - Use your machine's IP address for remote access
  - To find your IP address:
    - Windows: Run 'ipconfig' in terminal
    - Linux/Mac: Run 'ifconfig' or 'ip addr' in terminal

Security Notice:
  API Key authentication is enabled. Make sure to include the X-API-Key header in all your requests.

INFO: OpenAI LLM Options: {}
INFO: Ollama Embedding Options: {}
INFO: Reranking is enabled: jina-reranker-v2-base-multilingual using jina provider
INFO: [_] Loaded graph from C:\aaWORK\railseek6\LightRAG-main\rag_storage\graph_chunk_entity_relation.graphml with 616 nodes, 0 edges
INFO: Started server process [30776]
INFO: Waiting for application startup.
INFO: [_] Process 30776 KV load full_docs with 2 records
INFO: [_] Process 30776 KV load text_chunks with 24 records
INFO: [_] Process 30776 KV load full_entities with 2 records
INFO: [_] Process 30776 KV load full_relations with 0 records
INFO: [_] Process 30776 KV load llm_response_cache with 0 records
INFO: [_] Process 30776 doc status load doc_status with 2 records
INFO: Process 30776 auto scan task started at startup.
INFO: Found 0 files to index.
INFO: No upload file found, check if there are any documents in the queue...
INFO: Application startup complete.
INFO: No documents to process
INFO: Uvicorn running on http://0.0.0.0:3015 (Press CTRL+C to quit)
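The security notice in the log says every request to this server must carry the X-API-Key header. A minimal sketch of building such a request in Python, assuming the POST /query endpoint and {"query", "mode"} body shape used by recent LightRAG API servers (confirm against http://localhost:3015/docs on your instance); the request is only constructed here, not sent:

```python
import json
import urllib.request

# Values taken from the startup log: --key jleu1212, port 3015.
API_KEY = "jleu1212"
BASE_URL = "http://localhost:3015"

# Request body for the (assumed) /query endpoint; "mode" selects the
# retrieval strategy (e.g. "naive", "local", "global", "hybrid").
payload = json.dumps({
    "query": "What entities are in the graph?",
    "mode": "hybrid",
}).encode("utf-8")

req = urllib.request.Request(
    f"{BASE_URL}/query",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "X-API-Key": API_KEY,  # required: an API key is set on this server
    },
    method="POST",
)

# To actually send it (the server must be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```

The same call can be made with any HTTP client; the only requirement the log imposes is the X-API-Key header, since JWT auth is disabled but an API key is set.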