=======================================
  Starting LightRAG Production System
=======================================

Checking for existing server on port 3015...
No existing server found on port 3015.
Starting LightRAG server...
Starting LightRAG server with fixed encoding...
Command: C:\Program Files\Python311\python.exe -m lightrag.api.lightrag_server --port 3015 --host 0.0.0.0 --working-dir rag_storage --input-dir ../inputs --key jleu1212 --auto-scan-at-startup --llm-binding openai --embedding-binding ollama --rerank-binding jina
Starting server on http://0.0.0.0:3015

Server output:
--------------------------------------------------
WARNING:root:>> Forcing workers=1 in uvicorn mode (ignoring workers=2)
DEBUG: Authentication disabled - using guest access only
DEBUG: Final accounts (disabled): {}

LightRAG log file: C:\aaWORK\railseek6\LightRAG-main\logs\lightrag.log
╔════════════════════════════════════════════════════════════════╗
║                 LightRAG Server v1.4.8.1/0222                  ║
║          Fast, Lightweight RAG Server Implementation           ║
╚════════════════════════════════════════════════════════════════╝
Server Configuration:
    Host: 0.0.0.0
    Port: 3015
    Workers: 1
    Timeout: 300
    CORS Origins: *
    SSL Enabled: False
    Ollama Emulating Model: lightrag:latest
    Log Level: INFO
    Verbose Debug: False
    History Turns: 0
    API Key: Set
    JWT Auth: Disabled
Directory Configuration:
    Working Directory: C:\aaWORK\railseek6\LightRAG-main\rag_storage
    Input Directory: C:\aaWORK\railseek6\inputs
LLM Configuration:
    Binding: openai
    Host: https://api.openai.com/v1
    Model: deepseek-chat
    Max Async for LLM: 4
    Summary Context Size: 12000
    LLM Cache Enabled: True
    LLM Cache for Extraction Enabled: True
Embedding Configuration:
    Binding: ollama
    Host: http://localhost:11434
    Model: bge-m3:latest
    Dimensions: 1024
RAG Configuration:
    Summary Language: English
    Entity Types: ['Person', 'Organization', 'Location', 'Event', 'Concept', 'Method', 'Content', 'Data', 'Artifact', 'NaturalObject']
    Max Parallel Insert: 2
    Chunk Size: 1200
    Chunk Overlap Size: 100
    Cosine Threshold: 0.2
    Top-K: 40
    Force LLM Summary on Merge: 8
Storage Configuration:
    KV Storage: JsonKVStorage
    Vector Storage: NanoVectorDBStorage
    Graph Storage: NetworkXStorage
    Document Status Storage: JsonDocStatusStorage
    Workspace: -
Server starting up...
Server Access Information:
    WebUI (local): http://localhost:3015
    Remote Access: http://<your-ip-address>:3015
    API Documentation (local): http://localhost:3015/docs
    Alternative Documentation (local): http://localhost:3015/redoc
Note:
    Since the server is running on 0.0.0.0:
    - Use 'localhost' or '127.0.0.1' for local access
    - Use your machine's IP address for remote access
    - To find your IP address:
        Windows: Run 'ipconfig' in terminal
        Linux/Mac: Run 'ifconfig' or 'ip addr' in terminal
Security Notice:
    API Key authentication is enabled.
    Make sure to include the X-API-Key header in all your requests.
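The security notice above can be checked with a quick request. A minimal sketch, assuming the server from this log is reachable on localhost:3015 and that this LightRAG build exposes a POST /query endpoint (confirm the path and payload shape against http://localhost:3015/docs); the key value is the one passed via --key in the start command:

```shell
# Smoke-test the running server, sending the API key from the start
# command in the X-API-Key header. BASE_URL and the /query payload
# shape are assumptions; verify them against this server's /docs page.
API_KEY="jleu1212"                 # value passed via --key at startup
BASE_URL="http://localhost:3015"   # host/port shown in the log above

curl -s "$BASE_URL/query" \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "What entities are in the graph?", "mode": "hybrid"}' \
  || true   # ignore the failure if the server is not up in this shell
```

A request without the header (or with a wrong key) should be rejected, which is an easy way to confirm the key is actually being enforced.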
INFO: OpenAI LLM Options: {}
INFO: Ollama Embedding Options: {}
INFO: Reranking is enabled: jina-reranker-v2-base-multilingual using jina provider
INFO: [_] Loaded graph from C:\aaWORK\railseek6\LightRAG-main\rag_storage\graph_chunk_entity_relation.graphml with 616 nodes, 0 edges
INFO: Started server process [30776]
INFO: Waiting for application startup.
INFO: [_] Process 30776 KV load full_docs with 2 records
INFO: [_] Process 30776 KV load text_chunks with 24 records
INFO: [_] Process 30776 KV load full_entities with 2 records
INFO: [_] Process 30776 KV load full_relations with 0 records
INFO: [_] Process 30776 KV load llm_response_cache with 0 records
INFO: [_] Process 30776 doc status load doc_status with 2 records
INFO: Process 30776 auto scan task started at startup.
INFO: Found 0 files to index.
INFO: No upload file found, check if there are any documents in the queue...
INFO: Application startup complete.
INFO: No documents to process
INFO: Uvicorn running on http://0.0.0.0:3015 (Press CTRL+C to quit)