# llm-backend
This project provides a simple async interface to interact with an Ollama model
and demonstrates basic tool usage. Chat histories are stored in a local SQLite
database using Peewee. Histories are persisted per user and session so
conversations can be resumed with context. One example tool is included:
* **execute_terminal** – Executes a shell command inside a persistent Linux VM
with network access. Use it to read uploaded documents under `/data` or run
other commands. Output from `stdout` and `stderr` is captured and
returned. Commands run asynchronously so the assistant can continue
responding while they execute. The VM is created when a chat session starts
and reused for all subsequent tool calls.
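The tool's observable behaviour can be sketched as an async helper. This is a minimal illustration only: the function name matches the tool, but the real implementation runs the command inside the persistent VM rather than on the host.

```python
import asyncio


async def execute_terminal(command: str) -> str:
    """Run a shell command and return combined stdout/stderr.

    Sketch of the tool's contract; the actual tool executes inside
    the session's Linux VM, not on the host machine.
    """
    proc = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    # Awaiting here is what lets the assistant keep responding while
    # the command runs: other coroutines are free to make progress.
    stdout, stderr = await proc.communicate()
    return stdout.decode() + stderr.decode()
```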
Sessions share state through an in-memory registry so that only one generation
can run at a time. Messages sent while a response is being produced are
ignored unless the assistant is waiting for a tool result—in that case the
pending response is cancelled and replaced with the new request.
The application injects a robust system prompt on each request. The prompt
guides the model to plan tool usage, execute commands sequentially and
verify results before replying. It is **not** stored in the chat history but is
provided at runtime so the assistant can orchestrate tool calls in sequence to
fulfil the user's request reliably.
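Injecting the prompt at runtime rather than persisting it amounts to building the message list fresh on every request. A minimal sketch, in which `SYSTEM_PROMPT` and `build_messages` are assumed names:

```python
# Assumed name; the real prompt text is not shown in this README.
SYSTEM_PROMPT = (
    "Plan which tools you need, execute commands one at a time, "
    "and verify each result before replying."
)


def build_messages(history: list[dict], user_text: str) -> list[dict]:
    """Prepend the system prompt at request time.

    Only the user/assistant turns come from (and return to) the
    database; the system message is injected fresh on every call and
    never stored in the chat history.
    """
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_text}]
    )
```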
## Usage
```bash
python run.py
```
The script instructs the model to run a simple shell command and print the result. Conversations are automatically persisted to `chat.db`, associated with a user and a session.
Uploaded files are stored under the `uploads` directory and mounted inside the VM at `/data`. Call `upload_document` on the chat session to make a file available to the model:
```python
async with ChatSession() as chat:
    path_in_vm = chat.upload_document("path/to/file.pdf")
    async for part in chat.chat_stream(f"Summarize {path_in_vm}"):
        print(part)
```
When using the Discord bot, attach one or more text files to a message to
upload them automatically. The bot responds with the location of each document
inside the VM so they can be referenced in subsequent prompts.
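The mapping from an uploaded file to the path reported back can be sketched as a small helper. The function name is hypothetical; the README only states that `uploads` is mounted at `/data`:

```python
from pathlib import Path

VM_DATA_DIR = "/data"  # where the host "uploads" directory is mounted


def vm_path_for(filename: str) -> str:
    """Map an uploaded file name to the path the model sees in the VM.

    Directory components are stripped so every upload lands directly
    under /data.
    """
    safe_name = Path(filename).name
    return f"{VM_DATA_DIR}/{safe_name}"
```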
## Discord Bot
Create a `.env` file with your Discord token:
```bash
DISCORD_TOKEN="your-token"
```
Then start the bot:
```bash
python -m bot.discord_bot
```
Any attachments sent to the bot are uploaded to the VM and the bot replies with
their paths so they can be used in later messages.