Conversation

@grillazz (Owner) commented May 3, 2025

No description provided.

@grillazz requested a review from Copilot May 3, 2025 11:26
@grillazz self-assigned this May 3, 2025
@grillazz linked an issue (ml streaming endpoint) May 3, 2025 that may be closed by this pull request

Copilot AI (Contributor) left a comment

Pull Request Overview

This PR introduces a new streaming endpoint for chat-based interactions with a local large language model (LLM) and updates related dependencies, Docker Compose settings, and documentation.

  • Added a new asynchronous chat client test in tests/chat.py.
  • Introduced a new LLM streaming service via app/services/llm.py and its corresponding API endpoint in app/api/ml.py, with integration updates in app/main.py and README.md (see the sketch after this list).
  • Upgraded project dependencies in pyproject.toml and adjusted Docker networking for the app, db, and redis containers in compose.yml.
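
The diff itself is not reproduced in this conversation, so the following is only a minimal sketch of how a FastAPI streaming endpoint backed by a service-layer async generator is typically wired. The class and function names, the local LLM URL, and the request payload are assumptions for illustration, not the code merged in this PR; both files are shown in one block for brevity.

```python
# Hypothetical sketch of app/services/llm.py and app/api/ml.py -- names,
# URL, and payload shape are assumptions, not the PR's actual code.
import httpx
from fastapi import APIRouter
from fastapi.responses import StreamingResponse


class StreamLLMService:
    """Streams chat completions from a local LLM server over HTTP."""

    def __init__(self, base_url: str = "http://localhost:11434") -> None:
        # Placeholder base URL for a locally running model server.
        self.base_url = base_url

    async def stream_chat(self, prompt: str):
        # Open a streaming POST request and yield lines as the model produces them.
        async with httpx.AsyncClient(base_url=self.base_url, timeout=None) as client:
            async with client.stream(
                "POST",
                "/api/chat",
                json={
                    "model": "llama3",
                    "messages": [{"role": "user", "content": prompt}],
                    "stream": True,
                },
            ) as response:
                async for line in response.aiter_lines():
                    if line:
                        yield line + "\n"


router = APIRouter(prefix="/chat", tags=["ml"])


@router.post("/")
async def chat(prompt: str):
    # StreamingResponse consumes the async generator and flushes each chunk
    # to the client as soon as it is yielded.
    service = StreamLLMService()
    return StreamingResponse(service.stream_chat(prompt), media_type="text/plain")
```

Returning the generator directly keeps memory use flat, since chunks are forwarded to the client as they arrive instead of being buffered into a single response body.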

Reviewed Changes

Copilot reviewed 8 out of 9 changed files in this pull request and generated 2 comments.

Summary per file:
  • tests/chat.py: Added a basic async chat client for testing the streaming endpoint.
  • pyproject.toml: Updated the project version and bumped several dependency versions.
  • compose.yml: Configured host networking for the containers.
  • app/services/llm.py: Added a new LLM service for streaming chat responses.
  • app/main.py: Introduced the new ml router and updated the API version (see the sketch after this list).
  • app/api/ml.py: Created a StreamingResponse endpoint for LLM chat.
  • README.md: Documented the new LLM integration and usage instructions.
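
For app/main.py, the usual pattern is simply to register the new router on the application; the import path and version string below are placeholders rather than the PR's actual values.

```python
# Hypothetical sketch of the router registration in app/main.py.
from fastapi import FastAPI

from app.api.ml import router as ml_router

# The bumped version string is a placeholder, not the value from this PR.
app = FastAPI(version="0.2.0")

# Mount the new ml router alongside the existing ones.
app.include_router(ml_router)
```
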
Files not reviewed (1)
  • .env: Language not supported
Comments suppressed due to low confidence (1)

compose.yml:4 (network_mode: host)

  • Using host networking may reduce container isolation and could expose additional security risks. Please confirm that this configuration is intentional and acceptable for the deployment environment.

Comment on lines +45 to +46

    except Exception:
        pass

Copilot AI May 3, 2025


Avoid silently swallowing exceptions with a broad 'except Exception: pass' block. Consider logging the error details or handling the exception explicitly to aid in debugging.

Suggested change:

-    except Exception:
-        pass
+    except Exception as e:
+        logging.exception("Error processing streamed line: %s", line)

    async with httpx.AsyncClient() as client:
        while True:
            # Get user input
            prompt = input("\nYou: ")

Copilot AI May 3, 2025


Using the synchronous input() call inside an async function may block the event loop. Consider using an asynchronous input strategy or executing the blocking call in a separate thread to avoid potential performance issues.

Suggested change:

-        prompt = input("\nYou: ")
+        prompt = await anyio.to_thread.run_sync(input, "\nYou: ")
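
Both anyio.to_thread.run_sync (as in the suggestion) and the standard-library asyncio.to_thread run the blocking input() call on a worker thread so the event loop can keep servicing the stream. A minimal sketch of the interactive loop, again with a placeholder endpoint URL:

```python
import asyncio

import anyio
import httpx


async def chat_loop() -> None:
    async with httpx.AsyncClient(timeout=None) as client:
        while True:
            # Run blocking input() on a worker thread instead of the event loop.
            prompt = await anyio.to_thread.run_sync(input, "\nYou: ")
            if prompt.strip().lower() in {"quit", "exit"}:
                break
            async with client.stream(
                "POST", "http://localhost:8000/chat/", params={"prompt": prompt}
            ) as response:
                print("Bot: ", end="", flush=True)
                async for line in response.aiter_lines():
                    if line:
                        print(line, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(chat_loop())
```
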

@grillazz merged commit b2711dc into main May 3, 2025
2 checks passed
@grillazz deleted the 201-ml-streaming-endpoint branch May 3, 2025 11:31
