Pull Request Title: n8n with Ollama
Description
This pull request adds a complete Docker-based setup that runs n8n behind a Tailscale sidecar for secure network access, with support for Ollama (local LLM inference).
The update introduces:
- `docker-compose.yml` that securely proxies n8n through Tailscale.
- `.env` file for environment variables and Tailscale authentication.
- `config/serve.json` enabling HTTPS proxying via Tailscale Serve.
- `README.md` documenting setup, usage, and troubleshooting steps for n8n and Ollama.

This configuration allows users to reach n8n over HTTPS on their tailnet and to point workflows at a local Ollama instance for LLM inference.
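For orientation, here is a minimal sketch of what such a sidecar layout can look like. It is not the exact file in this PR: the image tags, volume names, and the optional `ollama` service are assumptions. It relies on the official `tailscale/tailscale` image's `TS_AUTHKEY` and `TS_SERVE_CONFIG` environment variables and n8n's default port 5678.

```yaml
# Illustrative sketch of the sidecar layout described above; not the
# exact file in this PR. Image tags and volume names are assumptions.
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: n8n                           # becomes the tailnet hostname
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}            # auth key supplied via .env
      - TS_SERVE_CONFIG=/config/serve.json  # HTTPS proxying via Tailscale Serve
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./config:/config
      - tailscale-state:/var/lib/tailscale
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

  n8n:
    image: n8nio/n8n:latest
    network_mode: service:tailscale         # share the sidecar's network stack
    depends_on:
      - tailscale
    volumes:
      - n8n-data:/home/node/.n8n
    restart: unless-stopped

  # Optional local Ollama service (an assumption; the PR may expect an
  # external Ollama). n8n would reach it at http://ollama:11434.
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-data:/root/.ollama
    restart: unless-stopped

volumes:
  tailscale-state:
  n8n-data:
  ollama-data:
```

A `config/serve.json` in the shape the `tailscale/tailscale` image's `TS_SERVE_CONFIG` expects would then terminate HTTPS on 443 and proxy to n8n's default port (again a sketch; `${TS_CERT_DOMAIN}` is interpolated by the container at startup):

```json
{
  "TCP": { "443": { "HTTPS": true } },
  "Web": {
    "${TS_CERT_DOMAIN}:443": {
      "Handlers": { "/": { "Proxy": "http://127.0.0.1:5678" } }
    }
  }
}
```

Sharing the sidecar's network namespace (`network_mode: service:tailscale`) means n8n publishes no ports on the host, so the Tailscale Serve proxy is the only entry point.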
Related Issues
Type of Change
How Has This Been Tested?
The configuration was tested on Docker Desktop (v4.33+) and Ubuntu Server (22.04) with Tailscale v1.66.
- Ran `docker compose config` to validate YAML and environment variable resolution.
- Ran `docker compose up -d` to verify that both the `n8n` and `tailscale` services start correctly.
- Confirmed that `https://<TAILSCALE-HOSTNAME>` loads the n8n web interface.
- Verified that n8n can reach the Ollama API (`http://ollama:11434`).
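Condensed as a shell sequence, the checks look roughly like this (service names assumed to match the compose sketch above; the Ollama check uses Ollama's standard `/api/tags` endpoint):

```bash
docker compose config --quiet        # validate YAML and env var resolution
docker compose up -d                 # start the tailscale and n8n services
docker compose ps                    # confirm both report a running state

# From inside the n8n container, confirm the Ollama API answers
# (assumes an `ollama` container on the same Docker network).
docker compose exec n8n wget -qO- http://ollama:11434/api/tags
```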
Checklist

- Documentation updated (`README.md`)

Screenshots (if applicable)