Garden Agent

A local-first AI garden planning assistant that uses conversational AI to manage plants, seeds, locations, and planting schedules — all running locally with Ollama.

Next.js · React · Tailwind CSS · shadcn/ui · PostgreSQL · Prisma · Ollama · Vercel AI SDK · Docker

February 2026

Overview

Garden Agent is a local-first AI assistant for planning and managing a home garden. Instead of navigating traditional CRUD forms, users chat with an AI agent that can create plants, track seed inventory, manage garden locations, and schedule plantings — all through natural conversation. The entire stack runs locally via Docker Compose with no cloud API dependencies, using Ollama with the Qwen3 32B model for inference on a local NVIDIA GPU.

Key Features

  • Conversational garden management — Add plants, track seeds, plan locations, and schedule plantings through natural language chat
  • Manual CRUD alongside AI — Traditional add/edit/delete buttons and form dialogs are available on every page for users who prefer direct control
  • 17 AI tools — Full CRUD coverage across plants, seeds, locations, and plantings, plus dashboard summaries and natural language page navigation
  • Anti-hallucination design — Real database state is injected into every AI request so the model never invents IDs or references non-existent records
  • Fully local stack — Ollama, PostgreSQL, and the Next.js app all run in Docker Compose with NVIDIA GPU passthrough — no data leaves your machine
  • Chat persistence — Sessions are saved to the database with auto-summarization after extended conversations
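The anti-hallucination feature above amounts to serializing real database rows into the system prompt before each request. A minimal sketch of the idea, assuming a simplified entity shape (`Plant` and `buildContextSnapshot` are illustrative names, not the app's actual API; the real app would read these rows through Prisma):

```typescript
// Sketch: inject real entity IDs into the system prompt so the model
// can only reference records that actually exist in the database.
interface Plant {
  id: number;
  name: string;
}

function buildContextSnapshot(plants: Plant[]): string {
  const lines = plants.map((p) => `- plant #${p.id}: ${p.name}`);
  return ["Current database state (use ONLY these IDs):", ...lines].join("\n");
}

// In the chat route, this string would be prepended to the system
// prompt before streaming the request to Ollama.
const snapshot = buildContextSnapshot([
  { id: 1, name: "Tomato" },
  { id: 2, name: "Basil" },
]);
console.log(snapshot);
```

Because the snapshot is rebuilt on every request, the model's view of IDs can never drift from the database.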

Technical Details

The app follows a Next.js App Router architecture with Server Components for pages and Client Components for interactive forms and chat. The chat API endpoint builds a database context snapshot (all entities with real IDs) and injects it into the system prompt before streaming to Ollama via the Vercel AI SDK. The model has access to 17 tools that execute server-side actions directly against PostgreSQL through Prisma. Docker Compose orchestrates three services — PostgreSQL 17, Ollama with automatic model pulling, and the Next.js dev server with Turbopack.
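The three-service layout described above might look roughly like the following Compose file. This is a hypothetical sketch, not the project's actual configuration: service names, image tags, and commands are assumptions, and the real setup also pulls the Qwen3 32B model automatically on startup.

```yaml
# Hypothetical docker-compose.yml for the described stack.
services:
  db:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data

  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            # NVIDIA GPU passthrough for local inference.
            - driver: nvidia
              count: all
              capabilities: [gpu]

  app:
    build: .
    command: npm run dev -- --turbopack
    depends_on: [db, ollama]
    ports:
      - "3000:3000"

volumes:
  pgdata:
  ollama:
```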

What I Learned

Grounding LLMs with database state prevents hallucination — injecting a full snapshot of real entity IDs into the system prompt eliminates invented references almost entirely. Tool design matters more than prompt engineering; well-structured tool schemas with clear descriptions and Zod validation did more for agent reliability than tweaking the system prompt. Running Qwen3 32B locally with Ollama proved that local LLMs are viable for agentic workflows, providing fast enough tool-calling performance for a real-time chat experience without any cloud API costs or privacy trade-offs.
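The point about tool design can be made concrete with a self-contained sketch. The real app defines tools with the Vercel AI SDK and Zod schemas; this hand-rolled stand-in (all names hypothetical) validates arguments by hand against an in-memory store so it runs on its own, but it shows the same shape: a clear description, strict parameter validation, and a server-side `execute` function.

```typescript
// Hand-rolled stand-in for an AI tool definition. In the real app the
// parameters would be a Zod schema and `execute` would write through
// Prisma; here we validate manually and use an in-memory array.
interface Seed {
  id: number;
  variety: string;
  quantity: number;
}

const seeds: Seed[] = [];
let nextId = 1;

const addSeedTool = {
  name: "addSeed",
  description: "Add a seed packet to the inventory. Returns the new record.",
  execute(args: { variety: unknown; quantity: unknown }): Seed {
    // Validation mirrors what a Zod schema would enforce.
    if (typeof args.variety !== "string" || args.variety.length === 0) {
      throw new Error("variety must be a non-empty string");
    }
    if (typeof args.quantity !== "number" || args.quantity < 1) {
      throw new Error("quantity must be a positive number");
    }
    const seed: Seed = { id: nextId++, variety: args.variety, quantity: args.quantity };
    seeds.push(seed);
    return seed;
  },
};

const created = addSeedTool.execute({ variety: "Cherry Tomato", quantity: 3 });
console.log(created);
```

Rejecting malformed arguments at the tool boundary means a bad model call surfaces as a structured error the agent can recover from, rather than a silent bad write.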
