Hi team! I’d like to propose a discussion around building or integrating AI developer tools within the Laravel ecosystem to support faster development workflows, smarter code insights, and enhanced documentation assistance.
Use Cases/Ideas:
AI-powered code completion and suggestions tailored for Laravel best practices
Automated generation of controller, model, and migration boilerplate
Natural language query support for documentation lookup (e.g., “How do I use Eloquent relationships?”)
AI assistant for error explanations with potential fixes
Integration examples for chat-based assistance within Laravel Nova or other admin UIs
Questions:
Has the team considered supporting AI-embedded development tools for the community?
Are there existing plugins/extensions that could serve as a foundation?
What would be the best approach for ensuring Laravel-specific recommendations (rather than generic code suggestions)?
Happy to help draft RFCs or build early prototypes! Looking forward to thoughts from maintainers and community members.
I’m not deeply familiar with Laravel, but there seems to be a promising approach via MCP:
Background: why “Laravel-specific AI dev tools” is different from generic AI
Generic coding assistants tend to fail in Laravel projects for predictable reasons:
- They lack project truth (routes, container bindings, schema, policies, package versions).
- They answer from “generic PHP patterns” instead of Laravel conventions.
- They can’t verify changes (tests/static analysis) unless you build that loop.
- They need a standard tool interface to safely inspect and act in a codebase—this is exactly what MCP (Model Context Protocol) is meant to solve. (Laravel)
Laravel now has first-party pieces explicitly targeting these gaps:
- Laravel Boost: a Laravel-specific MCP server (15+ tools), version/package-aware AI guidelines, and a semantic-search Documentation API over 17k+ pieces of ecosystem docs. (Laravel)
- Laravel MCP: a Laravel package to build MCP servers (Tools / Resources / Prompts), with familiar Laravel auth patterns (Passport/Sanctum). (Laravel)
- Laravel AI SDK: first-party SDK for building AI features inside Laravel apps (agents, tool calling, conversation storage tables). (Laravel)
This means your proposal is best positioned as: extend and standardize around these primitives, plus proven community building blocks.
Mapping your use cases to practical foundations
| Use case | Best foundation(s) | Why this works |
| --- | --- | --- |
| Laravel-aware code completion & suggestions | Laravel Boost + IDE tooling (Laravel Idea, IDE Helper) | Boost provides framework/project context to agents; IDE tooling improves symbol/type accuracy. (Laravel) |
| Boilerplate generation (controller/model/migration) | Blueprint (spec → code) + Boost tools | AI generates a reviewable spec (YAML); Blueprint generates consistent Laravel artifacts. (Blueprint) |
| Natural-language docs lookup | Boost Documentation API (semantic search) | Version/package-aware retrieval reduces “wrong version” answers. (Laravel) |
| Error explanations + fixes | Ignition AI solutions + optional MCP context | Ignition already supports AI suggestions with constrained error context; MCP can add project truth. (Spatie) |
| Chat assistant in Nova/admin UI | Nova integration packages + AI SDK (for in-app agents) | Community packages demonstrate UI + persistence; AI SDK gives first-party agent structure. (GitHub) |
Questions
1) Has the team considered supporting AI-embedded dev tools?
Yes—Laravel has already shipped first-party support in the direction you’re proposing:
- Boost (developer workflow / IDE agent enablement): MCP tools + guidelines + docs retrieval. (Laravel)
- Laravel MCP (standard integration surface): build MCP servers in Laravel with Laravel auth/middleware patterns. (Laravel)
- AI SDK (in-app assistants/features): agent classes, tool calling, storage tables for conversations. (Laravel)
So the “discussion” can move from whether to do this to what reference implementations, conventions, and safety constraints the ecosystem should standardize.
2) Existing plugins/extensions that can serve as a foundation
IDE intelligence (pre-AI, but crucial for AI quality)
- Laravel Idea (PhpStorm): Laravel-aware IDE features; now free for PhpStorm users. (Laravel)
- barryvdh/laravel-ide-helper: generates helper files/PHPDocs for accurate autocompletion. (GitHub)
Deterministic scaffolding (recommended for boilerplate)
- Laravel Shift Blueprint: generate models, migrations, controllers, routes, etc., from a YAML draft (`draft.yaml`, `blueprint:build`). (Blueprint)
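To make the “spec → code” idea concrete, here is a minimal sketch of what an AI-generated `draft.yaml` could look like before a human reviews it and runs the build. The model/field names are illustrative; the syntax follows Blueprint’s documented spec format:

```yaml
# draft.yaml — illustrative Blueprint spec (model and fields are examples)
models:
  Post:
    title: string:400
    body: longtext
    published_at: nullable timestamp
    relationships:
      belongsTo: User

controllers:
  Post:
    index:
      query: all
      render: post.index with:posts
    store:
      validate: title, body
      save: post
      redirect: post.index
```

The developer then runs `php artisan blueprint:build` to generate the model, migration, controller, and routes deterministically, which keeps the AI’s output reviewable at the spec level rather than at the level of scattered generated files.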
Error explanation
- Spatie Ignition AI powered solutions: optional OpenAI client, AI suggestions; includes security guidance and debug-mode considerations. (Spatie)
Admin UI (Nova) chat patterns
- outl1ne/nova-openai: OpenAI SDK + stores communications + presents them in Nova. (GitHub)
- naif/chatgpt: Nova tool that asks ChatGPT, stores Q/A and token usage. (Packagist)
- iamgerwin/nova-ai-context-aware-input: Nova field for context-aware suggestions. (Packagist)
“Bring your own OpenAI client” (app-side)
- openai-php/laravel: popular Laravel wrapper; publishes config via `php artisan openai:install`. (GitHub)
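For reference, the typical setup for that package looks like this (the composer package and artisan command are real; the `.env` key placeholder is an example):

```shell
# Install the wrapper and publish its config (config/openai.php)
composer require openai-php/laravel
php artisan openai:install

# Then set the API key in .env:
# OPENAI_API_KEY=sk-...
```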
Static analysis as an AI “correctness gate”
- Larastan: PHPStan extension for Laravel; catches bug classes early. (GitHub)
3) Best approach to ensure Laravel-specific recommendations (not generic suggestions)
Treat “Laravel-specific” as an engineering constraint with four layers:
Layer A — Versioned retrieval (docs-first answers)
- Use Boost’s Documentation API for semantic search over version/package-specific docs. (Laravel)
- Require assistants to cite retrieved snippets (or at least link to retrieved sections) before offering advice.
Layer B — Project grounding via tools (MCP)
- Expose project truth (routes, schema, config, package versions) through Boost / Laravel MCP tools, so the assistant inspects the actual codebase instead of guessing from generic patterns. (Laravel)
Layer C — Deterministic verification loop
- After AI proposes changes, run:
  - test suite
  - static analysis (Larastan)
  - formatting/linting
- Feed failures back into the agent for a second pass. (This is where generic assistants often fall apart without tooling.)
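One low-friction way to package that loop is a composer script, so humans, CI, and agents all invoke a single command. This is a sketch (the `verify` script name is illustrative; it assumes Larastan and Pint are installed):

```json
{
    "scripts": {
        "verify": [
            "@php artisan test",
            "vendor/bin/phpstan analyse",
            "vendor/bin/pint --test"
        ]
    }
}
```

An agent runs the script after each proposed change; a non-zero exit code plus the captured output becomes the feedback for the second pass.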
Layer D — Repository conventions (portable across agents)
- Add repo-level instructions so agents follow project-specific build/test/verify rules and conventions (naming, architecture, testing style). (GitHub Docs)
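As an illustration, repo-level instructions can be a short markdown file that any agent picks up (the file path and rules below are examples, not a standard):

```markdown
<!-- .github/copilot-instructions.md (or AGENTS.md) — illustrative -->
- This is a Laravel app; follow framework conventions (Eloquent, form requests, policies).
- Inspect routes and migrations before proposing code; do not guess the schema.
- After any change, run the test suite and Larastan, and fix failures before finishing.
- Prefer framework features (validation, queues, events) over hand-rolled equivalents.
```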
Implementation detail that reduces breakage: strict structured outputs
- For tool calls and codegen specs (Blueprint YAML, migration schemas), use JSON Schema enforced outputs (“structured outputs”) to avoid malformed JSON and brittle parsing. (OpenAI Developers)
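For example, an NL → Blueprint pipeline could constrain the model’s output with a JSON Schema like the following (the schema shape is a simplified illustration, not Blueprint’s full grammar):

```json
{
  "name": "blueprint_spec",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "models": {
        "type": "object",
        "additionalProperties": {
          "type": "object",
          "additionalProperties": { "type": "string" }
        }
      }
    },
    "required": ["models"],
    "additionalProperties": false
  }
}
```

With `strict` structured outputs, malformed JSON and hallucinated extra fields are rejected at generation time instead of surfacing as a broken `draft.yaml` later.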
Practical RFC/prototype shape (what to propose)
A. Standardize a “Laravel AI Tools Baseline”
A community-agreed minimal set of MCP tools/resources/prompts that most apps can expose, e.g.:
- Read-only: routes/schema/config/package versions
- Developer actions: run tests, run Larastan, run migrations (gated)
- Doc search: Boost docs API as the default retrieval backend (or a documented alternative)
This is aligned with Boost’s positioning (tools + guidelines + docs). (Laravel)
B. Define “safe-by-default” guardrails
If tools can change state (migrations, file writes, queue actions), adopt a threat model:
- Prompt injection and unsafe tool execution are top risks in OWASP’s LLM Top 10 (Prompt Injection, Insecure Output Handling, Excessive Agency, etc.). (OWASP)
- Concretely: least privilege tools, confirmation gates, schema validation, and audit logging.
C. Ship one reference workflow end-to-end
Examples:
- Docs Q&A that cites Boost retrieval + project inspection (lowest risk, high value)
- “NL → Blueprint YAML → blueprint:build → tests/Larastan” (best for boilerplate)
Known pitfalls to plan for (observed in the wild)
If your discussion includes “integration support,” these recurring issues matter:
- “MCP connected but no tools available” (especially with IDE clients / Sail) (GitHub)
- Sail/Docker execution context: MCP commands may need to run via `./vendor/bin/sail` instead of host `php` (GitHub)
- Invalid JSON output from tool process (often stdout pollution / protocol mismatch) (GitHub)
- Need to customize MCP server configs for multi-container setups (GitHub)
- Installer/config edge cases (overwriting existing MCP config) (GitHub)
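For the Sail case, the fix is usually to point the MCP client at the container instead of the host PHP binary. A hypothetical client config entry (file name, server key, and artisan command are illustrative and vary by client):

```json
{
  "mcpServers": {
    "laravel-boost": {
      "command": "./vendor/bin/sail",
      "args": ["artisan", "boost:mcp"]
    }
  }
}
```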
A useful, supportable deliverable is a client-by-client troubleshooting guide plus an `mcp:inspector`-style health check (Boost users already lean on this pattern). (GitHub)