How to Use the Laravel AI SDK (2026 Guide)
If you've been wiring up AI in Laravel, you've probably dealt with the same thing I have: installing one client library for OpenAI, another for Anthropic, writing wrapper classes to make them somewhat interchangeable, and ending up with code that doesn't really feel like Laravel.
Laravel has now released laravel/ai, a first-party SDK that gives you a single, unified API for OpenAI, Anthropic, Gemini, and a dozen other providers. I've been building with it and honestly it's one of those packages where you forget you're even talking to an external API. It just feels like writing Laravel.
In this guide I'll walk you through everything from your first agent to tools, structured output, streaming, image generation, embeddings, and testing. I've tested every code example in here myself so you should be able to follow along without any surprises.
Installation and Setup
```shell
composer require laravel/ai

php artisan vendor:publish --provider="Laravel\Ai\AiServiceProvider"
```
The publish command creates a config/ai.php file and a database migration. The migration is only needed if you plan to use the RemembersConversations trait for automatic conversation persistence. If you just want to use agents without built-in memory, you can skip it. Otherwise, run:
```shell
php artisan migrate
```
Add at least one provider API key to your .env. You only need the providers you plan to use:
```env
OPENAI_API_KEY=your-key
ANTHROPIC_API_KEY=your-key
GEMINI_API_KEY=your-key
```
The config/ai.php file controls the default provider for each capability (text, images, audio, embeddings) and where API keys are read from. The full list of supported providers: OpenAI, Anthropic, Gemini, Azure, Groq, xAI, DeepSeek, Mistral, Ollama, OpenRouter, ElevenLabs, Cohere, Jina, and VoyageAI.
Your First Agent in 5 Minutes
Let's build something that works. We'll create an agent that summarises text, wire it to a route, and call it.
Step 1: Generate the Agent
```shell
php artisan make:agent Summariser
```
This creates app/Ai/Agents/Summariser.php. Open it and set the instructions:
```php
<?php

namespace App\Ai\Agents;

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Promptable;

class Summariser implements Agent
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a concise summariser. Given any text, produce a clear 2-3 sentence summary that captures the key points.';
    }
}
```
The Promptable trait gives your agent the make(), prompt(), stream(), queue(), and fake() methods: everything you need to interact with it.
Step 2: Wire It to a Route
```php
use App\Ai\Agents\Summariser;

Route::post('/summarise', function (Request $request) {
    $response = Summariser::make()->prompt($request->input('text'));

    return response()->json([
        'summary' => (string) $response,
    ]);
});
```
That's it. POST some text to /summarise and you'll get a JSON summary back. The agent uses whichever provider is set as the default in config/ai.php (OpenAI by default).
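To try the route from the command line, a quick curl call works (assuming your app is served locally at localhost:8000, e.g. via `php artisan serve`):

```shell
curl -X POST http://localhost:8000/summarise \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"text": "Laravel is a PHP framework with expressive, elegant syntax. It ships with routing, an ORM, queues, and more."}'
```

You should get back a JSON body with a single `summary` key containing the model's 2-3 sentence summary.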
Step 3: Override the Provider or Model
If you want to use a specific provider or model for a request, pass them directly:
```php
use Laravel\Ai\Enums\Lab;

$response = Summariser::make()->prompt(
    $request->input('text'),
    provider: Lab::Anthropic,
    model: 'claude-sonnet-4-5-20250929',
    timeout: 120,
);
```
The Lab enum contains all supported providers: Lab::OpenAI, Lab::Anthropic, Lab::Gemini, and so on.
Configuring Agents with Attributes
Rather than passing the provider and model on every call, you can pin them to the agent class using PHP attributes:
```php
use Laravel\Ai\Attributes\Provider;
use Laravel\Ai\Attributes\Model;
use Laravel\Ai\Attributes\MaxSteps;
use Laravel\Ai\Attributes\MaxTokens;
use Laravel\Ai\Attributes\Temperature;
use Laravel\Ai\Attributes\Timeout;
use Laravel\Ai\Enums\Lab;

#[Provider(Lab::Anthropic)]
#[Model('claude-sonnet-4-5-20250929')]
#[MaxSteps(10)]
#[MaxTokens(4096)]
#[Temperature(0.7)]
#[Timeout(120)]
class Summariser implements Agent
{
    use Promptable;

    // ...
}
```
MaxSteps controls how many tool-call rounds the agent can make before stopping (more on tools shortly). Temperature controls randomness: lower values give more deterministic responses.
There are also shortcut attributes when you don't care about the specific model:
```php
use Laravel\Ai\Attributes\UseCheapestModel;
use Laravel\Ai\Attributes\UseSmartestModel;

#[UseCheapestModel] // Picks the cheapest available model, good for simple tasks
class Summariser implements Agent { /* ... */ }

#[UseSmartestModel] // Picks the most capable model, good for complex reasoning
class CodeReviewer implements Agent { /* ... */ }
```
Structured Output
Sometimes you need the AI to return data in a specific shape (a score, a category, a set of fields), not just freeform text. That's what structured output is for.
```shell
php artisan make:agent SalesCoach --structured
```
Implement the HasStructuredOutput interface and define a schema() method:
```php
use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\HasStructuredOutput;

class SalesCoach implements Agent, HasStructuredOutput
{
    use Promptable;

    public function instructions(): string
    {
        return 'Analyse sales call transcripts. Provide constructive feedback and a score from 1 to 10.';
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'feedback' => $schema->string()->required(),
            'score' => $schema->integer()->min(1)->max(10)->required(),
        ];
    }
}
```
Now the response is an array you can access directly:
```php
$response = SalesCoach::make()->prompt('Analyse this transcript...');

$response['feedback']; // "The opening was strong but the closing lacked urgency..."
$response['score'];    // 7
```
This is useful any time you need to pipe AI output into your application logic: saving to a database, triggering notifications based on a score, or feeding into another agent.
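As a sketch of what that looks like in practice, here's the structured response feeding a database write and a score-based notification. The CallReview model and LowScoreAlert notification are hypothetical stand-ins for whatever your application already has:

```php
use App\Models\CallReview;           // hypothetical model
use App\Notifications\LowScoreAlert; // hypothetical notification

$response = SalesCoach::make()->prompt($transcript);

// Persist the structured fields directly - no parsing needed
$review = CallReview::create([
    'feedback' => $response['feedback'],
    'score' => $response['score'],
]);

// Branch on the score like any other application data
if ($response['score'] < 5) {
    $manager->notify(new LowScoreAlert($review));
}
```

Because the schema guarantees the shape, you never have to defensively parse freeform model output before using it.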
Tools: Letting Agents Call Your Code
Tools are where agents get powerful. A tool is a PHP class that the AI can choose to invoke during a conversation. You define what the tool does and what parameters it accepts; the AI decides when to use it based on the description.
This is the mechanism behind features like "search the knowledge base", "look up order details", or "check inventory". The AI reads the tool's description, decides it's relevant to the user's question, and calls it.
Creating a Tool
```shell
php artisan make:tool GetOrderStatus
```
This creates app/Ai/Tools/GetOrderStatus.php:
```php
<?php

namespace App\Ai\Tools;

use App\Models\Order;
use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\Tool;
use Laravel\Ai\Tools\Request;

class GetOrderStatus implements Tool
{
    public function description(): string
    {
        return 'Look up the current status of an order by its order number.';
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'order_number' => $schema->string()->required(),
        ];
    }

    public function handle(Request $request): string
    {
        $order = Order::where('order_number', $request['order_number'])->first();

        if (! $order) {
            return 'Order not found.';
        }

        return "Order {$order->order_number}: {$order->status}. Placed on {$order->created_at->format('M j, Y')}.";
    }
}
```
The description() tells the AI what the tool does. The schema() defines the parameters the AI must provide. The handle() method runs your actual logic and returns a string result that feeds back into the conversation.
Giving Tools to an Agent
Register tools by implementing HasTools:
```php
use Laravel\Ai\Contracts\HasTools;

class SupportAgent implements Agent, HasTools
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a customer support agent. Use the available tools to look up order information when customers ask about their orders.';
    }

    public function tools(): iterable
    {
        return [
            new GetOrderStatus,
        ];
    }
}
```
Now when a user asks "Where is my order #12345?", the agent will call GetOrderStatus with order_number: "12345", get the result, and use it to compose a natural response.
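Putting it together, calling the agent looks the same as before; the tool round-trip happens behind the scenes during the prompt:

```php
use App\Ai\Agents\SupportAgent;

$response = SupportAgent::make()->prompt('Where is my order #12345?');

// Behind the scenes the agent called GetOrderStatus with
// order_number "12345" and folded the result into its reply.
echo (string) $response;
```

The number of tool-call rounds the agent is allowed before it must answer is capped by the MaxSteps attribute covered earlier.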
Built-in Provider Tools
The SDK ships with tools that use provider-level capabilities:
```php
use Laravel\Ai\Providers\Tools\WebSearch;
use Laravel\Ai\Providers\Tools\WebFetch;
use Laravel\Ai\Providers\Tools\FileSearch;

public function tools(): iterable
{
    return [
        (new WebSearch)->max(5)->allow(['laravel.com', 'php.net']),
        (new WebFetch)->max(3)->allow(['docs.laravel.com']),
        new FileSearch(stores: ['store_abc123']),
    ];
}
```
WebSearch lets the agent search the web. WebFetch lets it retrieve a URL's content. FileSearch searches documents you've uploaded to a provider's vector store (covered later in this guide).
Similarity Search Tool
For RAG (Retrieval-Augmented Generation) workflows where you want the agent to search your own database by semantic similarity:
```php
use Laravel\Ai\Tools\SimilaritySearch;

public function tools(): iterable
{
    return [
        SimilaritySearch::usingModel(
            model: Document::class,
            column: 'embedding',
            minSimilarity: 0.7,
            limit: 10,
            query: fn ($query) => $query->where('published', true),
        )->withDescription('Search published documentation for relevant articles.'),
    ];
}
```
The agent will generate an embedding from the user's query, find similar documents in your database, and use the results to answer the question.
Conversations and Memory
By default, each prompt() call is stateless. The agent doesn't remember previous interactions. There are two ways to add memory.
Manual Conversation History
If you already store chat history in your own database, implement Conversational and return the messages. An agent can implement multiple interfaces. Here we're adding Conversational alongside HasTools from the earlier example:
```php
use App\Models\History;
use App\Models\User;
use Laravel\Ai\Contracts\Conversational;
use Laravel\Ai\Messages\Message;

class SupportAgent implements Agent, HasTools, Conversational
{
    use Promptable;

    public function __construct(public User $user) {}

    public function messages(): iterable
    {
        return History::where('user_id', $this->user->id)
            ->latest()
            ->limit(50)
            ->get()
            ->reverse()
            ->map(fn ($message) => new Message($message->role, $message->content))
            ->all();
    }
}
```
Automatic Conversation Memory
If you'd rather not manage message storage yourself, the RemembersConversations trait handles it automatically using the database (this is what the migration from installation is for):
```php
use Laravel\Ai\Concerns\RemembersConversations;

class SupportAgent implements Agent, Conversational
{
    use Promptable, RemembersConversations;

    public function instructions(): string
    {
        return 'You are a helpful support agent.';
    }
}
```
Start a conversation and continue it later by ID:
```php
// Start a new conversation
$response = SupportAgent::make()->forUser($user)->prompt('Hello!');
$conversationId = $response->conversationId;

// Continue it later (e.g. the next request)
$response = SupportAgent::make()
    ->continue($conversationId, as: $user)
    ->prompt('Tell me more about that.');
```
Attachments
Agents can receive files alongside the text prompt. This is useful for document analysis, image recognition, or any task where the AI needs to see a file.
```php
use Laravel\Ai\Files;

// Documents (PDFs, text files, etc.)
$response = SalesCoach::make()->prompt(
    'Analyse the attached sales transcript.',
    attachments: [
        Files\Document::fromStorage('transcript.pdf'),
        Files\Document::fromPath('/tmp/transcript.md'),
        $request->file('transcript'),
    ]
);

// Images
$response = ImageAnalyser::make()->prompt(
    'What is in this image?',
    attachments: [
        Files\Image::fromStorage('photo.jpg'),
        Files\Image::fromUrl('https://example.com/photo.jpg'),
    ]
);
```
Streaming
For chat interfaces, or any situation where you want the response to appear in real time rather than waiting for the full completion, use stream():
```php
use App\Ai\Agents\SupportAgent;

Route::post('/chat', function (Request $request) {
    return SupportAgent::make()->stream($request->input('message'));
});
```
The response is a streamed HTTP response where text arrives as it's generated. You can attach a callback that runs after the stream completes:
```php
use Laravel\Ai\Responses\StreamedAgentResponse;

return SupportAgent::make()
    ->stream($request->input('message'))
    ->then(function (StreamedAgentResponse $response) {
        // Save the complete response, log token usage, etc.
        History::create([
            'content' => $response->text,
            'tokens' => $response->usage->outputTokens,
        ]);
    });
```
If your frontend uses the Vercel AI SDK (common with Next.js or React), there's built-in support for its streaming protocol:
```php
return SupportAgent::make()
    ->stream($request->input('message'))
    ->usingVercelDataProtocol();
```
Broadcasting and Queuing
Broadcasting over WebSockets
For real-time dashboards or collaborative features, you can broadcast each streaming event over a channel:
```php
use Illuminate\Broadcasting\Channel;

SupportAgent::make()->broadcastOnQueue(
    'Analyse this transcript...',
    new Channel('analysis-channel'),
);
```
This queues the prompt, streams the response, and broadcasts each chunk to the channel. Your frontend picks it up via Laravel Echo or similar.
Queuing Heavy Workloads
For tasks that don't need an immediate response (batch processing, background analysis), queue the prompt:
```php
use Laravel\Ai\Responses\AgentResponse;
use Throwable;

SalesCoach::make()
    ->queue('Analyse this transcript...')
    ->then(function (AgentResponse $response) {
        // Store the result, send a notification, etc.
    })
    ->catch(function (Throwable $e) {
        // Handle failure
    });
```
The prompt runs on your queue worker. The then callback fires when it completes; catch handles failures.
Image Generation
Generate images using providers that support it (OpenAI, Gemini, xAI):
```php
use Laravel\Ai\Image;

$image = Image::of('A minimal logo for a Laravel package')
    ->quality('high')
    ->square()
    ->generate();

$path = $image->storePubliclyAs('generated-logo.png');
```
You can pass reference images for style transfer or editing:
```php
use Laravel\Ai\Files;

$image = Image::of('Update this photo to be in the style of an impressionist painting.')
    ->attachments([
        Files\Image::fromStorage('original-photo.jpg'),
    ])
    ->landscape()
    ->generate();
```
The available aspect ratio methods are ->portrait(), ->square(), and ->landscape().
Like agents, image generation can be queued:
```php
use Laravel\Ai\Responses\ImageResponse;

Image::of('A scenic mountain landscape')
    ->portrait()
    ->queue()
    ->then(function (ImageResponse $image) {
        $image->storePubliclyAs('mountain.png');
    });
```
Audio: Text-to-Speech and Transcription
Generate Speech
```php
use Laravel\Ai\Audio;

$audio = Audio::of('Welcome to our application.')
    ->female()
    ->instructions('Speak in a warm, professional tone.')
    ->generate();

$audio->storeAs('welcome.mp3');
```
Transcribe Audio
```php
use Laravel\Ai\Transcription;

$transcript = Transcription::fromStorage('meeting-recording.mp3')
    ->diarize()
    ->generate();

return (string) $transcript;
```
The diarize() method identifies different speakers in the recording, useful for meeting transcripts.
Embeddings and Vector Search
Embeddings are numerical representations of text that let you search by meaning rather than keywords. The SDK integrates this into Eloquent.
Generating Embeddings
```php
use Laravel\Ai\Embeddings;
use Illuminate\Support\Str;

// Single input via Stringable
$embedding = Str::of('Laravel is a PHP framework.')->toEmbeddings();

// Multiple inputs with caching
$response = Embeddings::for([
    'Laravel is a PHP framework.',
    'React is a JavaScript library.',
])->dimensions(1536)->cache()->generate();

$response->embeddings; // [[0.123, 0.456, ...], [0.789, 0.012, ...]]
```
Setting Up a Vector Column
```php
// In a migration
Schema::ensureVectorExtensionExists();

Schema::create('documents', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('content');
    $table->vector('embedding', dimensions: 1536);
    $table->index('embedding');
    $table->timestamps();
});
```
Querying by Similarity
```php
// With a pre-computed embedding vector
$documents = Document::query()
    ->whereVectorSimilarTo('embedding', $queryEmbedding, minSimilarity: 0.4)
    ->limit(10)
    ->get();

// With a plain text query (the SDK generates the embedding for you)
$documents = Document::query()
    ->whereVectorSimilarTo('embedding', 'best PHP frameworks')
    ->limit(10)
    ->get();
```
This is the foundation for building RAG applications. Store your content with embeddings, then query by semantic similarity.
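To keep embeddings in sync with your content, one approach is to generate them whenever a document is saved, using an Eloquent observer. This is a sketch under assumptions: it uses the documents table from the migration above and assumes the toEmbeddings() result can be assigned directly to the vector column:

```php
use App\Models\Document;
use Illuminate\Support\Str;

class DocumentObserver
{
    public function saving(Document $document): void
    {
        // Re-embed only when the content actually changed,
        // to avoid a provider call on every save
        if ($document->isDirty('content')) {
            $document->embedding = Str::of($document->content)->toEmbeddings();
        }
    }
}

// Register the observer, e.g. in AppServiceProvider::boot()
Document::observe(DocumentObserver::class);
```

With that in place, every published document is immediately searchable via whereVectorSimilarTo() or the SimilaritySearch tool shown earlier.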
Anonymous Agents
For one-off interactions where creating a full agent class is overkill, use the agent() helper:
```php
use function Laravel\Ai\{agent};

$response = agent(
    instructions: 'You are a helpful coding assistant.',
)->prompt('How do I create a migration in Laravel?');
```
You can pass tools, messages, and schema just like a class-based agent.
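For instance, a hedged sketch of an anonymous agent with a tool attached, assuming the helper accepts a named tools argument mirroring the class-based HasTools interface:

```php
use function Laravel\Ai\{agent};
use App\Ai\Tools\GetOrderStatus;

$response = agent(
    instructions: 'You are a support assistant. Use the available tools to look up orders.',
    tools: [new GetOrderStatus],
)->prompt('What is the status of order #12345?');
```

If the one-off agent starts accumulating tools and schema, that's usually the signal to promote it to a dedicated class via make:agent.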
Failover
Pass an array of providers to automatically fall back if the primary one fails or times out:
```php
use Laravel\Ai\Enums\Lab;

$response = SalesCoach::make()->prompt(
    'Analyse this transcript...',
    provider: [Lab::OpenAI, Lab::Anthropic],
);
```
This works across all capabilities: agents, images, audio, embeddings.
Middleware
For cross-cutting concerns like logging, analytics, or rate limiting, agents support middleware:
```php
use Laravel\Ai\Contracts\HasMiddleware;

class SalesCoach implements Agent, HasMiddleware
{
    use Promptable;

    public function middleware(): array
    {
        return [new LogPrompts];
    }

    // ...
}
```
```php
use Closure;
use Illuminate\Support\Facades\Log;
use Laravel\Ai\Prompts\AgentPrompt;
use Laravel\Ai\Responses\AgentResponse;

class LogPrompts
{
    public function handle(AgentPrompt $prompt, Closure $next)
    {
        Log::info('Prompting agent', ['prompt' => $prompt->prompt]);

        return $next($prompt)->then(function (AgentResponse $response) {
            Log::info('Agent responded', ['text' => $response->text]);
        });
    }
}
```
The pattern mirrors Laravel's HTTP middleware. Receive the prompt, optionally modify it, call $next, and optionally act on the response.
Reranking
When you have a list of search results and want to re-score them against a query for better relevance:
```php
use Laravel\Ai\Reranking;

$response = Reranking::of([
    'Django is a Python web framework.',
    'Laravel is a PHP web application framework.',
    'React is a JavaScript library.',
])->rerank('PHP frameworks');

$response->first()->document; // "Laravel is a PHP web application framework."
$response->first()->score;    // 0.95
```
This also works on Eloquent collections:
```php
$posts = Post::all()->rerank('body', 'Laravel tutorials');
```
File and Vector Store Management
For providers that support hosted file storage (like OpenAI), you can upload files and create searchable vector stores:
```php
use Laravel\Ai\Files\Document;
use Laravel\Ai\Stores;

// Upload a file to the provider
$response = Document::fromStorage('manual.pdf')->put();
$fileId = $response->id;

// Create a vector store and add files to it
$store = Stores::create('Knowledge Base');
$store->add(Document::fromStorage('guide.pdf'), metadata: [
    'author' => 'Jonathan Bird',
    'category' => 'documentation',
]);
```
Then give your agent access to the store using the FileSearch tool:
```php
use Laravel\Ai\Providers\Tools\FileSearch;

public function tools(): iterable
{
    return [
        new FileSearch(stores: ['your-store-id']),
    ];
}
```
Testing
The SDK provides a complete testing API. Every capability can be faked and asserted against, following the same patterns as Laravel's Http::fake(), Queue::fake(), etc.
Testing an Agent
```php
use App\Ai\Agents\SalesCoach;
use Laravel\Ai\Prompts\AgentPrompt;
use Tests\TestCase;

class SalesCoachTest extends TestCase
{
    public function test_it_analyses_transcripts(): void
    {
        SalesCoach::fake(['Great opening, but the close needs work.']);

        $response = $this->postJson('/analyse', [
            'transcript' => 'Hi, I wanted to talk about pricing...',
        ]);

        $response->assertOk();

        SalesCoach::assertPrompted(function (AgentPrompt $prompt) {
            return str_contains($prompt->prompt, 'pricing');
        });
    }
}
```
Testing Images, Audio, and Embeddings
The same pattern applies to every capability:
```php
use Laravel\Ai\Image;
use Laravel\Ai\Audio;
use Laravel\Ai\Embeddings;
use Laravel\Ai\Prompts\ImagePrompt;

// Fake and assert image generation
Image::fake();
// ...run your code...
Image::assertGenerated(function (ImagePrompt $prompt) {
    return $prompt->contains('logo') && $prompt->isSquare();
});

// Fake audio
Audio::fake();

// Fake embeddings
Embeddings::fake();
```
Preventing Stray Calls
To ensure no unexpected AI calls slip through in your test suite:
```php
SalesCoach::fake()->preventStrayPrompts();
Image::fake()->preventStrayImages();
Audio::fake()->preventStrayAudio();
Embeddings::fake()->preventStrayEmbeddings();
```
If any code path triggers an unfaked AI call, the test fails immediately. This works the same way as Http::preventStrayRequests().
Conclusion
I've been building with Laravel for years and this is one of those packages that just clicks. The API feels like it belongs in the framework. You're not learning some new paradigm or fighting a third-party client library. You're writing Laravel code that happens to talk to AI.
What impressed me most is the testing story. Being able to fake() any AI call and assert against it the same way you would with Http::fake() or Queue::fake() means you can actually ship AI features with confidence. That's not something most AI libraries even think about.
The SDK is still early (v0.x at the time of writing) so expect things to evolve, but the foundation is solid and the Laravel team is actively developing it.
If you want to get started, run composer require laravel/ai, check out the official documentation, and start building. I'd love to hear what you come up with.