AI inference, inside Solana programs.
Solana LLM Oracle lets programs create agents, start chats, and run AI inference — without external APIs.
// Install solana-llm-oracle
cargo add solana-llm-oracle --features cpi
use anchor_lang::prelude::*;
use solana_llm_oracle::cpi::{
    accounts::CreateChat,
    create_chat,
};

const AGENT_DESC: &str =
    "You are a helpful assistant.";
How it works
Three simple steps to integrate AI inference into your Solana program
Initialize a chat context for your AI agent with a system prompt
use anchor_lang::prelude::*;
use solana_llm_oracle::cpi::{
    accounts::CreateChat,
    create_chat,
};

const AGENT_DESC: &str = "You are a helpful assistant.";

pub fn initialize(ctx: Context<Initialize>, seed: u8) -> Result<()> {
    // Store the chat context on your agent account
    ctx.accounts.agent.chat_context = ctx.accounts.chat_context.key();
    ctx.accounts.agent.bump = ctx.bumps.agent;

    // Invoke the oracle program via CPI to create the chat
    let cpi_program = ctx.accounts.oracle_program.to_account_info();
    let cpi_accounts = CreateChat {
        user: ctx.accounts.signer.to_account_info(),
        chat_context: ctx.accounts.chat_context.to_account_info(),
        system_program: ctx.accounts.system_program.to_account_info(),
    };
    let cpi_ctx = CpiContext::new(cpi_program, cpi_accounts);
    create_chat(cpi_ctx, AGENT_DESC.to_string(), seed)?;

    Ok(())
}
See the full implementation in the DeFi Score example
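The handler above references an `Initialize` accounts context that is not shown. A minimal sketch of what it might look like follows — the `Agent` account layout, PDA seeds, account space, and the use of `UncheckedAccount` for the oracle-owned chat context are assumptions for illustration, not the crate's documented interface:

```rust
use anchor_lang::prelude::*;

// Agent state referenced by the handler (fields inferred from the snippet)
#[account]
pub struct Agent {
    pub chat_context: Pubkey,
    pub bump: u8,
}

#[derive(Accounts)]
#[instruction(seed: u8)]
pub struct Initialize<'info> {
    #[account(mut)]
    pub signer: Signer<'info>,
    // PDA holding the agent state; seeds and space are assumptions
    #[account(
        init,
        payer = signer,
        space = 8 + 32 + 1,
        seeds = [b"agent", signer.key().as_ref(), &[seed]],
        bump
    )]
    pub agent: Account<'info, Agent>,
    /// CHECK: created and owned by the oracle program via CPI
    #[account(mut)]
    pub chat_context: UncheckedAccount<'info>,
    /// CHECK: the Solana LLM Oracle program invoked via CPI
    pub oracle_program: UncheckedAccount<'info>,
    pub system_program: Program<'info, System>,
}
```

With a context like this, Anchor populates `ctx.bumps.agent` automatically, which is why the handler can persist the bump without deriving it by hand.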
AI as a native on-chain primitive
Solana LLM Oracle is an oracle that enables LLM inference from within Solana programs. Not a SaaS wrapper — real infrastructure for the next generation of on-chain applications.
No API Keys
Direct on-chain execution without external API dependencies or key management.
No Centralized Trust
Eliminate off-chain trust assumptions. Everything is verifiable on Solana.
Fully Composable
Native integration with existing Solana programs and DeFi protocols.
Oracle-Powered
LLM inference delivered through a decentralized oracle network.
What exists today
This is not simulation — these capabilities are live and working on Solana devnet right now.
Create AI Agents
Define agents with system prompts directly inside your Solana program.
Start Persistent Chats
Maintain conversation context on-chain across multiple interactions.
Execute Inferences
Run AI inferences that are fully composable with program logic.
Program-First Design
Everything executes from within your Solana program — no external calls.
What you can build with SLO
Unlock new design spaces for Solana applications with native AI capabilities.
AI-Driven Eligibility Engines
Gate protocol access based on intelligent wallet analysis.
Agent-Based Token Minters
Create tokens with AI-determined parameters and rules.
On-Chain Decision Systems
Build autonomous systems that make verifiable decisions.
AI-Reactive Games
Games that adapt and respond using on-chain AI logic.
Composable AI Logic
AI primitives that integrate across protocols seamlessly.
DeFi Score Program
Live on GitHub
A live example that evaluates wallets using off-chain signals such as X activity. Protocols can use the resulting score to gate access, determine eligibility, or power AI-driven allowlists.
What's coming
A clear path toward trustless, verifiable AI execution on Solana.
Foundation
- Agent creation from programs
- Persistent chat contexts
- Basic inference execution
- DeFi Score example
Native Integration
- Native AI calls inside Solana programs
- Chat UI powered by SLO
- JavaScript / TypeScript SDK
- Enhanced developer tooling
Advanced Execution
- MagicBlock ephemeral layer integration
- TEE-based oracle execution
- Trustless, verifiable AI inference
- Production-grade security
Why this matters
AI should be composable
Just like DeFi primitives compose into complex protocols, AI inference should be a building block any program can use.
APIs break decentralization
External API calls introduce single points of failure, trust assumptions, and censorship vectors.
On-chain AI unlocks new design space
Applications that were impossible before — autonomous agents, AI-reactive protocols, intelligent smart contracts.
Solana's speed makes this possible
Only Solana's performance characteristics enable practical on-chain AI inference at scale.