KitchenAI is a control plane for AI implementations — designed to bridge the gap between application developers and AI teams. Our platform simplifies AI integration with a loosely coupled, modular architecture that delivers production-grade reliability while letting your teams focus on what they do best.
- Explore Our Interactive Playground: Try KitchenAI in Action
- Take a Guided Tour: Watch the Guided Tour
- Bento Boxes: Package your AI workflows into independent "bento boxes" that encapsulate complex logic.
- Flexibility: Update, replace, or scale individual modules without disrupting your overall system.
- Clear Separation: Let AI teams build advanced logic in a reproducible and swappable space, while app developers enjoy a simple, stable API.
- Powered by NATS (see the sketch after this list):
  - Lightning-fast, reliable communication between AI modules
  - Dynamic service discovery and routing
  - Robust support for event-driven workflows in distributed environments
- Plug & Play:
  - No vendor lock-in: integrate with any AI framework or model
  - Native support for LangChain, LlamaIndex, and custom implementations
  - Future-proof your AI infrastructure with flexible integration options
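To make the messaging layer concrete, here is a minimal sketch of the NATS request-reply pattern using the nats-py client. The subject name, payloads, and local server URL are illustrative assumptions for the sketch, not KitchenAI's internal wire protocol.

```python
# A minimal request-reply sketch with nats-py, illustrating the pattern
# KitchenAI builds on. Subject names and payloads are illustrative only.
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    # "Bento box" side: subscribe to a subject and answer requests.
    async def handle(msg):
        await msg.respond(b"answer for: " + msg.data)

    await nc.subscribe("demo.query", cb=handle)

    # Application side: address the request by subject, not by host,
    # so the responder can move or scale without the caller changing.
    reply = await nc.request("demo.query", b"what is kitchenai?", timeout=2)
    print(reply.data.decode())

    await nc.drain()

asyncio.run(main())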
KitchenAI’s three-layer architecture makes it easy to manage your AI workflows:
- Application Layer: Your business applications call a simple, unified API (just like using OpenAI's Chat Completions).
- NATS Messaging Layer: A high-performance backbone for routing messages and discovering services dynamically.
- Bento Boxes Layer: Modular AI implementations where your AI team builds the complex logic (be it LLM logic, RAG, agents, or custom workflows).
Your code remains clean and simple:
```python
# Simple integration using OpenAI's Chat Completions API
from openai import AsyncOpenAI

# Configure base_url to point at your KitchenAI deployment
openai_client = AsyncOpenAI()

response = await openai_client.chat.completions.create(
    model="@llama-index-agents/query",  # your bento box client ID
    messages=[{"role": "user", "content": data.query}],
)
```
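Because the endpoint speaks the standard Chat Completions format, any existing OpenAI-compatible client can reach a bento box simply by passing its client ID as the model name.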
Focus on building powerful AI code:
```python
@kitchen.query.handler("query")
async def query_handler(data: WhiskQuerySchema) -> WhiskQueryBaseResponseSchema:
    # Advanced RAG implementation with best practices built in
    index = VectorStoreIndex.from_vector_store(vector_store)
    query_engine = index.as_query_engine(
        chat_mode="best",
        filters=filters,
        llm=llm,
        verbose=True,
    )
    # Run the query and wrap the LlamaIndex result in the Whisk response schema
    response = await query_engine.aquery(data.query)
    return WhiskQueryBaseResponseSchema.from_llama_response(data, response)
```
KitchenAI is designed to be self-hosted. You can deploy the control plane and the bento boxes separately.
- Clone the KitchenAI repository:

```bash
git clone https://github.com/epuerta9/kitchenai.git
```

- Bring up the control plane and its dependencies:

```bash
docker compose up -d
```
- Create the bucket for media. KitchenAI uses S3 for media storage; for local development, the compose file includes a MinIO container. This only needs to be done the first time: log in to the MinIO console and create a bucket named kitchenai.
  - Endpoint: http://localhost:9001
  - Username: minioadmin
  - Password: minioadmin
  - Bucket name: kitchenai
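If you would rather script this step than use the console, here is a minimal sketch using boto3; it assumes MinIO's default S3 API port (9000, while the console runs on 9001) and the default credentials above.

```python
# Create the "kitchenai" media bucket on the local MinIO container.
# Assumes MinIO's S3 API on port 9000 (the console runs on 9001)
# and the default minioadmin credentials from the compose file.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)
s3.create_bucket(Bucket="kitchenai")
```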
- Bring up the bento boxes using the demo notebooks.
- Version Control & Rollback: Safely iterate and revert as needed.
- Monitoring & Observability Hooks: Integrate with your favorite tools.
- Plugin Ecosystem: Extend KitchenAI with additional capabilities.
- Security Integrations: Designed with production-grade best practices.
KitchenAI is still in beta—we're excited to have early adopters help shape the platform.
- Join the Waitlist: Get Early Access
- Play in Our Playground: Try it out now
KitchenAI is released under the Apache 2.0 License.
Built with ❤️ by the KitchenAI Team