The easiest tool for fine-tuning LLM models, synthetic data generation, and collaborating on datasets.
- 🚀 Intuitive Desktop Apps: One-click apps for Windows, MacOS, and Linux. Truly intuitive design.
- 🎛️ Fine Tuning: Zero-code fine-tuning for Llama, GPT-4o, and Mixtral. Automatic serverless deployment of models.
- 🤖 Synthetic Data Generation: Generate training data with our interactive visual tooling.
- 🤝 Team Collaboration: Git-based version control for your AI datasets. Intuitive UI makes it easy to collaborate with QA, PM, and subject matter experts on structured data (examples, prompts, ratings, feedback, issues, etc.).
- 📝 Auto-Prompts: Generate a variety of prompts from your data, including chain-of-thought, few-shot, and multi-shot.
- 🌐 Wide Model and Provider Support: Use any model via Ollama, OpenAI, OpenRouter, Fireworks, Groq, AWS, or any OpenAI compatible API.
- 🧑‍💻 Open-Source Library and API: Our Python library and OpenAPI REST API are MIT open source.
- 🔒 Privacy-First: We can't see your data. Bring your own API keys or run locally with Ollama.
- 🗃️ Structured Data: Build AI tasks that speak JSON.
- 💰 Free: Our apps are free, and our library is open-source.
In this demo, I create 9 fine-tuned models (including Llama 3.x, Mixtral, and GPT-4o-mini) in just 18 minutes, achieving great results at a total cost of under $6. See details.
The Kiln desktop app is completely free. Available on MacOS, Windows and Linux.
Our open-source Python library allows you to integrate Kiln datasets into your own workflows, build fine-tunes, use Kiln in notebooks, build custom tools, and much more! Read the docs.
pip install kiln-ai
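As a quick illustration, you can read a Kiln project's dataset files directly in a notebook using only the standard library. This is a sketch only: the folder layout and field names below are assumptions for illustration, and the documented `kiln-ai` API is the supported way to work with datasets.

```python
# Illustrative sketch: reading Kiln dataset files from a project folder with
# only the standard library. The folder layout and field names here are
# assumptions; prefer the kiln-ai library's documented API for real use.
import json
from pathlib import Path

project_dir = Path("~/Kiln Projects/my_project").expanduser()  # assumed location

for record_file in sorted(project_dir.rglob("*.json")):
    record = json.loads(record_file.read_text())
    # "id" and "input" are illustrative field names, not a format guarantee.
    print(record_file.name, record.get("id"), str(record.get("input", ""))[:60])
```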
- Fine Tuning LLM Models
- Synthetic Data Generation
- Collaborating with Kiln - How to share Kiln projects with your team.
- Using the Kiln Python Library - Includes how to load datasets into Kiln, and how to use Kiln datasets in your own projects/notebooks.
- Model Support - Included models, and how to add more.
Products don’t naturally have “datasets”, but Kiln helps you create one.
Every time you use Kiln, we capture the inputs, outputs, human ratings, feedback, and repairs needed to build high-quality models for use in your product. The more you use it, the more data you have.
Your model quality improves automatically as the dataset grows, by giving the models more examples of quality content (and mistakes).
If your product goals shift or new bugs are found (as is almost always the case), you can easily iterate the dataset to address issues.
When building AI products, there’s usually a subject matter expert who knows the problem you are trying to solve, and a different technical team assigned to build the model. Kiln bridges that gap as a collaboration tool.
Subject matter experts can use our easy-to-use desktop apps to generate structured datasets and ratings, without coding or technical tools. No command line or GPU required.
Data scientists can consume the dataset created by subject matter experts using the UI, or dive deep with our Python library.
QA and PM can identify issues sooner and help generate the dataset content needed to fix them at the model layer.
The dataset file format is designed to be used with Git for powerful collaboration and attribution. Many people can contribute in parallel; collisions are avoided using UUIDs, and attribution is captured inside the dataset files. You can even share a dataset on a shared drive, letting completely non-technical team members contribute data and evals without knowing Git.
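To see why UUID-keyed records avoid merge collisions, consider the following sketch. The file naming and fields are assumptions for illustration, not the exact Kiln format: two contributors adding examples in parallel produce distinct files, so a Git merge (or a shared drive) never conflicts, and attribution travels with each record.

```python
# Illustrative sketch of UUID-keyed dataset records. File naming and fields
# are assumptions, not the exact Kiln format.
import json
import uuid
from pathlib import Path

def add_example(dataset_dir: Path, author: str, input_text: str, output_text: str) -> Path:
    record_id = uuid.uuid4().hex          # globally unique, no coordination needed
    record = {
        "id": record_id,
        "created_by": author,             # attribution captured inside the data
        "input": input_text,
        "output": output_text,
    }
    path = dataset_dir / f"{record_id}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

dataset = Path("dataset")
dataset.mkdir(exist_ok=True)
add_example(dataset, "alice", "Summarize this ticket...", "Customer reports a duplicate charge.")
add_example(dataset, "bob", "Classify the sentiment of this review...", "positive")
```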
New models and techniques are emerging all the time. Kiln makes it easy to try a variety of approaches and compare them in a few clicks, without writing code. These comparisons can yield higher quality or better performance (smaller, cheaper, faster models at the same quality).
Our current beta supports:
- Various prompting techniques: basic, few-shot, multi-shot, repair & feedback
- Many models: GPT, Llama, Claude, Gemini, Mistral, Gemma, Phi
- Chain of thought prompting, with optional custom “thinking” instructions
In the future, we plan to add more powerful no-code options like fine-tuning, LoRA, evals, and RAG. Experienced data scientists can build these techniques today using Kiln datasets and our Python library.
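As a rough illustration of the prompting techniques listed above, a multi-shot prompt with a custom chain-of-thought "thinking" instruction could be assembled from dataset examples along these lines. This is a sketch only, not Kiln's actual prompt builder, and all names are illustrative:

```python
# Sketch of assembling a multi-shot prompt with a chain-of-thought "thinking"
# instruction from dataset examples. Illustrative only; not Kiln's internals.
def build_prompt(instructions: str, thinking: str, examples: list[dict], new_input: str) -> str:
    parts = [instructions, "", f"Before answering, think step by step: {thinking}", ""]
    # Few-shot vs. multi-shot is simply how many examples you include.
    for i, ex in enumerate(examples, start=1):
        parts += [f"## Example {i}", f"Input: {ex['input']}", f"Output: {ex['output']}", ""]
    parts += ["## Your turn", f"Input: {new_input}", "Output:"]
    return "\n".join(parts)

prompt = build_prompt(
    instructions="Extract the customer's issue as JSON.",
    thinking="List the facts in the message before choosing a category.",
    examples=[{"input": "My card was charged twice", "output": '{"category": "billing"}'}],
    new_input="The app crashes when I upload a photo",
)
print(prompt)
```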
We prioritize data correctness, which makes integrating into AI products easier. No data gets into the dataset without first passing validation, which keeps the dataset clean.
Our easy-to-use schema UI lets you create and use structured schemas without knowing JSON Schema formatting. For technical users, we support any valid JSON Schema for inputs and outputs.
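For example, a task's output schema might look like the following. The fields are hypothetical, and the third-party `jsonschema` package is used here only to illustrate the kind of validation that keeps malformed data out of a dataset:

```python
# Hypothetical output schema for a structured task, expressed as ordinary
# JSON Schema. The fields are illustrative; Kiln's schema UI can build an
# equivalent schema without hand-writing this.
from jsonschema import validate  # third-party package, used here for illustration

output_schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "bug", "feature_request"]},
        "summary": {"type": "string", "maxLength": 200},
        "urgent": {"type": "boolean"},
    },
    "required": ["category", "summary"],
    "additionalProperties": False,
}

# Validation of this kind is what keeps malformed model outputs out of the dataset.
validate(
    instance={"category": "billing", "summary": "Charged twice for one order", "urgent": True},
    schema=output_schema,
)
```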
Your data stays completely private and local to your machine. We never collect or have access to:
- Datasets / Training Data
- API keys
- Model inputs/outputs (runs)
You can run completely locally using Ollama, or bring your own keys for OpenAI, OpenRouter, Groq, AWS, etc.
Note: We collect anonymous usage metrics via Posthog analytics (never including dataset content or PII). This can be blocked with standard ad-blockers.
We offer a self-hostable REST API for Kiln based on FastAPI. Read the docs.
The REST API supports OpenAPI, so you can generate client libraries for almost any language.
pip install kiln_server
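Because the server is built on FastAPI, the machine-readable OpenAPI spec should be served at `/openapi.json` on the running server. The host and port below are assumptions for illustration:

```python
# Fetch the running server's OpenAPI spec (FastAPI serves it at /openapi.json
# by default). The base URL is an assumption; point it at wherever you run
# the Kiln server.
import json
import urllib.request

BASE_URL = "http://localhost:8757"  # assumed host/port for illustration

with urllib.request.urlopen(f"{BASE_URL}/openapi.json") as resp:
    spec = json.load(resp)

print(spec["info"]["title"], spec["info"]["version"])
for path in sorted(spec["paths"]):
    print(path)  # list the documented REST endpoints
```

From the same spec, generator tools such as openapi-python-client (or equivalents for other languages) can produce a typed client.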
See CONTRIBUTING.md for information on how to setup a development environment and contribute to Kiln.
- Python Library: MIT License
- Python REST Server/API: MIT License
- Desktop App: free to download and use under our EULA, and source-available. License
- The Kiln names and logos are trademarks of Chesterfield Laboratories Inc.
Copyright 2024 - Chesterfield Laboratories Inc.