CodeReview AI reads your team's actual documentation — security policies, style guides, architecture standards — and checks every pull request against them. Findings cite the exact doc and line range. No guessing.
The docs exist. The standards are written down. But none of it is wired into the review process where it actually matters.
Senior engineers repeat the same feedback on every pull request. The naming convention, the auth pattern, the testing requirement. It's all in the docs, but nobody checks.
Onboarding docs, contributing guides, security policies — they exist in wikis and markdown files. New team members push code that violates standards nobody told them about.
When senior engineers are the only ones who know the rules, every PR waits in their queue. The obvious stuff should be flagged before a human opens the review.
Install the GitHub App. CodeReview scans your repo, finds your docs, classifies them into review gates, and starts reviewing PRs. That's it.
One-click GitHub App install. Pick your repos. No tokens to manage, no webhooks to configure.
CodeReview scans your repo tree, finds documentation files, and classifies each into review gates: Security, Architecture, Style, Legal, Onboarding.
You see what it found and confirm. Multi-topic docs get decomposed into gate-specific sections with exact line ranges. You're in control.
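For a concrete picture, here's a minimal sketch of what a classification result could look like. The type names, fields, and values are illustrative assumptions for this sketch, not CodeReview's actual manifest format.

```typescript
// Illustrative sketch only; the real manifest format may differ.
type Gate = "Security" | "Architecture" | "Style" | "Legal" | "Onboarding";

interface DocSection {
  path: string;      // source documentation file
  gate: Gate;        // review gate this section was classified into
  startLine: number; // exact line range within the doc
  endLine: number;
}

// A multi-topic doc decomposed into gate-specific sections:
const sections: DocSection[] = [
  { path: "CONTRIBUTING.md", gate: "Style", startLine: 12, endLine: 48 },
  { path: "CONTRIBUTING.md", gate: "Security", startLine: 49, endLine: 73 },
];
```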
Open a PR. CodeReview posts findings as a comment — traffic light summary, per-gate breakdown, doc citations. Done in seconds.
getData uses camelCase. Your style guide requires snake_case for Python.
📖 docs/STYLE_GUIDE.md, lines 31–35
Other tools check against generic rules. CodeReview AI checks against your team's actual documentation. And it tells you which is which.
The finding came from your documentation. It cites the exact file, section, and line range, so you can verify it in 10 seconds. This is a rule your team wrote down; the system is just enforcing it.
The finding is the model's own suggestion based on general best practices. It's clearly labeled so your team knows it's not from your docs: useful context, but never presented as your policy.
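To make that distinction concrete, here's one way the two kinds of findings could be modeled. These types are an illustrative sketch, not the product's actual schema.

```typescript
// Illustrative types; not the actual schema.
interface DocCitation {
  file: string; // e.g. "docs/STYLE_GUIDE.md"
  section: string;
  startLine: number;
  endLine: number;
}

// A finding is either grounded in your docs, with a verifiable
// citation, or explicitly labeled as the model's own suggestion.
type Finding =
  | { kind: "doc-grounded"; message: string; citation: DocCitation }
  | { kind: "model-suggestion"; message: string };
```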
A second AI pass re-reads every finding against the source document before posting. If it can't prove the finding from what's actually written, the finding gets killed. The system doesn't fabricate policy violations.
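In outline, that verification step could look something like the sketch below. The function shape and the askModel interface are placeholders assumed for illustration, not the real pipeline internals.

```typescript
// Illustrative outline only; `askModel` stands in for the second AI pass.
type Verdict = "supported" | "unsupported";

async function verifyFinding(
  findingMessage: string,
  citedDocExcerpt: string, // the exact lines the finding cites, re-read from source
  askModel: (prompt: string) => Promise<Verdict>
): Promise<boolean> {
  const verdict = await askModel(
    `Finding: ${findingMessage}\n\nCited text:\n${citedDocExcerpt}\n\n` +
      `Answer "supported" only if the cited text proves the finding.`
  );
  return verdict === "supported"; // unproven findings are dropped before posting
}
```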
When a review comment comes up that isn't covered in the docs yet, update the doc. CodeReview picks up the change automatically before the next review. That comment never needs to be made again. The docs become a system, not shelf-ware.
We're a small team asking you to install a GitHub App on your private repos. We take that seriously. Here's how the system is built.
If the system can't prove a finding from your docs, the finding doesn't ship. If GitHub is unreachable, the review posts an error comment rather than failing silently. Every decision path has a fallback, and every fallback logs why.
CodeReview AI reads your code diffs and documentation to produce review comments. It does not modify your code, merge PRs, or access anything outside the repositories you explicitly grant.
Code diffs are processed in memory during the review and are not persisted. Your documentation is cached for review performance and refreshed on each scan. You can re-scan or disconnect at any time.
Structured logging with request IDs runs through the entire pipeline. Every scan, classification, review, and verification step is traceable. You can see exactly what happened and why.
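As a rough sketch of the idea, with illustrative field names rather than the actual log format:

```typescript
// Illustrative only: one request ID threads through scan, classification,
// review, and verification, so every step for a given run is traceable.
import { randomUUID } from "node:crypto";

function log(requestId: string, step: string, detail: Record<string, unknown>): void {
  console.log(JSON.stringify({ ts: new Date().toISOString(), requestId, step, ...detail }));
}

const requestId = randomUUID();
log(requestId, "scan", { repo: "acme/api", docsFound: 7 });
log(requestId, "review", { pr: 42, findings: 3 });
```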
No credit card required. No contracts. Cancel anytime.
CodeReview AI is in active beta. It works — it's been tested against production repos including Cal.com, Stripe, Next.js, and Supabase. But we're still learning from every install. If you're on a compliance-heavy team and willing to try it, we'd genuinely appreciate the feedback.
Try CodeReview AI →