Navigate the
AI transformation.

Run AI-native engineering on data. Built by CTOs who got tired of guessing.

Start for free
Overview / AI Transformation

AI Transformation - Year over Year

Engineering performance grew 128% YoY while headcount grew only 28% (34 devs).

Avg. developer performance
+78%
24.4 ETV from 13.7 ETV in Q1'25
Total performance
+128%
Total Performance grew from 1,658 ETV to 3,780 ETV YoY
Engineering headcount
+28%
121 → 155 year over year.
Performance vs Headcount
Headcount (left axis, dashed) vs Total Performance (right axis, solid). Labels show performance (ETV) change vs Q1'25 baseline.
[Chart: Headcount (dashed, left axis) vs Total Performance (solid, right axis), Q1'25–Q1'26. Performance vs Q1'25 baseline: −17% (Q2'25), +37% (Q3'25), +96% (Q4'25), +128% (Q1'26).]

Performance is measured the way a senior engineer would measure it: by reading the story behind each PR. We use a mix of LLMs, ML, and algorithms to do this at scale. Performance is measured in ETV (Engineering Throughput Value), and each commit is classified into Growth, Maintenance, or Fixes. Our white paper is at research.navigara.com.
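The bucketing step can be sketched in a few lines. This is a hypothetical keyword heuristic for illustration only; the actual product reads the full PR story with LLMs, and the function names here (`classify_commit`, `work_mix`) are made up:

```python
def classify_commit(message: str) -> str:
    """Bucket a commit message into Fixes, Maintenance, or Growth.

    Hypothetical stand-in: the real pipeline uses LLMs to read the
    story behind each PR, not keyword matching.
    """
    msg = message.lower()
    if any(k in msg for k in ("fix", "bug", "hotfix", "revert")):
        return "Fixes"
    if any(k in msg for k in ("refactor", "upgrade", "deps", "cleanup", "chore")):
        return "Maintenance"
    return "Growth"  # default: net-new value


def work_mix(messages: list[str]) -> dict[str, float]:
    """Share of each bucket across a list of commit messages."""
    counts = {"Growth": 0, "Maintenance": 0, "Fixes": 0}
    for m in messages:
        counts[classify_commit(m)] += 1
    total = sum(counts.values()) or 1
    return {k: round(v / total, 2) for k, v in counts.items()}
```

Fed a quarter's worth of commits, `work_mix` yields the Growth/Maintenance/Fixes shares shown in the Work mix panel below.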

02 Research · research.navigara.com

Built on top of research. Navigara's measurement layer is grounded in Open-Source Engineering Performance (OSS-EPR), a study of public repositories across six AI-forward organizations. The same model that benchmarks Cloudflare, Meta, OpenAI, and Microsoft is pointed at your team on day one.

EDGE
Cloudflare
FRAMEWORKS
Vercel
PLATFORM
Google
RUNTIME
Meta
FRONTIER
OpenAI
DEVTOOLS
Microsoft
Navigara engine
Performance
See how engineering performance changes month over month, and the signals that explain why.
Work mix
Growth, Maintenance, and Fixes shares, so leaders see where output is being booked, not just how much.
Benchmark
Your team's per-quarter trajectory placed alongside the six reference organizations.
Performance change · Q1'25 → Q1'26
+116%
Mean ETV / Developer · CI [+84, +148]
Growth share · Q1'25 → Q1'26
30% → 36%
New-value output · +7pp shift
Fixes share · Q1'25 → Q1'26
15% → 18%
Repair output · +3pp shift
Trajectory range
+51% → +373%
Meta · slowest  →  OpenAI · fastest
Read the white paper · Full methodology →
OSS-EPR · v1.0 · 2026-04-30
03 Capabilities

Five things you can do on Monday morning.

F.01 Benchmarking

Prove AI is working.

Stop arguing about whether the new tools are paying off. Benchmark your team against itself a year ago: same people, same codebase, real numbers. Real delta, no debates.

  • Year-over-year comparisons, normalized for headcount and project scope
  • Drill into any team, repo, or individual contributor
  • Board-ready charts you won't have to defend in the room
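The headcount normalization behind the first bullet can be sketched as follows; this is a simplified illustration (the white paper describes the full project-scope normalization), using the figures from the overview above:

```python
def yoy_delta(etv_now: float, devs_now: int,
              etv_base: float, devs_base: int) -> dict[str, int]:
    """Year-over-year performance change, total and per developer.

    Normalizing by headcount separates "we hired more people" from
    "each developer ships more". Simplified sketch; the full
    methodology also normalizes for project scope.
    """
    per_dev_now = etv_now / devs_now
    per_dev_base = etv_base / devs_base
    return {
        "total_pct": round(100 * (etv_now / etv_base - 1)),
        "per_dev_pct": round(100 * (per_dev_now / per_dev_base - 1)),
        "headcount_pct": round(100 * (devs_now / devs_base - 1)),
    }
```

With the numbers above, `yoy_delta(3780, 155, 1658, 121)` reproduces the overview cards: +128% total, +78% per developer, +28% headcount.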
Avg. developer performance · last 5 quarters
Performance per developer per quarter, calculated from commit complexity, architecture changes, and deployment impact. Full methodology here →
[Chart: baseline 5.0 ETV (Q1'25); 3.95 ETV, −21% (Q2'25); 6.0 ETV, +20% (Q3'25); 8.85 ETV, +77% (Q4'25); 11.4 ETV, +128% (Q1'26).]
% change vs Q1'25 · baseline 5 ETV/dev · 418 SWE cohort · OSS-EPR v1.0
F.02 Industry pulse

See what other companies are doing.

Real-time view into what's happening across engineering: new tools, new processes, new patterns. Implement what's working, faster.

  • Anonymized adoption curves from 200+ engineering orgs
  • Filter by company stage, headcount, and stack
  • A weekly digest of what your peers shipped
Industry pulse · This week
live
Cursor 0.9
Adoption +14pt among Series B+ orgs
42 of 84 sampled · last 7 days
+14pt
Linear M
Stacked PR workflow rolled out at 12 companies.
Median cycle time dropped 31%
−31%
Claude C.
Auto-review now active in 38% of repos
Up from 9% three months ago
+29pt
Trunk-only
Single-branch teams ship 1.6× more
Controlled for team size
1.6×
214 orgs · anonymized · refreshed hourly
F.03 Reporting

Unbiased reporting your board will read.

Weekly, monthly, and quarterly reports on what your teams are actually doing: adoption rates, tool usage, output. Numbers you can take to the board.

  • One-click board pack: PDF, Notion, or Google Slides
  • Audit trail with raw signals behind every metric
  • No vibes, no proxies, no team self-reports
Q1-2026-Board-Pack.pdf
Section 03 · Engineering output · Q1 2026
+47%
vs. Q1 2025 baseline
Per-developer performance rose across most teams this quarter, with quality holding steady. The Growth team dipped after senior attrition.
Average developer performance, by team
ETV per developer per week · Q1'25 baseline vs Q1'26
Q1'25 · Q1'26
Platform
4.2 → 7.2 · +71% YoY
  • Rolled out Cursor + Claude to 100% of the team in May'25
  • Cut median PR review wait from 14h to 3.2h (CODEOWNERS rebuild)
  • Migrated 6 monoliths to Bazel; CI dropped from 22m to 7m
Payments
5.0 → 8.1 · +62% YoY
  • Replaced manual ledger reconciliation with an LLM-assisted pipeline
  • Hired 2 senior engineers from Stripe in Q2'25
  • Killed 4 legacy gateway integrations; 38% less code to maintain
API
3.8 → 5.6 · +47% YoY
  • Adopted Claude Code for spec-driven endpoint generation
  • Replaced REST handlers with auto-generated tRPC + OpenAPI sync
  • On-call rotation moved from weekly to bi-weekly; less context loss
Mobile
4.5 → 6.5 · +44% YoY
  • Unified iOS + Android under React Native (Q3'25)
  • Brought design system in-house; 60% fewer one-off components
  • Detox snapshot tests cut regression QA from 3 days to 4 hours
Search
4.8 → 6.4 · +33% YoY
  • Swapped Elasticsearch for Typesense; ops surface 70% smaller
  • Embedding pipeline moved from nightly batch to streaming
  • Killed legacy ranker; replaced with single learned-to-rank model
Growth
5.0 → 4.6 · −8% YoY
  • 2 senior engineers and the EM left in Q4'25; team has not refilled
  • Experimentation platform migration absorbed ~40% of remaining capacity
  • Hiring pipeline reopened in Feb; expect inflection by Q3'26
14,206 PRs · 312 engineers · 86 projects · audit 7f3a··de91 · page 04 / 18
F.04 Targets

Performance-based targets that move with the curve.

Every quarter, peer companies ship faster. We benchmark your team against your peer cohort in real time, then move your targets up to keep pace. The bar isn't where you were, it's where they are.

  • Targets re-indexed weekly against your peer cohort's actual output
  • See exactly how fast the curve is moving and where you sit on it
  • Slack alerts when the cohort accelerates and your team falls behind
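One way the weekly re-indexing could work is to project the peer median forward by the cohort's observed growth, then blend with the current target so the bar moves smoothly. This is a hypothetical formula, not the product's actual model; `reindex_target` and its parameters are illustrative:

```python
def reindex_target(current_target: float, peer_median: float,
                   cohort_growth_pct: float, blend: float = 0.5) -> float:
    """Move a target toward where the peer cohort is heading.

    Hypothetical sketch: project the peer median forward by the
    cohort's quarterly growth, then blend with the current target.
    'blend' weights the peer projection (1.0 = track the cohort exactly).
    """
    projected_peer = peer_median * (1 + cohort_growth_pct / 100)
    return round((1 - blend) * current_target + blend * projected_peer, 2)
```

For example, with a current target of 8.0, a peer median of 7.8, and the cohort up 6% this quarter, the re-indexed target lands between the two: the bar rises even if your own number didn't move.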
Targets · Q2 2026
Indexed against your peer cohort · updated weekly
cohort live
Your team · Peer median · Curve moved this quarter
Performance
peer ↑ +6% Q1→Q2
peer 7.8
8.4 / 10 · on-track
AI adoption
peer ↑ +14% Q1→Q2
peer 75%
62% · behind
Cycle time
peer ↓ −9% Q1→Q2
peer 12h
11h · on-track
Fixes
peer ↓ −8% Q1→Q2
peer 22%
16% · on-track
Built on Navigara Research · 2.4M PRs analyzed · next recalibration Apr 30
F.05 Slack

Ask in Slack. Get an answer.

Native Slack interface. Ask anything about your engineering org and get an answer. No dashboards to open, no filters to set.

  • Natural language across every metric, team, and timeframe
  • Citations link back to raw activity for verification
  • Schedule recurring questions as Slack digests
#eng-leadership · 42 members
AS
Aarav Singh · 10:42 AM
@navigara what did the Platform team ship last week?
N
Navigara APP · 10:42 AM
Platform team · 47 PRs merged across 3 projects.
Top deliverables:
→ Auth migration shipped (Mar 31, 2 weeks ahead of plan)
→ Rate-limiter rewrite merged, +18% throughput in staging
→ Observability v2 72% complete, on track for Apr 14
cite: 47 PRs · 18 engineers · 9 review threads
AS
Aarav Singh · 10:43 AM
how much unaligned work did we detect this sprint, and what?
N
Navigara APP · 10:43 AM
22% of merged work didn't trace to a Q2 OKR (214 of 974 PRs).
Largest pockets:
Search team · 84 PRs on a v2 ranker not in roadmap
Mobile · 41 PRs refactoring legacy nav (no JIRA link)
Growth · 28 PRs on an A/B framework rewrite
cite: 974 PRs · OKR-link inferred from JIRA + commit msg
AS
Aarav Singh · 10:45 AM
what would most improve the Search team's performance?
N
Navigara APP · 10:45 AM
Three highest-leverage interventions, ranked by peer-cohort impact:
Cut review SLA 26h → 10h · est. +19% throughput
Roll out Cursor (currently 31% adoption) · est. +12%
Break up 2 stale epics blocking 6 engineers · est. +8%
based on 312 peer teams that ran similar interventions
04 Security & Deployment

Your data, your environment, your rules.

Three deployment models. Pick the one that matches your security requirements.

01
Cloud SaaS
Fully managed. No infrastructure needed. Data is processed and stored in Navigara's cloud. Connect your sources, get insights in minutes.
02
Cloud SaaS with on-prem collector
SaaS frontend and API with a collector agent inside your network. Source code stays within your perimeter; only metadata and analysis results are sent to the cloud. Docs →
03
Full on-premises
All components deployed inside your infrastructure. Complete data sovereignty with no external dependencies. For regulated environments and air-gapped networks. Docs →
04
SSO, SCIM, and audit logs in every mode
SAML / OIDC via Okta, Entra, or Google. Role-based access for execs, managers, and ICs. Every read is logged and exportable.
What leaves your perimeter
per deployment
Cloud SaaS
Source + metadata in Navigara cloud
Fully managed · fastest to value · SOC 2 Type II
SaaS + Collector
Only metadata + analysis results
Source code never leaves your network · collector runs inside VPC
Full on-prem
Nothing leaves. At all.
Helm + Terraform · air-gap supported · BYO storage and identity
Compliance
SOC 2 · GDPR · ISO 27001 (Q3 '26)
DPA on request · sub-processor list public
# on-prem collector · what's emitted
commit.metadata shape, author, timestamps
review.activity approvals, comments
analysis.scores ETV per developer
source.code     stays in your network
secrets / .env never read
→ residency: your VPC / your region
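The collector's metadata-only contract above can be sketched as an allowlist filter; this is a hypothetical illustration of the guarantee, not the actual agent, and every field name here is made up:

```python
# Hypothetical sketch of the collector's allowlist: only commit metadata
# and analysis results leave the perimeter. Diffs and file contents are
# dropped before anything is sent. Field names are illustrative.

ALLOWED_FIELDS = {"sha", "author", "timestamp", "files_changed",
                  "insertions", "deletions", "etv_score"}


def emit_payload(commit: dict) -> dict:
    """Strip a raw commit record down to the metadata allowlist."""
    return {k: v for k, v in commit.items() if k in ALLOWED_FIELDS}
```

The key design choice is that it's an allowlist, not a blocklist: any field the collector doesn't explicitly recognize (source diffs, file contents, secrets) never leaves by default.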
07 Business Outcomes

Align engineering work to business outcomes.

Every quarter the CEO names the initiatives that move the company forward. Navigara measures, in Engineering Throughput Value, exactly how much each team is putting against each one, week by week, in dollars. No more "what is engineering doing?"

[Diagram: How Navigara works at team and repo scale. Commits from 8 teams and 12 repos (e.g. team-backend: checkout-svc · 124, billing-api · 89; team-frontend: web-app · 203, admin-ui · 67; team-platform: auth-svc · 89, data-svc · 156) flow into the Navigara engine and are processed in parallel by four stages: Architect (maps changes to features and modules), Classification (tags intent and work type), Performance (measures complexity and engagement), and Commit story (what changed and why, per commit). Each commit is cross-checked against JIRA and sorted into key work (strategic and tracked), aligned (tracked, low priority), or unaligned (ghost work, untracked). Results roll up into a repository story spanning every team, dev, and repo on one timeline, and surface as four priorities the org owns, with time and performance per signal.]

Run engineering
on data.

Connect your stack. See your first delta within an hour. The board pack writes itself.

Start for free