Cycle Time
Cycle Time measures the amount of time it takes to complete a task once active work has begun. It reflects how quickly work moves through the delivery process after it is pulled from the backlog.
In practice, Cycle Time is one of the clearest indicators of workflow efficiency because it captures how long work stays “in motion” after the team has committed to doing it. Lower cycle time typically enables tighter feedback loops, faster learning, and smoother delivery - when balanced against quality and predictability.
In AI-assisted teams (and increasingly, agentic workflows), Cycle Time can improve dramatically in early stages (coding, scaffolding, drafting), but still remain slow in later stages (review, validation, security, approvals). That makes Cycle Time especially useful as a bottleneck locator, not just a measure of speed.
How do you calculate Cycle Time?
Cycle Time is calculated by measuring the time between the start of active work and the point of completion:
cycle time = task completion time – task start time
To keep Cycle Time meaningful, teams should define start and completion consistently:
- Task start time commonly means the first “active work” signal (e.g., ticket moved to In Progress, first commit, first branch creation, or first PR opened).
- Task completion time commonly means the work is done and accepted (e.g., ticket moved to Done, PR merged, or a deployment completed - depending on how your organization defines “complete”).
For agentic or AI-assisted flows, it’s especially important to document what “start” means:
- If an AI agent opens a PR automatically, does Cycle Time start at PR open time?
- If a human initiates the task but an agent completes most of the implementation, do you attribute the same start event?
Consistency matters more than the specific choice, as long as you use the same rule over time and across teams.
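Once the start and completion events are defined, the calculation itself is simple subtraction. A minimal sketch in Python, assuming a team whose rule is "start = first In Progress transition, done = Done transition" (the ticket data here is hypothetical):

```python
from datetime import datetime, timedelta

def cycle_time(start: datetime, done: datetime) -> timedelta:
    """Cycle Time = task completion time - task start time."""
    return done - start

# Hypothetical tickets: "start" is the first In Progress transition,
# "done" is the Done transition. Your organization's rule may differ;
# what matters is applying the same rule consistently.
tickets = [
    {"id": "T-1", "start": datetime(2024, 5, 1, 9), "done": datetime(2024, 5, 3, 17)},
    {"id": "T-2", "start": datetime(2024, 5, 2, 10), "done": datetime(2024, 5, 2, 15)},
]

for t in tickets:
    hours = cycle_time(t["start"], t["done"]).total_seconds() / 3600
    print(t["id"], hours, "hours")  # T-1: 56.0 hours, T-2: 5.0 hours
```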
Why does Cycle Time matter?
Cycle Time helps teams evaluate the speed and consistency of their delivery process. It answers questions like:
- How long does it take to complete a task once it enters development?
- Are there consistent bottlenecks in our delivery flow?
- Are we improving our delivery speed over time?
Reducing cycle time enables tighter feedback loops and faster learning. For a data-driven breakdown of cycle time modeling, see minware’s report on Lead/Cycle Time and Workflow.
Cycle Time is also a practical leading indicator of delivery friction: if Cycle Time increases, something is slowing execution, often before stakeholders feel the full impact in release cadence or roadmap slippage.
How do AI and automation change Cycle Time in practice?
AI tools and agentic automation often change where Cycle Time accumulates.
Common patterns teams see:
- Faster implementation, same review time: AI can reduce coding time, but review queues, CI, and approvals still dominate the end-to-end cycle.
- More, smaller PRs: AI assistance can make it easier to ship smaller increments, improving flow - if review standards and automation keep up.
- Higher variance: Agentic workflows can produce bursts of changes (many PRs/tickets opened quickly), which may increase WIP and slow throughput if the team’s “merge capacity” isn’t scaled.
- New bottlenecks emerge: Security review, dependency checks, flaky tests, and integration validation often become the pacing steps once coding accelerates.
For teams measuring Cycle Time in the AI era, it is often useful to compare Cycle Time by stage (coding vs review vs testing) rather than relying on a single blended number.
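A per-stage breakdown can be derived from a ticket's status-transition history. A sketch, assuming each transition is recorded as a (timestamp, new status) pair in chronological order (the stage names and timestamps are illustrative):

```python
from datetime import datetime
from collections import defaultdict

def time_per_stage(transitions):
    """Given chronological (timestamp, new_status) transitions, return the
    hours spent in each status before the next transition occurred."""
    totals = defaultdict(float)
    for (ts, status), (next_ts, _) in zip(transitions, transitions[1:]):
        totals[status] += (next_ts - ts).total_seconds() / 3600
    return dict(totals)

# Hypothetical history for one ticket.
history = [
    (datetime(2024, 5, 1, 9),  "Coding"),
    (datetime(2024, 5, 1, 15), "Review"),
    (datetime(2024, 5, 2, 11), "Testing"),
    (datetime(2024, 5, 2, 17), "Done"),
]
print(time_per_stage(history))
# → {'Coding': 6.0, 'Review': 20.0, 'Testing': 6.0}
```

In this example the blended Cycle Time is 32 hours, but the stage view shows review, not coding, is the pacing step.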
What are common variations of Cycle Time?
At minware, this metric is broken down further into multiple analytical perspectives:
- Ticket Cycle Time (TCT): Time from when a ticket moves to an “in progress” state until it is completed. Reflects overall execution speed.
- Pull Request Cycle Time (PRCT): Time between PR creation and merge. Captures how long code spends in review and validation.
- Stage-Specific Cycle Time: Measures time spent in coding, review, testing, and deployment stages individually.
- Long Cycle Time Tickets (LCTT): Flags tickets that exceed a defined cycle time threshold (e.g. 4 weeks) to identify and address stalled work.
Cycle Time and Lead Time are often considered variations of each other, but each is unique. While Cycle Time tracks time from start to finish of active work, Lead Time includes the full timeline from task creation, including time spent waiting in the backlog. Lead Time provides broader visibility into planning and prioritization delays, while Cycle Time offers deeper focus on workflow execution.
Additional segmentations (especially useful in AI-assisted teams) include:
- By work origin: human-created vs automation-created tasks (e.g., agent-generated PRs)
- By change type: feature vs bug vs refactor vs dependency upgrade
- By workflow stage: where time accumulates (coding vs review vs CI vs approvals)
- By size bucket: small/medium/large work items, since large items skew averages and hide flow problems
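Because large items skew averages, segmenting and reporting a robust statistic such as the median per bucket is often more informative than one blended mean. A sketch with hypothetical cycle times in hours:

```python
from statistics import median

# Hypothetical cycle times (hours), tagged with a size bucket.
items = [
    ("small", 4), ("small", 6), ("small", 5),
    ("large", 40), ("large", 120),
]

buckets = {}
for size, hours in items:
    buckets.setdefault(size, []).append(hours)

for size, hours in sorted(buckets.items()):
    print(size, "median:", median(hours), "hours")
# large median: 80 hours
# small median: 5 hours
```

The same grouping works for any of the segmentations above (work origin, change type, workflow stage) by swapping the tag on each item.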
What are the limitations of Cycle Time?
Cycle Time highlights how long tasks take to complete once started, but it doesn’t account for time spent waiting in the backlog or before prioritization. It also doesn’t indicate why a task took longer, whether due to technical complexity, delays in review, or cross-team dependencies.
In AI-assisted or agentic workflows, additional limitations commonly appear:
- Attribution ambiguity: If bots open PRs or update ticket statuses automatically, Cycle Time can look better or worse depending on how “start” and “done” events are generated.
- Hidden waiting time: AI can compress implementation time so much that waiting (review queues, CI congestion, environment issues) dominates. Cycle Time stays high even when coding is fast.
- Metric gaming risk: Teams can reduce Cycle Time by splitting work into many tiny tickets/PRs without improving actual delivery outcomes. That can hurt quality or increase coordination overhead.
- Comparing individuals is risky: Cycle Time differences often reflect work type, ownership boundaries, or review load, not effort or performance.
To better understand where delays occur and how to address them, pair Cycle Time with:
| Complementary Metric | Why It’s Relevant |
|---|---|
| Lead Time for Changes | Provides visibility into delays before development begins, not just execution time. |
| Flow Efficiency | Reveals how much time in Cycle Time was spent actively progressing versus waiting. |
| Review Latency | Isolates how long tasks are delayed in peer review, often a major cause of long cycle times. |
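Of these, Flow Efficiency is the most direct complement: it expresses what fraction of Cycle Time was active work rather than waiting. A minimal sketch, assuming active time can be measured or estimated separately:

```python
def flow_efficiency(active_hours: float, total_cycle_hours: float) -> float:
    """Flow Efficiency = active work time / total Cycle Time."""
    return active_hours / total_cycle_hours

# A ticket with a 40-hour Cycle Time where only 10 hours were active work:
print(f"{flow_efficiency(10, 40):.0%}")  # → 25%
```

A low ratio signals that shrinking waiting time (review queues, CI congestion) will move Cycle Time more than speeding up implementation.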
How can teams reduce Cycle Time without hurting quality?
Improving Cycle Time involves removing delivery bottlenecks, breaking work into smaller increments, and increasing team focus on finishing started tasks.
- Limit WIP. Too many in-progress tasks increase context switching and delay completion. Use WIP Limits to keep cycle time stable.
- Swarm on stuck work. Teams should prioritize completing in-flight items before starting new ones. Swarming shortens lead time and exposes blockers early.
- Break down large tasks. Oversized stories or tickets are harder to complete and often mask multiple units of work. Aim for atomic, testable slices of functionality.
- Improve handoffs and automation. Long review or test handoffs extend cycle time. Apply Code Review Best Practices and build CI/CD automation to reduce handoff delays.
- Track aging tickets. Surface tasks that have remained in progress for too long. Older items tend to accumulate risk, scope creep, or unclear ownership.
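Tracking aging tickets can be automated with a simple threshold check, mirroring the LCTT idea described earlier (the 4-week cutoff and ticket data here are illustrative):

```python
from datetime import datetime, timedelta

def aging_tickets(tickets, now, threshold=timedelta(weeks=4)):
    """Return IDs of in-progress tickets older than the threshold,
    e.g. a 4-week LCTT-style cutoff."""
    return [
        t["id"]
        for t in tickets
        if t["status"] == "In Progress" and now - t["start"] > threshold
    ]

now = datetime(2024, 6, 1)
tickets = [
    {"id": "T-1", "status": "In Progress", "start": datetime(2024, 4, 20)},
    {"id": "T-2", "status": "In Progress", "start": datetime(2024, 5, 25)},
]
print(aging_tickets(tickets, now))  # → ['T-1']
```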
Additional optimization tactics that fit AI-assisted teams:
- Use AI to accelerate the safe steps, not just coding. For example, use AI to draft PR descriptions, generate test scaffolding, summarize changes for reviewers, and propose rollback notes while keeping engineers accountable for correctness.
- Protect review quality while increasing throughput. If AI increases PR volume, invest in reviewer capacity, clearer review standards, and better CI signal quality so review doesn’t become the permanent bottleneck.
- Separate “implemented” from “validated.” If agentic tools can produce code quickly, ensure your workflow still enforces validation gates (tests, security checks, staged rollout) so faster Cycle Time does not inflate failure rates.
Reducing Cycle Time leads to faster iteration, clearer delivery rhythm, and more predictable execution, all key ingredients for teams optimizing workflow efficiency in the AI era.