Developers in small teams juggle many threads, and AI tools add more. When an agent runs, we often start another task. The result is more context switching, not more focus.
What the research actually shows
Gloria Mark’s CHI-2008 experiment tested interruptions under controlled conditions. Interrupted participants finished faster and with no loss in measured quality, but reported significantly higher stress, frustration, time pressure, and effort. Speed came at a cognitive cost.
The popular “23 minutes 15 seconds to get back on task” figure does not appear in that 2008 paper. It comes from Mark’s interviews and articles summarizing field observations: most interrupted work (about 82%) was resumed the same day, and the average resumption time was 23:15. Treat it as reported field evidence, not a peer-reviewed metric in CHI-2008.
Earlier fieldwork on fragmented work documented short focus windows and frequent switching among “working spheres.” Information workers changed focus often; many switches were self-initiated. This is the baseline we already live in.
Complementary HCI studies show resumption lags in the ~10–20+ minute range, depending on timing and workload. Interruptions during high mental load delay resumption more. This aligns with the 23:15 ballpark.
Cognitive psychology explains why: each switch imposes executive-control overhead and a working-memory reload. The APA summarizes the effect simply: frequent switching can waste up to ~40% of productive time (on an eight-hour day, that is more than three hours). Complex tasks suffer most.
Programming studies add domain-specific texture. Developers rely on fragile mental context, and resumption typically requires navigation and reconstruction before productive editing restarts. Large telemetry studies and lab/field work show substantial re-orientation overhead; truly “continuous” coding sessions are rare.
Mechanics in AI-assisted workflows
AI agents run in parallel; humans don’t. When an agent is generating PHP or JS, jumping to a different client task opens a second cognitive thread. When the agent finishes, you switch back, reload context, and now manage two partially active states. The switching cost is paid twice.
If the delegated task is small and self-contained, the reload is minor. If it is non-trivial (e.g., multi-file generation or a failing test matrix), parallelizing your own attention is risky. Known resumption delays and higher stress reappear—now with the added step of vetting AI output before integration. The literature on interruptions and programmer resumption supports this caution; current AI-specific evidence is emerging, but the human cost model remains the same.
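To make “the switching cost is paid twice” concrete, here is a rough back-of-envelope sketch. The numbers and function names are illustrative assumptions (a 15-minute reload picked from the resumption ranges above, 10- and 45-minute agent runs), not measurements from any study.

```ts
// Back-of-envelope model of whether switching away while an agent runs pays off.
// All numbers are illustrative assumptions, not measurements.

interface Delegation {
  agentMinutes: number;      // unattended agent runtime you could reclaim for other work
  resumptionMinutes: number; // context-reload cost per switch (literature: roughly 10-20+ min)
}

// Staying single-threaded: you "lose" the agent's runtime but never pay a reload.
function minutesLostIfYouWait(d: Delegation): number {
  return d.agentMinutes;
}

// Switching to a second task: you reclaim the runtime, but pay the reload twice,
// once entering the other task and once coming back to integrate the agent's output.
function minutesLostIfYouSwitch(d: Delegation): number {
  return 2 * d.resumptionMinutes - d.agentMinutes;
}

const shortRun: Delegation = { agentMinutes: 10, resumptionMinutes: 15 };
console.log(minutesLostIfYouWait(shortRun));   // 10 -> a short wait, no reload
console.log(minutesLostIfYouSwitch(shortRun)); // 20 -> two reloads outweigh the reclaimed time

const longRun: Delegation = { agentMinutes: 45, resumptionMinutes: 15 };
console.log(minutesLostIfYouSwitch(longRun));  // -15 -> only long runs make the switch pay off
```

The sketch deliberately ignores the stress and shallow-review effects described above; accounting for them pushes the break-even point out further, which is why the strategies below default to staying single-threaded.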
Operational relevance for small web teams (PHP/JS)
Agencies optimize for throughput across many small tickets. AI accelerates micro-tasks, but it also multiplies open loops. Two common failure modes:
- Shallow review. Accepting agent output while half-focused on another task; bugs slip through. The resumption literature predicts exactly this kind of slip under fragmented attention.
- Context thrash. PHP feature → Slack ping → AI output review → JS hotfix → back to PHP. Each hop reloads state and raises stress. Mark’s findings—speed with stress—fit this pattern.
Minimal, actionable strategies
- Single-thread deep work. For complex tickets, don’t start a different task while the agent runs. Monitor, refine prompts, or prepare tests for the same ticket. Keep context aligned.
- Parallelize only within one sphere. While an AI runs unit tests for a PHP module, draft fixture data or docs for that same module. Avoid cross-project switches.
- Use micro-tasks intentionally. Offload small, bounded edits to AI (pure refactors, doc blocks, boilerplate). Time-box the review as a single focus block; a sketch of what a bounded edit looks like in practice follows this list.
- Schedule attention, not just tasks. Block “no-ping” windows during AI-assisted deep work. Teams should treat this like a build window—no Slack, no email. The stress delta in interruption studies justifies the guardrail.
- Close loops aggressively. Review and integrate AI output to a stable point before hopping. Fewer open loops → fewer costly resumptions.
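As a hypothetical illustration of the micro-tasks item above: a bounded edit, such as documenting and guarding a small pure helper, fits on one screen, touches nothing else, and can be reviewed in a single focus block. The function and its names are invented for the example.

```ts
// Hypothetical example of a micro-task that delegates well: the agent adds a doc block
// and an input guard to a small pure helper, and the whole diff is reviewable in one pass.

/**
 * Apply a percentage discount to an amount expressed in integer cents.
 * @param totalCents amount before the discount, in cents
 * @param percent discount percentage, expected in the range 0-100
 * @returns discounted amount, rounded to the nearest cent
 */
export function applyDiscount(totalCents: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError(`percent must be between 0 and 100, got ${percent}`);
  }
  return Math.round(totalCents * (1 - percent / 100));
}
```

A multi-file change would fail that single-pass test; that is the kind of ticket that belongs in the single-threaded deep-work bucket above.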
Takeaways
- The 23:15 number is interview-reported, not in CHI-2008, but it matches other ~10–20+ min resumption evidence.
- Interruptions can maintain speed in the short term yet raise stress and effort. Quality may look stable in short tasks but is likely to degrade under sustained fragmentation.
- AI adds parallel compute, not parallel attention. Treat human focus as the bottleneck. Keep work within one context while agents run.
- For small agencies, the win comes from sequencing: deep work single-threaded; AI for micro-tasks; reviews done in one pass.
References
Mark, G., Gudith, D., & Klocke, U. (2008). The cost of interrupted work: More speed and stress. CHI ’08: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 107–110. https://doi.org/10.1145/1357054.1357072. (See also open PDF at UCI.)
Pattison, K. (2008, July 28). Worker, interrupted: The cost of task switching [Interview with G. Mark]. Fast Company. (Reports the 23:15 resumption figure and 82% same-day resumption.)
Mark, G., González, V. M., & Harris, J. (2005). No task left behind? Examining the nature of fragmented work. CHI ’05: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 321–330. (Field observations of short activity lengths and frequent switching across “working spheres”.)
Iqbal, S. T., & Horvitz, E. (2007). Disruption and recovery of computing tasks: Field study, analysis, and directions. CHI ’07: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 677–686. (Resumption lags depend on timing and workload; ~10–20+ minutes is common.)
American Psychological Association. (n.d.). Multitasking: Switching costs. Retrieved from apa.org. (Summarizes productivity loss up to ~40% with frequent switching.)
Parnin, C., & Rugaber, S. (2010/2011). Resumption strategies for interrupted programming tasks. ICPC ’10 / Software Quality Journal (2011) papers. (Developers perform substantial navigation and reconstruction on resumption; few sessions resume instantly.)
Gallup (Interview with G. Mark). (2006, June 8). Too many interruptions at work? (Reports 81.9% same-day resumption; average resumption time 23:15.)
(Where this post generalizes to AI agents, it extrapolates from the cited interruption/resumption literature. Direct AI-specific longitudinal results are still emerging.)