Summary

This paper investigates how generative AI impacts critical thinking in knowledge work through a survey of 319 knowledge workers. The study examines when workers perceive the need for critical thinking, how they enact it, and how GenAI affects their cognitive effort in critical thinking tasks. Key findings show that higher confidence in GenAI correlates with less critical thinking, while higher self-confidence correlates with more critical thinking but greater perceived effort. The research reveals shifts in critical thinking patterns: from information gathering to verification, from problem-solving to AI response integration, and from task execution to oversight.

Key Contributions

  • First large-scale empirical study examining critical thinking in GenAI-assisted knowledge work
  • Identification of key relationships between user confidence (in self and AI) and critical thinking behaviors
  • Framework for understanding shifts in cognitive effort patterns when using GenAI
  • Design implications for supporting critical thinking in GenAI tools

Method

  • Online survey of 319 knowledge workers who use GenAI tools at least weekly
  • Collected 936 real-world examples of GenAI use in work tasks
  • Mixed-methods analysis:
    • Quantitative: Regression models examining relationships between task/user factors and critical thinking (see the sketch after this list)
    • Qualitative: Open coding of free-text responses about critical thinking practices
  • Used Bloom’s taxonomy to operationalize critical thinking measurement
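
As a rough illustration of the quantitative analysis, the sketch below fits a logistic regression relating whether critical thinking was enacted for a task to the two confidence measures, with standard errors clustered by participant. This is not the authors' exact specification: the data file, column names, and model form are assumptions made for illustration only.

```python
# Hypothetical sketch of the kind of regression described in the Method section.
import pandas as pd
import statsmodels.formula.api as smf

# One row per reported GenAI task example; columns (hypothetical):
#   participant_id            - which of the 319 workers reported the task
#   enacted_critical_thinking - 1 if the worker reported critically evaluating the output
#   confidence_in_ai          - self-rated confidence in GenAI for the task
#   self_confidence           - self-rated confidence in doing the task unaided
#   task_type                 - category of task (e.g., creation, information, advice)
df = pd.read_csv("survey_task_examples.csv")

# Logistic regression of enacted critical thinking on the two confidence measures,
# controlling for task type; standard errors are clustered by participant because
# each worker contributed several task examples.
model = smf.logit(
    "enacted_critical_thinking ~ confidence_in_ai + self_confidence + C(task_type)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})
print(result.summary())
```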

Results

  • Higher confidence in GenAI correlates with less critical thinking and lower perceived effort
  • Higher self-confidence correlates with more critical thinking but greater perceived effort
  • Three major shifts in critical thinking patterns:
    1. From information gathering to information verification
    2. From problem-solving to AI response integration
    3. From task execution to task stewardship
  • Identified key motivators for critical thinking (improving work quality, avoiding negative outcomes, skill development) and barriers related to awareness, motivation, and ability

Takeaways

Strengths

  • Large, diverse sample of real-world GenAI use cases
  • Robust mixed-methods approach
  • Clear practical implications for tool design
  • Strong theoretical grounding in Bloom’s taxonomy

Limitations

  • Self-reported data may not fully capture actual critical thinking behaviors
  • Sample skewed toward younger, tech-savvy workers
  • English-only participant pool limits generalizability
  • Subjective nature of confidence measures

Notable References

  • Bainbridge (1983). Ironies of Automation
  • Bloom et al. (1956). Taxonomy of Educational Objectives
  • Brachman et al. (2024). How Knowledge Workers Use and Want to Use LLMs in an Enterprise Context
  • Tankelevitch et al. (2024). The Metacognitive Demands and Opportunities of Generative AI
  • Simkute et al. (2024). Ironies of Generative AI