How Humans Really Interact with ChatGPT: Six Neuropsychological Patterns, From Anthropomorphization to Emotional Dependency
A qualitative research study investigating the behavioral and cognitive patterns that emerge when people interact with conversational AI, bridging neuropsychology and Human-AI Interaction design.
Overview
When ChatGPT became ubiquitous, I wasn't watching the technology; I was watching the people. As a psychologist, I recognized the behaviors: users apologizing before asking questions, feeling its absence, building inside jokes with a language model. These weren't tool-use behaviors. They were social ones.
This study explores why, and what it means for the design of AI systems.
Methods
Qualitative exploratory research with N=6 participants.
Data was collected through semi-structured interviews, real-use screenshots, and a Think Aloud protocol. Analysis used Thematic Coding with evidence triangulation across three sources: what users say, what they do, and what they think while doing it.
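To make the triangulation step concrete, here is a minimal sketch in Python. The structure, names, and sample excerpts are illustrative assumptions, not the study's actual coding scheme: each coded excerpt is tagged with its evidence source, and a theme counts as triangulated only when it appears in all three sources.

```python
from collections import defaultdict

# Illustrative only: the three evidence sources from the study design.
SOURCES = {"say", "do", "think"}  # interviews, screenshots, think-aloud

def triangulated_themes(excerpts):
    """Return themes supported by all three evidence sources.

    excerpts: iterable of (theme, source) pairs, where source is
    one of "say", "do", or "think".
    """
    seen = defaultdict(set)
    for theme, source in excerpts:
        seen[theme].add(source)
    return {theme for theme, srcs in seen.items() if srcs == SOURCES}

# Hypothetical sample data, not real study excerpts.
excerpts = [
    ("anthropomorphization", "say"),
    ("anthropomorphization", "do"),
    ("anthropomorphization", "think"),
    ("sycophancy detection", "say"),
    ("sycophancy detection", "think"),
]
print(triangulated_themes(excerpts))  # only the fully triangulated theme
```

A theme attested in only one or two sources stays a candidate rather than a finding, which is the point of triangulating across what people say, do, and think.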
Key Findings — 6 Behavioral Patterns
Spontaneous Anthropomorphization — 83% treat ChatGPT as a social entity (CASA Paradigm as active cognitive strategy, not error)
Calibrated Trust & Strategic Distrust — 100% trust the tool while actively monitoring its failures
Sycophancy Detection — 67% identify and work around the AI's tendency to please over accuracy
Prompt Formality Adaptation — 100% adjust tone and complexity based on task demands
Cognitive Overload in Long Responses — users satisfice, skipping content that exceeds working memory limits
Informal Therapeutic Use — 33% use AI for emotional processing, raising ethical design concerns
Design Implications
Three evidence-based interface proposals: a Certainty Score system for trust calibration, Progressive Disclosure for cognitive load management, and a configurable Honesty Mode to counter sycophancy.
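The Certainty Score proposal can be sketched as follows. This is a minimal illustration in Python, with hypothetical names and thresholds (nothing here is from the study): each response segment carries a confidence estimate, and the interface maps it to a badge so users can calibrate trust instead of monitoring for failures on their own.

```python
from dataclasses import dataclass

@dataclass
class ResponseSegment:
    """One span of an AI response with an attached confidence estimate."""
    text: str
    certainty: float  # 0.0 (pure guess) to 1.0 (high confidence)

def certainty_label(segment: ResponseSegment) -> str:
    """Map a numeric score to a UI badge. Thresholds are illustrative."""
    if segment.certainty >= 0.8:
        return "high confidence"
    if segment.certainty >= 0.5:
        return "moderate - verify key facts"
    return "low confidence - treat as speculation"

# Hypothetical example segments.
segments = [
    ResponseSegment("Paris is the capital of France.", 0.97),
    ResponseSegment("That cafe opens at 7am on Sundays.", 0.35),
]
for s in segments:
    print(f"[{certainty_label(s)}] {s.text}")
```

Surfacing the score per segment, rather than per response, matches the finding that users already monitor failures selectively: it gives their calibrated trust something explicit to calibrate against.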
