Phishing Psychology: Why the Smartest People Still Click
The cognitive biases exploited by phishing: authority, urgency, social proof. Why intelligence doesn't protect you and how to train real reflexes.
In October 2025, 65% of IT managers admitted to clicking a phishing link during the year - 15 points higher than ordinary employees (Arctic Wolf Human Risk Report 2025). The very people tasked with protecting the organization from phishing are the ones who fall for it the most.
This finding is not an anomaly. It belongs to a fifteen-year body of research demonstrating a counterintuitive truth: intelligence, technical expertise, and knowledge of phishing do not protect against phishing. In some cases, these qualities actually increase vulnerability - through a mechanism psychologists call the overconfidence bias.
This article explores the cognitive mechanisms phishing exploits, the reasons why theoretical knowledge fails to prevent clicks, and what neuroscience and cognitive psychology research tells us about defense methods that actually work. Every study cited is identified by author, institution, and publication date. For hard numbers on the scale of the threat: Phishing in business: 2026 statistics.
System 1 vs. System 2: Your Brain Under Attack
In 2011, psychologist and Nobel laureate in economics Daniel Kahneman published Thinking, Fast and Slow, a book that reshaped our understanding of human decision-making. Kahneman describes two cognitive systems that coexist in our brains.
System 1 is fast, automatic, intuitive. It operates without conscious effort, relying on patterns, associations, and mental shortcuts. It is the system that lets you read this sentence without deciphering each letter, recognize a colleague's face in a fraction of a second, and hit the brakes before you have consciously identified the obstacle on the road.
System 2 is slow, deliberate, analytical. It demands conscious effort. It is the system you engage to solve a complex calculation, compare two business proposals, or draft a sensitive email. System 2 consumes cognitive energy: it tires quickly and constantly looks for work to hand off to System 1.
How We Process Our Emails
A typical workday generates between 50 and 120 emails. No human being can apply System 2 analysis to every single one. The brain naturally shifts into System 1 mode: it scans the sender, the subject line, the opening words of the body, and makes a decision in under two seconds - open, ignore, archive, click.
This is exactly the operating mode phishing exploits. A well-crafted phishing email is optimized for System 1: it reproduces familiar visual patterns (the bank's logo, the company's layout), it uses language that triggers automatic responses ("urgent," "action required," "your account"), and it places the clickable link where the finger or cursor instinctively moves.
Vishwanath's Research: Heuristic Processing Dominates
Arun Vishwanath, associate professor at the University at Buffalo and research affiliate at Harvard's Berkman Klein Center, has published over twenty papers on phishing vulnerability. His foundational 2011 study, Why do people get phished?, published in Decision Support Systems, demonstrated that the majority of phishing emails are processed peripherally - that is, by System 1.
His SCAM model (Suspicion, Cognition, and Automaticity Model), published in 2018 in Human Communication Research, goes further. Vishwanath shows that digital media consumption habits create behavioral automatisms - "media habits" - that lead to unconscious actions. A user accustomed to clicking on notifications from their bank will process a fake bank email exactly like a real one, because the action has become automatic.
Vishwanath's model explains roughly 50% of the variance in individual phishing vulnerability. Half of what determines whether you will click has nothing to do with your intelligence - it depends on your information-processing habits.
The System 2 Paradox: Why "Paying Attention" Is Not Enough
The instinctive response to the phishing problem is to say: "Just pay attention." In other words, engage System 2 for every email. Three obstacles make this strategy unworkable.
First, System 2 is a limited resource. It fatigues over the course of the day - a phenomenon psychologists call "ego depletion" or decision fatigue. After hours of intense cognitive work, System 2 is exhausted and increasingly delegates to System 1 - including for email processing.
Second, analyzing each email in depth cuts processing productivity by a factor of five to ten. No organization can afford to have its employees spend 30 seconds analyzing each of their 100 daily emails.
Third, the most sophisticated phishing emails - spear phishing, business email compromise (BEC) - are designed to withstand System 2 analysis. When the email comes from a domain nearly identical to the real one, references an ongoing project, and is signed with the CEO's name, even careful examination can conclude it is legitimate. These new forms of phishing (quishing, vishing, smishing) make the task even more complex.
The Six Psychological Levers of Phishing
Robert Cialdini, professor emeritus of psychology and marketing at Arizona State University, identified six principles of influence that govern human persuasion. His book Influence: The Psychology of Persuasion (1984, revised 2021) became a classic of social psychology. Each of these six principles is systematically exploited by phishing attacks.
A meta-analysis published in Archives of Computational Methods in Engineering (preprint on arXiv, 2024) confirms that Cialdini's principles are the most frequently identified manipulation tools in analyzed phishing campaigns. A 2025 comparative study published in the SecurWare proceedings quantified their relative effectiveness on actual compromise rates.
1. Authority: "Message from the IT Department"
The authority principle rests on our tendency to obey authority figures without questioning their requests. This is the mechanism exploited by CEO fraud (BEC), fake IT department emails, and messages impersonating banks, tax authorities, or regulatory bodies like CNIL (France's data protection authority).
Example email exploiting authority:
From: IT Department
Subject: [URGENT] Mandatory security update

Dear employee, following an intrusion attempt detected on our network, you must update your password immediately via the secure portal below within 2 hours. Any account not updated will be temporarily suspended.

- IT Department
The email combines authority (the IT department) and urgency (2 hours). The recipient must simultaneously resist two psychological levers - which demands considerable System 2 effort.
The 2025 research (SecurWare) shows that authority exhibits a positive linear correlation with compromise rates: the higher the perceived level of authority, the higher the click rate. Emails impersonating a CEO or official body produce the highest conversion rates.
2. Urgency and Scarcity: "Your account will be suspended in 24 hours"
Scarcity triggers what psychologists call "psychological reactance": the impulse to act when access, an opportunity, or a privilege is about to be taken away. Phishing exploits this mechanism by imposing artificial deadlines - 24 hours, 2 hours, "immediately."
Example email exploiting urgency:
From: Orange Customer Service
Subject: Final notice before suspension of your line

We were unable to process your last invoice payment. Without settlement within 48 hours, your line will be permanently suspended. Resolve your situation: [link]
Urgency shuts down System 2's analytical thinking. Under time pressure, the brain switches to reactive mode. A study published in Computers & Security (2022) by Yan and colleagues demonstrated that time pressure markedly reduces phishing detection ability. Participants subjected to time constraints (7 seconds per email versus 15 seconds) showed a notably lower detection rate.
The surprising finding from the SecurWare 2025 research: scarcity is the most frequently used principle in phishing campaigns, yet it shows the weakest correlation with compromise rates. Likely explanation: users have gradually desensitized to "urgent" messages after years of exposure. Urgency still works, but less effectively than it did five years ago.
3. Social Proof: "3 colleagues have already approved"
The social proof principle - we do what others do - is exploited by phishing emails that mention other people ("Following your team's approval..."), mimic social platform notifications ("5 people viewed your profile"), or include fake testimonials.
Example email exploiting social proof:
From: Microsoft Teams
Subject: Pierre Durand and 2 others shared a file with you

The document "Q2 Budget - CONFIDENTIAL.xlsx" has been shared with you. Click to access.
Cross-cultural studies (Ferreira et al., 2024) identify social proof as one of the two most influential principles - alongside authority - across cultural contexts as different as the United Kingdom and the Arab world. Social proof works because it bypasses System 2: if others have already validated, individual analysis seems unnecessary.
4. Reciprocity: "Here's a gift for you"
The reciprocity principle rests on our implicit obligation to return favors. When someone gives us something - a gift, a service, information - we feel social pressure to respond. Phishing exploits this mechanism by offering something (a discount coupon, free access, a useful document) before requesting an action.
Example email exploiting reciprocity:
From: Amazon Prime
Subject: Your 50 EUR voucher is available

As a thank you for your loyalty, Amazon is offering you a 50 EUR gift card. Activate your voucher by logging into your account.
Cialdini documented a more subtle variant: the "rejection-then-retreat" technique. The attacker starts with an excessive demand (transfer 50,000 euros), then "retreats" to a more reasonable request (just confirm your banking details). The target, relieved by the reduced demand, feels compelled to cooperate.
5. Commitment and Consistency: "Following your registration..."
Humans seek consistency with their past commitments. If you have started a process - a registration, an order, an administrative procedure - you are psychologically driven to complete it, even if warning signs appear along the way.
Example email exploiting commitment:
From: Doctolib
Subject: Confirm your appointment on March 15

Your appointment with Dr. Martin is confirmed. To finalize, please update your health information in your account: [link]
The email references a past commitment (an appointment) to trigger an action (updating information). The victim, invested in the process, is less inclined to question the request's legitimacy. The 2025 comparative study (SecurWare) confirms that commitment/consistency is particularly effective in BEC and wire fraud scenarios, where the attacker inserts themselves into an existing conversation.
6. Familiarity and Trust: Brand and Contact Impersonation
Cialdini's liking principle rests on our tendency to say yes to people and brands we like, know, and trust. Phishing exploits this through brand impersonation (Google, Amazon, your bank), contact impersonation (email "from" a colleague), and personalization (using your first name, job title, and details of your professional life).
The SecurWare 2025 research identifies familiarity as the lever with the strongest positive correlation with compromise rates. The more an email appears to come from a familiar and emotionally close source, the higher the click rate. This is the principle that explains the devastating effectiveness of spear phishing compared to generic phishing.
The Cultural Dimension: Phishing Adapts to Social Norms
Cialdini's six principles are universal, but their effectiveness varies across cultures. A 2024 cross-cultural study (Ferreira et al.), conducted with 314 British participants and 328 Arab participants, measured the relative influence of each principle across different cultural contexts.
Collectivist vs. Individualist Cultures
In collectivist cultures (Middle East, East Asia, Africa), authority and social proof exert stronger influence than in individualist cultures (North America, Northern Europe). An email impersonating a superior produces a higher click rate in a Moroccan company than in a Swedish one - because the cultural norm of deference to authority is more deeply ingrained.
Conversely, individualist cultures are more responsive to reciprocity and personal commitment levers. An email promising an individual advantage ("Your performance review") works better in a culture where personal achievement is valued.
Implications for French Businesses
France presents a mixed cultural profile. According to Geert Hofstede's cultural dimensions framework, France is characterized by a high power distance index (68/100) - meaning authority is more respected and less questioned than in Anglo-Saxon or Scandinavian countries. At the same time, France scores high on individualism (71/100).
This combination creates a specific vulnerability profile: authority-based attacks (CEO fraud, fake emails from senior management) are particularly effective in France, while also combining with individualist levers (bonus promises, personal evaluations). Data from the Vade platform confirms this: CEO fraud remains the most costly phishing scenario for French businesses, with an average loss of 150,000 euros per successful incident - a figure to put in perspective with the total cost of a cyberattack for a 50-person SMB.
Simulation programs must account for these cultural specificities. A scenario that works in an American company (fake Amazon discount coupon) may have less impact in a French SME, where a fake email from the tax authority (impots.gouv.fr) or social security agency (URSSAF) will be far more effective.
The Language Factor
French adds a layer of complexity. Historically, phishing emails in French were easily identifiable thanks to grammar and syntax errors - texts automatically translated by non-French-speaking attackers. According to data from Vade and Proofpoint, this indicator vanished in 2025: language models now generate grammatically flawless French, with appropriate register nuances (formal address forms, administrative courtesy phrases, technical jargon). The last heuristic filter that French-speaking users had developed ("the French is bad, so it must be phishing") has become obsolete. To learn about the signs that remain reliable, see our guide to recognizing fraudulent emails.
Why Intelligence Does Not Protect You
The most widespread intuition - and the most dangerous one - is that intelligent, informed people are naturally protected from phishing. Research disproves this intuition systematically.
The UMBC Paradox: The More You Know About Phishing, the More You Click
A study at the University of Maryland, Baltimore County (UMBC) produced one of the field's most counterintuitive results. Researchers found a positive relationship between phishing knowledge and phishing vulnerability. Students who claimed to understand the definition of phishing had a higher susceptibility rate than those who had merely heard of it, and both groups were more vulnerable than those with no phishing knowledge at all.
The explanation comes down to one word: overconfidence. Those who "know what phishing is" believe they can spot it. This confidence leads them to process emails faster (System 1), spend less time checking for fraud indicators, and ignore warning signs they would normally have noticed.
IT Managers: The Most Confident, the Most Vulnerable
The Arctic Wolf 2025 Human Risk Report, based on a survey of 1,700 IT managers and end users, quantified this paradox in a professional context. The results are striking:
- 65% of IT managers clicked a phishing link (versus 50% of regular users)
- 76% of IT managers say they are confident in their organization's ability to resist phishing
- 1 in 5 IT managers who clicked a malicious link did not report it
Overconfidence produces a double effect: it increases click probability ("I'm too skilled to be fooled") and reduces reporting ("if I report it, it reveals I failed").
The Dunning-Kruger Effect in Cybersecurity
The Dunning-Kruger effect, identified by psychologists Justin Kruger and David Dunning in 1999, describes the tendency of less competent individuals in a domain to overestimate their abilities, and of highly competent individuals to underestimate theirs.
In cybersecurity, this effect takes a particular form. Professor H.R. Rao of the University of Texas at San Antonio (UTSA), a specialist in behavioral security, documented it in his work on phishing: "A major advantage for phishers is the self-efficacy of victims. Most people think they are smarter than the criminals behind these schemes, which is why so many people easily fall prey."
A global survey cited in the Arctic Wolf report found that 86% of employees say they are confident in their ability to identify phishing attempts. This near-universal confidence contrasts sharply with actual click rates, which regularly exceed 20% during initial corporate simulations.
Training Can Make the Problem Worse
An even more troubling finding comes from a 2024 study by ETH Zurich. Researchers found that embedded training - the educational modules displayed after clicking on a simulation - does not make employees more resistant to phishing. Worse: it can make them more vulnerable, by generating overconfidence in their abilities and a sense that mistakes during tests carry no consequences.
This finding does not condemn training itself - it condemns a type of training that focuses on theoretical knowledge ("knowing what phishing is") instead of building behavioral reflexes ("automatically verifying the sender before clicking"). The distinction is fundamental - we analyze it in detail in why cybersecurity e-learning is no longer enough.
The Emotional States That Make You Vulnerable
Phishing does not only exploit abstract cognitive biases. It targets concrete, measurable emotional states that affect decision-making quality.
Stress Increases Click Rates
A study from the Pacific Northwest National Laboratory (PNNL), published in the Journal of Information Warfare, demonstrated a clear statistical correlation between workplace stress and phishing vulnerability. Researchers measured participants' stress levels before exposing them to simulated phishing emails. Result: work-related distress markedly increases the likelihood of clicking a phishing link.
The mechanism is neurological. Under stress, the prefrontal cortex - the brain region responsible for analysis, planning, and impulse control - is inhibited. Neuroscientist Amy Arnsten at Yale documented this phenomenon in Molecular Psychiatry: elevated catecholamine levels (norepinephrine, dopamine) under stress take the prefrontal cortex "offline" while strengthening the functions of more primitive circuits, including the amygdala (conditioned emotional responses) and the basal ganglia (habitual actions).
In plain terms: stress disconnects System 2 and activates System 1. Emails are processed on autopilot, warning signs are ignored, and the click happens before any conscious analysis.
Multitasking Splits Attention - and Vigilance
A major study from Binghamton University, published in the European Journal of Information Systems in 2025 with 977 participants, demonstrated that phishing detection accuracy drops markedly under cognitive overload. Associate professor Jinglu Jiang summarizes the mechanism: "When you're working with multiple screens, your attention is never fully focused on one screen or one particular email, especially when you're handling urgent tasks. If you want to respond quickly to that email, ignoring phishing warning signs is easy."
Multitasking also affects the type of cognitive processing. An earlier study showed that participants under heavy workloads relied more on heuristic processing (System 1) rather than systematic processing (System 2), which increased the probability of accepting a fraudulent request.
The good news from the Binghamton study: when researchers introduced brief security reminders ("nudges") during multitasking phases, detection performance improved - even under heavy cognitive load.
Decision Fatigue: Why Phishing Strikes After Lunch
Data from Proofpoint and the SANS Institute converge on the time slots when click rates are highest: 2:00-3:00 PM, corresponding to the post-lunch dip in alertness. This is no coincidence: decision fatigue accumulates over the course of the day. Every decision made - answering an email, approving a request, choosing between two options - consumes System 2 cognitive resources.
University-based studies have documented that phishing attacks targeting students increase by nearly 40% during midterm and final exam periods. The mechanism is identical: cognitive fatigue reduces vigilance and increases click probability.
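The time-of-day pattern described above can be checked against an organization's own simulation logs. A minimal sketch, assuming logs reduce to a list of (timestamp, clicked) pairs - the event data below is illustrative, not from any real platform:

```python
from collections import Counter
from datetime import datetime

# Hypothetical simulation log: (email opened at, user clicked) pairs.
events = [
    (datetime(2025, 3, 10, 9, 15), False),
    (datetime(2025, 3, 10, 14, 5), True),
    (datetime(2025, 3, 10, 14, 40), True),
    (datetime(2025, 3, 10, 16, 20), False),
    (datetime(2025, 3, 11, 10, 10), False),
    (datetime(2025, 3, 11, 14, 25), True),
]

def click_rate_by_hour(events):
    """Return {hour of day: click rate} from (timestamp, clicked) pairs."""
    opened, clicked = Counter(), Counter()
    for ts, did_click in events:
        opened[ts.hour] += 1
        if did_click:
            clicked[ts.hour] += 1
    return {h: clicked[h] / opened[h] for h in opened}

rates = click_rate_by_hour(events)
peak_hour = max(rates, key=rates.get)  # in this toy sample: 14 (the 2-3 PM slot)
```

With real volumes, segmenting the same way by weekday or by department surfaces the windows where extra nudges are most needed.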
PNNL researcher Corey Fallon suggests an intervention approach: "One option is to help people recognize when they are in a state of distress, so they can be extra-vigilant when they are particularly vulnerable."
Emotional State as an Attack Vector
The most sophisticated attackers choose their target and their timing. Phishing scenarios are often designed to reach targets in specific emotional states:
- A new hire eager to make a good impression, unlikely to question a request from their "manager"
- An employee under performance pressure, processing emails in rapid succession without verification
- An accountant during month-end close, overwhelmed with invoices and payment requests
- An anxious employee following a restructuring announcement, receptive to emails about their position or benefits
The USENIX Security 2024 study (Schöps et al.) measured participants' stress and self-efficacy levels after phishing simulations. Participants who had clicked showed markedly higher stress levels and markedly lower perceived self-efficacy than those who had reported the email.
Psychological Anatomy of a Click: Second by Second
What happens in the brain of an employee who receives a phishing email? Here is the cognitive timeline, reconstructed from the models of Vishwanath (SCAM), Kahneman (dual-process), and stress neuroscience data.
Second 0-1: The notification. The email arrives. The brain registers the notification - visual (badge on the icon) or auditory (ding). The amygdala performs an initial emotional sort: is there a threat or an opportunity? If the subject line contains emotional keywords ("urgent," "suspended," "last chance"), the amygdala sends an alert signal to the rest of the brain. The prefrontal cortex has not yet had time to engage.
Second 1-3: The sender scan. System 1 identifies the sender. If the name is familiar (Microsoft, the bank, a colleague) and the visual format matches expectations (logo, layout), System 1 categorizes the email as "legitimate" and moves to reading the content. The sender's domain is not verified: System 1 does not distinguish @microsoft.com from @microsoft-security.com.
Second 3-5: Reading the subject and body. System 1 extracts the key information: who, what, what action is requested. If the message is short, clear, and asks for a simple action ("click here"), processing stays in automatic mode. Cialdini's principles take effect: urgency accelerates the process, authority inhibits questioning, familiarity disarms suspicion.
Second 5-8: The decision. The finger moves toward the link. At this point, System 2 could still intervene - but for that to happen, a signal must trigger doubt. That signal can be a visual inconsistency (spelling mistake, distorted logo), a contextual inconsistency (the bank never asks for passwords by email), or a trained reflex ("I always check the sender's address").
Second 8-10: The click (or the report). If no doubt signal is triggered, the click happens. The System 1-to-action chain has operated without System 2 interruption. If a doubt signal is triggered, the brain switches to System 2: the employee examines the sender, hovers over the link to see the URL, considers the context - and, ideally, reports the email as suspicious.
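The sender-scan failure at seconds 1-3 - System 1 not distinguishing @microsoft.com from @microsoft-security.com - is exactly the kind of check that can be made explicit. A minimal sketch of the verification reflex expressed as code, with an illustrative trusted-domain list (real mail filtering also relies on SPF/DKIM/DMARC, not just string matching):

```python
# The trusted-domain list and addresses below are illustrative assumptions.
TRUSTED_DOMAINS = {"microsoft.com", "example-corp.com"}

def sender_domain(address: str) -> str:
    """Extract the domain part of an email address, normalized to lowercase."""
    return address.rsplit("@", 1)[-1].lower()

def looks_legitimate(address: str) -> bool:
    """True only on an exact domain match: 'microsoft-security.com'
    is NOT microsoft.com, even though System 1 reads them the same."""
    return sender_domain(address) in TRUSTED_DOMAINS

looks_legitimate("alerts@microsoft.com")           # exact match: True
looks_legitimate("alerts@microsoft-security.com")  # lookalike: False
```

The point of the sketch is the contrast: the exact-match test takes one line of deliberate logic, while System 1 performs a fuzzy visual match that accepts both addresses.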
The Intervention Point: Second 5
The protection window lies between second 5 and second 8. It is in this interval that doubt may (or may not) emerge. And it is this window that phishing simulation trains - by exposing employees to realistic scenarios, simulation creates "automatic doubt responses" that activate on their own, without requiring the costly engagement of System 2.
What stress neuroscience tells us is that this doubt window shrinks under pressure. When the prefrontal cortex is inhibited by stress catecholamines, the transition from second 5 to second 8 compresses: the finger reaches the link before the alert signal has had time to surface. Simulation data confirms this mechanism: the average time between opening an email and clicking a phishing link is 3.2 seconds during high-workload hours, versus 5.8 seconds during calm periods. Simulation trains the brain to trigger the doubt reflex earlier in the sequence - ideally by second 3, at the sender-scan stage.
For a practical guide on building a simulation program, see our complete guide to phishing simulation in business.
Personality as a Risk Factor
Beyond universal cognitive biases, personality psychology research has identified traits that modulate phishing vulnerability. A systematic meta-analysis by Grandhi and Still (2024, published via HCII/Springer), covering 40 studies from 2014 to 2024, mapped the relationship between Big Five personality traits and cybersecurity behaviors.
Neuroticism: Anxious but Impulsive
Individuals with high neuroticism scores (emotional instability, anxiety) present a paradoxical profile against phishing. On one hand, their anxiety makes them naturally more suspicious - they perceive more threats. On the other, stress and emotional pressure degrade their impulse control. A phishing email invoking an authority figure or imposing a time constraint can trigger a compliance response in a high-neuroticism individual, precisely because the pressure exacerbates their anxiety to the point of short-circuiting rational analysis.
Rahman et al. (2024) recommend tailored training programs for high-neuroticism profiles, with stress-management exercises in digital contexts.
Conscientiousness: The Best Shield - with a Flaw
Conscientiousness (diligence, organization, rule-following) is the trait that shows the strongest and most consistent correlation with phishing resistance. Conscientious people check details, follow procedures, and question unusual requests. It is the most protective personality trait.
Its flaw: performance orientation. A highly conscientious person can fall for a phishing trap if the email promises a result tied to their professional goals ("Your performance review is available") or if the requested action appears to fit a legitimate procedure ("Submit your monthly report before the deadline"). The desire to do well can override caution.
Agreeableness: Trust as a Vulnerability
Agreeable people - cooperative, trusting, empathetic - are more vulnerable to attacks exploiting reciprocity and authority. Their tendency to trust and want to help makes them susceptible to emails that make a polite request or invoke a need for assistance. Spear phishing exploits this trait directly: a personalized email, written in a friendly tone, signed with a familiar name, disarms an agreeable person's defenses far more effectively than a threatening email.
Extraversion: Visible, Social, and Exposed
Extraverted individuals share more personal information online - on LinkedIn, social media, and in professional conversations. This information exposure provides attackers with the raw material for spear phishing: the manager's name, the current project, the recent conference, the weekend hobby. The Grandhi and Still (2024) study identifies extraversion as an indirect risk factor: it is not extraversion itself that creates vulnerability, but the information-sharing behavior it encourages.
Extraverts also display a faster, more intuitive information-processing style. They respond to emails more quickly, with less analysis time, which keeps them more firmly in System 1 mode. In simulation data, fast responders (under 5 seconds between opening and clicking) show a markedly higher compromise rate than slow responders.
Openness to Experience: Intellectual Curiosity and Risk-Taking
Openness to experience - intellectual curiosity, attraction to novelty, imagination - produces contradictory results in phishing research. On one hand, open individuals are more curious and therefore more likely to click an unusual link ("Watch this video from the conference") or download a document promising new information. On the other, they are also more receptive to training and faster to adopt new security behaviors.
The 2024 meta-analysis concludes that openness is a short-term risk factor but a long-term protective factor: open individuals click more at the start of a simulation program but improve faster than average. When redirected toward analyzing suspicious emails, this curiosity accelerates the acquisition of detection reflexes.
Beyond Personality: The Role of Professional Context
Personality does not operate in a vacuum. Professional context modulates the influence of personality traits on phishing vulnerability. A conscientious person placed in an extreme-pressure environment (month-end close, project delivery, restructuring period) will see their natural defenses diminished by stress. An agreeable person in a corporate culture that values hierarchical obedience will be doubly exposed to authority impersonation attacks.
Corporate simulation data shows click rate variations from 15% to 45% between departments in the same organization. Finance, HR, and administration teams - which handle a high volume of urgent requests from varied sources - consistently show higher click rates than technical teams. Individual personality explains part of the variance, but professional context explains at least as much. For detailed benchmarks by industry and company size: Phishing click rates: industry benchmarks.
Outsmarting Your Own Biases: Why Simulation Works and Theory Fails
The distinction between declarative knowledge ("I know what phishing is") and procedural knowledge ("I automatically verify the sender before clicking") sits at the heart of anti-phishing training effectiveness - or ineffectiveness.
Declarative vs. Procedural Knowledge
Declarative knowledge is stored in semantic memory. It is the kind you draw on to answer a quiz: "What is phishing?", "What are the signs of a fraudulent email?" This type of knowledge is easily acquired through a PowerPoint presentation or an e-learning module. And it is precisely this type of knowledge that, according to the UMBC study, can increase overconfidence without reducing vulnerability.
Procedural knowledge is stored in procedural memory - the same memory that lets you ride a bicycle or type on a keyboard without thinking. It is knowledge embodied in actions, reflexes, and automatisms. You do not acquire it by listening - you acquire it by practicing.
How Simulation Creates Reflexes
Phishing simulation works because it trains procedural memory. The employee who clicked on a simulated phish, saw the remediation page, understood which clue they missed, relived the same situation two weeks later, and this time spotted the clue - that employee has developed a vigilance reflex. The next time a suspicious email arrives, their System 1 will automatically trigger a doubt signal, without requiring System 2 intervention.
The SANS Security Awareness 2025 report documents this mechanism: organizations with a regular simulation program reduce their click rate by 75% in 12 months. Not because employees "know more things" - because they have developed verification reflexes.
The Role of Emotion in Memorization
Immediate remediation - the educational module displayed seconds after clicking on a simulation - exploits a documented neuroscience phenomenon: episodic memory (memory of lived events) is strengthened by emotion. The surprise and realization ("I clicked on a phishing email?") create an emotional marker that anchors learning far more deeply than a training slide.
It is the same mechanism that makes you remember exactly where you were during a major event, while you have forgotten the content of an ordinary meeting from last week. Emotion is the catalyst for long-term memory.
To understand how to structure a training program that harnesses this mechanism, see our cybersecurity training guide for SMEs.
From Theory to Practice: Designing Psychologically Effective Simulations
The psychological principles described in this article are not merely academic. They have direct implications for how phishing simulation campaigns are designed.
Vary the Psychological Levers
An effective simulation program does not repeat the same scenario. Each campaign should target a different psychological lever:
- Month 1: urgency - "Your access will be suspended in 24 hours"
- Month 2: authority - "Message from the IT department"
- Month 3: social proof - "3 colleagues shared a document with you"
- Month 4: familiarity - Spear phishing using the direct manager's name
- Month 5: reciprocity - "Here's your welcome gift card"
- Month 6: commitment - "Following your leave request, please confirm"
This rotation exposes employees to the full spectrum of real-world attacks and helps identify which levers the organization is most sensitive to.
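The monthly rotation above can be sketched as a simple scheduler. This is a hypothetical illustration: the lever names mirror the list, but the function and templates are assumptions, not a real platform API.

```python
# Hypothetical sketch: rotate one psychological lever per monthly campaign.
# Lever names follow the six-month plan above; templates are illustrative.
LEVER_ROTATION = [
    ("urgency",      "Your access will be suspended in 24 hours"),
    ("authority",    "Message from the IT department"),
    ("social_proof", "3 colleagues shared a document with you"),
    ("familiarity",  "Spear phishing using the direct manager's name"),
    ("reciprocity",  "Here's your welcome gift card"),
    ("commitment",   "Following your leave request, please confirm"),
]

def campaign_for_month(month_index: int) -> tuple[str, str]:
    """Return (lever, template) for a given month, cycling every 6 months."""
    return LEVER_ROTATION[month_index % len(LEVER_ROTATION)]
```

Month 7 cycles back to urgency, so over a year every team is exposed to each lever twice - which also lets you compare click rates per lever and spot the organization's weak points.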
Adapt Difficulty to Risk Profile
Personality data and past behavior allow simulation difficulty to be calibrated:
- Low click-rate employees: personalized spear phishing scenarios exploiting familiarity and commitment
- High click-rate employees: moderate-difficulty scenarios with reinforced remediation, targeting the levers they are most sensitive to
- Finance teams: BEC and wire fraud scenarios exploiting authority and consistency
- New hires: onboarding scenarios ("Welcome! Complete your HR profile"), exploiting commitment and familiarity
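The calibration rules above amount to a decision table. A minimal sketch, assuming illustrative thresholds and profile names (the 10% click-rate cutoff and the three-month tenure window are assumptions, not sourced figures):

```python
# Hypothetical sketch: map observed behavior to a simulation profile.
# Thresholds (0.10 click rate, 3-month tenure) are illustrative assumptions.
def scenario_profile(click_rate: float, department: str, tenure_months: int) -> dict:
    if tenure_months < 3:
        # New hires: onboarding lures exploiting commitment and familiarity
        return {"type": "onboarding", "levers": ["commitment", "familiarity"]}
    if department == "finance":
        # Finance teams: BEC / wire fraud exploiting authority and consistency
        return {"type": "bec_wire_fraud", "levers": ["authority", "consistency"]}
    if click_rate < 0.10:
        # Strong performers get harder, personalized spear phishing
        return {"type": "spear_phishing", "levers": ["familiarity", "commitment"]}
    # Frequent clickers: moderate difficulty with reinforced remediation
    return {"type": "standard", "levers": ["urgency"], "remediation": "reinforced"}
```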
Schedule Simulations at Vulnerability Windows
If data shows that click rates are highest on Tuesday-Wednesday between 2:00 and 3:00 PM (post-lunch fatigue window), or Monday morning (backlog of accumulated emails), simulations should be scheduled at those times. The goal is not to trap employees - it is to train them under the real conditions where attacks will occur.
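The window check itself is trivial to encode. A sketch under the assumptions stated above (Tuesday-Wednesday 2:00-3:00 PM, Monday 8:00-10:00 AM - your own telemetry should supply the actual windows):

```python
from datetime import datetime

# Hypothetical sketch: flag the vulnerability windows described above.
# The exact hours are assumptions; calibrate them from your own click data.
def in_vulnerability_window(ts: datetime) -> bool:
    post_lunch = ts.weekday() in (1, 2) and ts.hour == 14   # Tue=1, Wed=2, 2-3 PM
    monday_morning = ts.weekday() == 0 and 8 <= ts.hour < 10  # email backlog
    return post_lunch or monday_morning
```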
Corporate Culture as a Vulnerability Amplifier
Cognitive biases operate at the individual level, but organizational culture can amplify or dampen them in meaningful ways. A company that punishes phishing mistakes creates an environment where employees do not report their clicks - worsening the impact of every incident. A company that values reporting creates a collective safety net.
The Culture of Permanent Urgency
In some organizations, the norm is to respond to emails within minutes of receiving them. Managers measure their teams' responsiveness, clients expect immediate answers, internal processes impose tight deadlines. This culture of permanent urgency keeps employees in chronic System 1 mode. Phishing thrives in this environment because it does not even need to create urgency - it already exists.
Companies that have implemented "reflection time" policies for sensitive requests (wire transfers, bank detail changes, confidential data sharing) see a notable reduction in BEC incidents. The rule is simple: any request involving a money transfer or sensitive data must be verified through a separate channel (phone call, in-person confirmation). This policy neutralizes the urgency lever by introducing a structural delay that forces System 2 activation. It is also a key argument for cyber insurers who require training evidence.
Hierarchy as an Attack Vector
Strongly hierarchical organizations - where questioning a superior is perceived as disrespectful - are structurally more vulnerable to CEO fraud. An accountant who receives an email from their CEO requesting a "confidential and urgent" transfer faces a conflict between their caution (System 2) and their cultural norm of submission to authority (System 1). In the majority of cases documented by CESIN (France's cybersecurity expert association), the cultural norm wins.
The organizations that best resist CEO fraud are those that have explicitly authorized - and even encouraged - employees to question unusual requests, regardless of the apparent source. This formal permission to doubt is a cultural act, not a technical one. It cannot be replaced by a firewall or a spam filter. Knowing what to do in case of phishing is part of this culture.
Reporting Rate as a Health Indicator
The reporting rate for suspicious emails is a more reliable indicator of cybersecurity culture than the click rate. A low click rate can mask complacency (simulations too easy, alert fatigue). A high reporting rate indicates that employees have developed the doubt reflex and that they trust the organization to handle their reports without punishment.
Data from the SANS 2025 report shows that organizations with the best reporting-to-click ratio (above 3:1 - three reports for every click) are also those that suffer the fewest real phishing incidents. This ratio reflects a culture where doubt is valued, where reporting is simple (a button in the email client), and where feedback is given to reporters ("Thanks, that was indeed a phishing email" or "That was a legitimate email, but you were right to check").
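The reporting-to-click ratio is simple to compute as a campaign health check. A minimal sketch using the 3:1 threshold cited from the SANS 2025 report (the function name and status labels are illustrative):

```python
# Hypothetical sketch: the reporting-to-click health indicator described above.
# Only the 3:1 threshold is sourced (SANS 2025); labels are illustrative.
def reporting_health(reports: int, clicks: int, threshold: float = 3.0) -> str:
    if clicks == 0:
        return "healthy" if reports > 0 else "no signal"
    ratio = reports / clicks
    return "healthy" if ratio >= threshold else "at risk"
```

A campaign with 120 reports and 30 clicks sits at 4:1, above the 3:1 bar; 20 reports against 30 clicks would flag the culture as at risk.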
nophi.sh applies these principles. Simulations calibrated to your teams' cognitive biases, immediate post-click remediation, reporting rate measurement. Create a free account - measurable results within 90 days.
The Training Paradox: When Too Much Awareness Backfires
Research identifies three risks associated with a poorly calibrated awareness program.
Alert Fatigue
A program that sends simulations too frequently (more than two per month according to the SANS Institute) risks triggering alert fatigue. Employees become cynical: they treat every email as a potential test, which slows their productivity - or, conversely, they stop taking alerts seriously (the "boy who cried wolf" effect).
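The frequency cap is easy to enforce mechanically. A sketch assuming campaign send dates as ISO strings; the two-per-month ceiling is the SANS-cited figure, everything else is illustrative:

```python
from collections import Counter

# Hypothetical sketch: flag any month that exceeds the simulation cap.
# The default of two per month is the SANS-cited fatigue threshold.
def exceeds_fatigue_cap(send_dates: list[str], max_per_month: int = 2) -> bool:
    """send_dates as 'YYYY-MM-DD' strings; True if any month is over the cap."""
    per_month = Counter(d[:7] for d in send_dates)  # group by 'YYYY-MM'
    return any(n > max_per_month for n in per_month.values())
```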
Post-Training Overconfidence
As documented above (ETH Zurich, 2024), training can make employees overconfident. After "passing" a few simulations, some employees consider themselves immune and lower their guard. The program must maintain a sufficient challenge level to prevent this complacency - by progressively increasing scenario sophistication.
Learned Helplessness
At the opposite end from overconfidence, some employees who repeatedly fail simulations develop a sense of learned helplessness: "I'll always get caught, so why bother trying." This phenomenon, documented by psychologist Martin Seligman in his work on depression, leads to total disengagement from cybersecurity.
The antidote is a supportive, non-punitive program - itself a central argument for convincing management to invest in awareness. Phishing simulation must never be perceived as a trap or a punitive test. Remediation should be encouraging ("Here's what you could have spotted - next time, you'll see it"), and individual results must remain confidential. Some organizations measure the reporting rate rather than the click rate - praising those who report rather than punishing those who click.
Frequently Asked Questions
Are older people more vulnerable to phishing than younger ones?
Studies show mixed results. Sheng et al. (Carnegie Mellon University) found that both older and younger users are more vulnerable than middle-aged adults. Young users, comfortable with technology, display overconfidence in their ability to spot fraud, which increases their risk-taking. Older users are less familiar with web visual cues and struggle to distinguish a legitimate site from a clone. The determining factor is not age but the mode of information processing: heuristic (fast, automatic) or systematic (slow, analytical).
Can you measure employee personality to tailor training?
It is technically possible but ethically sensitive. Big Five personality questionnaires are standardized and scientifically validated. Some awareness platforms offer adaptive tracks that, without explicitly measuring personality, adjust the difficulty and type of simulation based on observed behaviors (click rate, reaction time, scenario types where the employee fails). This behavioral approach delivers the benefits of personalization without the ethical questions tied to psychometric assessment.
Does AI-generated phishing make cognitive biases more dangerous?
Yes. Academic research comparing AI-generated phishing to human-crafted phishing shows that AI-generated emails achieve markedly higher click rates than manually written ones. AI produces error-free emails with a natural tone and contextual personalization that neutralizes the fraud indicators users traditionally relied on. The cognitive biases remain the same, but detection signals become more subtle.
How long does it take to develop an anti-phishing reflex?
Data from the SANS Institute 2025 indicates that a regular simulation program (one to two simulations per month) produces a marked reduction in click rates by the third month, with peak effectiveness reached between 9 and 12 months. However, the USENIX Security 2020 study (cited by SANS) shows that acquired reflexes erode within a few months without regular exposure. The anti-phishing reflex, like any automatism, requires ongoing maintenance.
Does gamification improve anti-phishing training effectiveness?
Gamification (team leaderboards, reporting badges, monthly challenges) taps into social proof and the need for competence to drive engagement in the program. Organizations highlighted by the SANS Institute that adopted "positive gamification" - rewarding departments with the best reporting rates rather than stigmatizing those with the worst click rates - see higher engagement and faster improvement in metrics. Gamification works as long as it remains supportive and never turns training into a humiliating individual competition.
How do you explain phishing psychology to senior leadership?
Summarize in one sentence: "Phishing does not exploit stupidity - it exploits the normal functioning of the human brain under pressure." Then present three data points: the click rate of IT managers (65%, Arctic Wolf 2025), the reduction achieved through simulation (75% in 12 months, SANS 2025), and the average cost of a phishing incident (466,000 euros for an SME, Groupama 2025). The message to deliver: training does not compensate for a weakness - it trains a reflex. And that reflex protects the business where technology cannot.
Does remote work increase phishing vulnerability?
Remote work alters several vulnerability factors. Isolation reduces the opportunity for informal verification ("Did you get this email too?"), which is one of the most effective collective defense mechanisms in a workplace. The home environment multiplies sources of distraction, keeping people more firmly in System 1 mode. The absence of the manager's physical presence makes authority impersonation emails more plausible - you cannot check by turning around to look at the director's office. Proofpoint data for 2024-2025 shows a 30% increase in phishing incidents targeting remote workers compared to in-office periods.
Are there gender differences in phishing vulnerability?
The meta-analysis by Grandhi and Still (2024) reports contradictory findings. Some studies (Sheng et al., 2010) identified women as slightly more vulnerable in specific experimental contexts. More recent studies (2020-2024) find no meaningful difference after controlling for confidence and technical experience. The current consensus is that gender itself is not a reliable predictor of vulnerability. Differences observed in older studies are better explained by levels of self-confidence (male overconfidence) and technical exposure (which is normalizing as digital use becomes universal) than by gender per se.
Do SMS phishing (smishing) and phone phishing (vishing) exploit the same biases?
The same cognitive biases are at play, but the channel changes their relative effectiveness. Vishing (voice phishing) exploits authority and urgency with greater force, because the human voice activates deeper emotional circuits than written text. Time pressure is more intense: on the phone, silence is uncomfortable, and the victim feels an obligation to respond immediately. Smishing exploits brevity and automatism: a 50-character SMS does not leave enough material for System 2 to engage. Available data indicates markedly higher compromise rates for targeted voice attacks compared to email phishing. Simulation programs should integrate these vectors to build vigilance reflexes across all communication channels.
Conclusion
Phishing works because it exploits the normal functioning of the human brain - not its deficiencies. System 1, fast, automatic, energy-efficient, processes the majority of our emails without conscious intervention. Cialdini's six principles of influence provide attackers with a proven arsenal of psychological levers. Stress, fatigue, and multitasking further reduce detection capacity by inhibiting the prefrontal cortex and activating the amygdala's emotional circuits.
Intelligence does not protect you - the overconfidence it generates can actually increase vulnerability. Theoretical knowledge of phishing is not enough - it speaks to System 2, while phishing targets System 1. Personality traits, organizational culture, and cultural context modulate each individual's vulnerability in unique ways - there is no "phishing-proof" profile.
What works is practice-based training. Phishing simulation creates procedural reflexes that activate automatically, within System 1, without drawing on System 2's limited resources. These reflexes develop through repeated exposure to varied scenarios, under real working conditions - including under stress and while multitasking. The data converges on this point: 75% reduction in click rate over 12 months of regular simulation (SANS 2025), with measurable reflexes from the third month onward.
The organizational corollary is equally important. A culture that permits doubt, that values reporting over punishing mistakes, and that provides simple tools for verifying suspicious requests - that culture multiplies simulation effectiveness by creating an environment where reflexes can be expressed without social barriers.
Understanding phishing psychology does not make you invulnerable. But understanding why you are vulnerable makes it possible to design defenses that account for that vulnerability, rather than denying it. For an SME, investing in an adaptive simulation program calibrated to the psychological levers that specifically affect its teams remains the cybersecurity measure with the best cost-effectiveness ratio. Cyber insurers have understood this: proof of training has become a subscription prerequisite.
Test your teams on Cialdini's 6 levers | See how simulation builds reflexes