Cyber Threat Intelligence Briefing: AI-Driven ‘Vibe Crime’, Human Risk, and New Defense Playbooks (Dec 7–14, 2025)

The second week of December 2025 underscored how AI-driven automation, human-centric risk, and macro-level policy moves are reshaping the threat intelligence landscape. Trend Micro’s warning about emerging “vibe crime”—agentic AI systems that can run end‑to‑end attack chains with minimal human oversight—captured how quickly offensive capabilities are evolving.[2][8] At the same time, recent research highlighted that the primary vulnerability remains human behavior, with organizations reporting surging incidents tied to email-driven attacks, social engineering, and unsafe user actions as AI permeates the workplace.[2]

On the defender side, public‑sector initiatives and industry threat feeds tried to keep pace. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued an update to its voluntary cybersecurity performance goals, explicitly pushing critical infrastructure operators toward a more proactive, intelligence‑driven posture.[3] Commercial intel teams, including LevelBlue SpiderLabs, released updated detection content and campaign reporting, reflecting how vendors are racing to encode fresh TTPs (tactics, techniques, and procedures) into products as quickly as adversaries iterate.[6]

Beyond corporate networks, SecurityWeek reported that militant groups are experimenting with AI for recruitment, deepfake propaganda, and cyber operations, raising the stakes for governments and platforms alike.[4] The World Economic Forum, meanwhile, spotlighted AI-powered remote IT worker scams, where adversaries use deepfakes and synthetic personas to infiltrate organizations under the guise of outsourced technical staff.[5] Together, these developments illustrate a stark reality: threat intelligence is no longer just about enumerating malware families or IP ranges; it is about understanding complex socio‑technical systems where humans, AI agents, and policy all interact.

For security leaders, this week’s news points to an urgent need to recalibrate monitoring, detection, and training strategies. The central challenge is not merely blocking new tools, but aligning human processes and machine defenses against increasingly autonomous and deceptive adversaries.

What Happened: The Week’s Key Threat Intelligence Developments

The most attention‑grabbing narrative came from Trend Micro’s forecast of “vibe crime”, a term describing cybercriminal use of agentic AI to run autonomous, continuous attack chains—conducting reconnaissance, phishing, fraud, and exploitation at scale without direct human control.[2][8] Industry briefings and coverage amplified this warning, framing it as an evolution of cybercrime‑as‑a‑service into a model where chained AI agents and orchestration layers run criminal businesses end-to-end.[2][8]

A parallel thread in the same body of research focused on human-centric cyber risks as AI enters everyday workflows. Analysts noted a sharp rise in incidents driven by email attacks, social engineering, unsafe behaviors, and user mistakes, with AI‑related incidents and deepfake-enabled fraud on the rise.[2] Multiple reports during 2025 also highlighted expanding shadow AI usage, where employees adopt unsanctioned AI tools, degrading organizational visibility and control over data flows.[2]

From a national‑level defense perspective, CISA announced an update to its voluntary Cybersecurity Performance Goals (CPGs) on December 12, targeting critical infrastructure sectors.[3] The updated goals serve as a baseline for what “good” looks like, encouraging operators to implement prioritized practices, many of which depend on improved threat intelligence consumption and sharing.[3]

In vendor‑driven intel, LevelBlue SpiderLabs released its December 2025 threat intelligence update, describing new threat trends, related detection updates, and telemetry‑driven insights for customers.[6] This kind of rolling update reflects how threat intel teams increasingly use cloud detection platforms as both collection sensors and rapid distribution channels for new indicators and behavioral analytics.[6]

At the geopolitical and extremism edge, SecurityWeek reported that militant groups are experimenting with AI, using it for recruitment, generation of realistic deepfake images, and potential enhancement of cyberattacks, with experts warning that these risks are expected to grow.[4] In the fraud domain, the World Economic Forum highlighted AI-powered remote IT worker scams, where attackers leverage generative AI to create convincing candidate personas and real‑time video deepfakes to secure remote roles and then exfiltrate data or plant backdoors from the inside.[5]

Collectively, these stories show a threat environment where AI’s role is expanding across criminal, extremist, and fraud ecosystems—all within a single week’s reporting and broader 2025 research.[2][3][4][5][6]

Why It Matters: From Point Threats to Systemic Risk

The “vibe crime” concept matters because it moves the narrative from isolated AI‑authored phishing emails to fully automated, persistent attack ecosystems. Agentic AI that can chain tasks, adapt to partial failures, and learn from feedback transforms the economics of cybercrime, allowing smaller or less‑skilled operators to launch sophisticated campaigns at scale.[2][8] For defenders, this translates into a background level of automated harassment that can blur the signal‑to‑noise ratio in SOC operations and make anomaly detection harder.[2]

The rise in human-centric incidents amid AI adoption highlights a subtle but critical point: adding AI to workflows often increases cognitive load and complexity for employees, even as it promises efficiency.[2] When staff juggle unfamiliar tools, semi‑autonomous assistants, and heightened productivity expectations, they may become more susceptible to social engineering, misconfiguration, and data mishandling. Shadow AI worsens this by creating ungoverned channels where sensitive information can leak and risk models become inaccurate.[2]

CISA’s updated Cybersecurity Performance Goals matter because they translate high‑level risk conversations into concrete, prioritized controls, many aligned with threat intel best practices such as asset visibility, logging, and identity security.[3] For critical infrastructure, where regulatory fragmentation and resource disparities are stark, a widely accepted baseline increases the likelihood of consistent minimum defenses—and of meaningful, comparable threat reporting upstream.[3]

The news that militant groups are testing AI for recruitment campaigns, deepfake propaganda, and cyber operations signals that the same innovation cycle driving commercial AI is diffusing into non‑state actors.[4] This raises the stakes for platform moderation, government threat monitoring, and cross‑border information‑sharing. It also complicates attribution, since AI‑generated artifacts can be more uniform and harder to link to specific groups without deep technical forensics.[4]

Finally, AI-powered remote IT worker scams illustrate a new attack vector against hybrid and distributed work models.[5] As more organizations outsource IT and security functions, threat actors can use deepfakes and synthetic identities to pass remote interviews, gain privileged access, and then operate from inside the network, bypassing many traditional perimeter defenses.[5]

Expert Take: How Practitioners Are Reframing Threat Intelligence

Industry and government voices this year converged on a core message: ignoring AI in the threat chain is no longer an option.[2][8] Trend Micro and other experts warn that defenders who model threats only in terms of human adversaries and static malware families will misjudge both the tempo and adaptability of future attacks.[2][8] Threat intelligence programs must therefore integrate AI behavior analysis—including how agentic systems conduct reconnaissance, generate lures, and pivot through environments—alongside traditional IoCs.[2][8]

The human element remains front and center in expert commentary. Surveys and incident analyses note that a large majority of organizations struggle to secure the human element as AI transforms the workforce, pushing many to rethink security awareness beyond annual training.[2] Practitioners increasingly view security education as an ongoing, data‑driven program tuned by intel on real‑world phishing themes, business‑email‑compromise narratives, and deepfake tactics, rather than hypothetical scenarios.[2]

CISA’s updated CPGs embody a “minimum viable resilience” philosophy. Officials emphasize that while the goals are voluntary, they are designed to be implementable across diverse sectors and sizes, pushing operators to adopt logging, multi‑factor authentication, network segmentation, and incident planning that are prerequisites for meaningful threat intel usage.[3] Without such foundational controls, many organizations cannot collect or act on intel at the speed required to counter modern threats.[3]

Threat research teams like LevelBlue SpiderLabs frame their December updates as part of a continuous intel‑to‑detection pipeline, where findings from honeypots, customer telemetry, and open‑source reporting are translated into detection rules, playbooks, and advisory content.[6] This reflects an expert consensus that closed, static intel feeds are insufficient; value now lies in the speed and fidelity with which intel can be operationalized across heterogeneous environments.[6]

On the extremism front, experts quoted by SecurityWeek warn that as militant groups adopt AI, counter‑radicalization and counter‑disinformation efforts must also leverage AI, both to detect synthetic media and to respond at scale.[4] Here, threat intelligence overlaps with influence operations, requiring interdisciplinary teams that understand technology, language, and regional politics.[4]

Real-World Impact: From SOC Workflows to Hiring and Insurance

Operationally, the prospect of AI‑driven vibe crime forces SOCs to rethink how they triage and respond to high volumes of low‑signal events. Automated campaigns can send a steady stream of personalized phishing, fraud attempts, and low‑level exploits that each appear unremarkable but collectively create analyst fatigue and increase the chance of a miss.[2][8] This drives demand for behavioral analytics, anomaly detection, and automated playbooks that can handle routine noise and escalate only genuinely novel or high‑risk activity.[2]

Rising human-centric incidents have concrete consequences for compliance and insurance. Multiple cyber insurance trend reports in 2025 point to an uptick in claims linked to human error and social engineering, with insurers tightening underwriting standards and placing more emphasis on controls around identity, email security, and user training.[2] This, in turn, pushes organizations to use threat intel to justify and fine‑tune their control investments.[2]

CISA’s updated CPGs are likely to influence not just federal partners but state regulators, auditors, and boards, which often look to CISA for guidance on what constitutes “reasonable” security.[3] This may shift procurement decisions toward vendors that can demonstrate alignment with CPG‑style controls and provide rich telemetry for threat intel correlation.[3] For smaller critical infrastructure operators, however, implementation costs and skill gaps may slow adoption, leaving pockets of systemic vulnerability.[3]

The AI-powered remote IT worker scams described by the World Economic Forum have immediate HR and procurement implications.[5] Organizations must now treat remote hiring—especially for technical and privileged roles—as a security-sensitive process, incorporating background checks, identity verification technologies tuned to deepfake detection, and continuous access review.[5] This shifts some threat intel focus upstream, into labor markets and contractor ecosystems that were previously considered outside the SOC’s purview.[5]

At a societal level, militant use of AI for propaganda and cyber operations raises risks for election security, critical services, and public trust.[4] Platforms and governments will increasingly rely on threat intelligence that blends cyber indicators with information‑operations telemetry, such as bot activity, synthetic media campaigns, and cross‑platform coordination.[4] For enterprises, this may manifest as spillover risk, where geopolitical campaigns exploit corporate infrastructure or brands as part of broader influence or disruption efforts.[4]

Analysis & Implications: Rethinking Threat Intelligence for the AI Era

This week’s developments point to a structural shift: threat intelligence is becoming inseparable from AI risk management. Traditionally, TI programs focused on cataloging adversary infrastructure, malware families, and campaign patterns. Now, defenders must understand how AI systems themselves behave as both tools and targets. The “vibe crime” and “vibe hacking” narratives exemplify this, portraying agentic AI as a new class of adversary capability rather than a mere accelerator of existing workflows.[2][5][8] Intelligence teams will need to track AI model usage, agent frameworks, and toolchains in underground markets much as they once tracked exploit kits.[2][8]

The surge in human-centric risk amid AI deployment forces a more integrated approach between security, HR, and business operations.[2] Threat intel cannot stay siloed in the SOC; it must inform policy on which AI tools are permitted, how data is classified, and how employees are trained to handle ambiguous or synthetic content. The expansion of shadow AI is particularly problematic because it erodes the asset inventory and data lineage that underpin effective intel correlation.[2] Organizations may need to deploy AI discovery and usage‑monitoring solutions alongside more traditional data loss prevention, informed by intel on how attackers abuse consumer AI apps.[2]

CISA’s updated CPGs can be viewed as an attempt to standardize the prerequisites for consuming and sharing high‑quality intel across critical infrastructure.[3] Without consistent logging, identity hygiene, and network segmentation, sharing IoCs or TTPs has limited value.[3] As more sectors align with such baselines, it becomes feasible to build sector‑spanning detection content, playbooks, and joint exercises grounded in shared intel assumptions.[3] Over time, regulators and insurers may incorporate adherence to such goals into assessments of liability and premiums, effectively turning threat‑informed practices into economic incentives.[3]

The reports on militant and fraudster use of AI underscore the blurring boundary between cyber operations and information operations.[4][5] Threat intelligence functions will increasingly need to monitor narratives, personas, and content patterns, not just IPs and hashes.[4][5] This pushes TI closer to OSINT, trust & safety, and fraud analytics, suggesting the emergence of cross‑functional fusion teams that operate across security, risk, and communications.[4][5] The same analytics used to detect AI-powered remote worker scams—such as behavioral biometrics, deepfake detection, and continuous identity proofing—could inform broader controls against account takeover and insider threats.[5]

From an architectural standpoint, defenders must assume that AI‑enabled adversaries can adapt faster than rule‑based defenses.[2][8] The response will involve embedding machine learning and AI in defensive stacks, but with careful guardrails given the vulnerabilities identified in AI coding tools and LLM ecosystems.[1][5] Threat intel will need to capture not only external threats but also model vulnerabilities, prompt‑injection techniques, and AI supply‑chain risks, effectively expanding the attack surface map to include AI infrastructure itself.[1][5]

In sum, the week’s reporting suggests that successful organizations will treat threat intelligence as a continuous, cross‑disciplinary capability that spans AI governance, human‑factor security, and classical cyber defense. Those that cling to a purely indicator‑driven model risk being overwhelmed by the speed, volume, and subtlety of AI‑mediated threats.[2][8]

Conclusion

The December 7–14, 2025 window offered a clear preview of 2026’s threat intelligence challenges. Cybercriminals are gearing up for an era of agentic, autonomous “vibe crime”, using AI to industrialize reconnaissance, phishing, and fraud.[2][8] At the same time, organizations are grappling with a spike in human-centric incidents driven by social engineering, unsafe behaviors, and uncontrolled AI adoption in the workplace.[2] National agencies like CISA are updating baseline performance goals to nudge critical infrastructure toward threat‑informed resilience,[3] while vendor and research communities race to codify emerging TTPs into detections and guidance.[6][8]

The stories on militant AI experimentation and AI-powered remote IT worker scams show that the attack surface now spans hiring pipelines, information ecosystems, and geopolitical fault lines.[4][5] For threat intelligence teams, the mandate is expanding: they must not only track adversary infrastructure and malware, but also understand AI agents, human behavior, and policy shifts that shape both offense and defense.[2][3][4][5][6][8]

For security leaders, the practical takeaway is straightforward but demanding: treat AI as a first‑class element in your threat models, elevate human‑factor defenses, and align with evolving public‑sector baselines like CISA’s CPGs.[2][3][8] The organizations that succeed will be those that can absorb fast‑moving intelligence, translate it into operational controls and training, and continuously adapt as both attackers and defenders weaponize AI at scale.[2][3][5][6][8]

References

[1] Anthropic. (2025, August). Threat Intelligence Report: August 2025. Anthropic. https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf

[2] Trend Micro. (2025, December 11). The Next Phase of Cybercrime: Agentic AI and the Shift to Autonomous Criminal Operations. Trend Micro Research. https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/the-next-phase-of-cybercrime-agentic-ai-and-the-shift-to-autonomous-criminal-operations

[3] American Hospital Association. (2025, December 12). CISA issues update on voluntary cybersecurity performance goals. American Hospital Association News. https://www.aha.org/news/headline/2025-12-12-cisa-issues-update-voluntary-cybersecurity-performance-goals

[4] Newman, L. H. (2025, December 5). Militant Groups Are Experimenting With AI, and the Risks Are Expected to Grow. SecurityWeek. https://www.securityweek.com/militant-groups-are-experimenting-with-ai-and-the-risks-are-expected-to-grow/

[5] World Economic Forum. (2025, December 9). Unmasking the AI-powered, remote IT worker scams threatening businesses worldwide. World Economic Forum Agenda. https://www.weforum.org/stories/2025/12/unmasking-ai-powered-remote-it-worker-scams-threatening-businesses-worldwide/

[6] LevelBlue SpiderLabs. (2025, December 10). Threat Intelligence News from LevelBlue SpiderLabs – December 2025. LevelBlue SpiderLabs Blog. https://levelblue.com/blogs/spiderlabs-blog/threat-intelligence-news-from-levelblue-spiderlabs

[7] ITPro. (2025, December 9). Trend Micro issues warning over rise of “vibe crime” as cybercriminals weaponise agentic AI. ITPro. https://www.itpro.com/security/cyber-crime/trend-micro-vibe-crime-agentic-ai-cyber-crime

An unhandled error has occurred. Reload 🗙