Daily Brief
Articles are listed below; see each entry's 'Details' link for the generated summary
Total articles found: 11761
Checks for new stories every ~15 minutes
| Date | Source | Category | Title | Summary | Details |
|---|---|---|---|---|---|
| 2025-10-07 22:14:08 | bleepingcomputer | VULNERABILITIES | Docker Launches Affordable Hardened Images Catalog for Small Businesses | Docker has announced unlimited access to its Hardened Images catalog, providing secure software bundles to startups and SMBs at an affordable rate.
The catalog offers container images verified to be free of known vulnerabilities, ensuring near-zero CVEs for development teams.
Hardened Images are built from source code with continuous upstream patches, reducing security risks by eliminating unnecessary components.
Docker's partnership with SRLabs ensures images are signed, rootless by default, and free from high-severity container-breakout issues (a short verification sketch follows this entry).
A seven-day patch Service Level Agreement ensures timely updates when new vulnerabilities are identified, maintaining robust security standards.
The catalog includes a variety of images for AI, databases, and more, with FedRAMP-ready variants meeting U.S. federal security standards.
This initiative could significantly enhance security across the Docker ecosystem, promoting safer container deployment practices. | Details |
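As a quick illustration of the "rootless by default" claim, here is a minimal Python sketch using the Docker SDK (the `docker` package) that pulls an image and checks which user its configuration runs as. The image reference is a hypothetical placeholder, not an actual catalog path, and a local Docker daemon is assumed.

```python
# Hedged sketch: verify an image's configured user via the Docker SDK.
# The image reference below is a hypothetical placeholder.
import docker

client = docker.from_env()  # assumes a local Docker daemon is running
image = client.images.pull("example.org/hardened/python", tag="3.12")

# An image's config "User" field is empty, "0", or "root" when it runs as root.
user = image.attrs.get("Config", {}).get("User", "")
if user in ("", "0", "root"):
    print("Image runs as root; 'rootless by default' does not hold here.")
else:
    print(f"Image runs as non-root user: {user}")
```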
| 2025-10-07 20:52:44 | bleepingcomputer | VULNERABILITIES | Google Opts Out of Fixing ASCII Smuggling Flaw in Gemini | Google has decided not to address an ASCII smuggling vulnerability in its Gemini AI assistant, which can manipulate the AI into providing false information or altering its behavior.
ASCII smuggling uses special characters to introduce invisible payloads, exploiting the gap between user-visible content and machine-readable data in large language models.
Security researcher Viktor Markopoulos demonstrated the attack's effectiveness on AI tools like Gemini, DeepSeek, and Grok, while others like ChatGPT and Microsoft Copilot remain secure.
The vulnerability poses a significant risk due to Gemini's integration with Google Workspace, potentially allowing attackers to embed hidden instructions in Calendar invites or emails.
Google dismissed the issue as a non-security bug, suggesting it requires social engineering to exploit, but the potential for autonomous data extraction remains a concern.
Other tech companies, such as Amazon, have issued guidance on Unicode character smuggling, indicating varying industry perspectives on the threat.
The findings were reported to Google on September 18, yet the company has not provided further clarification or changes in its security approach. | Details |
| 2025-10-07 20:36:36 | bleepingcomputer | VULNERABILITIES | Google Gemini Faces Unresolved ASCII Smuggling Vulnerability Risks | Google's AI assistant, Gemini, is vulnerable to ASCII smuggling, which can manipulate its behavior and deliver false information to users.
ASCII smuggling uses special Unicode characters to introduce hidden payloads, creating a gap between visible and machine-readable content (a short sketch of the technique follows this entry).
The vulnerability poses increased risks as Gemini, an agentic AI tool, accesses sensitive data and performs tasks autonomously.
Security researcher Viktor Markopoulos found Gemini, DeepSeek, and Grok susceptible, while Claude, ChatGPT, and Microsoft Copilot were unaffected.
Google dismissed the issue as non-critical, viewing it as a potential social engineering exploit rather than a security bug.
The vulnerability could allow attackers to embed hidden commands in Google Workspace, leading to identity spoofing and unauthorized data extraction.
Amazon has issued security guidance on Unicode character smuggling, contrasting Google's stance on the issue. | Details |
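To make the technique concrete, here is a minimal Python sketch — illustrative only, not Markopoulos's proof of concept — that hides an ASCII payload in the invisible Unicode Tags block (U+E0000–U+E007F), where it survives copy/paste but does not render in most user interfaces.

```python
# Illustrative sketch of ASCII smuggling via the Unicode Tags block.
# Not an exploit; it only shows how invisible payloads are encoded.

def smuggle(payload: str) -> str:
    """Map each ASCII character to its invisible Tags-block counterpart."""
    return "".join(chr(0xE0000 + ord(c)) for c in payload)

def reveal(text: str) -> str:
    """Recover any Tags-block characters hidden in a string."""
    return "".join(
        chr(ord(c) - 0xE0000) for c in text if 0xE0000 <= ord(c) <= 0xE007F
    )

visible = "Meeting moved to 3pm."
invite = visible + smuggle("ignore previous instructions")  # renders identically

print(len(visible), len(invite))  # lengths differ even though the display doesn't
print(reveal(invite))             # -> ignore previous instructions
```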
| 2025-10-07 20:24:52 | theregister | DATA BREACH | Rising Data Leakage Risks from Uncontrolled AI Tool Usage in Enterprises | A LayerX study reveals that 45% of enterprise employees use generative AI tools, with 77% copying data into chatbots, risking data leakage and compliance issues.
The report notes that 22% of data pasted into AI tools includes sensitive PII or PCI information, often from unmanaged personal accounts, creating significant blind spots.
Approximately 40% of file uploads to AI platforms contain PII/PCI data, with 39% originating from non-corporate accounts, complicating data governance.
High-profile incidents, such as Samsung's temporary ban on ChatGPT, underscore potential geopolitical, regulatory, and compliance challenges posed by AI data leaks.
ChatGPT has become the dominant AI tool in enterprises, used by over 90% of employees who use AI, surpassing alternatives like Google Gemini and Microsoft Copilot.
The rapid adoption of AI tools in enterprises prompts calls for enhanced security measures, such as enforcing Single Sign-On (SSO) for better data flow visibility.
LayerX's client base includes global enterprises across financial services, healthcare, and technology sectors, emphasizing the widespread nature of this security challenge. | Details |
| 2025-10-07 19:12:49 | bleepingcomputer | DATA BREACH | DraftKings Faces Credential Stuffing Attacks Compromising Customer Accounts | DraftKings, a major sports betting company, reported a breach affecting an undisclosed number of customer accounts due to credential stuffing attacks.
Attackers accessed limited customer data, including names, addresses, and partial payment card details, but did not obtain sensitive information like full financial account numbers.
Credential stuffing involves replaying credentials stolen from other platforms against victims' accounts, a tactic that exploits password reuse (a defensive screening sketch follows this entry).
In response, DraftKings is mandating password resets and multifactor authentication for affected accounts to enhance security measures.
Customers are advised to change passwords, monitor financial accounts, and consider credit freezes and fraud alerts as precautionary steps.
The FBI has long warned about the rising threat of credential stuffing, driven by the availability of leaked credentials and automated hacking tools.
DraftKings previously experienced a similar attack in November 2022, resulting in significant financial losses and subsequent customer reimbursements. | Details |
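Because credential stuffing depends on reused passwords that already circulate in breach corpora, one common defense is to screen passwords at registration or reset. A minimal Python sketch using Have I Been Pwned's k-anonymity range API, which only ever sends the first five characters of the password's SHA-1 hash (assumes the `requests` package; the example password is arbitrary):

```python
# Hedged sketch: screen a password against known breach corpora via the
# Have I Been Pwned k-anonymity range API.
import hashlib
import requests

def is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each line is "HASH_SUFFIX:COUNT" for hashes sharing the 5-char prefix.
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if is_breached("hunter2"):  # arbitrary example password
    print("Password appears in breach corpora; force a reset and enable MFA.")
```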
| 2025-10-07 17:31:27 | bleepingcomputer | CYBERCRIME | Clop Ransomware Targets Oracle Zero-Day for Data Theft and Extortion | The Clop ransomware group exploited a zero-day vulnerability in Oracle E-Business Suite to execute data theft attacks since early August, as reported by CrowdStrike.
Identified as CVE-2025-61882, the flaw allows unauthenticated remote code execution through a low-complexity attack, posing significant risks to unpatched systems.
Security researchers discovered that the flaw is part of an exploit chain that can be triggered with a single HTTP request, raising the threat level.
CrowdStrike noted potential involvement of multiple threat actors, including GRACEFUL SPIDER, in exploiting this vulnerability for data theft and extortion.
Oracle has issued a patch and strongly advises customers to apply it immediately to mitigate ongoing exploitation risks.
Clop has been contacting executives for ransom, threatening to leak sensitive data allegedly stolen from affected Oracle systems.
The U.S. State Department offers a $10 million reward for information linking Clop's activities to foreign governments, highlighting the severity of these attacks.
This incident underscores the critical need for timely patch management and vigilance against zero-day vulnerabilities in enterprise environments. | Details |
| 2025-10-07 17:05:26 | thehackernews | MALWARE | BatShadow Group Deploys Vampire Bot Malware Targeting Job Seekers | BatShadow, a Vietnamese threat actor, is using social engineering to deliver Vampire Bot malware to job seekers and digital marketing professionals.
The campaign involves malicious files disguised as job descriptions, delivered in ZIP archives that pair decoy PDFs with executables masquerading as PDFs (a detection sketch follows this entry).
When opened, a shortcut (LNK) file in the archive runs a PowerShell script that downloads a lure document and the XtraViewer remote-access software, establishing persistent access.
Victims are steered into using Microsoft Edge for the downloads, bypassing security protections in other browsers and easing the infection process.
Vampire Bot, written in Go, can profile infected hosts, steal information, capture screenshots, and communicate with attacker-controlled servers.
Previous campaigns by BatShadow have used similar domains and tactics, indicating a consistent threat to digital marketing professionals.
The group's activities highlight the ongoing risk of sophisticated phishing attacks and the need for heightened vigilance among job seekers. | Details |
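One simple heuristic against the masquerade described above is to flag files whose names embed a document extension but whose real extension is executable. A minimal Python sketch — an illustrative heuristic, not a detection rule from the BatShadow report; the scanned directory is a hypothetical placeholder:

```python
# Illustrative heuristic: flag double-extension masquerades such as
# "Job Description.pdf.exe". The directory name is a placeholder.
from pathlib import Path

DOC_EXTS = {".pdf", ".doc", ".docx"}
EXEC_EXTS = {".exe", ".scr", ".lnk", ".bat", ".com"}  # executables and shortcuts

def looks_masqueraded(path: Path) -> bool:
    suffixes = [s.lower() for s in path.suffixes]
    return (
        len(suffixes) >= 2
        and suffixes[-1] in EXEC_EXTS
        and any(s in DOC_EXTS for s in suffixes[:-1])
    )

for f in Path("downloads").rglob("*"):
    if f.is_file() and looks_masqueraded(f):
        print(f"Suspicious double extension: {f}")
```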
| 2025-10-07 16:24:06 | theregister | DATA BREACH | Doctors Imaging Group Discloses Major Patient Data Breach Impacting 171,000 | Doctors Imaging Group reported a cyberattack leading to the theft of sensitive data from 171,862 patients, including medical and financial information, dating back to November 2024.
Compromised data includes admission dates, financial account details, medical records, health insurance information, and Social Security numbers, posing significant identity theft risks.
The breach notification was delayed because the company did not conclude its investigation until late August 2025, nearly a year after the incident occurred.
The nature of the attack remains unspecified, and no ransomware group has claimed responsibility, leaving the method and motive unclear.
Doctors Imaging Group has notified federal law enforcement and regulatory bodies, emphasizing its commitment to enhancing cybersecurity measures.
Affected individuals were advised to monitor financial statements for fraud, though no complimentary identity protection services were offered by the company.
The incident highlights the critical need for timely breach disclosures and robust cybersecurity protocols in the healthcare sector. | Details |
| 2025-10-07 16:24:05 | bleepingcomputer | DATA BREACH | Avnet Data Breach Exposes Sensitive EMEA Information, Ransom Demands Made | Avnet, a major electronics distributor, confirmed a data breach affecting its internal sales tool in the EMEA region, with unauthorized access to externally hosted cloud storage.
The breach involved the theft of 1.3TB of compressed data, which may decompress to roughly 12TB of raw data, including sensitive operational and personal information.
Although Avnet claimed the data is unreadable without proprietary tools, leaked samples reportedly contain plaintext sensitive information, challenging the company's assertions.
The threat actor responsible seeks financial gain, using a dark web leak site to pressure Avnet into paying a ransom by releasing data samples.
Avnet detected the breach on September 26 and rotated secrets across its Azure/Databricks environments; the company says the incident was limited to a single system (a rotation sketch follows this entry).
The breach did not disrupt Avnet's global operations, and the company is in the process of notifying affected customers and suppliers, though the exact number of impacted individuals remains unknown.
Authorities have been informed, and Avnet continues to assess the situation and implement security measures to prevent future incidents. | Details |
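As an illustration of that kind of response, here is a hedged Python sketch that rotates compromised secrets in Azure Key Vault using the `azure-identity` and `azure-keyvault-secrets` SDKs. The vault URL and secret names are hypothetical placeholders, not Avnet's actual configuration:

```python
# Hedged sketch: rotate compromised secrets in Azure Key Vault.
# Vault URL and secret names below are hypothetical placeholders.
import secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Setting a secret creates a new version; old versions remain available for
# cutover and can be disabled once dependent services pick up the new value.
for name in ["databricks-token", "storage-sas"]:
    client.set_secret(name, secrets.token_urlsafe(32))
    print(f"rotated: {name}")
```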
| 2025-10-07 16:07:39 | theregister | DATA BREACH | BK Technologies Reports Cyber Intrusion Impacting Employee Data | BK Technologies identified suspicious activity on September 20, leading to a cyber intrusion affecting non-public employee data.
The breach impacted a limited number of non-critical systems, allowing operations to continue without significant disruption.
External incident-response teams were engaged to isolate affected systems and restore operations swiftly.
The company is assessing the breach's extent and has notified law enforcement and plans to inform affected individuals and regulators.
BK Technologies expects insurance to cover most cleanup costs, minimizing financial impact on the company.
The breach poses reputational challenges for BK Technologies, which markets its products as highly reliable for critical services.
No responsibility for the breach has been claimed, and the company has not disclosed any customer impact. | Details |
| 2025-10-07 15:43:05 | theregister | NATION STATE ACTIVITY | OpenAI Blocks Accounts Tied to Chinese and Russian Cyber Activities | OpenAI has banned accounts linked to Chinese and Russian entities using ChatGPT for surveillance and influence operations, as detailed in their latest threat report.
Chinese-linked accounts attempted to use ChatGPT to design AI tools for monitoring social media platforms for extremist and political content, allegedly for government clients.
Russian-associated accounts used ChatGPT to refine malware, including remote-access trojans and credential stealers, and to draft phishing lures.
OpenAI's models refused requests with clearly malicious intent, adhering to their safety protocols, and banned over 40 networks since February 2024.
The report indicates adversaries are increasingly utilizing multiple AI models for enhanced automation and speed in their cyber activities.
OpenAI's actions reflect growing concerns over AI misuse by authoritarian regimes and criminal groups, highlighting the need for robust AI governance and security measures.
The company remains vigilant, continuously monitoring and disrupting attempts to exploit AI for malicious purposes. | Details |
| 2025-10-07 15:21:05 | thehackernews | VULNERABILITIES | Google's CodeMender AI Agent Automates Vulnerability Detection and Patching | Google's DeepMind introduced CodeMender, an AI agent that detects, patches, and rewrites vulnerable code, aiming to prevent future exploits and enhance software security.
CodeMender is designed to be both reactive and proactive, addressing new vulnerabilities and securing existing codebases to eliminate entire classes of vulnerabilities.
Over six months, CodeMender has upstreamed 72 security fixes to open-source projects, demonstrating its capability to handle large codebases.
Utilizing Google's Gemini Deep Think models, CodeMender identifies root causes of vulnerabilities and ensures changes do not introduce regressions.
The AI agent employs a large language model-based critique tool to review code modifications, verifying changes and self-correcting as needed (a hypothetical sketch of such a loop follows this entry).
Google plans to engage maintainers of critical open-source projects for feedback on CodeMender-generated patches, enhancing the tool's effectiveness.
Google is also launching an AI Vulnerability Reward Program through which researchers can report AI-related issues in Google products, with rewards up to $30,000.
Google's Secure AI Framework continues to evolve, focusing on agentic security risks and using AI to counter threats from cybercriminals and state-backed actors. | Details |
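The workflow described — propose a patch, have an LLM-based tool critique it, validate, and self-correct — can be sketched generically. The Python below is purely hypothetical and is not Google's CodeMender code; the three helper functions are toy stand-ins for model calls and regression tests:

```python
# Purely hypothetical sketch of a propose/critique/self-correct loop.
# The helpers are toy stand-ins, not real model or test integrations.

def llm_propose_patch(code: str, vuln_report: str) -> str:
    # Stand-in for a patch-proposing model call.
    return code.replace("strcpy", "strncpy")

def llm_critique(original: str, patched: str) -> str:
    # Stand-in for the LLM-based reviewer; empty string means no objections.
    return "unsafe call remains" if "strcpy(" in patched else ""

def run_tests(patched: str) -> bool:
    # Stand-in for regression validation of the patched code.
    return True

def mend(code: str, vuln_report: str, max_rounds: int = 3) -> str | None:
    patched = llm_propose_patch(code, vuln_report)
    for _ in range(max_rounds):
        if not llm_critique(code, patched) and run_tests(patched):
            return patched  # verified; no regressions detected
        # Self-correction: propose again from the critiqued attempt.
        patched = llm_propose_patch(patched, vuln_report)
    return None  # give up and escalate to a human reviewer

print(mend("strcpy(dst, src);", "CWE-120: buffer copy without size check"))
```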
| 2025-10-07 14:08:01 | bleepingcomputer | MISCELLANEOUS | AI-Powered Breach and Attack Simulation Revolutionizes Security Validation | AI-driven Breach and Attack Simulation (BAS) platforms are transforming threat intelligence into actionable security validations, providing faster, evidence-backed assurance of defense effectiveness.
Traditional BAS solutions face challenges due to the overwhelming volume of emerging threats, making AI integration crucial for timely and efficient threat simulation.
AI enhances BAS by enabling on-demand validation, allowing security teams to operationalize new threat intelligence in hours rather than days or weeks.
The integration of AI in BAS provides clarity on risk exposure, helping organizations identify which vulnerabilities are weaponizable in their specific environments.
AI-powered BAS delivers measurable ROI by testing security controls against real-world attacker behaviors, ensuring investments are effectively reducing risk.
Business-level reporting from AI-enhanced BAS offers boards and executives confidence through evidence-backed assurance of security posture and remediation efforts.
The upcoming Picus BAS Summit 2025 will showcase AI's role in evolving BAS, featuring insights from CISOs and industry leaders on predictive security validation. | Details |
| 2025-10-07 13:20:25 | bleepingcomputer | VULNERABILITIES | Google Launches AI Bug Bounty Program with $30,000 Top Reward | Google has introduced an AI Vulnerability Reward Program, incentivizing security researchers to identify and report flaws in its AI systems, with rewards reaching up to $30,000.
The program targets significant vulnerabilities in high-profile AI products, including Google Search, Gemini Apps, and Google Workspace core applications like Gmail and Drive.
In-scope products also encompass AI Studio, Jules, and various AI integrations, reflecting Google's focus on safeguarding its AI ecosystem.
Reward tiers include $20,000 for major security bugs, $15,000 for data exfiltration issues, and $5,000 for phishing and model theft vulnerabilities.
This initiative extends Google's existing Vulnerability Reward Program, aiming to enhance third-party discovery and reporting of AI-specific security issues.
Google has a history of rewarding researchers, having awarded $65 million in bug bounties since 2010, with $12 million distributed in 2024 alone.
The program's launch marks a strategic effort to bolster AI security and encourage responsible disclosure from the global research community. | Details |
| 2025-10-07 11:04:08 | thehackernews | DATA BREACH | AI Emerges as Leading Channel for Corporate Data Exfiltration | LayerX's report identifies AI as the primary channel for data exfiltration, surpassing shadow SaaS and unmanaged file sharing in enterprise environments.
Generative AI tools like ChatGPT, Claude, and Copilot are being used by 45% of employees, with 67% of this usage occurring through unmanaged personal accounts.
Sensitive data, including PII and PCI, is frequently uploaded to AI platforms, with 40% of uploaded files containing such information and 77% of pasted data coming from unmanaged accounts.
Traditional data loss prevention tools fail to address this risk, as they are designed for sanctioned, file-based environments rather than browser-based AI interactions (a browser-layer screening sketch follows this entry).
The report emphasizes the need for CISOs to shift focus from traditional security perimeters to browser-based data flows to mitigate AI-driven data breaches.
Instant messaging also poses a significant risk, with 87% of enterprise chat usage occurring through unmanaged accounts and 62% of users pasting sensitive data.
The findings suggest a governance collapse, urging security leaders to treat AI as a current, critical threat rather than an emerging technology. | Details |
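To illustrate the browser-layer controls the report argues for, here is a minimal Python sketch that screens text for likely payment-card numbers — digit runs that pass the Luhn checksum — before it is pasted into an AI tool. Real PII/PCI detection is far broader; the test number is a standard dummy PAN:

```python
# Illustrative sketch: detect likely payment-card numbers via Luhn checksum.
import re

def luhn_ok(digits: str) -> bool:
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:  # double every second digit from the right
            n = n * 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def contains_pan(text: str) -> bool:
    for match in re.finditer(r"\b(?:\d[ -]?){13,19}\b", text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            return True
    return False

print(contains_pan("card: 4111 1111 1111 1111"))  # True -- standard test PAN
```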