Article Details
Scrape Timestamp (UTC): 2026-01-21 09:15:08.026
Source: https://thehackernews.com/2026/01/chainlit-ai-framework-flaws-enable-data.html
Original Article Text
Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs

Security vulnerabilities have been uncovered in the popular open-source artificial intelligence (AI) framework Chainlit that could allow attackers to steal sensitive data and enable lateral movement within a susceptible organization. Zafran Security said the high-severity flaws, collectively dubbed ChainLeak, could be abused to leak cloud environment API keys, steal sensitive files, or perform server-side request forgery (SSRF) attacks against servers hosting AI applications.

Chainlit is a framework for creating conversational chatbots. According to statistics shared by the Python Software Foundation, the package has been downloaded over 220,000 times in the past week and has attracted a total of 7.3 million downloads to date.

The two flaws comprise an arbitrary file read bug, tracked as CVE-2026-22218, and an issue that enables the SSRF attacks noted above. "The two Chainlit vulnerabilities can be combined in multiple ways to leak sensitive data, escalate privileges, and move laterally within the system," Zafran researchers Gal Zaban and Ido Shani said. "Once an attacker gains arbitrary file read access on the server, the AI application's security quickly begins to collapse. What initially appears to be a contained flaw becomes direct access to the system's most sensitive secrets and internal state."

For instance, an attacker can weaponize CVE-2026-22218 to read "/proc/self/environ," gleaning valuable information such as API keys, credentials, and internal file paths that could be used to burrow deeper into the compromised network and even gain access to the application source code. Alternatively, it can be used to leak database files if the setup uses SQLAlchemy with an SQLite backend as its data layer.

Following responsible disclosure on November 23, 2025, both vulnerabilities were addressed by Chainlit in version 2.9.4, released on December 24, 2025.

"As organizations rapidly adopt AI frameworks and third-party components, long-standing classes of software vulnerabilities are being embedded directly into AI infrastructure," Zafran said. "These frameworks introduce new and often poorly understood attack surfaces, where well-known vulnerability classes can directly compromise AI-powered systems."

Flaw in Microsoft MarkItDown MCP Server

The disclosure comes as BlueRock detailed a vulnerability in Microsoft's MarkItDown Model Context Protocol (MCP) server, dubbed MCP fURI, that enables arbitrary calling of URI resources, exposing organizations to privilege escalation, SSRF, and data leakage attacks. The shortcoming affects the server when running on an Amazon Web Services (AWS) EC2 instance using IMDSv1.

"This vulnerability allows an attacker to execute the Markitdown MCP tool convert_to_markdown to call an arbitrary uniform resource identifier (URI)," BlueRock said. "The lack of any boundaries on the URI allows any user, agent, or attacker calling the tool to access any HTTP or file resource."

"When providing a URI to the Markitdown MCP server, this can be used to query the instance metadata of the server. A user can then obtain credentials to the instance if there is a role associated, giving you access to the AWS account, including the access and secret keys."

The agentic AI security company said its analysis of more than 7,000 MCP servers found that 36.7% of them are likely exposed to similar SSRF vulnerabilities.
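To make the "/proc/self/environ" angle concrete, the Python sketch below shows what an attacker recovers from that file. It is an illustration of the technique, not Zafran's proof of concept: it reads the current process's own environment as a stand-in for the response a vulnerable file-read endpoint would return.

    # Hypothetical illustration (not Zafran's PoC): why an arbitrary file
    # read that reaches /proc/self/environ is so damaging on Linux. The
    # file holds the process environment as NUL-separated KEY=VALUE pairs,
    # which often include cloud API keys and connection strings.
    from pathlib import Path

    def leaked_environ(raw: bytes) -> dict[str, str]:
        """Parse the NUL-separated contents of /proc/self/environ."""
        entries = raw.decode(errors="replace").split("\x00")
        pairs = (entry.partition("=") for entry in entries if entry)
        return {key: value for key, _, value in pairs}

    # Stand-in for the bytes an attacker would receive back from the
    # vulnerable endpoint; here we simply read our own process.
    env = leaked_environ(Path("/proc/self/environ").read_bytes())

    # Flag variable names that commonly hold secrets.
    markers = ("KEY", "SECRET", "TOKEN", "PASSWORD")
    for name in env:
        if any(marker in name.upper() for marker in markers):
            print(f"likely secret: {name}")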
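The SQLite scenario is similarly stark: with SQLAlchemy over an SQLite backend, the entire data layer lives in a single file, so a copy exfiltrated through the file-read bug can be mined offline at leisure. A minimal sketch, with "leaked.db" as a placeholder name for the stolen copy:

    # Hypothetical illustration: a SQLite-backed data layer is one file,
    # so exfiltrating it hands the attacker the whole database.
    # "leaked.db" is a placeholder for a copy retrieved through the
    # vulnerable endpoint.
    import sqlite3

    with sqlite3.connect("leaked.db") as conn:
        tables = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
        for (name,) in tables:
            # Interpolating the table name is fine here: the database is
            # an offline copy the attacker already fully controls.
            count = conn.execute(f"SELECT COUNT(*) FROM {name}").fetchone()[0]
            print(f"table {name}: {count} rows")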
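The MCP fURI pattern can be illustrated the same way. The sketch below is a hypothetical stand-in for an unbounded URI fetcher of the kind BlueRock describes; none of it is MarkItDown's actual code. On an EC2 instance still running IMDSv1, pointing such a tool at the well-known metadata address 169.254.169.254 returns the instance role's temporary AWS credentials without any session token.

    # Hypothetical sketch of the vulnerability class BlueRock describes:
    # a conversion tool that fetches whatever URI it is handed. This is
    # not MarkItDown's actual implementation.
    from urllib.request import urlopen

    def convert_any_uri(uri: str) -> str:
        """No scheme, host, or IP checks: file://, http://, anything goes."""
        with urlopen(uri, timeout=5) as resp:
            return resp.read().decode(errors="replace")

    # On an EC2 instance using IMDSv1, the metadata service answers plain
    # GET requests; IMDSv2 would first demand a session token.
    IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    role = convert_any_uri(IMDS).strip()   # lists the attached role name
    creds = convert_any_uri(IMDS + role)   # JSON with AccessKeyId,
    print(creds)                           # SecretAccessKey, and Token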
To mitigate the risk posed by the issue, it's advised to enforce IMDSv2 (which requires a session token and thus blunts SSRF against the metadata service), block requests to private IP ranges, restrict access to metadata services, and maintain an allowlist of permitted destinations to prevent data exfiltration.
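A minimal sketch of the private-IP blocking and allowlist advice above, using only the Python standard library; ALLOWED_HOSTS and is_safe_uri are illustrative names, not part of any shipped mitigation:

    # Resolve the target host and refuse private, link-local (including
    # 169.254.169.254), and loopback addresses, then require the host to
    # be on an explicit allowlist.
    import ipaddress
    import socket
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"example.com", "docs.example.com"}  # placeholder allowlist

    def is_safe_uri(uri: str) -> bool:
        parsed = urlparse(uri)
        if parsed.scheme not in ("http", "https"):   # blocks file:// etc.
            return False
        if parsed.hostname not in ALLOWED_HOSTS:
            return False
        # Reject every resolved address in private/reserved space so a DNS
        # name cannot smuggle in the metadata service or an internal host.
        for info in socket.getaddrinfo(parsed.hostname, parsed.port or 443):
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_private or addr.is_link_local or addr.is_loopback:
                return False
        return True

    print(is_safe_uri("http://169.254.169.254/latest/meta-data/"))  # False

Note that resolving and checking every returned address narrows, but does not eliminate, DNS-rebinding tricks; pinning the validated IP for the actual outbound request closes that gap.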
Daily Brief Summary
Zafran Security uncovered high-severity vulnerabilities in the Chainlit AI framework, potentially allowing attackers to steal sensitive data and perform server-side request forgery (SSRF) attacks.
The vulnerabilities, named ChainLeak, can be exploited to access cloud environment API keys and sensitive files, posing a significant risk to organizations using the framework.
Chainlit, a tool for developing conversational chatbots, has been downloaded a total of 7.3 million times, indicating widespread potential exposure.
Attackers can combine the vulnerabilities to escalate privileges and move laterally within affected systems, threatening the security of AI applications.
The issues were responsibly disclosed on November 23, 2025, and addressed in Chainlit version 2.9.4, released on December 24, 2025.
The discovery highlights the risks associated with embedding longstanding software vulnerabilities into AI infrastructure, necessitating vigilant security practices.
Organizations are urged to update to the latest version of Chainlit and review their AI framework security to prevent exploitation.
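For the upgrade check itself, a small script can confirm a deployment is on a fixed release. This assumes Chainlit was installed from PyPI into the environment being inspected, and it uses a deliberately naive version parse:

    # Confirm the installed Chainlit is at or above the fixed release
    # (2.9.4). Naive numeric parse; production code should prefer
    # packaging.version for pre-release strings like "2.9.4rc1".
    from importlib.metadata import PackageNotFoundError, version

    FIXED = (2, 9, 4)

    try:
        installed = tuple(int(part) for part in version("chainlit").split(".")[:3])
    except PackageNotFoundError:
        print("chainlit is not installed in this environment")
    else:
        status = "patched" if installed >= FIXED else "VULNERABLE - upgrade"
        print(f"chainlit {'.'.join(map(str, installed))}: {status}")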