Security researchers at Wiz Research discovered that a database belonging to DeepSeek, a Chinese AI startup, was left publicly accessible online. This database contained sensitive information, including chat history, secret keys, and backend details.
The exposed database was running ClickHouse, a widely used open-source system for processing large datasets. It was hosted on DeepSeek’s domains and could be accessed without any authentication, meaning anyone who found it could view, and even control, the data inside.
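To illustrate why no authentication is so dangerous: ClickHouse exposes an HTTP interface (by default on port 8123) that executes any SQL passed in the `query` parameter. The sketch below is a generic illustration with a hypothetical host name, not DeepSeek’s actual endpoint or Wiz’s tooling:

```python
import urllib.parse
import urllib.request


def build_clickhouse_url(host: str, query: str, port: int = 8123) -> str:
    """Build a request URL for ClickHouse's HTTP interface.

    ClickHouse accepts SQL via the `query` parameter; if no
    authentication is configured, any sender can run it.
    """
    return f"http://{host}:{port}/?{urllib.parse.urlencode({'query': query})}"


def run_query(host: str, query: str, timeout: float = 5.0) -> str:
    # Performs the actual HTTP request; only works against a live server.
    url = build_clickhouse_url(host, query)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode()


# Hypothetical example host; no request is sent here.
print(build_clickhouse_url("example-db.internal", "SHOW TABLES"))
# http://example-db.internal:8123/?query=SHOW+TABLES
```

An attacker who finds such an endpoint needs nothing more than a browser or `curl` to enumerate tables and read their contents.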
According to Wiz Research, the issue was reported to DeepSeek, which quickly took action to secure the database.
DeepSeek is known for its AI models, including DeepSeek-R1, which competes with top AI systems like OpenAI’s models. The fact that such a company had a major security lapse raises concerns about how AI startups handle sensitive user data.
How was it discovered?
The researchers scanned DeepSeek’s public-facing systems and noticed unusual open ports. These ports led them to a fully open ClickHouse database, where they found over one million log entries. The leaked data included:
• Chat history and user interactions
• API keys (which could be used to access DeepSeek’s internal systems)
• Backend details of how DeepSeek operates
• Operational metadata about AI services
The database also allowed full administrative control, meaning an attacker could not only read but also modify or delete data.
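The kind of scan described above can be sketched as a simple TCP probe against ClickHouse’s default ports. This is a minimal, generic illustration of the technique, not the researchers’ actual tooling, and it should only ever be pointed at hosts you are authorised to test:

```python
import socket

# Default ClickHouse ports: 8123 (HTTP interface), 9000 (native TCP).
CLICKHOUSE_PORTS = [8123, 9000]


def open_ports(host: str, ports, timeout: float = 1.0) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found


# Probing your own loopback interface is harmless.
print(open_ports("127.0.0.1", CLICKHOUSE_PORTS))
```

A port answering on 8123 or 9000 is a strong hint that ClickHouse is listening there; whether it requires credentials is the next thing a scanner would check.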
The bigger picture
This case highlights a growing problem: AI companies are moving fast but often neglect basic security measures. While discussions around AI security focus on futuristic threats, real dangers—like exposed databases—are happening right now.
As AI services become essential to businesses, companies must prioritise data security and work closely with cybersecurity teams to prevent such incidents.