Tynapse has closed a $3M+ (₩4.5B) seed round. Mirae Asset Venture Investment led the round, with Mirae Asset Capital, Murex Partners and Kakao Ventures participating.
Before founding the company, we raised ₩200M in angel investment from a serial unicorn founder and world-class AI researchers, and three months after forming the team Mashup Ventures led our pre-seed round. With this seed round closed, our cumulative funding stands at ₩4.9B six months after team formation.

We are deeply grateful for our investors' trust, and we feel the weight of the responsibility that comes with it. In this post we would like to share why we do this work, and where we intend to take it.
AI no longer 'experiments' - it 'executes'
Over the past few years AI has advanced at a remarkable pace. At the same time, the landscape that companies face when they actually adopt AI for production work is changing just as fast.
AI is no longer simply a tool for generating answers. It is becoming the actor that makes decisions, calls systems and executes business operations.
And that shift has created operational risks that did not previously exist:
- Misinformation caused by hallucination
- Unintended leakage of sensitive data
- System access and execution that exceeds the agent's authority
In the past these would have been classified as 'bugs' to be patched after the fact. In an era where AI executes real work, that is no longer enough. These are not simple defects - they are operational risks that translate directly into financial loss and legal liability.
The traditional approach of post-hoc log analysis and model improvement cannot contain this problem. Instead of searching for the root cause after an incident, we have to be able to intervene at the exact moment the AI decides to execute.
This is why Tynapse is building the Trust Layer.
What Trust Layer does
Tynapse's AI Trust Layer is a security platform that sits in the runtime path where AI agents actually operate.
At its core is a two-stage detection and decisioning architecture we developed in-house. The moment an AI is about to execute an action, this layer detects and blocks risky behavior in real time. The goal is to stop the major failure modes - hallucination, data leakage and jailbreak attempts - before they materialize.
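To give a flavor of the idea - a simplified sketch, not our production code, and every name in it is illustrative - picture a gate evaluated at the moment the agent tries to act: a fast first stage screens each action for risk signals, and a second stage makes the allow/block/escalate decision.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ESCALATE = "escalate"  # hand off to a human reviewer

@dataclass
class AgentAction:
    tool: str        # e.g. "transfer_funds", "export_data"
    arguments: dict  # the parameters the agent wants to pass

# Hypothetical policy data for this sketch only.
SENSITIVE_TOOLS = {"transfer_funds", "delete_records", "export_data"}

def stage_one_screen(action: AgentAction) -> bool:
    """Stage 1: fast, cheap check - does this action touch anything sensitive?"""
    return action.tool in SENSITIVE_TOOLS

def stage_two_decide(action: AgentAction) -> Verdict:
    """Stage 2: deeper policy decision, run only on flagged actions."""
    if action.tool == "export_data":
        return Verdict.BLOCK  # e.g. customer-data exports are never allowed
    if action.tool == "transfer_funds" and action.arguments.get("amount", 0) > 10_000:
        return Verdict.ESCALATE  # large transfers need a human in the loop
    return Verdict.ALLOW

def guard(action: AgentAction) -> Verdict:
    """The gate, evaluated at the exact moment the agent tries to act."""
    if not stage_one_screen(action):
        return Verdict.ALLOW  # low-risk actions pass straight through
    return stage_two_decide(action)

print(guard(AgentAction("export_data", {"table": "customers"})))  # Verdict.BLOCK
```

The point of splitting the work into two stages is cost: the cheap screen runs on every single action, while the more expensive decision logic runs only on the small fraction that looks risky.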
Equally important is the audit record. Every decision step is automatically captured as a traceable audit trail. When an incident occurs, "why did the AI decide that?" and "whose responsibility is it?" must be answerable clearly - that is the foundation enterprises need to operate AI with confidence.
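Continuing the same illustrative sketch (again, hypothetical names and schema, not our actual format), an audit entry simply records each gate decision with enough context to answer those two questions later:

```python
import json
from datetime import datetime, timezone

def audit_record(tool: str, arguments: dict, verdict: str, reason: str) -> str:
    """Serialize one gate decision as an append-only audit entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "arguments": arguments,
        "verdict": verdict,
        "reason": reason,  # which rule fired, so "why?" stays answerable
    }
    return json.dumps(entry)

# Example: record why a data export was blocked.
print(audit_record("export_data", {"table": "customers"}, "block",
                   "policy: customer-data exports are not permitted"))
```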
We are currently running PoCs with major Korean commercial banks and preparing for production deployment. Validating the Trust Layer first in finance - the most conservative and most strictly regulated industry - is, we believe, the strongest proof that it actually works.
From our investors
Jin Hwan Cho, Director at Mirae Asset Venture Investment who led the round, said:
"For AI to operate safely in real industry environments, domain understanding and operational experience are as essential as the technology itself. Tynapse combines deep field experience in finance and security with global-grade AI expertise. We decided to invest because this is a team that can actually solve the AI trust problem."
Jung Ho Shin, Principal at Kakao Ventures, added:
"Tynapse has attracted an unusually large early-stage investment in the AI security space, backed by industry experience, technical depth and talent density. Even as the market shifts, we expect them to build meaningful early references quickly."
What comes next
Our next goals are clear.
First, we will move aggressively on hiring core talent and deepening our technology. If you want to work on this problem with us, please reach out through the Contact page on our site.
Second, we will build our financial-sector references quickly. A product validated in the most demanding market is, we believe, a product that earns trust in every other market.
Third, starting from finance we will expand into healthcare, the public sector and the broader enterprise market. The goal is for the Trust Layer to reach every industry that is wrestling with how to adopt AI.
Closing thoughts
The era of AI agents is now beginning in earnest. In this era, operational safety and regulatory readiness are no longer optional - they are essential.
At Tynapse, we want to build a world where AI can be trusted wherever it operates and whatever it executes. We are building the infrastructure that lets AI run safely in the real world.
Thank you again to everyone who has supported us. We will keep sharing the journey here.

