Introduction: Why Financial Institutions Hesitate on AI

As of 2026, discussions around AI adoption in the financial sector are more active than ever, yet actual implementation on the ground is proceeding more cautiously than expected. The reason is not a lack of technology, nor a lack of budget.
When speaking with AI officers at financial institutions, one concern comes up consistently: "When AI makes a wrong decision, can we take responsibility for it?" What if a credit scoring AI malfunctions? What if a fraud detection model fails to catch an attack? What if a customer service AI provides incorrect financial information? The liability falls on the financial institution. In other words, adopting AI means opening a new domain of accountability.
The Korea Financial Security Institute (FSI) published the "AI Security Guidelines for the Financial Sector" as a reference document establishing standards for AI operations in finance. The guidelines divide the AI model development process into stages (training data collection → training data preprocessing → AI model design and training → AI model verification and evaluation) and present security requirements to consider at each stage. They identify four AI-specific attack types: data poisoning attacks, model poisoning attacks, model extraction attacks, and evasion attacks. They further include an AI agent security management framework, systematically organizing the 5 key requirements that financial institutions must fulfill.
The guidelines clearly state what needs to be done, but how to implement it, with what systems and through what processes, is left for each institution to resolve on its own. Without the technical means to do so, AI adoption stalls: when an incident occurs, institutions cannot prove that "we were responding in this way."
In this article, we provide an in-depth interpretation of the 5 core security requirements from the FSI guidelines, examine the practical challenges financial institutions face in implementing each requirement, and share Tynapse's perspective on how these challenges can be addressed technically.
In-Depth Analysis of the 5 Core Security Requirements
1. Countering Data Poisoning Attacks

Nature of the Attack
Data poisoning attacks corrupt the data that AI models learn from, distorting the model's decision-making. Since AI models make decisions based on patterns in their training data, when maliciously manipulated data is injected during the training phase, the model itself learns in the wrong direction. This is particularly dangerous because it is extremely difficult to detect from outside once training is complete.
Attack forms vary: backdoor attacks that plant hidden triggers causing malfunctions only under specific conditions, targeted attacks that manipulate data so specific class samples are misclassified, and indiscriminate attacks that broadly degrade the model's overall prediction accuracy.
The reason this attack is dangerous in the financial sector is clear. If poisoned data is injected into a fraud detection model, the model can learn to pass actual fraudulent transactions as legitimate. If a credit scoring model is poisoned, high-risk customers receive normal credit ratings, leading directly to massive losses for the financial institution.
Guideline Requirements
The FSI guidelines require building a data integrity verification system at the training data collection and preprocessing stages. This includes anomaly detection for externally collected data, data provenance tracking, and access control for training data.
Practical Implementation Challenges
While the guidelines clearly require "verifying data integrity," the question of how to actually verify it is far more complex. Specific technical methodologies for automatically detecting anomalous data in financial AI models with millions of training records, or identifying poisoned data flowing through external data supply chains, must be resolved by each institution independently.
Tynapse's Perspective
What Tynapse has repeatedly confirmed through communications with financial institutions is that countering data poisoning attacks is an area gaining significant attention across the industry. Many institutions have systematic access controls for training data, but runtime verification systems that automatically detect data contamination entering the pipeline are often still in early adoption stages. Because data poisoning is "contamination" rather than "intrusion," it often falls outside the detection scope of existing security systems.
Three things are needed to fill this gap. First, a training data tamper prevention and anomaly detection/cleansing system. Statistical outliers and distribution deviations should be automatically detected at the point data enters the pipeline, filtering suspected contaminated data before the training stage. Second, trusted source verification. Externally collected data should pass through a whitelist-based verification layer confirming it comes from pre-approved sources. Third, hash-based integrity audit logs. Recording hash values of datasets used in training enables immediate verification of whether data has been tampered with after the fact, serving as decisive evidence during audits.
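The three mechanisms above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a production pipeline: the whitelist contents, the z-score threshold, and the function names are all hypothetical.

```python
import hashlib
import json
import statistics

def dataset_fingerprint(records):
    """Hash the dataset's canonical JSON form so a later audit can
    prove the training data was not tampered with after the fact."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def flag_outliers(values, z_threshold=3.0):
    """Return indices of records deviating more than z_threshold
    standard deviations from the mean (a crude contamination filter)."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# Hypothetical whitelist of pre-approved data sources.
APPROVED_SOURCES = {"internal-core-banking", "bureau-feed-a"}

def verify_source(source_id):
    """Trusted-source check: only pre-approved feeds may enter the pipeline."""
    return source_id in APPROVED_SOURCES
```

The fingerprint supports after-the-fact audit evidence, while the outlier filter and source check run at the point data enters the pipeline, before training begins.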
2. Countering Model Poisoning Attacks

Nature of the Attack
Model poisoning attacks directly modify the model itself rather than the training data. Typical approaches include altering the weights or parameters of a completed model, or planting backdoors in externally pre-trained models. This is especially concerning when fine-tuning open-source pre-trained models, where the original model supply chain may already be contaminated.
While data poisoning is a threat during the training phase, model poisoning can be a persistent threat even after the model is complete. When a deployed model is directly tampered with by internal or external attackers, it may appear to function normally on the surface while being designed to make intentionally wrong decisions under specific conditions.
Guideline Requirements
The FSI guidelines require integrity verification during the AI model design and training phase, along with security pre-review when adopting externally pre-trained models. Model version management and change history tracking systems are also included in this scope.
Practical Implementation Challenges
Adopting external open-source models and fine-tuning them for the financial domain is a path many financial institutions choose for rapid AI adoption. But what if a model downloaded from a public repository like Hugging Face already has a backdoor planted? That backdoor can still function after fine-tuning. Model contamination through external supply chains is difficult to identify, making it challenging to build countermeasures without dedicated detection tools.
Tynapse's Perspective
There are two approaches to the model poisoning problem. First, an anomalous response detection system that continuously compares connected models' response patterns against baselines. By learning the response distribution, confidence scores, and output patterns a model exhibits during normal operation as baselines, alarms are immediately triggered when deviating responses are detected. Even if you cannot inspect the model itself, behavioral changes can be observed externally. Second, a model contamination verification system through metadata and response history recording. Systematically recording model versions, deployment timestamps, and input/output history enables backtracking to determine from which point the model's behavior changed when anomalies occur.
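As a minimal sketch of the first approach, a rolling comparison of confidence scores against a recorded baseline might look like the following. The window size and tolerance are illustrative assumptions, and a real system would compare full output distributions, not just a mean confidence score.

```python
from collections import deque
import statistics

class ResponseDriftMonitor:
    """Compare a model's recent confidence scores against a recorded
    baseline and alert when the rolling mean drifts out of tolerance."""

    def __init__(self, baseline_scores, window=100, tolerance=3.0):
        self.baseline_mean = statistics.fmean(baseline_scores)
        self.baseline_stdev = statistics.pstdev(baseline_scores) or 1e-9
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, confidence):
        """Record one confidence score; return True once the full
        window has drifted beyond the tolerance band."""
        self.recent.append(confidence)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        drift = abs(statistics.fmean(self.recent) - self.baseline_mean)
        # Compare drift against tolerance standard errors of the window mean.
        return drift > self.tolerance * self.baseline_stdev / (len(self.recent) ** 0.5)
```

The point of the sketch is the second observation in the text: even without inspecting the model's weights, a behavioral change shows up as a shift in its externally observable response statistics.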
3. Countering Data and Model Extraction Attacks

Nature of the Attack
Data and model extraction attacks use reverse engineering to reconstruct the internal structure and training data of AI models built by financial institutions. Attackers send numerous queries to the model API and analyze the responses to create a surrogate model that closely approximates the actual model.
This attack serves two main purposes. First, stealing the model's intellectual property for use by competitors or sale externally. Credit scoring or fraud detection models built by financial institutions with years of extensive financial data and investment are core assets in themselves. Second, analyzing the extracted surrogate model to identify vulnerabilities in the original model, enabling the design of more sophisticated evasion attacks.
Guideline Requirements
The FSI guidelines require abnormal query detection through API call pattern monitoring, prevention of output result exposure, and model access log management. They also require minimizing personal information in training data and de-identification processing to reduce the possibility of personal information exposure from data reconstruction attacks.
Practical Implementation Challenges
The core challenge of model extraction attacks is distinguishing between "legitimate service requests" and "probing queries for extraction purposes." Rather than sending massive query volumes at once, attackers spread queries gradually over long periods while impersonating normal users.
Two systems are needed as practical countermeasures. One is PII and sensitive information masking in output results. An automatic detection and blocking layer must prevent customer identification information, account numbers, and other sensitive data from being included in model responses. The other is rate limiting-based blocking of repeated extraction attempts, comprehensively analyzing query diversity, response collection patterns, and time-based distribution from the same user or IP.
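Both systems can be sketched briefly in Python. The PII patterns below are illustrative only (real deployments need locale- and product-specific formats), and the rate-limit parameters are hypothetical.

```python
import re
import time
from collections import defaultdict, deque

# Illustrative patterns; real deployments need locale-specific formats.
PII_PATTERNS = [
    (re.compile(r"\b\d{6}-\d{7}\b"), "[RRN-MASKED]"),             # resident registration number
    (re.compile(r"\b\d{3}-\d{2,6}-\d{4,8}\b"), "[ACCT-MASKED]"),  # account-number-like strings
]

def mask_pii(text):
    """Replace sensitive identifiers in a model response before it leaves the system."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

class SlidingWindowLimiter:
    """Block clients exceeding max_queries within a window_sec window."""

    def __init__(self, max_queries=100, window_sec=60):
        self.max_queries = max_queries
        self.window_sec = window_sec
        self.history = defaultdict(deque)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[client_id]
        while q and now - q[0] > self.window_sec:
            q.popleft()          # drop timestamps outside the window
        if len(q) >= self.max_queries:
            return False         # extraction-style burst: deny
        q.append(now)
        return True
```

A production limiter would additionally weigh query diversity and time-of-day distribution per client, as the text describes, rather than raw counts alone.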
Tynapse's Perspective
Defense against model extraction attacks should be based on behavioral pattern analysis rather than individual queries. Individual queries may appear normal, but analyzing query patterns over a period often reveals intent. What we consider critical is contextual logging of all inputs and outputs of the AI system. PII masking and rate limiting should be designed as components of an AI output control system, not as individual features.
4. Countering Evasion Attacks

Nature of the Attack
Evasion attacks subtly manipulate the inputs fed to deployed, operational AI models in order to deceive them. The key is that inputs which look normal to human eyes produce completely different results from the AI model. Research on image classification models has extensively shown that subtle pixel-level changes can completely flip a model's decision.
The impact of evasion attacks in the financial sector is direct and immediate. Bypassing fraud detection systems allows actual fraudulent transactions to pass as legitimate. Evading anti-money laundering models allows illegal funds to flow through the financial system. Prompt injection against generative AI chatbots is also a form of evasion attack, potentially inducing chatbots to output responses violating security policies or exposing sensitive internal information.
Guideline Requirements
The FSI guidelines require regular adversarial input simulation tests and building abnormal input detection systems. They also recommend applying adversarial training to improve model robustness.
Practical Implementation Challenges
Can conducting simulation tests once or twice a year be considered fulfilling evasion attack response requirements? No. Given the nature of evasion attacks where attack patterns continuously evolve, patterns undetected at the time of inspection can emerge afterward. Continuous runtime monitoring, not one-time inspections, is needed to respond to threats that change in real time.
Tynapse's Perspective
What we focus on in evasion attack response is how quickly an attack can be detected and responded to.
Prompt injection and jailbreak attempts have become the most immediate threats in the financial sector. Tynapse sees a dual-layer detection and blocking system combining regex-based pattern detection with LLM classification models as the core approach. The regex layer serves as a first filter quickly catching known injection patterns and jailbreak attempt phrases, while the LLM classification layer semantically evaluates bypass expressions and context-based attack attempts that regex cannot catch. This structure secures both speed and accuracy simultaneously.
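A minimal Python sketch of this dual-layer structure, with the LLM classifier stubbed as a callable and the regex patterns purely illustrative:

```python
import re

# First layer: fast regex filter for known injection phrasings (illustrative only).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now (DAN|developer mode)", re.IGNORECASE),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.IGNORECASE),
]

def regex_layer(prompt):
    """Cheap first filter: catch known injection patterns immediately."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)

def classify_prompt(prompt, llm_classifier):
    """Two-stage check: regex first for speed, then a semantic LLM
    classifier (a callable returning an attack probability) for
    paraphrased or context-based attempts the regex cannot catch."""
    if regex_layer(prompt):
        return "blocked:regex"
    if llm_classifier(prompt) > 0.5:  # threshold is a tunable assumption
        return "blocked:llm"
    return "allowed"
```

The design choice mirrors the text: the regex layer keeps latency low on the common case, while the classifier layer supplies the semantic coverage that fixed patterns lack.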
The complete history of blocked attempts and suspicious inputs must be preserved in the management console. Being able to cumulatively analyze what patterns of attacks were attempted, when, and how frequently enables continuous updating of defense baselines and immediate proof to regulators that "we were detecting and responding in this way" when incidents occur.
5. AI Agent Security Management Framework

Paradigm Shift: From Prediction to Execution
The evolution direction of AI in finance is clear. While past AI served to "predict and inform humans," today's AI agents are beginning to take on roles of "judging and executing autonomously": agents now perform actual work such as customer service, document processing, fraud response, and report generation. Key challenges financial institutions face when adopting these agents include data inconsistency, insufficient inter-system integration, weak access controls, and inadequate audit trail systems.
The problem is that agents "executing" creates a much broader attack surface than traditional AI systems. Beyond simply outputting incorrect predictions, they can process actual transactions based on wrong decisions or create irreversible outcomes through integration with external systems.
Cascading Risks in Multi-Agent Environments
What deserves particular attention is the multi-agent structure. Real financial process automation is not handled by a single agent from start to finish, but designed as pipelines where multiple agents divide roles and process sequentially. In a structure flowing from customer application review agent → credit evaluation agent → approval processing agent, if a middle-stage agent is contaminated or attacked, that result is passed directly to the next agent.
Multi-agent system failures cascade through the agent network so rapidly that conventional incident response methods cannot contain them. This represents a fundamentally different risk dimension from single AI model malfunctions.
Guideline Requirements
The FSI presents complete logging of all operations performed by agents, anomalous behavior detection systems, and clear control of agent authorization scope as the core of its AI agent security management framework. Security verification at points where agents integrate with external systems is particularly emphasized.
Practical Implementation Challenges
The greatest difficulty in agent security management is achieving visibility. It is extremely difficult to retrospectively trace what decision basis led to what action by an agent, or at which stage in a multi-agent pipeline an anomaly occurred. Traditional system logs can record an agent's "API calls" but do not record why that API call was made or what internal state led the AI to that decision.
Tynapse's Perspective
AI agents do not simply generate responses. They execute actual work. The starting point for security design must be different accordingly.
First, the principle of least privilege must be applied to agent permissions and action scope. Agents should be designed to have only the minimum system access needed for their tasks, with clearly defined boundaries for permitted actions. In multi-agent pipelines, the scope of information and permissions each agent can pass to the next stage must also be controlled.
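The inter-stage scope control described above can be sketched as a field-level filter. The agent names and permitted fields below are hypothetical; the point is that each agent forwards only an explicitly allowed subset of its context downstream.

```python
# Hypothetical per-agent scopes: fields each agent may forward to the next stage.
PIPELINE_SCOPES = {
    "application_review": {"application_id", "income_band", "requested_amount"},
    "credit_evaluation": {"application_id", "credit_score"},
}

def forward_context(agent_name, context):
    """Strip every field the agent is not authorized to pass downstream;
    an unknown agent forwards nothing (deny by default)."""
    allowed = PIPELINE_SCOPES.get(agent_name, set())
    return {k: v for k, v in context.items() if k in allowed}
```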
Second, agent memory and communication channels must be protected, and sensitive information access must be controlled. Integration points with external systems, communication channels between agents, and context memory referenced by agents are all potential attack surfaces. Agent access to sensitive information such as customer personal data, account information, and internal policy documents must be dynamically controlled based on business context.
Third, real-time anomalous behavior detection and immediate response systems must be in place. The moment an agent exhibits behavior outside normal parameters, this must be automatically detected and the agent's operations suspended or alerts sent to responsible personnel. A structure where humans review logs for judgment cannot keep pace with agent execution speed.
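The suspension mechanism can be sketched as a simple circuit breaker that denies out-of-scope actions and halts the agent after repeated violations; the thresholds and action names are illustrative.

```python
class AgentCircuitBreaker:
    """Suspend an agent automatically once it attempts more than
    max_violations actions outside its permitted set; only a human
    reviewer would reset it."""

    def __init__(self, permitted_actions, max_violations=3):
        self.permitted = set(permitted_actions)
        self.max_violations = max_violations
        self.violations = 0
        self.suspended = False

    def execute(self, action, handler):
        if self.suspended:
            raise RuntimeError("agent suspended; awaiting human review")
        if action not in self.permitted:
            self.violations += 1
            if self.violations >= self.max_violations:
                self.suspended = True
            return None  # deny the action; a real system would also log and alert
        return handler(action)
```

This captures the point in the text: the breaker trips at machine speed, and the human review happens after suspension, not before it.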
The Gap Between Guidelines and Reality: What Tynapse Has Directly Confirmed

Tynapse has directly confirmed the realistic response status of financial institutions to the FSI guidelines through communications with numerous domestic financial institutions. Through this process, we repeatedly encountered common patterns.
First, checklists are in place, but the technical systems to support them are often still under construction. Many institutions diligently maintain internal checklists for FSI guideline items. However, at the stage of specifying exactly which system implements "verify data integrity" and how, many struggle due to a lack of dedicated tools.
Second, security reviews during the development and adoption phase are systematized, but real-time monitoring during the operational phase is still an evolving area. More institutions are systematically performing security reviews when adopting AI models. However, establishing dedicated systems for real-time detection of anomalies during actual operation after model deployment remains an ongoing challenge across the industry.
Third, dedicated AI security frameworks are still taking shape across the industry. Because AI security threats have different characteristics from traditional network intrusions or malware, they require approaches separate from existing IT security frameworks. Yet since AI security emerged so rapidly as a new domain, many financial institutions currently have their existing IT security teams handling it as well.
This is not a challenge unique to any particular institution. As the field of AI security has emerged rapidly, most institutions stand at a similar starting line regardless of scale. An AI-dedicated operational system that detects, records, and connects security threats to responses in real time is infrastructure that must be built together at this very moment.
The Core of AI Security Guideline Implementation

The core of FSI guideline implementation can be summarized in three pillars.
Visibility: Every event occurring in the AI system must be recorded and traceable without exception. Input data, model outputs, API call patterns, and agent action histories must all be preserved in an auditable format.
Detection: Beyond simple log storage, anomalous patterns must be automatically detectable. A system must learn baselines of normal model behavior and detect deviations from those baselines in real time. This must be a structure where the system automatically detects and raises alarms, not humans analyzing logs.
Response: Immediate response to detected anomalies must be possible. This includes blocking suspicious queries, sending alerts to responsible personnel, or rolling back problematic agent operations.
Tynapse implements these three pillars in a single platform. We record AI system inputs and outputs in real time and provide audit trails. We automatically detect and mask PII and sensitive information in outputs, and detect and block prompt injection and jailbreak attempts through a dual layer of regex-based first filtering and LLM classification. We block repeated model extraction attempts through query-pattern-based rate limiting. In AI agent environments, we control per-agent authorization scope, detect anomalous behavior deviating from normal baselines in real time, and alert immediately. The entire process of detection, recording, and response required by the FSI guidelines can be operated within a single framework.
Closing Thoughts
There are two ways financial institutions can approach AI security guidelines. One is "minimal compliance to meet regulatory requirements," and the other is "practical standards for operating AI safely."
The 5 security requirements presented by the FSI guidelines ultimately converge on a single question: "Is our financial institution's AI system operating normally at this very moment, and how can we know?"
As AI adoption accelerates, the risk for financial institutions that cannot answer this question grows exponentially. In particular, as accountability shifts toward the enterprise, it is becoming increasingly clear that responsibility for AI decisions and actions falls on the financial institution as the operating entity. Guideline compliance is not optional. It is a fundamental requirement for any financial institution operating AI, and establishing response systems after an incident occurs is too late. Now is the time to start preparing.
Tynapse is an AI Trust Layer that records every AI action, detects anomalies, and automatically verifies regulatory compliance. For officers reviewing FSI guideline implementation or looking to audit the security posture of currently operating AI systems, we offer guideline implementation diagnostics, product demos, and officer meetings. Please feel free to reach out.

