#Advertisement

NITDA Advisory on ChatGPT Vulnerabilities Enabling Data-Leakage Attacks

NITDA Advisory on ChatGPT Vulnerabilities Enabling Data-Leakage Attacks

The National Information Technology Development Agency NITDA, through its Cybersecurity Emergency Readiness and Response Team (CERRT.NG), has released a crucial advisory addressing newly discovered vulnerabilities in OpenAI’s GPT-4o and GPT-5 model family. These flaws expose users and organizations to potential data-leakage, unauthorized actions, and long-term manipulation through indirect prompt injection attacks.

Overview of the Vulnerabilities

According to the advisory, seven major security vulnerabilities were identified within ChatGPT models that attackers can exploit without direct user interaction. These vulnerabilities allow malicious actors to embed hidden instructions inside webpages, comments, crafted URLs, and search results. When ChatGPT processes or summarizes this content, it may unknowingly execute harmful commands.

You May want to also read Apply: FMYD Waste To Wealth Youth Empowerment Initiative 2025/2026

Some of the highlighted risks include

  • Attackers bypassing safety protections using trusted domains
  • Markdown rendering bugs used to conceal malicious content
  • Memory-poisoning techniques that alter future model behavior

Although OpenAI has patched parts of the issue, the advisory emphasizes that Large Language Models (LLMs) still struggle to differentiate legitimate user intent from maliciously planted instructions.

Impact on Users and Organizations

The risks associated with these vulnerabilities are significant. Users may unknowingly trigger harmful actions without clicking or interacting with anything directly. The threats include:

  • Unauthorized access or manipulation
  • Information leakage
  • Altered or misleading outputs
  • Long-term behavioural manipulation caused by poisoned AI memory

This is especially concerning in environments where ChatGPT interacts with live search data or unvetted web content that may contain hidden payloads.

Recommended Preventive Measures

To minimize exposure and safeguard digital environments, the advisory recommends the following actions:

  • Limit or disable browsing and summarization features when dealing with untrusted websites, especially in enterprise settings.
  • Enable ChatGPT’s browsing or memory capabilities only when necessary for operational tasks.
  • Regularly update and patch GPT-4o and GPT-5 models to ensure all known vulnerabilities are addressed promptly.

These measures help reduce the chances of indirect prompt injection and memory-poisoning exploits.

You May want to also read Apply With Link FG Launches Student Venture Capital Grant (S-VCG)

Final Thoughts

As AI becomes more deeply integrated into business processes, ensuring its safe and secure deployment is essential. The NITDA CERRT.NG advisory highlights the importance of continuous monitoring, responsible usage, and proactive cybersecurity practices to protect users from emerging AI-driven threats.

For further inquiries, NITDA encourages the public to contact CERRT.NG through their official channels.

Post a Comment

0 Comments