LLM Prompt Injection Detection API Service PoC.
Updated Nov 5, 2025 · Go
FRACTURED-SORRY-Bench: This repository contains the code and data for an automated multi-shot jailbreak framework, as described in our paper.
Bidirectional Security Framework for Human/LLM Interfaces. RC9-FPR4 baseline frozen: attack success rate (ASR) 2.76%, Wilson upper bound 3.59% (GATE PASS); false positive rate (FPR) stratified by corpus: doc_with_codefence 0.79% upper bound (GATE PASS), pure_doc 4.69% upper bound. RC10.3c development integrated (semantic veto, experimental). Tests: 833/853 passing (97.7%), MyPy clean, CI green. Ready for shadow deployment.