@technadu@infosec.exchange
GPT-5 can still be jailbroken.
Echo Chamber + storytelling attacks bypassed its safety guardrails.
1,000+ adversarial prompts exposed vulnerabilities in both raw and basic safety configurations.
Layered runtime guardrails remain essential.
#GPT5 #EchoChamber #Jailbreaks #AIPrompts #AISafety #PromptEngineering #LLM