Show HN: PromptNinja – Test your prompts against adversarial attacks
I built this tool to help anyone writing prompts test them against various types of adversarial inputs and edge cases. It helps identify:
- Prompt injection vulnerabilities - Unexpected behaviors - Edge case failures - Prompt stability issues
Whether you're building AI app, using ChatGPT, or just want to write better prompts, PromptNinja simulates different attack patterns and gives you a battle score based on how well your prompt holds up against various challenges.
Try it here: https://langtail.com/prompt-ninja
Feedback and suggestions welcome!
This post does not have any comments yet