Is 99% Enough?
Published:
Thoughts on the practicality of LLM Robustness research
Tags: jailbreaking, machine learning, prompt injection, security