
If you’ve used any AI system, you’ve likely noticed that guardrails are in place to prevent misuse, harm, and bias. These guardrails stop users from, for example, requesting forged documents or coaxing the model into saying something offensive. However, your mileage may vary on how adequate these guardrails are. And what happens if these guardrails are not just ineffective, but purposefully compromised?