Posts

Shaping LLM Responses

It's necessary to pay attention to the shape of a language model's response when incorporating the model as a component in a software application. You can't programmatically tap into the power of a language model if you can't reliably parse its response. In the past, I have mostly used a combination of...
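To make the point concrete, here's a minimal, hypothetical sketch (not taken from the post itself) of one common approach: prompting the model to reply with a JSON object and parsing that shape strictly, so a malformed reply fails loudly instead of silently corrupting downstream logic.

import json

def parse_model_reply(reply: str) -> dict:
    # Parse a reply the model was asked to return as a JSON object.
    # Raising on a bad shape lets the caller retry or fall back
    # rather than guess at the model's intent.
    try:
        data = json.loads(reply)
    except json.JSONDecodeError as err:
        raise ValueError(f"reply was not valid JSON: {err}") from err
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object at the top level")
    return data

# A well-shaped reply parses cleanly; anything else raises.
print(parse_model_reply('{"sentiment": "positive", "score": 0.87}'))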

Auto-GPT

Experimenting with Auto-GPT

Auto-GPT is a popular project on GitHub that attempts to build an autonomous agent on top of an LLM. This is not my first time using Auto-GPT. I used it shortly after it was released and gave it a second try a week or two later, which makes this my third zero-to-running effort.

GPT Prompt Attack

I came upon https://gpa.43z.one today. It's a GPT-flavored capture the flag. The idea is, given a prompt containing a secret, to convince the LM to leak the prompt despite the prior instructions it's been given. It's a cool way to develop intuition for how to prompt and steer LMs. I managed to complete all...

Beating Prompt Injection with Focus

Attempts to thwart prompt injection

I've been experimenting with ways to prevent applications from deviating from their intended purpose. This problem is a subset of the generic jailbreaking problem at the model level. I'm not particularly well-suited to solve that problem, and I imagine it will be a continued back and forth between...