LLM Prompt Injection
Jailbreaking as prompt injection
I've been keeping an eye out for language models that can run locally, so that I can use them on personal data sets for tasks like summarization and knowledge retrieval without sending all my data up to someone else's cloud. Anthony sent me a link to a Twitter thread about a product called DeepSparse...
If you want to try running these examples yourself, check out my writeup on using a clean Python setup.
Since the launch of GPT-3, and more notably ChatGPT, I've had a ton of fun learning about and playing with emerging tools in the language model space.
I believe it is important for engineers to care about code quality. Some teams and companies make specific, targeted efforts to keep the quality of their codebases high. The existence of activities like "spring cleaning", "test Fridays", and "Fixit week" asserts the importance of code...
Unix commands are great for manipulating data and files, and they get even better when used in shell pipelines. The following are a few of my go-tos -- I'll list each command with an example or two. While many of the commands can be used standalone, I'll provide examples that assume the input is piped...
I ran into an odd UNIX filename issue while writing Go code the other day.
Delve is a debugger for the Go programming language. The goal of the project is to provide a simple, full featured debugging tool for Go.
Scoping in Go is built around the notion of code blocks. You can find several good explanations of how variable scoping works in Go on Google. I'd like to highlight one slightly unintuitive consequence of Go's block scoping if you're used to a language like Python, keeping in mind that this example does...
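The post's own example is truncated here, but the classic block-scoping gotcha it alludes to can be sketched as follows (a minimal illustration of my own, not the post's original code; the `shadowed` function name is a placeholder):

```go
package main

import "fmt"

// shadowed demonstrates Go's block scoping: the inner `:=` declares a
// brand-new variable that shadows the outer x only inside the if block,
// so the assignment never reaches the outer variable.
func shadowed() (inner, outer int) {
	x := 1
	if true {
		x := 2 // new variable; shadows the outer x within this block
		inner = x
	}
	outer = x // still 1: the inner assignment touched a different x
	return inner, outer
}

func main() {
	inner, outer := shadowed()
	fmt.Println(inner, outer) // prints "2 1"
}
```

In Python, assigning to `x` inside an `if` would rebind the enclosing function's variable; in Go, `:=` inside a block creates a fresh one, which is why accidental shadowing (especially of `err`) is a common source of bugs.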