I’ve been experimenting with using LLMs locally for generating datasets to test Harper against. I might write a blog post about the technique (which I am grandiosely calling “LLM-assisted fuzzing”), but I’m going to make you wait.
I’ve written a little tool called `ofc` that lets you insert Ollama into your Bash scripts. I think it’s pretty neat, since it lets you do some pretty cool things with very little effort.
For example, you can swap out the system prompt, so if you want to compare behavior across prompts, you can just toss it in a loop:
```bash
#!/bin/bash

subreddits=("r/vscode" "r/neovim" "r/wallstreetbets")

# Loop over each item in the list
for subreddit in "${subreddits[@]}"; do
  echo "++++++++ BEGIN $subreddit ++++++++"
  ofc --system-prompt "Assume the persona of a commenter of $subreddit" "What is your opinion on pepperjack cheese?"
done
```
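Since `ofc` writes to standard output, you can also capture each persona's answer to its own file for side-by-side comparison afterward. A minimal sketch, assuming only the `--system-prompt` interface shown above (the file-naming scheme is my own invention):

```shell
#!/bin/bash

subreddits=("r/vscode" "r/neovim" "r/wallstreetbets")

for subreddit in "${subreddits[@]}"; do
  # Strip the "r/" prefix to build a filename, e.g. "vscode.txt".
  out="${subreddit#r/}.txt"
  ofc --system-prompt "Assume the persona of a commenter of $subreddit" \
      "What is your opinion on pepperjack cheese?" > "$out"
done
```

Afterward you can `diff` or `grep` the resulting files to see how the persona changes the model's tone.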
Or, you can instruct a model to prompt itself:
```bash
ofc --system-prompt "$(ofc "Write a prompt for a large language model that makes it think harder.")" "What is a while loop?"
```
`ofc` is installable from either crates.io or its repository:
```bash
cargo install ofc --locked
# Or...
cargo install --git https://github.com/elijah-potter/ofc --locked
```