I've been experimenting with running LLMs locally to generate datasets to test Harper against. I might write a blog post about the technique (which I am grandiosely calling "LLM-assisted fuzzing"), but I'm going to make you wait.
I've written a little tool called ofc that lets you insert Ollama into your bash scripts.
I think it's pretty neat, since it makes some genuinely cool things very easy to do.
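At its simplest, an ofc call slots into a script like any other command. Here's a minimal sketch of that idea (the git one-liner is my own example, and it assumes Ollama is running locally with a default model available to ofc):
# Use a local model to summarize the most recent commit message.
# (Hypothetical example: assumes ofc falls back to a default Ollama
# model when none is specified.)
ofc "Summarize this commit message in one sentence: $(git log -1 --pretty=%B)"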
For example, you can swap out the system prompt, so if you want to compare behavior across prompts, you can just toss it in a loop:
#!/bin/bash

subreddits=("r/vscode" "r/neovim" "r/wallstreetbets")

# Ask the same question under a different persona for each subreddit
for subreddit in "${subreddits[@]}"; do
  echo "++++++++ BEGIN $subreddit ++++++++"
  ofc --system-prompt "Assume the persona of a commenter of $subreddit" "What is your opinion on pepperjack cheese?"
done
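If you'd rather diff the personas' answers later instead of reading them in one stream, a small variation writes each response to its own file. This is just a sketch using the same --system-prompt flag; the file layout and names are my own choice:
#!/bin/bash

subreddits=("r/vscode" "r/neovim" "r/wallstreetbets")

mkdir -p responses

for subreddit in "${subreddits[@]}"; do
  # "r/vscode" -> "responses/vscode.txt"
  outfile="responses/${subreddit#r/}.txt"
  ofc --system-prompt "Assume the persona of a commenter of $subreddit" \
    "What is your opinion on pepperjack cheese?" > "$outfile"
done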
Or, you can instruct a model to prompt itself:
ofc --system-prompt "$(ofc "Write a prompt for a large language model that makes it think harder. ")" "What is a while loop?"
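The same trick works in two steps, which makes it easy to reuse the generated system prompt across several questions. Nothing here goes beyond the flags already shown; the extra questions are placeholders of my own:
# Generate the "think harder" system prompt once...
thinking_prompt="$(ofc "Write a prompt for a large language model that makes it think harder.")"

# ...then reuse it for multiple questions.
for question in "What is a while loop?" "What is a for loop?"; do
  ofc --system-prompt "$thinking_prompt" "$question"
done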
ofc is installable from either crates.io or its repository.
cargo install ofc --locked
# Or...
cargo install --git https://github.com/elijah-potter/ofc --locked
This post was proofread by Harper.
Writing is one of life's greater joys. It's a mental workout that often brings me a level of clarity that is hard to find elsewhere.
I have been seeing an increasingly prevalent trend of people showing up in online spaces flaunting that they are writing with the assistance of AI. They seem to be proud of this. They shouldn't be.