Ryan Wersal - Learnings

🗣️ Prompt Engineering: Speaking AI's Language

I went into yesterday and today expecting to dive into prompt engineering: crafting meticulous, targeted language to effectively “program” the AI and its responses. However, that is not what happened - at all.

In fact, the prompting exercise was this very blog, which was created exclusively in Cursor using incredibly simple prompts. Perfectly natural language, even! It was hardly programming of any variety.

A few example prompts from the exercise:

I want to create a blog using Hugo.
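Under the hood, that single request boils down to the standard Hugo bootstrap. A rough sketch of what it amounts to (the site name, theme, and package manager here are placeholders, not necessarily what Cursor actually ran):

```bash
# Illustrative sketch only; the exact commands and theme Cursor used may differ.
brew install hugo                 # or your package manager of choice
hugo new site blog && cd blog     # generate the basic site scaffolding
git init
git submodule add https://github.com/adityatelange/hugo-PaperMod themes/PaperMod
echo "theme = 'PaperMod'" >> hugo.toml
hugo server -D                    # preview locally, including drafts
```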

That one prompt got me the blog: it even installed the hugo CLI and generated the basic scaffolding from initializing a new blog site. However, I noticed that the About and Archive pages were both showing up in the Archive page listing instead of only blog posts. Shockingly, the following prompt resolved the issue:

The archive page is showing up in the archive page.

No material context. No hint at what the problem was, and no proposed fix. But it figured out the intent and solved the problem by filtering the listing to only blog posts.
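I didn’t inspect the exact diff, but the fix amounts to narrowing what the archive template iterates over. In a typical Hugo list template that kind of filter looks something like this (the template path and the “blogs” section name are assumptions here, not necessarily what my theme uses):

```html
<!-- layouts/_default/archives.html (illustrative; the real template and section vary by theme) -->
<ul>
  {{/* Only list regular pages from the blog section, so About and Archive stay out of the listing */}}
  {{ range where .Site.RegularPages "Section" "blogs" }}
    <li>
      <a href="{{ .RelPermalink }}">{{ .Title }}</a>
      <time>{{ .Date.Format "2006-01-02" }}</time>
    </li>
  {{ end }}
</ul>
```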

Finally, I noticed the default favicon and wanted to use my usual one, which is readily fetched from Gravatar. The following prompt knocked out a quick bash script to curl the image and place it into the right static directory; the whole thing was done in about a minute.

Fetch the gravatar for and use it as the site’s favicon.
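A script along those lines looks roughly like this (a sketch rather than the exact code Cursor wrote; the email address and image size are placeholders):

```bash
#!/usr/bin/env bash
# Sketch of a Gravatar-fetching favicon script; email and size are placeholders.
set -euo pipefail

EMAIL="someone@example.com"   # stand-in for the real address

# Gravatar images are keyed by the MD5 hash of the trimmed, lowercased email.
HASH=$(printf '%s' "$EMAIL" | tr '[:upper:]' '[:lower:]' | md5sum | awk '{print $1}')

# Drop it into Hugo's static/ directory so it ships at /favicon.ico.
curl -fsSL "https://www.gravatar.com/avatar/${HASH}?s=256" -o static/favicon.ico
```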


That takes me into today: what about Cursor Rules and similar system-style prompts? What are the important traits of successful system prompts?

Fortunately, there are plenty of examples out there. sharkgwy has a repo containing an old version of the v0 prompt. There’s also the System Prompts & Models of AI Tools repo from x1xhlol, with an extensive array of system prompts, including a more current v0 prompt.

A few quick takeaways:

  • Most of the prompting feels under-prompted: there’s a lot of “whitespace” left for the AI to operate in.
  • The minimal prompting is probably aimed at reducing token counts, but there are certainly counterexamples, such as this verbose example of Mermaid in v0’s prompt.
  • The prompt is heavily structured around constraints rather than capabilities. It’s full of “MUST” and “NEVER” statements that define boundaries rather than explaining what’s possible.
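A few made-up lines in that constraint-heavy style (illustrative only, not quotes from v0’s actual prompt):

```text
You MUST respond with valid, runnable code for every code request.
You MUST NOT invent package names or APIs; if unsure, say so explicitly.
NEVER expose secrets or API keys in generated code.
ALWAYS prefer the project's existing utilities over adding new dependencies.
```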

OpenAI has a fantastic section on prompt engineering. I found the Few Shot Learning section of particular interest, as it explains the example-laden style employed in the system prompts.
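To make that concrete, a minimal few-shot call against the OpenAI chat completions API might look like this (the model name and the toy classification task are placeholders I picked for illustration). The earlier user/assistant turns act as worked examples that steer how the final message gets answered:

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "Classify each support ticket as positive, negative, or neutral."},
      {"role": "user", "content": "The new dashboard is fantastic!"},
      {"role": "assistant", "content": "positive"},
      {"role": "user", "content": "I waited two hours and nobody replied."},
      {"role": "assistant", "content": "negative"},
      {"role": "user", "content": "My invoice arrived today."}
    ]
  }'
```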