
All the prompt engineering techniques I've seen seem to focus on telling the model what to do, e.g. few-shot prompting.

Is there any value in giving the model examples of what not to do? Can you link me to any papers/techniques on the topic?

Example

I am building a bot to improve students' foreign language writing skills.

Bad output: Corrected spelling of 'heisse' to 'heiße' because 'heiße' is the correct spelling in German.

Better output: Corrected spelling of 'heisse' to 'heiße' because 'ss' can be combined to form 'ß' in German.

I could solve this specific problem using few-shot prompting. But really, I want to tell the model: "don't give answers like 'this is how it is done in German'; instead, explain what is being done and the reasons for it".

I may have answered my own question there... just put what I said above in the system prompt?
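For concreteness, something like this (a minimal sketch using the OpenAI Python SDK; the model name and exact wording are placeholders):

```python
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY in the environment

client = OpenAI()

# The "what not to do" instruction lives in the system prompt.
system_prompt = (
    "You are a bot that improves students' foreign-language writing. "
    "When you correct a mistake, do NOT just assert 'this is how it is "
    "done in German'. Instead, explain the rule being applied and why, "
    "e.g. \"'ss' can be combined to form 'ß' in German\"."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Ich heisse Anna."},
    ],
)
print(response.choices[0].message.content)
```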

codeananda

3 Answers


The Super-NaturalInstructions paper tested including negative (i.e., incorrect) examples during testing and instruction-tuning, along with adding explanations for why an included few-shot example was correct or incorrect. It seems this doesn't improve accuracy, at least in their configuration (see Table 4).
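For illustration, a demonstration with a negative example and an explanation in roughly that style might look like this (the wording is mine, not taken from the paper):

```
Definition: Correct the spelling in the student's German sentence and
explain the correction.

Positive Example:
  Input: Ich heisse Anna.
  Output: Ich heiße Anna. ('ss' can be combined to form 'ß' in German.)
  Explanation: The output states the rule behind the correction.

Negative Example:
  Input: Ich heisse Anna.
  Output: Ich heiße Anna. (That is simply the correct spelling in German.)
  Explanation: The output asserts the correction without explaining the rule.
```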

That said, if you're using a model like ChatGPT, as @nbro mentioned, you could probably just prompt the model not to do something.

Alexander Wan

Actually, my previous answer is a bit out of date. There's a recent paper that does what you're looking for: "PREADD: Prefix-Adaptive Decoding for Controlled Text Generation."

At inference time, they do two forward passes. The first is your normal generation; the second is prefixed with a prompt that elicits the undesirable outputs.

In your example, the normal forward pass could be "Improve the student's grammar." and the bad forward pass could be "You are a terrible language teacher. Improve the student's grammar." You can then sample tokens that are likely under the first forward pass but unlikely under the second (bad) one. See Section 3 in the paper for the exact formulation; a rough sketch of the idea is below.
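Here's a minimal sketch of that contrastive-decoding idea using the Hugging Face transformers library. The model, prompts, and strength value are illustrative, and the logit combination below is a simplification of PREADD's actual formulation (which operates on log-probabilities; again, see Section 3):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

base_prompt = "Improve the student's grammar."
neg_prompt = "You are a terrible language teacher. Improve the student's grammar."
alpha = -1.0  # negative strength steers generation *away* from the bad prefix

base_ids = tokenizer(base_prompt, return_tensors="pt").input_ids
neg_ids = tokenizer(neg_prompt, return_tensors="pt").input_ids

generated = []
for _ in range(40):  # generate at most 40 new tokens
    with torch.no_grad():
        base_logits = model(base_ids).logits[:, -1, :]
        neg_logits = model(neg_ids).logits[:, -1, :]
    # Tokens that become *more* likely under the bad prefix get pushed down;
    # tokens likely under the plain prompt alone survive.
    combined = base_logits + alpha * (neg_logits - base_logits)
    next_token = combined.argmax(dim=-1, keepdim=True)  # greedy, for simplicity
    if next_token.item() == tokenizer.eos_token_id:
        break
    # Append the chosen token to BOTH contexts so they stay in sync.
    base_ids = torch.cat([base_ids, next_token], dim=-1)
    neg_ids = torch.cat([neg_ids, next_token], dim=-1)
    generated.append(next_token.item())

print(tokenizer.decode(generated))
```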

Alexander Wan

Does Negative Prompting Exist?

Yes, e.g. in some text-to-image generation models such as https://app.leonardo.ai/ai-generations:

(screenshot: Leonardo.ai's generation settings, including a negative prompt field)

One can run such negative prompts on one's computer, e.g. with Automatic1111's Stable Diffusion WebUI (Windows 10/11 installer):

(screenshot: the negative prompt field in the Stable Diffusion WebUI)
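The same effect is also available programmatically, e.g. via the `negative_prompt` argument in Hugging Face's diffusers library (the model ID and prompts below are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a portrait photo of an astronaut",
    negative_prompt="blurry, low quality, extra fingers",  # what NOT to generate
).images[0]
image.save("astronaut.png")
```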

The paper {1} describes one example of how negative prompts can be implemented in LLMs.


References:

Franck Dernoncourt