Prompting Practices for (Social) Scientists

Successful use of large language models depends on thoughtful prompting: interacting with these systems is more like communicating with people than programming computers. This article offers guidelines to help scientists apply these models effectively.

Large Language Models (LLMs) are ubiquitous. Unlike previous computational tools, LLMs accept an effectively unbounded range of inputs and produce an equally unbounded range of outputs, and getting the desired output from an LLM can be difficult. While a model's behavior depends on a variety of factors, including model family and size, far and away the most important is the model's prompt. Prompting a language model, however, is different from programming a traditional computational tool. There are infinite ways to use a programming language to carry out a task, but once written, code runs deterministically: the same inputs always produce the same outputs. LLMs are fundamentally different.

It may be more helpful to think of prompting an LLM as telling a person how to do a task. There are many ways you might phrase things, and you cannot be sure how the person will respond. Over the course of your life, however, you build up intuition for working with different people: you discover ways to explain a challenging concept or to check for understanding. Working with LLMs is similar. Different tasks and different models require different prompts, yet across models and tasks there are principles and practices that carry over. Because LLM behavior is so variable, we provide practices and suggestions to help scientists use LLMs in a scientifically defensible manner.
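The contrast between deterministic code and stochastic model output can be sketched in a few lines of Python. Here `toy_llm` is a hypothetical stand-in, not a real model API: it samples from a small fixed set of completions to mimic the way an LLM's decoding can return different answers to the same prompt.

```python
import random

# A traditional program: deterministic, the same input always
# produces the same output.
def word_count(text):
    return len(text.split())

# A toy stand-in for an LLM (for illustration only). Repeated calls
# with the same prompt may return different completions, because the
# output is sampled rather than computed deterministically.
def toy_llm(prompt, temperature=1.0, seed=None):
    rng = random.Random(seed)
    completions = [
        "Paris.",
        "The capital of France is Paris.",
        "Paris, France.",
    ]
    weights = [3.0, 2.0, 1.0]
    if temperature > 0:
        # Higher temperature flattens the distribution, making the
        # less likely completions more probable.
        weights = [w ** (1.0 / temperature) for w in weights]
    return rng.choices(completions, weights=weights, k=1)[0]

prompt = "What is the capital of France?"
print(word_count(prompt))  # always 6
print(toy_llm(prompt))     # may vary from run to run
```

Fixing the random seed makes the toy model repeatable, which is one reason prompting studies often report sampling settings alongside the prompt itself.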