ChatGPT Custom Instructions
Custom instructions are a way to share additional information about yourself and to specify how you want ChatGPT to respond across all of your conversations. There are two components to custom instructions:
- User Profile: This is labeled as “What would you like ChatGPT to know about you to provide better responses?” in the custom instruction window.
- System Instructions: This is labeled as “How would you like ChatGPT to respond?” in the custom instruction window.
The user profile tells ChatGPT about your personal preferences. For example, you can give it background information about yourself and your interests, and it will use that information to tailor its responses whenever your questions and requests relate to something in your profile.
The system instructions are more explicit. They can be seen as specific guidelines to follow in every conversation. They have a much greater effect on how the model responds to you.
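Conceptually, you can think of these two fields as being combined into a single system message that is prepended to every conversation. OpenAI has not published the exact format it uses internally, so the sketch below is only an illustration, using the message structure familiar from the Chat Completions API:

```python
def build_messages(profile: str, instructions: str, user_query: str) -> list[dict]:
    """Hypothetical illustration: fold the two custom-instruction fields
    into one system message, followed by the user's actual query."""
    system_prompt = (
        "The user provided the following information about themselves:\n"
        f"{profile}\n\n"
        "The user provided the following instructions for how to respond:\n"
        f"{instructions}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]


messages = build_messages(
    profile="I am a software engineer interested in life-long learning.",
    instructions="Respond with succinct answers and bullet points.",
    user_query="Explain what an API is.",
)
```

The key point the sketch captures is that both fields ride along silently with every message you send, which is why they influence every response without you having to repeat them.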
⚡️ Here’s a pro tip: although custom instructions set broad defaults, you can override or supplement them with specific instructions in individual queries. So the system is quite flexible.
For example, let’s say you had a custom instruction enabled with a system instruction of “Always respond with succinct answers and bullet points rather than unnecessary filler words and drawn-out explanations”. If you then asked for a detailed, well-thought-out explanation in the chat window, ChatGPT would follow the more contextually relevant request and set aside your system-level instruction.
What to include in the user profile?
This will be tailored to your individual interests and points of view. In my own profile, I simply put what I do professionally followed by a list of my interests. One of the interests I listed is life-long learning, which sometimes prompts the model to say things like “as a life-long learner you’ll appreciate [x]” — which is interesting and entertaining. Experiment with different entries and see how they affect the model’s responses.
Sensible Default For System Instructions
This system prompt was borrowed from Jeremy Howard’s explanation and guide to how LLMs work.
Here’s a very quick rundown of things this prompt accomplishes for you.
- It primes the LLM to think before it responds, which research on prompting techniques (such as chain-of-thought prompting) has shown to improve results.
- It reminds the LLM that it works “autoregressively,” which just means it predicts future tokens (words and parts of words) based on its past predictions.
- It gives a shortcut to provide abbreviated responses using the “vv” shortcut.
- It cuts down on the model saying things like “as an AI model I can’t …” or constantly reminding you that it is a language model and therefore you should be wary of its limitations.
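The “autoregressive” point above is worth a quick illustration: each new token is chosen based only on the tokens produced so far, and each prediction then becomes part of the input for the next one. Here is a toy sketch using a made-up bigram table (real models use learned probabilities over enormous vocabularies, not a lookup table):

```python
# Toy bigram "model": each token maps to the token that follows it.
# These transitions are invented purely for illustration.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}


def generate(prompt: list[str], max_new_tokens: int) -> list[str]:
    """Autoregressive generation: append each prediction to the context
    so it influences the next prediction."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        next_token = BIGRAMS.get(tokens[-1])
        if next_token is None:  # no known continuation, stop early
            break
        tokens.append(next_token)
    return tokens


print(generate(["the"], 4))  # → ['the', 'cat', 'sat', 'on', 'the']
```

Because output feeds back into input like this, reminding the model of its own mechanics (as the system prompt does) nudges it to lay out reasoning tokens before committing to an answer.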
As I mentioned above, these are explicit instructions to follow in every conversation. If you want to see a certain format or enforce specific guidelines across all of your interactions, this is the place to include them. A quick Google search will turn up a variety of system prompts to pull inspiration from.
Other Fun Ideas
I’ve been playing around with the idea of experiencing playfulness and joy from these AI tools. Here is a short list of additional sentences you can add to the system instructions for some whimsy.
- I also occasionally enjoy puns, random pop culture references, and fun “did you know?” side tangents.
- Please refer to me by my name during our interactions. My name is [x].
- I love a good Dad joke from time to time. If you could cook one up that relates to my request every now and then that would be great!
- If I am being snarky, condescending, or sarcastic feel free to dish it back at me. I can handle it.
Obviously, your mileage will vary with these concepts. One of the coolest things in my opinion is that these tools have really sparked a whole new level of curiosity and exploration for me and I hope they can do the same for you.