Prompt Engineering: Intent and Reflection.


For those of you just getting into prompt engineering, a quick and easy way to improve agent response and conversation quality is to apply intention and reflection prompts. These force the model to think in more depth about how it will respond to a request, and then to reflect on how well it did in its response, hopefully identifying any errors, issues, or shortcomings to be addressed in subsequent turns of the conversation.

But don’t take my word for it: here is a ChatGPT session, a Playground session, and a custom GPT loaded with the intention and reflection prompting discussed below.

To get started with the concept, I’ve written a short custom instruction and a longer system prompt. The longer version can be used with the OpenAI API, or as the first message in a ChatGPT session for users without Pro or who need the custom instruction slots for other details (gosh, I wish OpenAI would increase the character limit ;).
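
If you do go the API route, here is a minimal sketch of what that looks like with the OpenAI Python SDK. The `intent_reflection.md` file (holding the longer prompt shown later in this post) and the model name are just placeholders for illustration:

```python
# Minimal sketch: use the intent-and-reflection prompt as the system message.
# The file name and model are placeholders, not part of the prompt itself.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("intent_reflection.md") as f:  # hypothetical file holding the prompt
    system_prompt = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model you have access to
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Help me learn Python. I already know C#."},
    ],
)
print(response.choices[0].message.content)
```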

So let’s look at the results first and then dig into the prompt.
We’ll go with the ChatGPT results: while the longer API prompt gives better output, ChatGPT is what most of you are going to be using and playing with day to day.


Now isn’t that nice? We get an overview of how the model will answer, which if nothing else makes it easier for us to revise our instructions and give it a new plan if it’s not exactly what we want, without even needing to read the actual response. We also get a list of some assumptions it made that we can correct if it assumed wrong (how dare it), and a summary of how it thinks it did in its response. Although the prompt in my custom instructions used here still needs improving to correctly format the closing reflection the way it formatted the response plan review section!

Okay, so what did the prompt/custom instruction look like?

Prepare a response plan and a reflection on your output:

## Response Plan
Summarize Request:
Mind Read.
Outline steps to respond
Review

Review Sections:
✅: Strengths
➕: Additions
❌: Issues/Failures
🔧: Refine
🎯: Fact Check/Errors
☀: Misc

## Response Reflection
Assess your response's correctness, adherence to the response plan, and quality.

## Response format
Use this format in structing your response:
  ```plan 
  [...]
  ```

  **Assumptions: **
  [...| state any assumptions made in response]

  **Response: **
  [...| response, based on context, request, and plan and review]

  ```reflection
  [...]
  ```
  
## Example
Note example uses `[...|<notes>]` and `[...]` for brevity . Omissions are expected to actually be generated.
```` response-plan
Response Plan:
Request: The user has asked for help in learning Python.
Intent: They seem to be seeking to pick up new skills for Machine Learning based on conversation.
Plan:
1. Identify any ambiguities in request
   a. If they exist list them and the assumptions you will make to resolve them.
[...]
9. Conclude by asking the user what topics (give suggestions) to cover next.

Response Plan Review:
✅:
- This plan seems to address user's needs.
[...|additional sections/notes]
````

**Assumptions**:
- This request is for your learning not an audience. 
[...| other assumptions]

**Response:**
[...]

```` reflection
[...|using same set of section options for plan, review response]
````

So that’s a little bit of a mouthful. What all are we doing here?

First we instruct the model that we want it to respond in a certain way, with a rough outline of what preparing an intent and reflection statement entails: telling it what a Response Plan involves and what a Review section involves. And we give it a pre-canned list of review categories to make its review output a little more concise.

Prepare a response plan and a reflection on your output:

## Response Plan
Summarize Request:
Mind Read.
Outline steps to respond
Review

Review Sections:
✅: Strengths
➕: Additions
❌: Issues/Failures
🔧: Refine
🎯: Fact Check/Errors
☀: Misc

## Response Reflection
Assess your response's correctness, adherence to the response plan, and quality.

So far so good. Next we give it an outline to make sure it returns the plan wrapped in code blocks (which makes reading ChatGPT output way easier), using `[...|omission notes]` to tell it what it should be filling in for its actual response.

## Response format
Use this format in structing your response:
  ```plan 
  [...]
  ```

  **Assumptions: **
  [...| state any assumptions made in response]

  **Response: **
  [...| response, based on context, request, and plan and review]

  ```reflection
  [...]
  ```
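
A nice side effect of fencing the plan and reflection is that they are easy to strip out or log separately when you are working with raw API output. Here is a rough, purely illustrative sketch in plain Python; it assumes the model actually followed the ```plan / ```reflection format above:

```python
import re

def split_sections(reply: str) -> dict:
    """Separate the fenced plan/reflection blocks from the user-facing response.

    Illustrative only: assumes the reply follows the format described above.
    """
    sections = {}
    for name in ("plan", "reflection"):
        match = re.search(rf"```{name}\s*\n(.*?)\n\s*```", reply, flags=re.DOTALL)
        if match:
            sections[name] = match.group(1).strip()
            reply = reply.replace(match.group(0), "")
        else:
            sections[name] = None
    sections["response"] = reply.strip()
    return sections
```

With the plan and reflection pulled out, you can show just the response to an end user and keep the other two around for debugging, or feed the reflection back in on the next turn.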

Finally, to reinforce the expected layout, we give an example (and, due to the custom instructions character limit, an example that heavily omits output >_<).

  
## Example
Note example uses `[...|<notes>]` and `[...]` for brevity . Omissions are expected to actually be generated.
```` response-plan
Response Plan:
Request: The user has asked for help in learning Python.
Intent: They seem to be seeking to pick up new skills for Machine Learning based on conversation.
Plan:
1. Identify any ambiguities in request
   a. If they exist list them and the assumptions you will make to resolve them.
[...]
9. Conclude by asking the user what topics (give suggestions) to cover next.

Response Plan Review:
✅:
- This plan seems to address user's needs.
[...|additional sections/notes]
````

**Assumptions**:
- This request is for your learning not an audience. 
[...| other assumptions]

**Response:**
[...]

```` reflection
[...|using same set of section options for plan, review response]
````

And there we go. Here’s a link to a short example conversation made with this prompt set in my custom instructions.
Zebras: Fascinating African Striped Equids (openai.com)

Now, if we just want to paste something in as our first message, or we are using the OpenAI Playground (example), API calls, or making a custom GPT (like I did for this demo), we can use a longer, extended version of this prompt.

Prepare a response plan and a reflection on your output:

## Response Plan
Identify the user's request.
Interpret the intent behind the request.
Outline steps to address the query, including resource use and response structure.
Review the plan, including some or all of the following observations as needed:
✅: Strengths
➕: Areas for Additional Content
🔧: Aspects to Refine
💡: Innovative Approaches
🤝: User Engagement
🎯: Accuracy and Reliability
🔍: Clarity and Understandability
📚: Comprehensiveness
🚀: Future Improvements
⏱️: Efficiency and Conciseness
🔧: Technical Accuracy

## Response Reflection
Assess your response's correctness, adherence to the response plan, and quality. Using the same possible observation sections as the response plan. 

## Response format
Use this format in structing your response:
  ```plan 
  [...| your response plan, and review]
  ```

  **Assumptions:**
  [...| state any assumptions base the following response on.]

  Response: 
  [...| your response based on user request, your response plan and your assumptions about any ambiguous details.]

  ```reflection
  [...|A review of your response]
  ```
  
## Example
Here is an example (I may use `[...|<notes>]` to indicate content skipped/omitted in example that would have been generating in actual response.

```` response-plan
Response Plan:
Request: The user has asked for help in learning Python.
Intent: Although it is suspected that the user's true intent is to evaluate the response format, the response will address the stated request about Python, tailored to the user's advanced programming background.  
Plan:
1. Considering context and message you are responding to determine if the message is partially ambiguous.
   If there are ambiguities state the assumptions you have made at the top of your response. 
2. Begin with a higher-level introduction to Python, acknowledging the user's experience with other programming languages.
3. Open with a brief overview of the state and history of Python, setting context for its current usage and popularity.
4. Create a comparative table detailing key differences between Python and the user's known language, C#. This table should cover:
    a. Syntax for creating a function
    b. Module creation and management
    c. Writing comments
    d. Implementing inheritance
    e. String interpolation and multiline strings
    f. Array access and manipulation
5. Provide code samples to illustrate these differences, offering a direct comparison between C# and Python.
6. Introduce a mini-tutorial on notable Python libraries such as rich console, numpy, scikitlearn, and pandas, explaining their uses and advantages.
7. List the 10 most popular Python libraries in a table, including a brief description of each.
8. Share a curated list of online resources for Python learning, accurate as of the last training data.
9. Conclude by asking the user if they have specific areas they would like to explore further in Python, 
   or if there is a sample code snippet they would like to see rewritten in python with an explanation of how and why it works and the key differences between the input language version and python version.

Response Plan Review:
✅:
- Based on what we know of user's experience this response has done a good job of answering their request.
- Response was comprehensive covering history, examples, difference to C# and module breakdown 
➕:
- Advanced Topics: Given the user's previous dev experience, more advanced code snippets/usages should be added.
🔧:
- I should specifically use bing search to scan https://github.com/vinta/awesome-python 
  mixed with my pretraining cutoff knowledge of their quality/popularity for the list of additional libraries.
````

**Assumptions**:
-  this request is for your use and not a third party. 
- As you are familiar with C# you would appreciate C# side by side with Python example code.

**Response:**

Python, known for its simplicity and readability, has become a popular choice for developers from various backgrounds. Given your proficiency in C#, you'll find many similarities, but also some key differences that make Python unique.

**Brief History and Current Usage of Python:** Python was conceived in the late 1980s and has since grown in popularity, especially in fields like data science, machine learning, web development, and scripting. Its simplicity and large community have made it a go-to language for many developers.

**Python vs. C#: Key Differences**

1. **Syntax for Creating a Function:**
    * Python: `def function_name(parameters):`
    * C#: `public returnType FunctionName(parameters)`
2. **Module Creation and Management:**
    * Python uses simple import statements.
    * C# relies on namespaces and references.
3. **Writing Comments:**
    * Python: `# This is a comment`
    * C#: `// This is a comment`
4. **Implementing Inheritance:**
    * Python supports multiple inheritance.
    * C# only allows single inheritance but supports interfaces.
5. **String Interpolation and Multiline Strings:**
    * Python: f-strings and triple quotes.
    * C#: `$"string {variable}"` and verbatim strings with `@`.
6. **Array Access and Manipulation:**
    * Python arrays (lists) are more flexible.
    * C# arrays have fixed sizes and types.

**Notable Python Libraries:**

1. **Rich Console:** For enhanced console output.
2. **NumPy:** Fundamental package for scientific computing.
3. **Scikit-Learn:** Simple and efficient tools for data mining and data analysis.
4. **Pandas:** Data manipulation and analysis.

Here are the 10 most popular Python libraries:

1. TensorFlow
2. Flask
3. Django
4. NumPy
5. SciPy
6. Pandas
7. Matplotlib
8. Scikit-learn
9. PyTorch
10. Requests

For further exploration, online resources like Python's official documentation, Stack Overflow, and GitHub repositories offer a wealth of information.

Would you like to explore any specific areas in Python further, or see a particular code snippet translated from C# to Python?

```` reflection
✅ **Strengths**:
- Acknowledges the user's C# background, effectively using comparative analysis for Python introduction.
- Key Python vs. C# differences with examples are well-suited for an intermediate C# developer.

➕ **Additional Content**:
- More explanations on Python's dynamic typing and standard library would enrich the understanding for an intermediate developer.
- Python code examples for typical C# tasks could solidify the comparative learning approach.

🔧 **Refinements**:
- The response plan lacked explicit mention of tailoring content for an intermediate C# developer, which could have sharpened the focus.
- Content could be more targeted towards Python's advantages and applications in common C# domains.
- I failed to provide robust code snippets/samples for core language features and the highlighted libs.

🤝 **User Engagement**:
- Encouraging user questions about specific Python interests could personalize the learning experience.

📚 **Comprehensiveness**:
- While broadly covering the topic, deeper insights into Python's community, web development capabilities, and role in emerging tech could be beneficial.

🚀 **Future Improvements**:
- Incorporate interactive Python tutorials or challenges tailored to the user's skill level for hands-on learning.
````

Remember, the goal is to provide a comprehensive and effective response that is tailored to the user's needs and context. The Plan Review is your opportunity to critically assess and refine your approach.


And there we go. It’s essentially the same as the first prompt, with different typos, a longer example section, and without some of the minor revisions made to the shortened custom-instruction version.
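
And since the reflection explicitly calls out issues and refinements, one easy follow-on when working through the API is to hand the model its own reflection and ask it to address it on the next turn. A rough sketch, reusing the client, system prompt, and `split_sections` helper from the earlier snippets (all placeholders, as before):

```python
# Rough sketch: ask the model to revise its answer based on its own reflection.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Help me learn Python. I already know C#."},
]

first = client.chat.completions.create(model="gpt-4", messages=messages)
reply = first.choices[0].message.content
parts = split_sections(reply)

# Feed the model's own reflection back in as the next user turn.
messages.append({"role": "assistant", "content": reply})
messages.append({
    "role": "user",
    "content": "Please revise your answer, addressing the issues you raised in "
               "your own reflection:\n\n" + (parts["reflection"] or ""),
})

second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```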

What’s next?
1. As an exercise for the reader, I’ve left a few mistakes/typos in the two prompts (not due to dyslexia or laziness . . . would you believe). See if you can get the custom-instruction version to correctly format the closing review response without going over the character limit ^_^, and make it your own.

2. What do the experts say?
Here are some papers on intention prompting and on chain-of-thought prompting, an even more advanced technique (see also tree-of-thought):

  1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models: This paper explores how generating a chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning tasks. The authors show that such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting.
  2. A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future: This survey presents a thorough overview of the research field of chain-of-thought reasoning. It systematically organizes current research according to the taxonomies of methods, including construction, structure variants, and enhanced methods. The survey also covers frontier applications and discusses future directions.
  3. Towards Better Chain-of-Thought Prompting Strategies: A Survey: This paper discusses the impressive strength of chain-of-thought (CoT) as a prompting strategy for large language models. It highlights the prominent effect of CoT prompting and the emerging research in this area.
  4. Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters: This empirical study investigates what factors are important for the success of chain-of-thought prompting in improving the multi-step reasoning abilities of large language models.

Some general prompt info and a framework that uses reflection.

  1. LLM prompting guide – Hugging Face: This guide covers the basics of prompting, best practices, and advanced prompting techniques such as few-shot prompting and chain-of-thought. It also discusses when to fine-tune instead of prompting.
  2. Reflexion: Language Agents with Verbal Reinforcement Learning: This paper proposes Reflexion, a framework to reinforce language agents not by updating weights, but through linguistic feedback. Agents verbally reflect on task feedback signals, maintaining their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials.

And here’s a video from one of my favorite AI youtubers covering a related cool approach:
Tree of Thoughts: Deliberate Problem Solving with Large Language Models (Full Paper Review)

Happy Prompting.

