Techniques to Enhance Conversations with ChatGPT and Other AI ChatBots

By Marcelo Barbosa

Understanding how an AI ChatBot works can seem complex. Let's try simplifying this with an everyday scenario. Imagine deciding whether to watch a drama movie or a comedy for your Friday night entertainment. You would probably take several factors into account - your mood, movie reviews, the actors, the storyline, and maybe even the film director. This decision-making process involves self-awareness, research, reflection, and decision-making.

However, an AI model like ChatGPT doesn't "think" or "decide" in the same way we do. When presented with a task, it sees the question as a sequence of words or "tokens", processing each token individually. This is a fundamentally different process from human cognition. Unlike us, these models don't possess self-awareness - they can't identify gaps in their knowledge, ponder on the task at hand, or retrospectively correct their mistakes. They merely continue producing word sequences.

Many of the techniques used in crafting prompts can be interpreted as a way of emulating human System 2 thinking. For those not familiar with Dual Process Theory, it posits that people employ two different cognitive systems: System 1 and System 2. System 1 is a fast, automatic process, similar to how a Large Language Model (LLM) samples tokens, managing everyday tasks without conscious effort.

Conversely, System 2 is representative of the slower, more calculated aspect of human cognition. This part of the brain is engaged when we tackle complex problems, make significant decisions, or carry out careful planning. Hence, the techniques used to refine and optimize prompts in language models can be viewed as efforts to approximate this System 2 type of thinking, introducing a more deliberate and thoughtful approach to language processing.

Here are some valuable techniques to amplify the utility of AI ChatBots like ChatGPT, helping them emulate more human-like responses:

1. Be Explicit About What You Want

Remember that these AI models aim to replicate human-like responses, and not necessarily to be accurate all the time. Therefore, if you want a good answer, you need to explicitly ask for it. You might say, "Please consider the movie's genre, reviews, actors, and my preference for light-hearted plots in your recommendation." The more specific you are with your request, the more likely you are to get the response you're looking for.

2. Provide context and relevant information

Feeding the model pertinent information can significantly improve its performance, because the model uses whatever appears in the prompt to generate its response. If specific context is relevant to your request, include it in the prompt. For example, when asking for a movie recommendation, additional context like your previous favourite movies or your dislike for certain actors can result in a more personalized and satisfactory recommendation.

3. Use explicit constraints and guidelines

If you have specific requirements for the response, include them in the prompt. For example, if you want the response in a particular format, or if you want the AI to adopt a certain tone or style, describe that explicitly. For instance:

  • If you're seeking a list of key points instead of a detailed explanation, you might specify, "Provide a bulleted list of the key facts about the solar system."
  • If you're aiming for a more formal or academic tone, you could instruct, "In academic language, discuss the implications of AI in today's society."
  • If you want the AI to role-play a specific character or persona, you might specify, "Imagine you are Shakespeare and write a poem about the beauty of nature."
  • If you need the response to be of a certain length, you could say, "Summarize the plot of Lord of the Rings in two paragraphs."
  • If you want the AI to avoid certain topics or content, you might state, "Explain the process of photosynthesis without mentioning sunlight."
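Constraints like these can also be assembled programmatically when you send many similar requests. A minimal Python sketch; the `build_prompt` helper is a made-up illustration, not a library function:

```python
def build_prompt(task: str, constraints: list[str]) -> str:
    """Combine a task with a bulleted list of explicit constraints."""
    lines = [task, "", "Follow these constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    "Describe the solar system.",
    [
        "Provide a bulleted list of the key facts.",
        "Keep the answer under 150 words.",
        "Use an academic tone.",
    ],
)
print(prompt)
```

Keeping the constraints in a list makes it easy to add, remove, or reuse them across prompts.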

4. Ask the Same Question Multiple Times

Repeating the same question can be a valuable technique. Because the model samples its responses, asking the same question several times yields a variety of answers, from which you can choose the one that best suits your needs. Note that the model does not learn from one conversation to the next; the benefit comes from harnessing its diversity in generating responses to yield the most fitting answer.
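This idea can be automated as a simple majority vote over repeated samples, often called self-consistency sampling. A minimal sketch: `ask` stands in for any callable that sends a question to a chatbot and returns its reply; here it is a random stand-in so the code runs without an API key:

```python
import random
from collections import Counter

def most_consistent_answer(ask, question: str, n: int = 5) -> str:
    """Ask the same question n times and keep the most frequent answer."""
    answers = [ask(question) for _ in range(n)]
    best, _count = Counter(answers).most_common(1)[0]
    return best

# Hypothetical stand-in for a real chatbot call, just to show the mechanics:
random.seed(0)
def fake_ask(question: str) -> str:
    return random.choice(["comedy", "comedy", "drama"])

result = most_consistent_answer(fake_ask, "Comedy or drama tonight?")
print(result)
```

Majority voting works best for questions with short, comparable answers; for longer responses you would pick the best sample by hand.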

5. Experiment with different phrasings and approaches

There's often more than one way to phrase a prompt to get the response you're looking for. If you're not getting the results you want, try rephrasing your prompt or approaching it from a different angle. Variety in how you ask can lead to diversity in the outcomes you receive.

6. Ask for step-by-step explanations

A powerful way to enhance the utility of the AI model is to ask it to explain its decision-making process step by step. This gives you a window into the model's logic, fostering a better understanding of its output. For example, you might prompt, "Could you provide a step-by-step breakdown of your criteria for selecting the most suitable movie for me?"

Interestingly, asking for step-by-step reasoning can be particularly effective because it spreads the reasoning over many tokens, letting the model work toward the answer gradually instead of committing to it in a single step. This tends to improve performance, as less probability mass is wasted on lower-quality solutions. The practice not only helps clarify the AI's reasoning but also allows for more informed and productive interactions.
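A minimal sketch of this idea: a helper that appends a step-by-step instruction to any question. The `with_step_by_step` name is illustrative, not a standard API:

```python
def with_step_by_step(question: str) -> str:
    """Append a chain-of-thought style instruction so the model
    'shows its work' before committing to a final answer."""
    return (
        f"{question}\n"
        "Think through this step by step, listing each criterion you use, "
        "and only then give your final answer."
    )

prompt = with_step_by_step("Which movie should I watch tonight?")
print(prompt)
```

The same wrapper can be applied to any prompt where you want the reasoning spread out before the conclusion.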

7. Leverage Thought-Action-Observation Sequences

Enhancing the effectiveness of the AI's responses can be achieved by structuring the prompt to elicit a sequence of thought, action, and observation. This method essentially mimics a cognitive process, prompting the AI to engage in a deeper, more thoughtful response, in this sequence:

  • "Thought" refers to the AI's internal reasoning or decision-making process.
  • "Action" refers to the application of that thought process, where the AI uses its training and language capabilities to generate a response.
  • "Observation" refers to the assessment or reflection of the results of the action.

By repeating this pattern, you encourage a full rollout of the AI's 'thinking' process, enabling the model to deliver more comprehensive and insightful responses. Here is an example to clarify the thought-action-observation sequence:

Suppose you're asking the AI to help you design a simple experiment to test the effects of sunlight on plant growth.

Thought: The AI “thinks” about the variables involved in the experiment, such as types of plants, amounts of sunlight, control group, and what measures will be used to assess growth.
Example Prompt: "What factors should we consider when designing this experiment?"

Action: The AI takes the thought and generates an action - in this case, a plan for the experiment.
Example Response: "We should consider several factors: the type of plant used, ensuring we have a control group of plants not exposed to sunlight, and measures of growth such as height or number of leaves. We could set up the experiment as follows: Take two identical pots of the same type of plant. Place one in a sunny location and the other in a shaded location. Over a period of a month, measure and record the height of the plants and the number of leaves."

Observation: The AI reflects on the plan it has generated, considering potential pitfalls or improvements.
Example Prompt: "Can you think of any limitations or improvements to this experimental design?"

This sequence can be repeated until you're satisfied with the plan, each cycle refining the thought process and the resulting plan of action. The thought-action-observation sequence not only makes the AI's “thought” process more transparent but also allows for more iterative and in-depth responses.
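The cycle above can be sketched as a loop. Everything here is illustrative: `model` stands in for whatever function calls your chat API, and the toy model returns canned strings so the example runs offline:

```python
def react_loop(model, task: str, max_cycles: int = 3) -> list[dict]:
    """Drive a Thought -> Action -> Observation conversation,
    feeding each cycle's output back into the next prompt."""
    transcript = f"Task: {task}"
    history = []
    for _ in range(max_cycles):
        thought = model(transcript + "\nWhat factors should we consider? (Thought)")
        action = model(transcript + f"\nThought: {thought}\nPropose a plan. (Action)")
        observation = model(
            transcript + f"\nAction: {action}\nAny limitations or improvements?"
        )
        history.append({"thought": thought, "action": action, "observation": observation})
        transcript += f"\nThought: {thought}\nAction: {action}\nObservation: {observation}"
    return history

# Toy stand-in model so the loop is runnable without an API key:
def toy_model(prompt: str) -> str:
    if "(Thought)" in prompt:
        return "Consider plant type, light levels, and a control group."
    if "(Action)" in prompt:
        return "Grow two identical plants, one in sun and one in shade."
    return "Add more plants per group to reduce noise."

cycles = react_loop(toy_model, "Design a sunlight-and-plant-growth experiment",
                    max_cycles=1)
print(cycles[0]["observation"])
```

Appending each cycle to the transcript is what lets later observations refine earlier plans, mirroring the experiment-design dialogue above.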

8. Point Out Errors

AI models can often detect when they are wrong, provided they are prompted to do so. For instance, if you ask for a comedy movie recommendation and it suggests a horror film, it could realize its mistake if you point it out. It's important to note that the model isn't inherently capable of self-review or rectification without such prompting. Therefore, actively highlighting mistakes can enhance the accuracy and relevance of the model's future responses.

9. Use Available Tools and Plug-ins

Some AI ChatBots, like ChatGPT, can use specialized algorithms, often available through plugins, to augment their abilities. For instance, the Wolfram Alpha plug-in can assist the model in solving complex mathematical problems or provide scientifically correct information, enhancing the accuracy and reliability of the model's responses. So, since these models can't always recognize when they lack certain knowledge, you might need to remind them to use these tools when they're needed.

10. Use a Template

Templates can be particularly useful when seeking information that follows a specific format or when consistency in responses is paramount. For instance, if you're gathering data for a report or looking for answers that can be easily compared or analyzed, a template can ensure that each response fits the required framework.

Consider a scenario where you're asking for book recommendations. Instead of a general prompt like "Recommend a book," you could use a template: "Recommend a book in the following format: [Book Title], [Author], [Genre], [Publish Year]. Briefly describe why you're recommending this book in one or two sentences." By doing so, you ensure that the AI's response will be in a consistent, easily digestible format that fits your specific needs.
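A template has a second benefit: it makes the reply machine-parseable. A small sketch, assuming the model actually follows the requested format; the reply string below is a hypothetical example, not real model output:

```python
BOOK_TEMPLATE = (
    "Recommend a book in the following format:\n"
    "[Book Title], [Author], [Genre], [Publish Year].\n"
    "Briefly describe why you're recommending it in one or two sentences."
)

def parse_recommendation(reply: str) -> dict:
    """Split a templated first line back into named fields."""
    first_line = reply.splitlines()[0].rstrip(".")
    title, author, genre, year = [part.strip() for part in first_line.split(",")]
    return {"title": title, "author": author, "genre": genre, "year": year}

# Hypothetical reply in the requested format:
reply = "The Hobbit, J.R.R. Tolkien, Fantasy, 1937.\nA warm, adventurous classic."
print(parse_recommendation(reply))
```

In practice you would also handle replies that deviate from the template, since the model is not guaranteed to follow it exactly.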

11. Engage with the Model's Limitations

In situations where the responses aren't aligned with your expectations, directly addressing the model's constraints can be a beneficial approach. This can help you gain a clearer understanding of the model's capabilities and how to shape your inquiries accordingly. For example, you might pose a question such as, "What types of questions are you capable of answering accurately?"

Interestingly, LLMs are not designed to “succeed” but to “imitate.” They are trained on diverse datasets containing both high-quality and low-quality responses, and by default they aim to imitate all of it. Therefore, it may be necessary to explicitly demand good performance, a high-quality solution, or even to ask the model to act as an expert.

However, you should be cautious about asking for 'too much' from the model. For instance, if you ask the model to pretend it has an IQ of 500, it might go beyond its training data distribution or start role-playing based on sci-fi material it was trained on, and confabulation has real consequences (lawyers have even been sanctioned for filing court documents citing fake cases generated by AI). Therefore, finding the right balance in your requests is key. This form of dialogue not only enhances your understanding of the model's scope but can also assist you in refining your prompts for more effective interactions.

Limitations and Potential

It's crucial to remember that AI ChatBots have their limitations. They can exhibit bias, create unverified information, misinterpret tasks, and make errors. Their knowledge is confined to the data they were last trained on. On the other hand, they can draw on an enormous breadth of information absorbed during training, far more than any person could memorize, even if their recall of it is not always reliable.

The true potential of these AI models lies in the combination of their capabilities with human discernment. With the right guidance, they can be a powerful tool for performing tasks ranging from conducting research to generating creative ideas. Ultimately, the effectiveness of these models largely depends on how we, as users, interact with and guide them toward our goals.