Like everyone else, at Beolle we have been researching the AI space lately and running experiments. On this occasion, our investigation has led us to present our take on, and summary of, prompt engineering.
(Skip ahead: if you'd like to get right to it, jump to the section What are prompts?)
Image generated with Microsoft Designer and DALL-E 3, modified by the Beolle team
(Parenthesis)
AI has disrupted every industry, and it will continue to do so as we explore new territories. We are still in discovery mode. Sectors and industries are building their know-how, evolving with this technology, and determining the value it brings to the business.
Investment by tech players and executives in traditional and generative AI (Gen AI) capabilities continues to rise, and the consequence has been accelerated growth in a short span of time. Venture capital investment in AI has grown 13x over the last ten years.
Partial image from the McKinsey report on Gen AI's impact on business
If you'd like to read more about the McKinsey report and see the full images, go to The economic potential of generative AI: The next productivity frontier.
(Parenthesis - end)
What are prompts?
Prompts are queries that users provide to a Large Language Model (LLM) to elicit responses. They enable us to interact with the model in plain, natural language.
Prompt Engineering is asking the right question to get the best output from the LLM.
Good prompt engineers allow companies to get the best out of AI models. They are creative and persistent, with solid technical knowledge, an understanding of the models and the domain, and a knack for experimentation (exploring different parameters and settings to fine-tune prompts). Knowing how to code is a plus, but not a requirement.
Looking at the progression of NLP, you can see how things have evolved to where we are today (at least at the time of writing, as this space continues to evolve quickly and constantly): prompt engineering.
Models/patterns | Engineering progression |
---|---|
Initially we had supervised learning (non-neural networks). One example is the SVM (Support Vector Machine), a supervised learning algorithm that can be leveraged for tasks such as classification and regression. | Feature Engineering. This is the work of transforming raw data into features that ML models can use. A feature is a variable/attribute that serves as model input. |
Next, supervised learning with neural networks. Neural networks are based on collections of artificial neurons, called units, organised in layers. The units are connected, passing signals (inputs) between them. | Architecture Engineering. One way to look at this is the Long Short-Term Memory (LSTM) network, in which gates in the layers determine what data passes through the workflow and what is discarded. Within this engineering you can combine embeddings + LSTM for processing tasks such as classification and sentiment analysis. |
Pre-train and fine-tune, using pre-trained language models. | Objective Engineering. The model is trained for an objective, meaning to perform a specific task. The fine-tuning technique becomes handy here, as you gain efficiency by taking an existing model and specialising it on new datasets based on the needs of your use case. |
Pre-train, prompt, predict. Defining inputs that fit the model. | Prompt Engineering. |
How to Structure your prompt
- Instruction. A statement that tells the model what is required. You can also frame the instruction as a question.
- Context. Information that can guide the model in a desired direction.
- Constraints. (Optional). This can be used to limit the scope by being explicit, keeping the focus on the context.
- Input data. (Optional).
- Examples. (Optional). Including examples in the prompt gives the model further guidance on the answer and its format. Example: Provide a list with the 7 wonders of the world. Only use reliable sources, and list those sources.
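As a rough illustration, the elements above can be assembled into a single prompt string. This is a hypothetical helper sketch, not part of any LLM library; the element names simply mirror the list above.

```python
# Sketch: assembling a prompt from instruction, context, constraints,
# input data, and examples. All names and sample text are illustrative.

def build_prompt(instruction, context=None, constraints=None,
                 input_data=None, examples=None):
    """Join the optional prompt elements into one prompt string."""
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    if input_data:
        parts.append(f"Input: {input_data}")
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Provide a list with the 7 wonders of the world.",
    constraints="Only use reliable sources, and list those sources.",
)
print(prompt)
```

Only the instruction is mandatory; the optional elements are appended when supplied, keeping the prompt compact.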
Prompt Types
- Direct prompting, or zero-shot. The prompt contains only the instruction.
- Prompt: Can you give me a list of ideas for blog posts for tourists visiting New York City for the first time?
- Prompting with examples.
- One-shot. Instruction + one clear example.
- Prompt: Come up with a list of ideas for blog posts for visitors to South America.
- 1. Have you visited Machu Picchu? Here are a few tips
- Few-shot. Instruction + more than one example.
- Chain-of-thought prompting: This type of prompt encourages the model to work through intermediate reasoning steps before answering, for example by including examples that show the reasoning, or by explicitly asking the model to explain its reasoning step by step.
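The zero-, one-, and few-shot variants above differ only in how many examples follow the instruction, which can be sketched with a small (hypothetical) helper:

```python
# Sketch: building zero-, one-, and few-shot prompts from one instruction.
# The function name and sample text are illustrative, not from any API.

def make_prompt(instruction, examples=()):
    """Zero-shot with no examples, one-shot with one, few-shot with several."""
    lines = [instruction]
    for i, example in enumerate(examples, start=1):
        lines.append(f"{i}. {example}")
    return "\n".join(lines)

instruction = ("Come up with a list of ideas for blog posts "
               "for visitors to South America.")
zero_shot = make_prompt(instruction)
one_shot = make_prompt(
    instruction,
    ["Have you visited Machu Picchu? Here are a few tips"],
)
```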
Last thoughts
- Provide as many details as needed within your prompt.
- Include as many of the elements explained in the “How to Structure your prompt” section as apply.
- Be mindful of hallucinations. Hallucinations are responses that contain contradictions or inconsistencies, or that are plainly incorrect, either completely or partially. Chain-of-thought prompting can help reduce this risk, as you request the LLM to explain its reasoning.
- Play with the temperature parameter when prompting and working with LLMs. A higher temperature can move the response farther from the context, as it makes the model more creative and open-ended, while a lower one keeps the output more focused and deterministic.
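To see why temperature behaves this way, here is a minimal sketch of how sampling temperature rescales a model's raw token scores (logits) before they become probabilities. The logit values are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw scores into probabilities.
    Low temperature sharpens the distribution (top token dominates);
    high temperature flattens it (more varied, 'creative' sampling)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
# At low temperature the top token takes almost all the probability mass;
# at high temperature the mass spreads across the candidates.
```

This is only the sampling mechanics; in practice you set temperature as a request parameter and the model applies it internally.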