
AI and prompt engineering



Like everyone else, at Beolle we have been researching the AI space lately and running experiments. On this occasion, our investigation has led us to present our take and summary on prompt engineering.

(Skip ahead note: if you'd like to get right to it, skip to the section What are prompts?)

Image generated with Microsoft Designer and DALL-E 3, modified by the Beolle team

 

(Parenthesis)

AI has disrupted all industries, and it will continue to do so as we explore new territories. We are still in discovery mode. Sectors and industries are building their know-how, evolving with this technology and determining the value it brings to the business.

Investment by tech players and executives in traditional and generative AI (GenAI) capabilities continues to rise. The consequence has been accelerated growth in a short span of time: venture capital investment in AI has grown 13x over the last ten years.

Partial image from the McKinsey report on GenAI's impact on business


If you'd like to read more about the McKinsey report and see the full images, go to The economic potential of generative AI: The next productivity frontier.

(Parenthesis - end) 


What are prompts?

Prompts are queries that users provide to a Large Language Model (LLM) to elicit responses. They enable us to interact with the model in plain, natural language.

Prompt engineering is the practice of asking the right question to get the best output from the LLM.

Good prompt engineers allow companies to get the best out of AI models. They are creative and persistent, with solid technical knowledge, an understanding of the models and the domain, and a knack for experimentation (exploring different parameters and settings to fine-tune prompts). Knowing how to code is a plus, but not a requirement.

Looking at the progression of NLP, you can see how things have evolved to where we are today (at least at the time of writing, as this space continues to evolve quickly), which is prompt engineering:

Models/patterns engineering progression

  • Supervised learning (non-neural networks). 
    • An example is the SVM (Support Vector Machine), a supervised learning algorithm that can be leveraged for tasks such as classification and regression. 
    • This is where Feature Engineering comes in: transforming raw data into features that ML models can use. 
      • A feature is a variable/attribute that serves as input. 
  • Supervised learning (neural networks). 
    • Neural networks are based on a collection of artificial neurons, called units. The units are organised into layers and connected to each other, passing signals (inputs) between them. 
  • Architecture Engineering. 
    • One way to look at this engineering is the Long Short-Term Memory (LSTM) network. In an LSTM, information (data) is gated by the layers, which determine what data passes through the workflow and what is discarded. 
    • Within this engineering you can leverage embeddings + LSTM for processing tasks such as classification and sentiment analysis. 
  • Pre-train and fine-tune. 
    • Using pre-trained LMs. 
    • Objective Engineering: the model is trained for an objective, meaning it performs a specific task. 
    • The fine-tuning technique comes in handy here, as you gain efficiency by taking an existing model and specialising it by feeding it new datasets based on the needs of your specialisation. 
  • Pre-train, prompt, predict. 
    • Defining inputs that fit the model. 
    • Prompt Engineering.
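To make the first step of that progression concrete, here is a minimal sketch of supervised learning with an SVM, assuming scikit-learn is available. The tiny dataset and labels are made up for illustration; the feature engineering step is a simple bag-of-words vectorizer that turns raw text into numeric features the model can use.

```python
# Minimal supervised-learning sketch: feature engineering + SVM classifier.
# Dataset and labels are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

texts = [
    "great product, loved it",
    "terrible, waste of money",
    "really loved the quality",
    "awful experience, money wasted",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()        # feature engineering: words -> counts
X = vectorizer.fit_transform(texts)   # each word count becomes a feature
clf = SVC(kernel="linear")
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["loved the product"])))
```

The same pattern (transform raw data into features, then fit a model) applies regardless of which supervised algorithm you pick.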

How to Structure your prompt 

  • Instruction. A statement that tells the model what is required. You can also frame the instruction as a question. 
  • Context. Information that can guide the model in a desired direction. 
  • Constraints. (Optional). This can be used to limit the scope by being explicit, keeping the focus on the context. 
  • Input data. (Optional). 
  • Examples. (Optional) The model will have further guidance regarding the answer, and its format, when examples are part of the prompt. Example: Provide a list with the 7 wonders of the world. Only use reliable sources, and list those sources.
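Putting the elements above together, a prompt is ultimately just assembled text. A minimal sketch, where the wording of each part is our own illustration:

```python
# Assembling a prompt from the structural elements described above.
# The specific wording is illustrative, not prescriptive.
instruction = "Provide a list with the 7 wonders of the world."
context = "The list is for a travel blog aimed at first-time travellers."
constraints = "Only use reliable sources, and list those sources."
examples = "Example entry: 1. The Great Wall of China (source: UNESCO)."

prompt = "\n".join([instruction, context, constraints, examples])
print(prompt)
```

Keeping the parts as separate variables makes it easy to experiment: swap the context or tighten the constraints and re-run without rewriting the whole prompt.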

Prompt Types

  • Direct prompting or Zero shot. It only contains the instruction in the prompt. 
    • Prompt: Can you give me a list of ideas for blog posts for tourists visiting New York City for the first time? 
  • Prompting with examples. 
    • One-shot. Instruction + one clear example. 
      • Prompt: Come up with a list of ideas for blog posts for visitors to South America. 
        • 1. Have you visited Machu Picchu? Here are a few tips
    • Few-shots. Instruction + more than one example. 
  • Chain-of-thought prompting. The prompt asks the model to reason step by step (or includes worked examples of such reasoning) before giving its final answer. 
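The difference between these types comes down to how many examples accompany the instruction. A small helper sketch (the function and wording are our own, not from any library):

```python
# Illustrative helper: zero-shot passes only the instruction,
# one-shot adds one example, few-shot adds several.
def build_prompt(instruction, examples=()):
    """Join an instruction with zero or more examples, one per line."""
    return "\n".join([instruction, *examples])

zero_shot = build_prompt(
    "Can you give me a list of ideas for blog posts for tourists "
    "visiting New York City for the first time?")

one_shot = build_prompt(
    "Come up with a list of ideas for blog posts for visitors to South America.",
    ["1. Have you visited Machu Picchu? Here are a few tips"])

few_shot = build_prompt(
    "Come up with a list of ideas for blog posts for visitors to South America.",
    ["1. Have you visited Machu Picchu? Here are a few tips",
     "2. A first-timer's guide to Patagonia"])  # second example is illustrative
```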

Last thoughts

  1. Provide as many details as needed within your prompt. 
  2. Include the majority of the elements explained within the “How to structure your prompt” section. 
  3. Be mindful of hallucinations. Hallucinations are responses with contradictions or inconsistencies; plain and simple, the responses are incorrect, either completely or partially. Chain-of-thought prompting can help reduce this risk, as you request the LLM to explain its reasoning. 
  4. Play with the temperature parameter when prompting and working with LLMs. A higher temperature can move the model's response farther from the context, as it tends to be more creative and open, while a lower one keeps responses more focused and factual.
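To see why temperature has this effect, here is a sketch of the standard softmax-with-temperature formula that samplers commonly apply to the model's next-token scores (this is the general technique, not any specific vendor's API; the logits are made up):

```python
# Temperature rescales the next-token distribution:
# low temperature sharpens it (more deterministic),
# high temperature flattens it (more varied/"creative").
import math

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                           # illustrative token scores
low = softmax_with_temperature(logits, 0.2)        # sharp: top token dominates
high = softmax_with_temperature(logits, 2.0)       # flat: more spread out
```

With the low temperature, nearly all probability mass lands on the highest-scoring token; with the high temperature, the alternatives get a real chance of being sampled, which is where the "creative" behaviour comes from.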

