ChatGPT Prompt Engineering for Developers
Intro
This tutorial covers prompting best practices for software development, common use cases (summarizing, inferring, transforming, and expanding), and how to build a chatbot using an LLM (Large Language Model).
In the development of LLMs, there have been broadly two types of LLMs:
- Base LLM: a base LLM is trained to predict the next word from its training text, typically a large amount of data from the internet and other sources, so it learns which word is most likely to follow.
- Instruction Tuned LLM: an instruction-tuned LLM is trained to follow instructions. It typically starts from a base LLM, is then fine-tuned on inputs and outputs consisting of instructions and good attempts to follow them, and is often further refined with RLHF (reinforcement learning from human feedback) to make it more helpful and better at following instructions.
This tutorial focuses on best practices for instruction-tuned LLMs, which are recommended for most applications.
Guidelines
Principles and tactics that will be helpful while working with language models like ChatGPT:
- Write clear and specific instructions
- Give the model time to think
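The first principle can be made concrete with delimiters: fence the user-supplied input off from the instructions so the model cannot confuse the two (which also guards against simple prompt injection). A minimal sketch in Python; the function name and prompt wording are illustrative, not part of any official API:

```python
def build_prompt(task: str, text: str, delimiter: str = '"""') -> str:
    """Build a clear, specific prompt that separates instructions from input.

    The delimiter fences off the user-supplied text so the model does not
    mistake it for part of the instructions.
    """
    return (
        f"{task} "
        f"The text to work on is delimited by {delimiter}.\n"
        f"{delimiter}\n{text}\n{delimiter}"
    )

# Example: a clear, specific summarization prompt.
prompt = build_prompt(
    "Summarize the following text in one sentence.",
    "Large language models predict the next token given the previous ones.",
)
print(prompt)
```

The resulting string would then be sent as the user message in a chat-completion request; the same pattern works for the second principle by adding explicit step-by-step instructions to the `task` string (e.g. "First identify the key claims, then summarize them.").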
All articles in this blog are licensed under CC BY-NC-SA 4.0 unless stated otherwise.