activator-ghclient DEMO #2
base: main
/activator docs/how_to_work_with_large_language_models.md

Activator suggestions for docs/how_to_work_with_large_language_models.md
[Large language models][Large language models Blog Post] are functions that map text to text. Given an input string of text, a large language model predicts the text that should come next.
The magic of large language models is that by being trained to minimize this prediction error over vast quantities of text, the models end up learning concepts useful for these predictions. For example, they learn:

Suggested Change:

> The magic of large language models lies in their training to minimize prediction error across vast quantities of text, which leads them to learn concepts useful for these predictions. For example, they learn:

Explanation: The subject 'their training' is chosen to emphasize the process that the models undergo, which is the focus of the sentence.
Of all the inputs to a large language model, by far the most influential is the text prompt.

Large language models can be prompted to produce output in a few ways:

Suggested Change:

> You can prompt large language models to produce output in a few ways:

Explanation: The subject 'You' is chosen to directly address the reader, indicating who can perform the action of prompting the models.
* A few examples in the prompt
* Many hundreds or thousands of examples in a fine-tuning training dataset
An example of each is shown below.

Suggested Change:

> The article shows an example of each below.

Explanation: The subject 'The article' is chosen because it is the medium through which the examples are presented to the reader.
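For a concrete feel for the prompting styles listed above, the three approaches can be sketched as plain prompt strings. The quotation content below is invented for illustration, and no model or API call is shown:

```python
# 1. Instruction prompt: state the task directly.
instruction_prompt = (
    "Extract the author's name from the quotation below.\n\n"
    '"The limits of my language mean the limits of my world." - Ludwig Wittgenstein'
)

# 2. Few-shot prompt: demonstrate the pattern with examples in the prompt,
# then leave the final answer for the model to complete.
few_shot_prompt = (
    'Quote: "The limits of my language mean the limits of my world."\n'
    "Author: Ludwig Wittgenstein\n\n"
    'Quote: "Simplicity is the ultimate sophistication."\n'
    "Author:"
)

# 3. Fine-tuning: the same prompt/completion pattern, but supplied as many
# training pairs in a dataset rather than inside the prompt itself.
fine_tuning_pair = {
    "prompt": 'Quote: "..."\nAuthor:',
    "completion": " ...",
}
```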
### Instruction prompts
Write your instruction at the top of the prompt (or at the bottom, or both), and the model will do its best to follow the instruction and then stop. Instructions can be detailed, so don't be afraid to write a paragraph explicitly detailing the output you want, just stay aware of how many [tokens](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them) the model can process.

Suggested Change:

> Write your instruction at the top of the prompt (or at the bottom, or both), and the model will do its best to follow the instruction and then stop. You can provide detailed instructions, so don't be afraid to write a paragraph that explicitly details the output you want, but just stay aware of how many [tokens](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them) the model can process.

Explanation: The subject 'You' is chosen to directly instruct the reader on how to interact with the model.
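Since staying under the token limit matters here, a quick way to sanity-check prompt length is the common rule of thumb of roughly 4 characters per token for English text. This is only a sketch; use the model's actual tokenizer (e.g. the tiktoken library) for accurate counts:

```python
def rough_token_estimate(text: str) -> int:
    """Very rough token estimate for English text, using the common
    ~4 characters per token rule of thumb. For accurate counts, use
    the model's real tokenizer (e.g. tiktoken)."""
    return max(1, len(text) // 4)
```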
### Completion prompt example
Completion-style prompts take advantage of how large language models try to write text they think is most likely to come next. To steer the model, try beginning a pattern or sentence that will be completed by the output you want to see. Relative to direct instructions, this mode of steering large language models can take more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.

Suggested Change:

> Completion-style prompts take advantage of how large language models try to write text they think is most likely to come next. To steer the model, begin with a pattern or sentence that you want the model to complete with the desired output. Relative to direct instructions, this mode of steering large language models can take more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.

Explanation: The subject 'you' is chosen to directly instruct the reader on how to guide the model's output.
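The stop-sequence post-processing mentioned above can be sketched as a small helper (a hypothetical function for illustration, not part of any particular API) that cuts generated text at the earliest stop sequence:

```python
def truncate_at_stop(generated: str, stop_sequences: list[str]) -> str:
    """Return the generated text cut at the earliest occurrence of any
    stop sequence; unchanged if no stop sequence appears."""
    cut = len(generated)
    for stop in stop_sequences:
        idx = generated.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut]
```

For example, `truncate_at_stop("Paris\n\nQ: Next question", ["\n\nQ:"])` returns `"Paris"`, discarding the extra question the model invented after the desired answer.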
* **Be more specific**: E.g., if you want the output to be a comma-separated list, ask it to return a comma-separated list. If you want it to say "I don't know" when it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.' The more specific your instructions, the better the model can respond.
* **Provide Context**: Help the model understand the bigger picture of your request. This could be background information, examples/demonstrations of what you want, or explaining the purpose of your task.
* **Ask the model to answer as if it was an expert.** Explicitly asking the model to produce high quality output or output as if it was written by an expert can induce the model to give higher quality answers that it thinks an expert would write. Phrases like "Explain in detail" or "Describe step-by-step" can be effective.
Suggested Change:

> * **Ask the model to answer as if it was an expert.** When you explicitly ask the model to produce high-quality output or to write as if it were an expert, you can induce it to provide higher quality answers that it believes an expert would write. Phrases like "Explain in detail" or "Describe step-by-step" can be effective.

Explanation: The subject 'you' is chosen to directly address the reader's potential actions to influence the model's output.
* **Prompt the model to write down the series of steps explaining its reasoning.** If understanding the 'why' behind an answer is important, prompt the model to include its reasoning. This can be done by simply adding a line like "[Let's think step by step](https://arxiv.org/abs/2205.11916)" before each answer.

Suggested Change:

> * **Prompt the model to write down the series of steps explaining its reasoning.** If understanding the 'why' behind an answer is important, prompt the model to include its reasoning. You can achieve this by simply adding a line like "[Let's think step by step](https://arxiv.org/abs/2205.11916)" before each answer.

Explanation: The subject 'You' is chosen to provide a direct instruction to the reader on how to prompt the model for detailed reasoning.
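Following the tip above, appending the trigger phrase can be as simple as a one-line prompt wrapper (a hypothetical helper, shown only to illustrate the technique):

```python
def with_step_by_step(question: str) -> str:
    """Append the zero-shot chain-of-thought trigger phrase
    ("Let's think step by step", Kojima et al. 2022) to a prompt."""
    return f"{question}\n\nLet's think step by step."
```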
The action extracts passive sentences using Vale:

```python
# Extract passive sentences using Vale
passive_sentences = process_with_vale(text)
```

and uses that output as an input to the language model:

```python
# Send to LLM
active_response, run_url = activator(text, passive_sentences)
```

The action makes review suggestions in a way that is easy to review and accept. The prompt is written to request that the model explain the choice of subject in the review comment. The (optionally) returned public LangSmith trace URL in the top comment is a powerful way to get folks with domain expertise (technical writing in this case), but not necessarily Python expertise, interested in improving the prompt. It means someone can click the link, open the model call in the LangSmith Playground, and then tweak the prompt to see how that steers the output. Such interaction encourages feedback in the comments, both about the specific output with the current prompt and about improvement of the prompt itself. The UX for modified content in the PR is slightly different, as it's not possible to post a review comment beyond the lines updated and the few lines surrounding them.
/activator docs/How_to_automate_S3_storage_with_functions.ipynb

Activator suggestions for docs/How_to_automate_S3_storage_with_functions.ipynb

Original: The model is expected to clarify the ask from the user in case of ambiguity in the parameters values as described in the system message. Use …
/activator docs/How_to_automate_S3_storage_with_functions.ipynb --commit |
…h_functions.ipynb
/activator docs/How_to_automate_S3_storage_with_functions.ipynb

There appear to be no instances of sentences written in passive voice in docs/How_to_automate_S3_storage_with_functions.ipynb.
Writing consistently in an active voice is hard. Many writing assistants do a good job at identifying instances of the passive voice but not such a good job at transforming to the active voice—which can often require a nuanced understanding of context in the article beyond the sentence. As part of a docs-as-code process, it would be helpful to get high-quality suggestions for active voice transformation at the point of review.
Let's commit a new doc and open a PR (borrowed from openai/openai-cookbook for our demo purposes).