
Automatic context-aware prompting #24

Closed
w0rp opened this issue Feb 18, 2023 · 1 comment
w0rp commented Feb 18, 2023

This issue will likely share some of the same requirements as #16.

We should use awareness of where the user's cursor lies, and of the surrounding context, to automatically modify prompts and produce more accurate results. This prompt prefixing should be on by default, but it should be possible to disable it, and perhaps to temporarily forgo automatic prompt enhancement with an extra command.

The Story

Broadly speaking, suppose you are editing the following Go code.

package main

func main() {
    // Your cursor lies here!
}

You enter the prompt `glob files ending in .csv`. Neural should automatically change that prompt to something like `Write code in the Go programming language. Do not write a "package" or a main function. glob files ending in .csv.` All of this can be achieved through knowledge of the surrounding text and any semantic information we can get.

Implementation

As in #16, we can integrate with Language Server Protocol (LSP) to gain knowledge of the surrounding code. We can also access basic information from Vim, such as &filetype, and the surrounding text in the buffer. Through some combination of all of the available information, we can build up a library of prompt prefixes.
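As a sketch of the idea, the lookup from editor context to a prompt prefix might look like the following. This is a hypothetical illustration in Python rather than the plugin's actual Vim script; the `prompt_prefix` function and its checks are invented stand-ins for what `&filetype`, buffer inspection, or LSP data would provide.

```python
def prompt_prefix(filetype, surrounding_lines):
    """Pick a prompt prefix from the filetype and nearby buffer text.

    Hypothetical sketch: the filetype and surrounding-text checks stand
    in for information Vim's &filetype and buffer contents would give us.
    """
    if filetype == "go":
        prefix = "Write code in the Go programming language."
        text = "\n".join(surrounding_lines)
        # If the buffer already has a package clause and a main function,
        # tell the model not to repeat them.
        if "package " in text:
            prefix += ' Do not write a "package" statement.'
        if "func main(" in text:
            prefix += " Do not write a main function."
        return prefix
    if filetype == "markdown":
        return "Write text in Markdown."
    # No rule for this filetype: leave the prompt untouched.
    return ""

print(prompt_prefix("go", ["package main", "", "func main() {", "}"]))
```

A real implementation would load these rules per filetype so users can extend or override them.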

Note that future machine learning tools will likely make it easier to introduce negative prompts, and to specify context, through separate parameters to the prompt itself. When we build this functionality, we should be sure to logically separate what strings are for context, and what the negative prompts are, and then produce a function that builds a single prompt string. That way, when future tools are ready, we'll be able to integrate with them quickly, without having to go back and re-do our code.
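The separation described above might be sketched like this, again as a hypothetical Python illustration (the `PromptParts` structure and `build_prompt` function are invented names, not part of Neural): context strings and negative prompts stay in separate fields, and a single function flattens them into one string for tools that only accept plain text today.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PromptParts:
    """Structured prompt pieces, kept separate for future tool support."""
    context: List[str] = field(default_factory=list)   # e.g. language, file info
    negative: List[str] = field(default_factory=list)  # things the model must not do
    prompt: str = ""                                   # the user's own text

def build_prompt(parts: PromptParts) -> str:
    """Flatten the structured parts into one prompt string, in a fixed order."""
    pieces = list(parts.context)
    pieces.extend(f"Do not {item}." for item in parts.negative)
    pieces.append(parts.prompt)
    return " ".join(piece for piece in pieces if piece)

parts = PromptParts(
    context=["Write code in the Go programming language."],
    negative=['write a "package" statement', "write a main function"],
    prompt="glob files ending in .csv",
)
print(build_prompt(parts))
```

When a backend later accepts context or negative prompts as separate parameters, only `build_prompt` needs to change; the structured data is already in the right shape.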

We may be able to automatically adjust the tokens requested for a single prompt. Machine learning text generation tools sometimes need to be told exactly how much text you want. There will likely be some common natural language phrases we can recognise, allowing us to automatically adjust the requested tokens to get better results for the user. This too should be configurable.
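Such phrase recognition could be as simple as the following hypothetical sketch. The phrase list, multipliers, and the `adjust_tokens` name are all invented for illustration; real rules would be configurable by the user.

```python
import re

DEFAULT_TOKENS = 1024

def adjust_tokens(prompt: str, default: int = DEFAULT_TOKENS) -> int:
    """Guess a token budget from size hints in the prompt text."""
    lowered = prompt.lower()
    # A numeric size hint such as "10 lines" or "50 words".
    match = re.search(r"\b(\d+)\s+(?:lines|words|items)\b", lowered)
    if match:
        # Crude estimate: a handful of tokens per requested line/word/item.
        return max(64, int(match.group(1)) * 16)
    # Phrases asking for something short.
    if any(phrase in lowered for phrase in ("one line", "a single line", "briefly")):
        return 128
    return default

print(adjust_tokens("write 10 lines of CSV parsing code"))  # smaller request
print(adjust_tokens("glob files ending in .csv"))           # default budget
```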

@w0rp w0rp added the enhancement New feature or request label Feb 18, 2023
@w0rp w0rp self-assigned this Mar 2, 2023
@w0rp w0rp closed this as completed in cd82f3e Mar 17, 2023
w0rp commented Mar 17, 2023

I've implemented a basic framework for doing this in simple Vim script, with rules for just Go and Markdown as a start. We can come back and integrate with the Language Server Protocol later. I did some research, and only very recent versions of the spec actually provide any information that would be useful for editing prompts. The most useful of these provides information similar to ctags.

Angelchev pushed a commit that referenced this issue Sep 17, 2023
Add initial context-aware prompt editing by writing simple scripts.