This issue will likely share some of the same requirements as #16.
We should use awareness of where the user's cursor lies and of the surrounding context to automatically modify prompts to produce more accurate results. I believe this prefixing of the prompt should be on by default, but it should be possible to disable it, and perhaps there should even be an extra command to temporarily forgo automatic prompt enhancement.
## The Story
Speaking in a broad sense, say you are editing the following code in Go.
```go
package main

func main() {
    // Your cursor lies here!
}
```
You enter the prompt `glob files ending in .csv.` Neural should automatically change that prompt to something like `Write code in the Go programming language. Do not write a "package" or a main function. glob files ending in .csv.` All of this can be achieved through knowledge of the surrounding text and any semantic information we can get.
## Implementation
As in #16, we can integrate with Language Server Protocol (LSP) to gain knowledge of the surrounding code. We can also access basic information from Vim, such as `&filetype`, and the surrounding text in the buffer. Through some combination of all of the available information, we can build up a library of prompt prefixes.
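As a sketch of the kind of information we can gather before any LSP integration lands, using only built-in Vim functions. The function name and the shape of the returned Dictionary here are hypothetical, not settled API:

```vim
" A sketch of gathering the basic signals Vim already exposes.
" The function name and Dictionary shape are hypothetical.
function! neural#prefix#GetContext() abort
    let l:lnum = line('.')

    return {
    \   'filetype': &filetype,
    \   'line': l:lnum,
    \   'lines_before': getline(max([1, l:lnum - 10]), l:lnum - 1),
    \   'lines_after': getline(l:lnum + 1, l:lnum + 10),
    \}
endfunction
```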
Note that future machine learning tools will likely make it easier to introduce negative prompts and to specify context through parameters separate from the prompt itself. When we build this functionality, we should be sure to logically separate which strings provide context and which are negative prompts, and then produce a function that builds a single prompt string. That way, when future tools are ready, we'll be able to integrate with them quickly, without having to go back and redo our code.
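A minimal sketch of that separation might look like the following, where the function name and argument shapes are assumptions:

```vim
" Keep context and negative prompts logically separate, and only
" combine them into one String at the last moment. The function
" name is hypothetical.
function! neural#prefix#BuildPrompt(context, negative, prompt) abort
    let l:parts = []

    if !empty(a:context)
        call add(l:parts, join(a:context, ' '))
    endif

    if !empty(a:negative)
        call add(l:parts, 'Do not ' . join(a:negative, '. Do not ') . '.')
    endif

    call add(l:parts, a:prompt)

    return join(l:parts, ' ')
endfunction
```

For the Go example above, the context list would carry the language sentence, the negative list would carry the "package"/main function restrictions, and the user's prompt would be appended last. When a future tool accepts negative prompts as a separate parameter, we can pass the list straight through instead of folding it into the string.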
We may also be able to automatically adjust the number of tokens requested for a single prompt. Text generation tools sometimes need to be told exactly how much text you want. There will likely be some common natural language phrases we can recognise, letting us automatically adjust the requested tokens so the user gets better results. This too should be configurable.
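A sketch of that phrase recognition follows; the phrases and token counts here are rough assumptions, and should ultimately come from configuration:

```vim
" Guess a sensible token count from common phrases in the prompt.
" The patterns and counts are rough assumptions, and the function
" name is hypothetical.
function! neural#prefix#AdjustTokens(prompt, default_tokens) abort
    let l:phrase_map = [
    \   ['a \?\(single \)\?word', 16],
    \   ['\(one\|a single\) line', 64],
    \   ['a \?paragraph', 256],
    \   ['an\? \?\(entire \)\?\(file\|program\)', 1024],
    \]

    for [l:pattern, l:tokens] in l:phrase_map
        if a:prompt =~? l:pattern
            return l:tokens
        endif
    endfor

    return a:default_tokens
endfunction
```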
I've implemented a basic framework for doing this, with rules for just Go and Markdown as a start, using simple Vim script. We can come back and integrate with Language Server Protocol later. I did some research, and only very recent versions of the spec actually provide any information that would be useful for editing prompts. The most useful of these provides information similar to ctags.
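The general shape looks something like this; the names and rule contents below are illustrative rather than the exact code:

```vim
" Illustrative per-filetype prefix rules, not the exact code.
function! s:GoPrefix() abort
    return 'Write code in the Go programming language.'
    \   . ' Do not write a "package" or a main function.'
endfunction

function! s:MarkdownPrefix() abort
    return 'Write text in the Markdown markup language.'
endfunction

let s:prefix_rules = {
\   'go': function('s:GoPrefix'),
\   'markdown': function('s:MarkdownPrefix'),
\}

" Look up a prefix for the current buffer, if any rule matches.
function! neural#prefix#Get() abort
    let l:Rule = get(s:prefix_rules, &filetype, v:null)

    return l:Rule is v:null ? '' : l:Rule()
endfunction
```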