Single command to initialize, send and end a COPY? #27
Comments
It seems a little odd to me, although I've only used COPY when I'm streaming data through, so this use case hadn't occurred to me outside of tests. In my code, I always use […]. I don't think you should share a connection among multiple processes simultaneously.
It seems my use case is a bit different: I have a pool of established connections. When picking one I try to avoid the overhead of acquiring a lock, as episcina does. Instead I rely on the ability of […]. In this context it seems odd that in order to use COPY you have to issue a command (simple_query("COPY ...")) which puts the server in a state where the rest of the API becomes meaningless unless you continue with the right sequence of commands (send_copy_data/2, send_copy_end/1). I think sharing a connection could easily be made possible.
You get the same behavior with transactions, and COPY is pretty much a special-case transaction. If your code did […], you would find that it was unreliable, because another process might come in mid-transaction and issue an error-generating command that aborts your transaction. But I think I understand: you're thinking of COPY in terms of how it works in the […] client.

What about the delay, though? Does it seem OK to you that sometimes a simple SQL command will block for a long time (because it's stuck in line behind a huge COPY) and other times it will respond instantly?
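The failure mode described above can be sketched as follows. This is illustrative only: the thread names simple_query but not its argument order, so the call shapes here are assumptions.

```erlang
%% Illustrative sketch; argument order of simple_query/2 is assumed.
%% Process A, sharing Connection with other processes:
pgsql_connection:simple_query("BEGIN", Connection),
pgsql_connection:simple_query("INSERT INTO t VALUES (1)", Connection),
%% Meanwhile process B issues, on the same Connection:
%%   pgsql_connection:simple_query("SELECT 1/0", Connection)
%% The error aborts the open transaction: every further command fails
%% with "current transaction is aborted, commands ignored until end of
%% transaction block", and A's eventual COMMIT actually rolls back.
pgsql_connection:simple_query("COMMIT", Connection).
```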
@odo ping. Any opinion on my question about delays?
The example with the transaction makes perfect sense. From my perspective this use case would also be better served by a pgsql_connection:transaction/2 command taking the connection and a list of queries as arguments. Regarding delays, I think even a simple query like "SELECT COUNT(*) FROM ..." could potentially take considerably longer than a small COPY. In my case all the queries are very similar, so round-robin works well. If you consider a scenario where you have many fast queries and some long-running ones, it makes sense to acquire a lock and have exclusive access to the connection. Considering this, I think the way it works currently makes sense for most people, and I will just continue with my fork 8).
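A minimal sketch of the transaction/2 wrapper proposed here. This function does not exist in the library, and the simple_query/2 argument order is assumed:

```erlang
%% Hypothetical API sketch; not part of pgsql_connection.
%% Runs Queries inside BEGIN ... COMMIT, rolling back on any error.
transaction(Queries, Connection) ->
    pgsql_connection:simple_query("BEGIN", Connection),
    try
        Results = [pgsql_connection:simple_query(Q, Connection)
                   || Q <- Queries],
        pgsql_connection:simple_query("COMMIT", Connection),
        {ok, Results}
    catch
        Class:Reason:Stack ->
            pgsql_connection:simple_query("ROLLBACK", Connection),
            erlang:raise(Class, Reason, Stack)
    end.
```

Note that to actually prevent interleaving from other clients, the body would have to run inside the connection process (e.g. as a single gen_server call), not in the caller.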
Unfortunately, […]
I was thinking exactly this: a bunch of queries that should be enclosed in BEGIN ... COMMIT. That way […]
Hi,
in my code I do a copy like this (Connection is the PID of a pgsql_connection process):
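The original snippet is not shown here, but from the calls named in this thread it presumably follows this three-step shape (table name, data, and argument order are illustrative):

```erlang
%% Step 1: put the server into COPY-in mode.
pgsql_connection:simple_query("COPY mytable FROM STDIN", Connection),
%% Step 2: stream one or more chunks of tab-separated rows.
pgsql_connection:send_copy_data(<<"1\t2\n3\t4\n">>, Connection),
%% Step 3: leave COPY mode; only now does the server process the data.
pgsql_connection:send_copy_end(Connection).
```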
Since the Connection is shared by several client processes, there can be some interference here, resulting in errors like "unexpected message type 0x51".
Would it make sense to wrap these three calls into one so they are always executed in sequence?
I can implement this, just wanted to check if this makes sense or if I am using it incorrectly.
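A sketch of what such a combined call could look like; the name copy/3 and the call shapes are hypothetical, not part of the library:

```erlang
%% Hypothetical wrapper running the three COPY steps back to back.
%% To rule out interference from other clients it would need to execute
%% inside the connection process as one request, not in the caller.
copy(CopySql, Rows, Connection) ->
    pgsql_connection:simple_query(CopySql, Connection),
    lists:foreach(
        fun(Row) -> pgsql_connection:send_copy_data(Row, Connection) end,
        Rows),
    pgsql_connection:send_copy_end(Connection).
```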