Libpcap API for stopping delivering new packets #1374
Comments
My first question to the audience: What should the routine be called? It could be |
Note that |
@guyharris, what about I do not know if the input buffer is also used for offline reading, but if it is, it could help to avoid focusing on the "capture" part, as that is connoted with live reading. My 2 cents. Garri |
I.e., the input buffer will be deprived of any further packets, as in "further packets will be withheld from it"? I guess that works, although 1) it's a bit of a complicated explanation of the deprivation and 2) people might not be thinking of the input buffer, they may be thinking of the entire
It isn't. When capturing, the packets arrive spontaneously, rather than being individually requested; when reading, packets are individually requested by the loop in |
@fxlb, I think the verbs drain, consume, and exhaust would describe the point of view of the program using libpcap's API as it would be the job of the program itself to drain/consume/exhaust the buffer once it is "abandoned" unless I am missing something. |
I am also thinking of those, but the buffer will not in fact be read-only, as the packets will be popped out of it. |
"The two most difficult problems in software engineering are cache invalidation and naming". On a serious note, would it fit the problem space better to provide an optional call such as |
There are cases when it is desirable to fully drain the OS-level input packet buffer before terminating a program that uses libpcap, such as tcpdump or Wireshark. For example, this can be necessary when large input buffers are used to accommodate fast input packet streams feeding slower storage devices.
Currently, when a program like tcpdump terminates, it cannot drain the input buffer to a dump file, because there is no way to first instruct libpcap to stop delivering new packets to the capturing program. As a result, there is no way to prevent the loss of packets still stored in large input buffers when the program stops. If libpcap provided such an API, the consuming program could ask for no more new packets and then safely drain the input buffer before quitting.
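One possible shape for such an API, sketched below with the hypothetical function name `pcap_stop_delivery()` (this call does not exist in libpcap; it stands in for whatever the proposed routine is eventually named). The idea, under the assumptions of this issue, is that after the call, `pcap_loop()` would keep dispatching already-buffered packets and return once the buffer is empty, instead of abandoning them as `pcap_breakloop()` does:

```c
/* HYPOTHETICAL sketch only: pcap_stop_delivery() is not a real libpcap
   function; the device name "eth0" and file "out.pcap" are placeholders. */
#include <pcap/pcap.h>
#include <signal.h>

static pcap_t *handle;

static void on_sigint(int sig) {
    (void)sig;
    /* Instead of pcap_breakloop(), which would strand buffered packets,
       ask libpcap to stop accepting *new* packets into the buffer. */
    pcap_stop_delivery(handle);   /* hypothetical API */
}

static void dump_packet(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *bytes) {
    pcap_dump(user, h, bytes);    /* write each packet to the savefile */
}

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    pcap_dumper_t *d = pcap_dump_open(handle, "out.pcap");
    signal(SIGINT, on_sigint);

    /* Assumed semantics: after delivery stops, pcap_loop() drains the
       remaining buffered packets and then returns normally. */
    pcap_loop(handle, -1, dump_packet, (u_char *)d);

    pcap_dump_close(d);
    pcap_close(handle);
    return 0;
}
```

Whether the drain happens inside `pcap_loop()` or via a separate "flush" call is exactly the naming/semantics question being discussed in the comments above.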
More details on this use case can be found in the following mailing list discussion:
https://seclists.org/tcpdump/2024/q4/10
Thank you.
Regards,
Garri