Generated code segfaults when parsing bytes with `&until` #1918
This seems to cause spicy segfaults with some big pcaps: zeek/spicy#1918
Okay, I have this narrowed down to an 8K pcap and fewer than 50 lines of code, but I am probably further from understanding the issue than I thought. I still can't share the pcap. :( First, here's the reproducing code:
Command line:
The segfault goes away for a lot of changes, some more surprising than others. Overall I'm not really sure what's going on here at all. My guess is still a use-after-free, all things considered. I can always go in and test with various changes to the file, but I don't really know if that'd get more info than "memory stuff", honestly. My next goal will be to find a pcap somewhere that will segfault with those minimal files and that I can share.
Given those files, I found some pcaps that produce the segfault. The pcaps on this site (not the first one, though) produce it for me. Just run the commands I gave before with those files, but swap the reproducer's pcap out for one of these. The one I grabbed actually doesn't segfault from the Redis analyzer on all ports, but in the debugger it's the same spot, so my guess is it's the same issue.
Here's an ASAN dump; it seems trim() removes something that's later accessed. Here's just the flow extracted from segfault.pcap that triggers it (looking at Zeek's debug.log for the last packet to be processed, then extracting just that flow, port 47013). So that's now 15 packets, hopefully a bit better :-)
This was somewhat from #1918, but it isn't the cause of the segfault. You could get into a scenario where decrementing an iterator technically went into memory freed by `trim` but was never actually used, so only do that calculation if we must.
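The "only do that calculation if we must" idea can be sketched with a small hypothetical example (all names here are invented for illustration; this is not the actual hilti runtime code): when scanning for a delimiter, the position one byte back from the scan point is only computed on the branch that actually needs it. Eagerly forming that backward position on every step is the analogue of decrementing a stream iterator into memory that trim() may already have released.

```cpp
#include <cstddef>
#include <string>

// Hypothetical sketch (invented names, not the hilti runtime code):
// find `delim` in `buf`, reporting whether the byte immediately before
// the match is '\r'. The look-back at `scan - 1` is only performed on
// the branch where a match exists and a previous byte exists, so the
// common path never touches a position that might be out of range.
std::size_t count_until(const std::string& buf, char delim, bool& saw_cr) {
    saw_cr = false;
    for ( std::size_t scan = 0; scan < buf.size(); ++scan ) {
        if ( buf[scan] != delim )
            continue;
        // Only now do we look one position back, and only if it exists.
        if ( scan > 0 )
            saw_cr = (buf[scan - 1] == '\r');
        return scan;
    }
    return std::string::npos;
}
```

The same guard structure applies to stream iterators: defer the decrement until the consuming branch, so no potentially dangling position is ever formed on paths that don't need it.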
I've gotten a little closer to solving this, but a lot of time was spent just understanding how the internals work. For the record, this is "caused" by 306d141. I went back with the sanitizer and found that this happens on one of the Redis analyzer's tests, with perfectly valid Redis traffic that is getting parsed correctly, so that might be a good place to start. It's not as minimal as the other example above. The good thing is that for this one I have more detailed logs, which I have attached. Whenever I've seen the use-after-free, it always has something along these lines from those logs right before:
Namely: going to parse the bytes, suspending to wait for more input, resuming, then trimming after finding the match. Anyway, I'm going away until Wednesday, so if anyone wants to take that and run with it, feel free. Otherwise I'll grab it when I'm back.
I'll take a look.
When trimming off the beginning of a stream, an existing iterator could end up dereferencing its internal chunk pointer even if the chunk no longer existed. The issue was inside the increment/decrement operations, which didn't check whether the *current* iterator offset was still valid (only then is the current chunk guaranteed to be valid too). TODO: a test for the original problem is still missing; need to see if I can build one. Closes #1918.
I have a potential fix in #1923. Thanks!
When trimming off the beginning of a stream, an existing iterator could end up dereferencing its internal chunk pointer even if the chunk no longer existed. The issue was inside the increment/decrement operations, which didn't check whether the *current* iterator offset was still valid (only then is the current chunk guaranteed to be valid too). Closes #1918. (cherry picked from commit 153eca3)
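The failure mode described in the commit message can be pictured with a small toy model. To be clear, everything below is invented for illustration and is not the actual hilti::rt stream implementation: a stream is a list of chunks, trim() frees whole chunks from the front, and an iterator carries an absolute offset plus a cached chunk pointer that can dangle after a trim. The fix's key idea is to validate the *offset* against the stream before ever touching the cached chunk pointer.

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <vector>

// Toy chunk: a run of bytes starting at an absolute stream offset.
struct Chunk {
    uint64_t start;
    std::vector<char> data;
    uint64_t end() const { return start + data.size(); }
};

// Toy stream: trim() releases whole chunks before `offset`.
struct Stream {
    std::list<Chunk> chunks;
    uint64_t begin_offset = 0; // everything before this has been trimmed

    void trim(uint64_t offset) {
        begin_offset = offset;
        while ( ! chunks.empty() && chunks.front().end() <= offset )
            chunks.pop_front(); // the chunk's memory is freed here
    }

    const Chunk* chunkAt(uint64_t offset) const {
        for ( const auto& c : chunks )
            if ( offset >= c.start && offset < c.end() )
                return &c;
        return nullptr;
    }
};

// Toy iterator: the cached chunk pointer may dangle after trim(), so
// deref() checks the offset first and re-resolves the chunk. Touching
// `cached` unconditionally would be the analogue of the bug.
struct SafeIterator {
    const Stream* stream;
    uint64_t offset;
    const Chunk* cached = nullptr;

    char deref() {
        assert(offset >= stream->begin_offset && "iterator expired by trim()");
        cached = stream->chunkAt(offset);
        assert(cached && "offset not contained in any live chunk");
        return cached->data[offset - cached->start];
    }
};
```

Validating the offset is the right check because a dangling chunk pointer, or a byte read through one, can still look like plausible non-null data; only the offset-versus-stream comparison reliably says whether the iterator still points at live content.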
I can't actually provide a reproducer - the one I have is with production data and I haven't been able to narrow it down. But I do have some info.
For context, I'm getting this when running the Redis analyzer on all TCP ports 1-10000 for some big pcaps. Not an everyday use case, but I wouldn't expect a segfault :)
The segfault occurs here:
spicy/hilti/runtime/src/types/stream.cc, line 342 (at 99049d8),
where that function is called when parsing `RedisBytes`, namely in the `__parse_RESP__RedisBytes_6_stage2` function. The segfault is here when dereferencing `i`. Note that it does check `byte`, but the value is not `nullptr`. In fact, when I inspect `i->_chunk->_data` in the debugger, it is also invalid memory ("error: Cannot access memory at address 0xXX"). `first` in this case is `\r`, which is the first byte of the `&until` clause. I removed synchronization and still get the segfault.
This was from a `find` call in the parser code (when searching for the `__until_bytes`).
That's basically all I have for now. Maybe it's a use-after-free of some sort?
I'll continue to try to narrow this down a bit, but for now it may be useful to have input. I've had weird issues with spicy segfaults on rebuilds in the past that get magically fixed when I rebuild, but this seems to be different (I've deleted & reinstalled many times and rebuilt Zeek in debug).