
Netsim error: Expected header frame #2399

Closed

dignifiedquire opened this issue Jun 24, 2024 · 7 comments

@dignifiedquire (Contributor)

[INFO] Log file: logs/iroh__1_to_10__iroh_get_2.txt
[INFO][iroh__1_to_10__iroh_get_2] cmd: time ./bins/iroh blob get --start blobaceijw4w2gduhkl6ojtp5rrnrkim4mqdne4yxhnlfgiuxz7nvk5uoajdnb2hi4dthixs65ltmuys2mjoojswyylzfzuxe33ifzxgk5dxn5zgwlrpaeaauaaabpcfoaclq2d7pvok4kc7sq2fym3zdbm6uoum3bgd5tgnrfdqtcbzlff7d4 --out STDOUT > /dev/null
[INFO][iroh__1_to_10__iroh_get_2] 
[INFO][iroh__1_to_10__iroh_get_2] Iroh is running
[INFO][iroh__1_to_10__iroh_get_2] Node ID: 6p2qtgd4bst4ym3u4ckkhzascazo4cyvqx6e7elxjvscp7sop5vq
[INFO][iroh__1_to_10__iroh_get_2] 
[INFO][iroh__1_to_10__iroh_get_2] Fetching: jodip56vzlril6kdixbtpemft2r2rtmeypwmzweuocmihfmux4pq
[INFO][iroh__1_to_10__iroh_get_2] Transferred 1004.30 MiB in 20 seconds, 50.71 MiB/s
[INFO][iroh__1_to_10__iroh_get_2] Error: Expected header frame
[INFO][iroh__1_to_10__iroh_get_2] 

Error source: https://github.com/n0-computer/iroh/blob/main/iroh/src/client/blobs.rs#L801

@dignifiedquire dignifiedquire added the bug Something isn't working label Jun 24, 2024
@dignifiedquire dignifiedquire added this to the v0.19.0 milestone Jun 24, 2024
@matheus23 (Contributor)

First commit we've seen this happen on in CI: f73c506
(not necessarily the cause - it's a somewhat random error)

@matheus23 (Contributor)

Here's the link to one of the netsim jobs failing: https://github.com/n0-computer/iroh/actions/runs/9585622113/job/26431903382

@Frando (Member) commented Jun 24, 2024

So, going through the flow once end-to-end:

  • netsim runs ./bins/iroh blob get --start %s --out STDOUT > /dev/null. This does the following, starting from iroh-cli/src/main.rs:
  • Starts an iroh node
  • then executes BlobCommands::Get in a task spawned on node.local_pool_handle:
  • .. which first runs the download (which completes without errors) and then goes to
let mut blob_read = iroh.blobs().read(hash).await?;
tokio::io::copy(&mut blob_read, &mut tokio::io::stdout()).await?;
  • which sometimes fails with the mentioned error
  • there's no reason the node should shut down at that place, unless Ctrl-C is invoked:
// iroh-cli/src/commands/start.rs:100
    tokio::select! {
        biased;
        // always abort on ctrl-c
        _ = tokio::signal::ctrl_c(), if run_type != RunType::SingleCommandNoAbort => {
            command_task.abort();
            node.shutdown().await?;
        }
        // abort if the command task finishes (will run forever if not in single-command mode)
        res = &mut command_task => {
            let _ = node.shutdown().await;
            res??;
        }
    }

Note how the shutdown clearly happens only after the command task has completed.

When the read RPC call comes in, a new task for the RPC call is spawned in the JoinSet in NodeInner::run. It goes to RpcHandler::blob_read_at, which spawns a new task on the node.local_pool_handle() for the actual read_loop:

self.inner.rt.spawn_pinned(move || async move {
    if let Err(err) = read_loop(req, db, tx.clone(), RPC_BLOB_GET_CHUNK_SIZE).await {
        tx.send_async(RpcResult::Err(err.into())).await.ok();
    }
});

The JoinHandle for the read_loop task is dropped immediately, so that task runs unsupervised.
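
A minimal standalone sketch (not iroh code) of what that means in tokio: dropping a JoinHandle detaches the task rather than aborting it, so nothing observes its completion or failure:

use std::time::Duration;

#[tokio::main]
async fn main() {
    // Dropping a JoinHandle detaches the task instead of aborting it;
    // it keeps running, and nobody observes its result (or its panic).
    let handle = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(50)).await;
        println!("detached task still ran to completion");
    });
    drop(handle);

    // Keep the runtime alive long enough for the detached task to finish.
    tokio::time::sleep(Duration::from_millis(100)).await;
}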

Now I'm reading the source code of LocalPoolHandle: it starts a thread for each CPU. Each thread creates a tokio LocalSet and calls block_on on that local set, with a loop that runs until an UnboundedReceiver stream ends (here's the loop, and the unbounded channel is created here). The sender for that channel is stored in the LocalPoolHandle, so the channel will be closed once the LocalPoolHandle is dropped.

However! This is an unbounded channel, so dropping the LocalPoolHandle does not immediately cancel the worker thread; it keeps processing the queued futures, I think?
Which would mean that once the node is shut down and NodeInner is dropped, there could still be RPC futures running on the local pool?
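
As a quick illustration of those channel semantics (a sketch of the worker pattern described above, with plain closures standing in for futures — not the actual LocalPoolHandle code):

use std::thread;
use tokio::sync::mpsc;
use tokio::task::LocalSet;

type Job = Box<dyn FnOnce() + Send + 'static>;

// A stand-in for the worker described above: a dedicated thread that
// runs a LocalSet, draining jobs from an unbounded channel.
fn spawn_worker() -> mpsc::UnboundedSender<Job> {
    let (tx, mut rx) = mpsc::unbounded_channel::<Job>();
    thread::spawn(move || {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap();
        // The loop only ends when recv() returns None, i.e. when the
        // sender (the pool handle) is dropped AND the backlog is drained.
        LocalSet::new().block_on(&rt, async move {
            while let Some(job) = rx.recv().await {
                job();
            }
        });
    });
    tx
}

fn main() {
    let handle = spawn_worker();
    for i in 0..3 {
        handle.send(Box::new(move || println!("running queued job {i}"))).unwrap();
    }
    // Dropping the handle closes the channel, but the worker still runs
    // all three queued jobs before its recv loop sees None and exits.
    drop(handle);
    thread::sleep(std::time::Duration::from_millis(100));
}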

But I still can't see why we would start shutdown before the command finishes. So this is maybe something to improve, but likely not the cause of the bug at hand.

@matheus23 (Contributor)

From @rklaehn, two log cases, one with the bug occurring and one without:
With Bug:

I ran some more sims. Here is one run (presumably the failing one), where the loop exits due to the cancel token being cancelled...
[INFO] Log file: logs/iroh__1_to_10-1__iroh_get_4.txt
[INFO][iroh__1_to_10-1__iroh_get_4] cmd: time RUST_LOG=iroh_bytes=debug,iroh::node=debug ./bins/iroh blob get --start blobadwh6ljnfvpajdqmza62kvqf5ag7qilmshftxmjbb2otpbsv6baikajdnb2hi4dthixs65ltmuys2mjoojswyylzfzuxe33ifzxgk5dxn5zgwlrpaeaauaaabpcfoaclq2d7pvok4kc7sq2fym3zdbm6uoum3bgd5tgnrfdqtcbzlff7d4 --out STDOUT > /dev/null
[INFO][iroh__1_to_10-1__iroh_get_4] 
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:02.468482Z DEBUG node{me=iln3e7xoujdayhpd}: iroh::node: listening at: 0.0.0.0:11204 and [::]:11205
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:02.468498Z DEBUG node{me=iln3e7xoujdayhpd}: iroh::node: rpc listening at: [Socket(127.0.0.1:4919)]
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:02.468510Z DEBUG node{me=iln3e7xoujdayhpd}: iroh::node: gossip initial update: [DirectAddr { addr: 10.0.0.5:11204, typ: Local }] me=PublicKey(iln3e7xoujdayhpd)
[INFO][iroh__1_to_10-1__iroh_get_4] Iroh is running
[INFO][iroh__1_to_10-1__iroh_get_4] Node ID: iln3e7xoujdayhpdqfov2dkni7bdsut5pazzcndcognq4hlxbf6a
[INFO][iroh__1_to_10-1__iroh_get_4] 
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:02.468588Z DEBUG iroh::node::rpc: handling rpc request: AuthorGetDefault
[INFO][iroh__1_to_10-1__iroh_get_4] Fetching: jodip56vzlril6kdixbtpemft2r2rtmeypwmzweuocmihfmux4pq
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:02.468630Z DEBUG iroh::node::rpc: handling rpc request: BlobDownload
[INFO][iroh__1_to_10-1__iroh_get_4] Transferred 1004.30 MiB in 13 seconds, 78.44 MiB/s
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:15.277094Z DEBUG node{me=iln3e7xoujdayhpd}: iroh::node: cancel_token cancelled
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:15.277108Z DEBUG node{me=iln3e7xoujdayhpd}: iroh::node: end of node run loop - shutdown
[INFO][iroh__1_to_10-1__iroh_get_4] 2024-06-25T06:16:15.425386Z  WARN iroh::node: failed to retrieve local endpoints
[INFO][iroh__1_to_10-1__iroh_get_4] Error: Expected header frame, but RPC stream was dropped
[INFO][iroh__1_to_10-1__iroh_get_4] 
[INFO][iroh__1_to_10-1__iroh_get_4] real 0m16.353s
[INFO][iroh__1_to_10-1__iroh_get_4] user 0m5.112s
[INFO][iroh__1_to_10-1__iroh_get_4] sys 0m3.137s

Without Bug:

[INFO] Log file: logs/iroh__1_to_10-1__iroh_get_5.txt
[INFO][iroh__1_to_10-1__iroh_get_5] cmd: time RUST_LOG=iroh_bytes=debug,iroh::node=debug ./bins/iroh blob get --start blobadwh6ljnfvpajdqmza62kvqf5ag7qilmshftxmjbb2otpbsv6baikajdnb2hi4dthixs65ltmuys2mjoojswyylzfzuxe33ifzxgk5dxn5zgwlrpaeaauaaabpcfoaclq2d7pvok4kc7sq2fym3zdbm6uoum3bgd5tgnrfdqtcbzlff7d4 --out STDOUT > /dev/null
[INFO][iroh__1_to_10-1__iroh_get_5] 
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:02.563087Z DEBUG node{me=krc4zxntwptchp2t}: iroh::node: listening at: 0.0.0.0:11204 and [::]:11205
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:02.563104Z DEBUG node{me=krc4zxntwptchp2t}: iroh::node: rpc listening at: [Socket(127.0.0.1:4919)]
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:02.563116Z DEBUG node{me=krc4zxntwptchp2t}: iroh::node: gossip initial update: [DirectAddr { addr: 10.0.0.6:11204, typ: Local }] me=PublicKey(krc4zxntwptchp2t)
[INFO][iroh__1_to_10-1__iroh_get_5] Iroh is running
[INFO][iroh__1_to_10-1__iroh_get_5] Node ID: krc4zxntwptchp2todbrecnunkc4wldi4cvsixqh7ympaqu6m4aq
[INFO][iroh__1_to_10-1__iroh_get_5] 
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:02.563204Z DEBUG iroh::node::rpc: handling rpc request: AuthorGetDefault
[INFO][iroh__1_to_10-1__iroh_get_5] Fetching: jodip56vzlril6kdixbtpemft2r2rtmeypwmzweuocmihfmux4pq
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:02.563240Z DEBUG iroh::node::rpc: handling rpc request: BlobDownload
[INFO][iroh__1_to_10-1__iroh_get_5] Transferred 1004.30 MiB in 13 seconds, 76.28 MiB/s
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:15.735978Z DEBUG iroh::node::rpc: handling rpc request: BlobReadAt
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:16.813455Z DEBUG node{me=krc4zxntwptchp2t}: iroh::node: cancel_token cancelled
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:16.813470Z DEBUG node{me=krc4zxntwptchp2t}: iroh::node: end of node run loop - shutdown
[INFO][iroh__1_to_10-1__iroh_get_5] 2024-06-25T06:16:16.813555Z  WARN iroh::node: failed to retrieve local endpoints
[INFO][iroh__1_to_10-1__iroh_get_5] 
[INFO][iroh__1_to_10-1__iroh_get_5] real 0m17.432s
[INFO][iroh__1_to_10-1__iroh_get_5] user 0m5.839s
[INFO][iroh__1_to_10-1__iroh_get_5] sys 0m3.779s

It seems the issue is caused by cancel_token cancelled happening before the BlobReadAt request is handled.

@rklaehn (Contributor) commented Jun 25, 2024

When the polling of the JoinSet is removed, the issue seems to go away. We probably don't want that change, but it is an interesting data point.

#2406

@rklaehn (Contributor) commented Jun 26, 2024

I think I found it.

The issue is that fn accept() in quic-rpc is not cancel-safe. It does two things: first it accepts a stream pair, then it awaits the first message on that stream pair.

Now if this future is dropped after it has accepted a stream pair but before it has received the first message, the stream pair is lost. To the caller this looks as if the stream was never answered at all.
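
To make the mechanism concrete, here is a minimal sketch (stand-in types and channels, not the actual quic-rpc or iroh code) of a two-await accept future raced against JoinSet::join_next() in a select! loop:

use tokio::sync::{mpsc, oneshot};
use tokio::task::JoinSet;

// Stand-in for an accepted stream pair whose first message arrives later.
struct StreamPair {
    first_msg: oneshot::Receiver<String>,
}

// NOT cancel-safe: two await points. If the future is dropped after the
// first await but before the second, `pair` is dropped with it and the
// peer sees its stream vanish.
async fn accept(conn_rx: &mut mpsc::Receiver<StreamPair>) -> Option<String> {
    let pair = conn_rx.recv().await?; // await #1: accept the stream pair
    pair.first_msg.await.ok()         // await #2: read the first message
}

async fn run_loop(mut conn_rx: mpsc::Receiver<StreamPair>, mut tasks: JoinSet<()>) {
    loop {
        tokio::select! {
            req = accept(&mut conn_rx) => {
                let Some(req) = req else { break };
                println!("handling request: {req}");
            }
            // Whenever a task finishes, this branch wins, the accept
            // future above is dropped (and recreated next iteration),
            // and any stream pair accepted but not yet read is lost.
            Some(_) = tasks.join_next() => {}
        }
    }
}

#[tokio::main]
async fn main() {
    let (conn_tx, conn_rx) = mpsc::channel(8);
    let mut tasks = JoinSet::new();
    tasks.spawn(async {}); // a task that finishes immediately

    // Deliver a stream pair whose first message arrives "late".
    let (msg_tx, msg_rx) = oneshot::channel();
    conn_tx.send(StreamPair { first_msg: msg_rx }).await.unwrap();
    tokio::spawn(run_loop(conn_rx, tasks));

    // Depending on poll order, accept may already hold the pair when
    // join_next wins; the pair is then dropped and this send fails --
    // mirroring how the netsim failure only happens sometimes.
    tokio::time::sleep(std::time::Duration::from_millis(50)).await;
    if msg_tx.send("hello".into()).is_err() {
        println!("stream pair was lost mid-accept");
    }
}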

Here is a log of a failure with extended custom logging in quic-rpc:

[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.606899Z DEBUG node{me=f6wu7n7hupgz6kxh}: iroh::node: rpc listening at: [Socket(127.0.0.1:4919)]
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.606956Z DEBUG node{me=f6wu7n7hupgz6kxh}: iroh::node: gossip initial update: [DirectAddr { addr: 10.0.0.3:11204, typ: Local }] me=PublicKey(f6wu7n7hupgz6kxh)
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607000Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: wait for tick
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607037Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 705270839754053100 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607064Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 1207704470532202953 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607190Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 1207704470532202953 accepted new channel - awaiting first message
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607273Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 1207704470532202953 received first message
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607299Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: tick: internal_rpc
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607332Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: wait for tick
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607362Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 11717672961806680036 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607385Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 11186712929159262393 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607491Z DEBUG iroh::node::rpc: handling rpc request: AuthorGetDefault
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607642Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: tick: join_set.join_next
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607663Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: wait for tick
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607695Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 5123050420266117334 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607735Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 17279674686042460736 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607735Z DEBUG quic_rpc::pattern::server_streaming: opening connection
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607815Z DEBUG quic_rpc::pattern::server_streaming: sending message
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607818Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 17279674686042460736 accepted new channel - awaiting first message
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607855Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 17279674686042460736 received first message
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607877Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: tick: internal_rpc
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607879Z DEBUG quic_rpc::pattern::server_streaming: send successful. returning lazy response stream
[INFO][iroh__2_to_10-2__iroh_get_2] Fetching: ii5uik6un7s2ufzeztkblvyijntcg6cabqxcafoigbish6h6xm6q
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607898Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: wait for tick
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607972Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 10773263779823536727 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607997Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 6996408850723385803 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.608031Z DEBUG iroh::node::rpc: handling rpc request: BlobDownload
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.608053Z DEBUG quic_rpc::pattern::server_streaming: got a server streaming request
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.608244Z DEBUG iroh_blobs::store::fs: starting read transaction
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.637741Z DEBUG iroh_blobs::get::fsm: sending request
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.673236Z DEBUG iroh_blobs::store::fs: done with read transaction
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.673272Z DEBUG iroh_blobs::store::fs: starting write transaction
[INFO][iroh__2_to_10-2__iroh_get_2] Transferred 1.00 MiB in 0 seconds, 17.20 MiB/s
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696119Z DEBUG iroh_blobs::store::fs: inserting complete entry for 423b442bd46fe5aa1724ccd415d7084b662378400c2e2015c8305123f8febb3d, 1048576 bytes
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696371Z DEBUG quic_rpc::pattern::server_streaming: opening connection
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696485Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 6996408850723385803 accepted new channel - awaiting first message
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696479Z DEBUG quic_rpc::pattern::server_streaming: sending message
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696539Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: tick: join_set.join_next
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696557Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: wait for tick
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696574Z DEBUG quic_rpc::pattern::server_streaming: send successful. returning lazy response stream
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696586Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 55648562167519244 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696630Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 12833161595167831623 accepting new channel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696903Z TRACE node{me=f6wu7n7hupgz6kxh}: iroh::node: tick: cancel
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696952Z DEBUG node{me=f6wu7n7hupgz6kxh}: iroh::node: node shutdown services: start
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.697508Z DEBUG iroh_blobs::store::fs::tables: deleting Outboard for ii5uik6un7s2ufzeztkblvyijntcg6cabqxcafoigbish6h6xm6q
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.697611Z DEBUG iroh_blobs::store::fs::tables: deleting Sizes for ii5uik6un7s2ufzeztkblvyijntcg6cabqxcafoigbish6h6xm6q
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.697655Z DEBUG iroh_blobs::store::fs: write transaction committed
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.720731Z DEBUG iroh_blobs::store::fs: redb actor done
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.983831Z  WARN iroh::node: failed to retrieve local endpoints
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:23.085061Z DEBUG node{me=f6wu7n7hupgz6kxh}: iroh::node: node shutdown services: done
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:23.085158Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::transport::quinn: Dropping server endpoint
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:23.085341Z DEBUG iroh::node: node shutdown complete
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:23.085479Z DEBUG downloader{me=f6wu7n7hupgz6kxh}: iroh_blobs::downloader: shutting down
[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:23.085662Z TRACE iroh::node::rpc_status: clearing RPC lock: /tmp/netsim2ed0k34siroh__2_to_10-2_iroh_get_2/rpc.lock
[INFO][iroh__2_to_10-2__iroh_get_2] Error: Expected header frame, but RPC stream was dropped

Here is how it fails:

We are awaiting a new stream pair:

[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.607997Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 6996408850723385803 accepting new channel

then we await the first message on the stream pair:

[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696485Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 6996408850723385803 accepted new channel - awaiting first message

then the future gets dropped and recreated:

[INFO][iroh__2_to_10-2__iroh_get_2] 2024-06-26T10:35:22.696586Z DEBUG node{me=f6wu7n7hupgz6kxh}: quic_rpc::server: 55648562167519244 accepting new channel

and the stream pair is gone. The client side sees the stream pair dropped and produces the error:

[INFO][iroh__2_to_10-2__iroh_get_2] Error: Expected header frame, but RPC stream was dropped
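
For what it's worth, one common way to make this shape of code cancel-safe (a sketch reusing the stand-ins from the earlier example, not necessarily what the actual fix does): keep only the cancel-safe first await inside the select!, and move the second await into a spawned task that owns the stream pair:

// Only the cancel-safe step stays inside the select! branch:
// mpsc::Receiver::recv is documented as cancel-safe, so dropping this
// future cannot lose an accepted stream pair.
async fn accept_pair(conn_rx: &mut mpsc::Receiver<StreamPair>) -> Option<StreamPair> {
    conn_rx.recv().await
}

async fn run_loop_fixed(mut conn_rx: mpsc::Receiver<StreamPair>, mut tasks: JoinSet<()>) {
    loop {
        tokio::select! {
            pair = accept_pair(&mut conn_rx) => {
                let Some(pair) = pair else { break };
                // The second await moves into its own task: even when the
                // select! branch is cancelled, the stream pair is owned
                // by the task and survives until the message arrives.
                tasks.spawn(async move {
                    if let Ok(first) = pair.first_msg.await {
                        println!("handling request: {first}");
                    }
                });
            }
            Some(_) = tasks.join_next() => {}
        }
    }
}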

@dignifiedquire (Contributor, Author)

Fixed in #2416.

github-project-automation bot moved this to ✅ Done in iroh on Jun 26, 2024