Build a “property testing” style integration test runner that generates a random set of events to apply to an in memory model and the snapshot cache, and then compare Envoy to the in memory model.
We need to support four cases:
SotW (state-of-the-world) with single streams
SotW with aggregated streams
Delta with single streams
Delta with aggregated streams
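The four cases are really a 2x2 matrix: protocol variant (SotW vs. delta) crossed with stream layout (one stream per resource type vs. a single aggregated stream). A minimal Go sketch of how the harness could enumerate them, with all names hypothetical:

```go
package main

import "fmt"

// Mode captures the two axes of the four cases: the xDS protocol variant
// (state-of-the-world vs. delta) and whether resource types share one
// aggregated stream or get a stream apiece.
type Mode struct {
	Delta      bool // false = SotW, true = delta xDS
	Aggregated bool // false = one stream per resource type, true = ADS-style aggregated stream
}

// AllModes enumerates the four combinations the harness must cover.
func AllModes() []Mode {
	var modes []Mode
	for _, d := range []bool{false, true} {
		for _, a := range []bool{false, true} {
			modes = append(modes, Mode{Delta: d, Aggregated: a})
		}
	}
	return modes
}

func main() {
	for _, m := range AllModes() {
		fmt.Printf("%+v\n", m)
	}
}
```

Running every generated event sequence once per mode keeps the four cases from drifting apart in coverage.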
The tool will let you run any of the above cases for either N random events or until the harness is manually stopped (which is useful for perf testing). The harness will generate and display a seed so that a failing run is reproducible.
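The seed handling could be as simple as the following sketch (the function name is hypothetical): accept an explicit seed for replays, derive one from the clock otherwise, and print whichever was chosen before any events are generated.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// newRun returns a seeded RNG for the event generator. If seed is zero, a
// fresh seed is derived from the clock; the chosen seed is returned so the
// harness can print it up front for reproducing a failed run.
func newRun(seed int64) (*rand.Rand, int64) {
	if seed == 0 {
		seed = time.Now().UnixNano()
	}
	return rand.New(rand.NewSource(seed)), seed
}

func main() {
	rng, seed := newRun(0)
	fmt.Printf("running with seed %d\n", seed)
	_ = rng // handed to the event generator
}
```

Printing the seed before the run starts matters: if the harness crashes mid-run, the seed is still in the log.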
Preferably we’d like to support all resource types, but just clusters and endpoints would be a good start.
The main test loop will look something like:
Generate N random events
Apply them to the model
Apply them to the cache
Compare Envoy to the model
As far as event types:
We must support add, remove, and update of resources.
It would be cool to support forcibly terminating the connection and/or restarting Envoy.
I’d also like to support testing ADS invariants and sending invalid config, but I’m not exactly sure how best to model that.
Otherwise, it would be cool to get the harness to spawn 1-N Envoy clients so we can use the same code for perf testing.