Fix an issue where all lldp entries take some time to appear in the DB after reboot in a scaling setup. #15731
Description of PR
In a scaling setup (34K BGP routes), the test_lldp_entry_table_after_reboot test may fail because the lldp table entries in the DB and those reported by `show` are out of sync. Further analysis shows that after a reboot, the full set of lldp entries takes longer to come back into the DB: entries arrive one by one, and it takes a few seconds before all the data is there.
The current test reboots the device, checks that all critical services are up and all admin-up ports are back online, and then begins the lldp entry check. This can be too early, since lldp packets arrive one by one and lldp entries are written to the DB one by one; the subsequent lldp queries can therefore be out of sync.
The solution is to record how many lldp entries are in the DB before the reboot. After the reboot, the test waits until all of those lldp entries are back in the DB before running further queries.
With this check, the test passes on the scaling setup.
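The wait-for-entries approach described above can be sketched as a simple polling helper. This is a minimal illustration, not the actual test code: `count_lldp_entries` and `reboot` are hypothetical stand-ins for whatever the test uses to query the DB and reboot the DUT.

```python
import time


def wait_until_count(get_count, expected, timeout_s=60, interval_s=2):
    """Poll get_count() until it reports at least `expected` entries,
    or the timeout expires. Returns True if the target was reached."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_count() >= expected:
            return True
        time.sleep(interval_s)
    return get_count() >= expected


# Hypothetical usage in the test flow:
#   expected = count_lldp_entries(duthost)   # snapshot before reboot
#   reboot(duthost)
#   assert wait_until_count(lambda: count_lldp_entries(duthost), expected)
```

Polling with a deadline (rather than a fixed sleep) keeps the test fast on small setups while still tolerating the slower, one-by-one entry arrival seen on the scaling setup.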
Summary:
Fixes # (issue)
Type of change
Back port request
Approach
What is the motivation for this PR?
Fix test failure
How did you verify/test it?
Ran the lldp syncd OC tests with the fix and did not see the issue.