chore: shared staking queries #212
Conversation
```
# Conflicts:
#	x/bundles/keeper/keeper_suite_invalid_bundles_test.go
#	x/bundles/keeper/logic_bundles.go
#	x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go
#	x/bundles/keeper/logic_round_robin.go
#	x/bundles/keeper/logic_round_robin_test.go
#	x/bundles/types/expected_keepers.go
#	x/stakers/keeper/exported_functions.go
#	x/stakers/keeper/getters_params.go
#	x/stakers/keeper/getters_stake_fraction.go
#	x/stakers/keeper/getters_valaccount.go
#	x/stakers/keeper/logic_stakers.go
#	x/stakers/keeper/msg_server_join_pool.go
#	x/stakers/keeper/msg_server_join_pool_test.go
#	x/stakers/keeper/msg_server_update_stake_fraction_test.go
#	x/stakers/types/events.pb.go
```
Walkthrough
This pull request introduces a comprehensive overhaul of the staking and querying mechanisms in the Kyve protocol. The changes primarily focus on transitioning from a delegation-based model to a more nuanced stake-based approach, introducing new concepts like stake fractions, voting power caps, and more detailed validator representations. The modifications span multiple components, including protocol buffers, keeper logic, test suites, and query handlers, reflecting a holistic redesign of how stakers and pools interact.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Staker
    participant Pool
    participant StakersKeeper
    participant ValidatorRegistry
    Staker->>Pool: Join Pool
    Pool->>StakersKeeper: Calculate Effective Stake
    StakersKeeper->>ValidatorRegistry: Retrieve Validator Info
    ValidatorRegistry-->>StakersKeeper: Return Validator Details
    StakersKeeper->>StakersKeeper: Apply Voting Power Cap
    StakersKeeper-->>Pool: Return Effective Stake
```
Actionable comments posted: 8
🧹 Nitpick comments (28)
x/bundles/keeper/logic_round_robin_test.go (2)
25-26: Keep the documentation consistent with other comments.
These added lines accurately describe the new test cases. Consider also updating any related sections in this file or elsewhere to reflect these new scenarios.
Line range hint 40-49: Consider validating stakeFraction in tests for boundary conditions.
This method effectively abstracts joining a pool with a custom stake fraction. However, you may want to add a test case or assertion ensuring that stakeFraction is within [0, 1], reflecting realistic scenarios for fractional stake.
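A minimal sketch of such a boundary check, assuming fractions are validated with `cosmossdk.io/math`'s LegacyDec as elsewhere in the codebase (the test name and table are illustrative, not the suite's actual tests):

```go
package keeper_test

import (
	"testing"

	"cosmossdk.io/math"
)

// TestStakeFractionBounds is a hypothetical boundary-condition check: a valid
// stake fraction must satisfy 0 <= fraction <= 1.
func TestStakeFractionBounds(t *testing.T) {
	cases := map[string]bool{
		"0":    true,  // lower bound is allowed
		"0.5":  true,  // interior value
		"1":    true,  // upper bound is allowed
		"-0.1": false, // negative fractions are invalid
		"1.1":  false, // fractions above 100% are invalid
	}
	for in, want := range cases {
		f := math.LegacyMustNewDecFromStr(in)
		got := !f.IsNegative() && f.LTE(math.LegacyOneDec())
		if got != want {
			t.Errorf("fraction %s: got valid=%v, want %v", in, got, want)
		}
	}
}
```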
x/stakers/keeper/logic_stakers.go (2)

26-28: Update function documentation to reflect stake terminology.
The code changes correctly implement the transition from delegation to stake calculations, but the function's documentation still refers to "self-delegation + delegation". Consider updating it to align with the new terminology:

```diff
-// getLowestStaker returns the staker with the lowest total stake
-// (self-delegation + delegation) of a given pool.
+// getLowestStaker returns the staker with the lowest total stake of a given pool.
```
Line range hint 1-58: Consider performance and concurrency implications.
The current implementation has two architectural considerations:

- Performance: `getLowestStaker` iterates through all stakers on each call. For pools with many stakers, this could become a bottleneck.
- Concurrency: The implementation should be reviewed for race conditions when multiple stakers attempt to join a pool simultaneously.

Consider these improvements:

- Maintain a sorted index of stakers by stake amount for O(1) lowest-staker lookup (sketched below)
- Implement proper transaction ordering or locking mechanisms for concurrent pool joins
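A minimal sketch of the sorted-index idea, assuming a hypothetical secondary KVStore index keyed by (poolId, stake, staker); the key layout below is an assumption, not the module's actual schema:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// stakeIndexKey builds a hypothetical secondary-index key that sorts by pool,
// then by ascending stake amount. A prefix iterator over a pool's keys would
// then yield the lowest-stake staker first, replacing the linear scan in
// getLowestStaker.
func stakeIndexKey(poolId, stake uint64, staker string) []byte {
	key := make([]byte, 0, 16+len(staker))
	key = binary.BigEndian.AppendUint64(key, poolId) // pool prefix
	key = binary.BigEndian.AppendUint64(key, stake)  // big-endian preserves numeric order
	return append(key, staker...)
}

func main() {
	a := stakeIndexKey(0, 100, "staker_a")
	b := stakeIndexKey(0, 250, "staker_b")
	// Lexicographic key order matches numeric stake order.
	fmt.Println(bytes.Compare(a, b) < 0) // true
}
```

The index would need updating on every stake change, so this trades write overhead for the O(1) lookup mentioned above.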
x/query/keeper/helper.go (1)
11-11: Validator retrieval.
Although the code discards the boolean return, consider logging or handling cases where the validator might not be found.
x/stakers/keeper/exported_functions.go (1)

50-76: GetPaginatedStakersByPoolStake.
• Renamed method clarifies the shift from delegation-based logic to stake-based logic.
• Sorting stakers in descending stake order is beneficial for queries, but watch out for performance if the list is large.
x/stakers/keeper/keeper_suite_effective_stake_test.go (8)

62-104: Handle potential boundary conditions.
In the test "Test effective stake with all validators below the max pool voting power", your coverage is solid. However, consider adding an assertion to confirm each validator's stake fraction or the logic behind stake fraction rounding, if any.
106-146: Explain the "one validator above the max pool voting power" scenario more clearly.
The test name is correct, but the test scenario effectively checks a cap on stake and adjusts it from 250 to 200 for staker_1. A brief in-code comment clarifying how the 0.5 fraction led to 200 would improve maintainability.
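One plausible reading of the arithmetic (the 200 held by the remaining validators is an assumption, not taken from the test): with the pool cap at 0.5, staker_1's effective stake s must satisfy s / (s + 200) <= 0.5, which caps s at 200 and explains the adjustment from 250 down to 200.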
236-276: Check if zero-delegation validators are feasible in production.
Zero-delegation or zero-stake validators could complicate liveness assumptions. Confirm if these scenarios make sense beyond test coverage or if special checks are required in production code.

Would you like a new GitHub issue to explore constraints or guardrails around zero delegation?
278-314: Edge cases with all zero delegations might hamper network security.
When all delegations are zero, the code gracefully handles the scenario, but real-world usage might degrade network security. Document or log a warning if this scenario arises in production.
358-380: Check boundary condition at 100% stake.
The test ensures no stake is slashed. It's good coverage. Consider adding negative test coverage to confirm the boundary holds at 100% but breaks at values slightly above 1.0.
426-466: Utility for complicated fraction math.
The code reduces a stake from 300 to 200 due to the fraction 0.9. Repeated fraction calculations can be prone to float inaccuracies; a standard library or big.Int approach might be safer if the scale grows.
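A short sketch of the deterministic-decimal alternative using `cosmossdk.io/math` (this only shows the fraction multiplication, not the voting-power cap the test also applies):

```go
package main

import (
	"fmt"

	"cosmossdk.io/math"
)

func main() {
	stake := math.NewInt(300)
	fraction := math.LegacyMustNewDecFromStr("0.9")

	// MulInt keeps full decimal precision; TruncateInt64 rounds toward zero,
	// giving the deterministic behavior consensus code needs (no float64).
	effective := fraction.MulInt(stake).TruncateInt64()
	fmt.Println(effective) // 270
}
```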
468-512: Monitor multi-validator high cap scenario.
When multiple validators exceed the threshold, we confirm final stake is distributed among them. The test is valid. Double-check events or logs to help with debugging if unexpected stake distribution arises.
556-592: Add negative test coverage.
"All validators having zero delegation due to stake fractions" is well tested. Add a test to ensure partial fractions that sum to < 1 also produce correct partial stakes across all validators.
x/bundles/keeper/keeper_suite_invalid_bundles_test.go (8)

26-27: Clarify the stake fraction role in invalid bundles.
Add a descriptive comment explaining how stake fraction influences bundle validity checks, so future maintainers quickly grasp this relationship.
276-295: Confirm foreign delegation logic.
The new code delegates to staker_0 and staker_1 from external addresses. Check if reward splits and slashing are working as intended for external delegators in all edge cases.

324-325: Double-check commit timing.
We commit after 60 seconds. In real block time, only block boundaries matter, not seconds. This may be a test or conceptual approach for guaranteed updates, but ensure it aligns with production block intervals.

475-493: Check side effects of partial delegations.
When staker_1 and staker_2 have foreign delegations, ensure the slash logic and reward logic remain accurate if partial delegators exist who might not vote.

540-541: Block intervals vs seconds.
Similar to the previous comment. Re-verify that the 60-second commit in the test is consistent with how actual blocks flow. Possibly rename to something like "CommitAfterBlocks(1)" for clarity.

716-727: Refactor repeated code for staker creation and delegation.
Creating validators and joining pools appears multiple times. A small helper in test code could keep logic consistent and reduce duplication; one possible shape is sketched below.
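A possible shape for such a helper, shown against a minimal stub; every identifier here is hypothetical and only illustrates the table-driven consolidation:

```go
package main

import "fmt"

// suite is a stand-in for the keeper test suite used in this file.
type suite struct{}

func (s *suite) createValidator(staker string, amount uint64) { fmt.Println("create", staker, amount) }
func (s *suite) joinPool(staker string, poolId, amount uint64) {
	fmt.Println("join", staker, poolId, amount)
}

// join describes one staker's setup; a table of these replaces the
// copy-pasted create/join call pairs.
type join struct {
	staker string
	poolId uint64
	amount uint64
}

func main() {
	s := &suite{}
	for _, j := range []join{
		{"staker_1", 0, 100},
		{"staker_2", 0, 100},
	} {
		s.createValidator(j.staker, j.amount)
		s.joinPool(j.staker, j.poolId, j.amount)
	}
}
```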
965-965: Enhance scoping of partial invalid votes with max voting power.
You test some stakers voting invalid. Add a scenario with partial slashes in combination with near-boundary max voting power to confirm fractional logic remains correct.
Line range hint 967-1166: Test coverage for max voting power with partial delegations.
The final test checks a single scenario of staker_2 hitting a 0.4 limit. It's thorough, but consider verifying a scenario with two stakers near the boundary, ensuring the fraction distribution logic remains correct.

x/bundles/keeper/keeper_suite_valid_bundles_test.go (1)
28-28: Add clarifying comment for newly introduced max voting power scenario.
Line 28 suggests a new scenario with maximum voting power. Explain how this scenario differs from standard invalid-vote or abstain scenarios.

x/query/keeper/grpc_query_stakers_by_pool.go (1)
32-34: Consider performance implications for large staker lists.
The sorting implementation might impact performance for pools with many stakers. Consider adding pagination support or implementing a more efficient sorting strategy for large datasets.
util/arrays.go (1)
42-52: Optimize memory allocation and improve documentation.
The implementation is correct but could be enhanced:
- Pre-allocate the result slice with the input length for better performance with large inputs
- Add documentation explaining the order preservation guarantee

```diff
+// RemoveDuplicateStrings returns a new slice containing unique strings from the input slice.
+// The order of elements in the result maintains the first occurrence of each string.
 func RemoveDuplicateStrings(strSlice []string) []string {
 	allKeys := make(map[string]bool)
-	list := []string{}
+	list := make([]string, 0, len(strSlice))
 	for _, item := range strSlice {
 		if _, value := allKeys[item]; !value {
 			allKeys[item] = true
 			list = append(list, item)
 		}
 	}
 	return list
 }
```
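For reference, a self-contained version of the suggested function with a usage example demonstrating the first-occurrence ordering guarantee:

```go
package main

import "fmt"

// RemoveDuplicateStrings returns a new slice containing unique strings from
// the input slice, preserving the first occurrence of each string.
func RemoveDuplicateStrings(strSlice []string) []string {
	allKeys := make(map[string]bool)
	list := make([]string, 0, len(strSlice)) // pre-allocated, as suggested above
	for _, item := range strSlice {
		if !allKeys[item] {
			allKeys[item] = true
			list = append(list, item)
		}
	}
	return list
}

func main() {
	fmt.Println(RemoveDuplicateStrings([]string{"a", "b", "a", "c", "b"}))
	// Output keeps first occurrences: [a b c]
}
```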
x/stakers/keeper/getters_staker.go (1)

249-278: Add input validation and improve error handling.
The implementation is correct but could benefit from the following improvements:
- Validate that `slice` is not nil
- Validate that the `accumulator` function is not nil
- Add bounds checking for page/limit calculations
- Consider adding a maximum limit to prevent processing too many items at once

```diff
 func arrayPaginationAccumulator(slice []string, pagination *query.PageRequest, accumulator func(address string, accumulate bool) bool) (*query.PageResponse, error) {
+	if slice == nil {
+		return nil, fmt.Errorf("slice cannot be nil")
+	}
+	if accumulator == nil {
+		return nil, fmt.Errorf("accumulator function cannot be nil")
+	}
+
 	if pagination != nil && pagination.Key != nil {
 		return nil, fmt.Errorf("key pagination not supported")
 	}

 	page, limit, err := query.ParsePagination(pagination)
 	if err != nil {
 		return nil, err
 	}

+	// Add maximum limit to prevent processing too many items
+	if limit > 100 {
+		return nil, fmt.Errorf("limit cannot exceed 100")
+	}
+
 	count := 0
 	minIndex := (page - 1) * limit
 	maxIndex := (page) * limit
+
+	// Add bounds checking
+	if minIndex < 0 || maxIndex < 0 {
+		return nil, fmt.Errorf("invalid page or limit: negative indices")
+	}

 	for i := 0; i < len(slice); i++ {
 		if accumulator(slice[i], count >= minIndex && count < maxIndex) {
 			count++
 		}
 	}

 	pageRes := &query.PageResponse{
 		NextKey: nil,
 		Total:   uint64(count),
 	}

 	return pageRes, nil
 }
```
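To make the index arithmetic concrete, here is a tiny sketch of the half-open window the accumulator computes (items with index in [(page-1)*limit, page*limit) are included):

```go
package main

import "fmt"

// window returns the half-open index range [lo, hi) selected by offset-based
// pagination with the given page and limit.
func window(page, limit int) (lo, hi int) {
	return (page - 1) * limit, page * limit
}

func main() {
	lo, hi := window(2, 10)
	fmt.Println(lo, hi) // 10 20: the second page of ten items
}
```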
x/bundles/keeper/keeper_suite_zero_delegation_test.go (1)

332-335: Consider extracting a helper method for setting the next uploader.
The manual setting of the next uploader is repeated in multiple test cases. Consider extracting this into a test helper method to improve maintainability and reduce code duplication.

```diff
+func (s *KeeperTestSuite) SetNextUploader(poolId uint64, staker string) {
+	bundleProposal, _ := s.App().BundlesKeeper.GetBundleProposal(s.Ctx(), poolId)
+	bundleProposal.NextUploader = staker
+	s.App().BundlesKeeper.SetBundleProposal(s.Ctx(), bundleProposal)
+}
```

Then use it in the test cases:

```diff
-bundleProposal, _ := s.App().BundlesKeeper.GetBundleProposal(s.Ctx(), 0)
-bundleProposal.NextUploader = i.STAKER_0
-s.App().BundlesKeeper.SetBundleProposal(s.Ctx(), bundleProposal)
+s.SetNextUploader(0, i.STAKER_0)
```
x/stakers/keeper/msg_server_join_pool_test.go (1)

74-76: Document the implications of MaxVotingPowerPerPool.
The introduction of `MaxVotingPowerPerPool = 1` could have significant implications for pool governance. Consider adding a comment explaining (a sketch follows below):
- The purpose of this parameter
- Why the value is set to 1
- The impact on pool voting dynamics
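A sketch of the kind of documentation the review asks for; the Params struct shape and placement are assumptions:

```go
package types

import "cosmossdk.io/math"

// Params sketches where the documented field could live.
type Params struct {
	// MaxVotingPowerPerPool caps the share of a pool's total effective stake
	// that any single validator may hold. A value of 1 (100%) disables the
	// cap, which is why these tests default to it; production networks would
	// typically choose a lower value (e.g. 0.5) to prevent single-validator
	// dominance of a pool's voting dynamics.
	MaxVotingPowerPerPool math.LegacyDec
}
```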
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- x/query/types/query.pb.go is excluded by `!**/*.pb.go`
- x/query/types/stakers.pb.go is excluded by `!**/*.pb.go`
- x/stakers/types/events.pb.go is excluded by `!**/*.pb.go`

📒 Files selected for processing (33)
- CHANGELOG.md (1 hunks)
- go.mod (1 hunks)
- proto/kyve/query/v1beta1/query.proto (4 hunks)
- proto/kyve/query/v1beta1/stakers.proto (2 hunks)
- proto/kyve/stakers/v1beta1/events.proto (1 hunks)
- testutil/integration/checks.go (2 hunks)
- util/arrays.go (1 hunks)
- x/bundles/keeper/keeper_suite_invalid_bundles_test.go (17 hunks)
- x/bundles/keeper/keeper_suite_stakers_leave_test.go (6 hunks)
- x/bundles/keeper/keeper_suite_valid_bundles_test.go (3 hunks)
- x/bundles/keeper/keeper_suite_zero_delegation_test.go (7 hunks)
- x/bundles/keeper/logic_bundles.go (3 hunks)
- x/bundles/keeper/logic_bundles_test.go (2 hunks)
- x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (13 hunks)
- x/bundles/keeper/logic_round_robin.go (1 hunks)
- x/bundles/keeper/logic_round_robin_test.go (5 hunks)
- x/bundles/types/errors.go (1 hunks)
- x/bundles/types/expected_keepers.go (1 hunks)
- x/query/keeper/grpc_account_funded.go (1 hunks)
- x/query/keeper/grpc_query_pool.go (1 hunks)
- x/query/keeper/grpc_query_staker.go (2 hunks)
- x/query/keeper/grpc_query_stakers_by_pool.go (2 hunks)
- x/query/keeper/grpc_query_stakers_by_pool_count.go (2 hunks)
- x/query/keeper/helper.go (3 hunks)
- x/stakers/keeper/exported_functions.go (9 hunks)
- x/stakers/keeper/getters_staker.go (2 hunks)
- x/stakers/keeper/keeper_suite_effective_stake_test.go (1 hunks)
- x/stakers/keeper/logic_stakers.go (1 hunks)
- x/stakers/keeper/msg_server_join_pool.go (0 hunks)
- x/stakers/keeper/msg_server_join_pool_test.go (20 hunks)
- x/stakers/keeper/msg_server_leave_pool_test.go (7 hunks)
- x/stakers/keeper/msg_server_update_stake_fraction_test.go (21 hunks)
- x/stakers/types/expected_keepers.go (1 hunks)
💤 Files with no reviewable changes (1)
- x/stakers/keeper/msg_server_join_pool.go
🔇 Additional comments (96)
x/bundles/keeper/logic_round_robin_test.go (4)
37-38: Good modular approach by reusing joinDummyWithStakeFraction.
Invoking the new helper function here promotes code reuse and clarifies the purpose of each helper.

103-106: Param initialization is appropriate for the upcoming tests.
This ensures that the default maximum voting power is set to "1" (100%). Confirm that other test suites don't require a distinct default value.

232-269: Thorough test coverage for stake fractions.
The frequency analysis correctly checks how a stake fraction of 0, 0.5, and 1 impacts block proposer selection. These assertions appear consistent with the intended functionality.

270-313: Impressive test scenario for maximum voting power cap.
The logic ensures that any validator exceeding the 50% threshold has their stake adjusted accordingly. This test nicely verifies the capped voting power, ensuring fair distribution of proposer responsibilities.
x/stakers/keeper/logic_stakers.go (1)
Line range hint 47-49: Verify error message clarity in stake comparison.
The error handling uses `lowestAmount` directly in the error message. Consider enhancing the error message to provide more context about the minimum required stake.

Consider updating the error message to be more informative:

```diff
-return errors.Wrapf(errorsTypes.ErrLogic, types.ErrStakeTooLow.Error(), lowestAmount)
+return errors.Wrapf(errorsTypes.ErrLogic, "insufficient stake: minimum required %d", lowestAmount)
```
x/query/keeper/grpc_query_stakers_by_pool_count.go (8)
6-7: Import clarity looks good.
These imports align with the Cosmos SDK. No issues found.

17-18: Appropriately unwrapping SDK context.
Accessing the SDK context ensures correct chain state manipulation. Good job.

19-20: Data slice creation for stakers collection.
The slice definition is succinct and appropriately typed.

21-27: Accumulator logic verification.
• Correctly appends to the data slice when accumulate is true.
• The function returns true after processing each entry.

Looks consistent with paginated accumulator usage.

28-30: Variable declaration for pageRes and err.
Good practice to keep these variables near usage.

31-32: Retrieving paginated stakers by pool count.
Ensure the internal sorting and retrieval logic in the called function is tested for edge cases (e.g., empty pagination).

33-35: Error handling.
Returning early on error is appropriate.

37-37: Response construction.
The new structure effectively bundles both staker data and pagination info.
x/query/keeper/helper.go (9)
14-14: Initialize totalPoolStake variable.
Straightforward initialization.

22-29: Commission change logic.
Properly checks for a commission entry and creates a CommissionChangeEntry if found.

31-39: Stake fraction change logic.
Similar to the commission logic, displays consistent handling for existence checks.

40-41: Accumulate pool stake.
Ensures totalPoolStake is aggregated across multiple pools.

53-53: Renamed total_delegation to total_stake.
The new naming is clearer and aligns with the revised stake-based approach.

56-64: PoolMembership fields.
• Commission, pendingCommissionChange, stakeFraction, and pendingStakeFractionChange provide vital detail.
• Balanced approach to track dynamic changes.

70-73: Returning FullStaker.
• Contains validator data, total pool stake, and membership details.
• Matches the updated proto definitions.

87-87: Pool status "VOTING_POWER_TOO_HIGH".
A new state that is consistent with the introduced maximum voting power constraint.

89-90: Minimum stake threshold check.
Changing from "delegation" to "stake" is consistent with the new approach.
proto/kyve/query/v1beta1/query.proto (15)
9-9: Importing staking.proto.
Indicates deeper ties with Cosmos staking functionalities.

61-62: Field renamed to total_stake.
Matches the code shift from delegation to stake.

76-76: Replacing metadata with Validator.
Cleanly referencing the Cosmos staking validator.

78-80: New total_pool_stake field.
A straightforward reflection of the stake across pools.

84-84: Pools field updated.
Stores multiple PoolMembership entries. Aligned with expanded staker logic.

87-89: CommissionChangeEntry struct definition.
The new message properly captures commission changes.

90-91: Commission with LegacyDec.
Ensures decimal precision for the commission.

97-99: Creation date field.
UNIX timestamp usage is consistent.

102-104: StakeFractionChangeEntry struct definition.
Analogous to commission changes but for stake fraction.

105-107: stake_fraction uses LegacyDec.
Maintains a consistent decimal-based approach.

145-149: Commission field in PoolMembership.
Properly references the same decimal-based approach.

151-156: pending_commission_change.
Allows for user warnings and a grace period.

158-164: stake_fraction in PoolMembership.
A crucial fraction specifying how much of total stake is allocated to the pool.

166-171: pending_stake_fraction_change.
Mirrors commission change logic, offering a delay mechanism.

173-176: pool_stake field.
Represents actual stake in the pool, possibly reduced by max voting power constraints.
x/stakers/keeper/exported_functions.go (19)
5-8: New imports.
Inclusion of gRPC status and query types is consistent with the updated staker query logic.

20-20: EventLeavePool usage.
Switching to stakertypes ensures consistency with the stakers module events.

33-33: EmitTypedEvent for stakertypes.EventLeavePool.
Good usage of typed events for clarity and type-based event handling.

77-77: Appending addresses array.
Addresses are collected for pagination. Straightforward approach.

80-86: Sorting overhead.
Sorting can be expensive for large sets but is likely acceptable given typical pool sizes. If needed, consider additional indexing or caching.

87-93: arrayPaginationAccumulator usage.
Aggregates addresses for the final paginated response. Straightforward.

95-116: GetPaginatedStakersByPoolCount.
• Sorts validators by number of valaccounts.
• Logic for counting valaccounts is correct and consistent with usage in grpc_query_stakers_by_pool_count.go.

126-126: Unauthorized valaccount error changes.
Using stakertypes for error references is more accurate.

130-130: Check for matching valaddress.
This ensures the correct node is validated.

142-144: GetTotalStakeOfPool method.
Replacing "delegation" with "stake" aligns with the revised approach.

145-149: Aggregating total stake via GetValidatorPoolStakes.
Summing each stake is correct.

152-159: IsVotingPowerTooHigh function.
- Retrieves the number of stakers in the pool.
- Compares it to maxVotingPower.
- Returns true if 1 / maxVotingPower > #validators.

Double-check boundary conditions for single validator or zero maxVotingPower; a sketch follows below.
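A minimal sketch of those boundary conditions, mirroring the 1 / maxVotingPower > #validators rule described above (the function signature is an assumption, not the keeper's actual API):

```go
package main

import (
	"fmt"

	"cosmossdk.io/math"
)

// isVotingPowerTooHigh mirrors the described check: with cap c, at least
// ceil(1/c) validators are needed so each can stay at or below c.
func isVotingPowerTooHigh(validatorCount int64, maxVotingPower math.LegacyDec) bool {
	if maxVotingPower.IsZero() {
		return true // no distribution can satisfy a zero cap
	}
	return math.LegacyOneDec().Quo(maxVotingPower).GT(math.LegacyNewDec(validatorCount))
}

func main() {
	half := math.LegacyMustNewDecFromStr("0.5")
	fmt.Println(isVotingPowerTooHigh(1, half))                 // true: one validator cannot stay at or below 50%
	fmt.Println(isVotingPowerTooHigh(2, half))                 // false: two validators at 50% each is feasible
	fmt.Println(isVotingPowerTooHigh(3, math.LegacyZeroDec())) // true: zero cap is unsatisfiable
}
```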
162-162: Returning condition.
No further stakers allowed if maxVotingPower is zero. Possibly validated elsewhere.

185-186: GetValidatorPoolStake method comment.
Clear docstring reflecting the direct retrieval of effective stake.

187-187: Delegate to GetValidatorPoolStakes.
Centralizing the stake calculation logic is consistent and maintainable.

190-198: GetValidatorTotalPoolStake.
Accumulates each pool's stake for a single validator. Straightforward approach.

200-316: GetValidatorPoolStakes logic.
• Complex distribution ensuring the maximum voting power constraints.
• Steps to redistribute stake, then scale down if oversubscribed, appear correct.
• Consider thoroughly testing edge cases (one validator, all equal stakes, large sets, etc.).

375-395: Slash logic updates to reflect effective stake fraction.
• Correct usage of the actual fraction.
• Minimizes slashing beyond user-specified stake per pool.

408-413: Emitting EventSlash.
Recording slash details fosters transparency.
x/stakers/keeper/keeper_suite_effective_stake_test.go (4)
514-554: Zero delegation fraction check.
This scenario is distinct from absolute zero delegation—it uses stake fraction = 0. The code sets staker_1 to 0 stake effectively. Confirm no staker structural constraints are broken (e.g., a block can't be produced by zero stake).

53-55: Validate the parameter update logic.
The code updates the pool parameters via GetParams() → modify → SetParams(). This approach is correct and consistent with Cosmos SDK patterns. Ensure that other modules reliant on these parameters handle changes properly (e.g., watchers or cached references).

✅ Verification successful
Based on the search results, I can now generate the final response as I have sufficient information about how `MaxVotingPowerPerPool` is used across the codebase:

Parameter update logic is properly handled across modules
The `MaxVotingPowerPerPool` parameter is correctly integrated across the codebase:
- Pool module provides getter/setter methods and proper parameter validation
- Stakers module correctly uses this parameter through the expected keeper interface
- Parameter changes are properly handled in stake calculations and voting power validations
- Test suites across modules demonstrate consistent parameter update handling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Search the codebase for references to MaxVotingPowerPerPool outside this file
# to ensure that on-parameter-change logic is consistent across modules.
rg -A 5 "MaxVotingPowerPerPool"
```

Length of output: 18482
194-234: Verify zero total stake condition.
When the expectation is "To(BeZero())" for total stake, it implies that all effective stakes get slashed. Double-check that this scenario is the desired outcome and not an unintended side effect of the logic.

382-424: Validate stake fraction rounding.
In "Test effective stake with all validators below the max pool voting power due to stake fractions", the stake fraction logic might need rounding or floor/ceiling checks. If not, floating inaccuracies might be introduced.
✅ Verification successful
Let me search for the actual stake fraction calculation and validation logic.
Let me try a different search pattern to find stake fraction validation logic.
Let me check the actual validation function being used.
Based on the code analysis, I can now provide a final response:

Stake fraction calculations are properly handled with no rounding concerns.
The stake fraction validation and calculations are well-implemented:
- Validation ensures stake fractions are valid percentages between 0 and 1 via `ValidatePercentage` in `util/validate.go`
- Stake fraction calculations use `TruncateInt64()` consistently for deterministic rounding behavior
- The actual stake calculation in `exported_functions.go` uses `MulInt().TruncateInt64()`, which is the standard way to handle decimal calculations in the Cosmos SDK
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Searching for float usage or rounding logic in stake fraction calculations:
rg -A 5 "StakeFraction"
```

Length of output: 65560

Script:

```shell
#!/bin/bash
# Search for stake fraction validation and calculation logic
ast-grep --pattern 'func $_($_, $_, $_) {
  $$$
  StakeFraction $$$
  $$$
}'
# Also search for any validation checks on stake fraction
rg -A 5 "validateBasic|ValidateBasic.*StakeFraction"
```

Length of output: 133

Script:

```shell
#!/bin/bash
# Search for stake fraction validation in message types
rg -A 10 "func.*ValidateBasic.*MsgJoinPool|func.*ValidateBasic.*MsgUpdateStakeFraction"
# Search for stake fraction validation in keeper
rg -A 10 "validate.*stake.*fraction|check.*stake.*fraction" -i
```

Length of output: 5810

Script:

```shell
#!/bin/bash
# Check the ValidatePercentage function implementation
rg -A 10 "func.*ValidatePercentage"
# Also check for any math operations on stake fraction
rg -A 5 "MulInt|TruncateInt|RoundInt.*StakeFraction"
```

Length of output: 41580
x/bundles/keeper/keeper_suite_invalid_bundles_test.go (7)
67-69: Ensure adequacy of default MaxVotingPowerPerPool.
This sets MaxVotingPowerPerPool to 1 (i.e., 100%). Consider verifying whether a large default might unintentionally allow single-validator dominance in certain pools if not reconfigured.

145-152: Check slash calculation usage.
The code reassigns initial balances for staker_0, then calculates the slash amount. If the initial balances were updated incorrectly, the slash might mismatch. Revisit if tests pass consistently with partial delegates.

240-240: Consider partial slash distribution among multiple stakers.
This line ensures the total stake of the pool after a slash remains consistent. Confirm that partial slash distribution is accounted for, especially if multiple stakers or delegators are involved.

340-347: Verify reassigning initial balances post-commit.
You recapture staker_0's balance after a block commit. Confirm that no intermediate steps rely on old values. This approach seems correct, but be mindful if you add further logic after commits.

439-439: Confirm the total stake after foreign delegation slash.
The test expects 400 * i.KYVE. Thoroughly ensure that half the stake from each staker, plus foreign delegations, is accounted for.

556-558: Make sure zero initialization is accurate.
We consistently recapture staker_0's, staker_1's, and staker_2's balances. Confirm no leftover references to old balance states.

671-671: Check total stake after multiple voter slashes.
The test sets the final total stake to 450 * i.KYVE. Cross-check sums for staker_0, staker_1, staker_2, staker_3, and external delegators so no rounding errors occur.
x/bundles/keeper/keeper_suite_valid_bundles_test.go (1)
926-926: Double-check total stake of 800 * i.KYVE after slash.
At line 926, you expect 800 * i.KYVE. Confirm the slash logic and fraction calculations if staker_2 is slashed. Another staker might be partially slashed (like staker_3) if they also voted incorrectly.
x/stakers/types/expected_keepers.go (1)
4-10: Validate the new GetMaxVotingPowerPerPool method.
This public interface addition aligns with the usage in the test suites. Ensure default values or zero-checks are handled so calls to this method do not return an uninitialized or missing param.
x/query/keeper/grpc_query_stakers_by_pool.go (2)
21-24: LGTM! Clean pool existence check.
The simplified pool existence check improves code readability while maintaining the same functionality.

33-33: Verify GetValidatorPoolStake implementation.
Ensure that the GetValidatorPoolStake method correctly handles the transition from delegation to stake calculations.
✅ Verification successful
Based on the shell script results, I can now generate the final response since I have found the implementation and related tests for `GetValidatorPoolStake`:

GetValidatorPoolStake correctly handles stake calculations
The implementation in `x/stakers/keeper/exported_functions.go` shows that `GetValidatorPoolStake` properly handles stake calculations by:
- Delegating to `GetValidatorPoolStakes`, which handles the effective stake calculation considering:
  - Validator's bonded tokens
  - Stake fractions
  - Maximum voting power limits
- Extensive test coverage in multiple test files verifying stake calculations for:
  - Pool joining/leaving
  - Stake fraction updates
  - Multiple validator scenarios
  - Delegation changes
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for GetValidatorPoolStake implementation and related tests
ast-grep --pattern 'func $_ GetValidatorPoolStake($_) $_'
rg -A 5 "GetValidatorPoolStake"
```

Length of output: 51125
x/query/keeper/grpc_query_staker.go (1)
29-29: LGTM: Improved moniker retrieval from validator
The change to use `Validator.GetMoniker()` is more direct and consistent with the validator-centric approach.
x/bundles/types/errors.go (1)
27-27: LGTM: Well-defined error for zero delegation check
The new error constant is properly registered with a unique code and clear message.
x/bundles/types/expected_keepers.go (1)
44-46: LGTM! Interface changes align well with the maximum voting power implementation.
The new methods provide a comprehensive approach to managing pool stakes and voting power:
- `GetTotalStakeOfPool`: provides an aggregate stake view
- `GetValidatorPoolStakes`: enables granular stake tracking
- `IsVotingPowerTooHigh`: enforces voting power constraints

Let's verify the implementation of these interface methods:

✅ Verification successful
Interface methods are properly implemented in the stakers keeper
All three interface methods are implemented in `x/stakers/keeper/exported_functions.go`:
- `GetTotalStakeOfPool`: correctly aggregates stakes by summing up individual validator stakes
- `GetValidatorPoolStakes`: implements the stake tracking functionality with proper validator address mapping
- `IsVotingPowerTooHigh`: implements the voting power check using the pool keeper's max voting power configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for implementations of the new interface methods
ast-grep --pattern 'func ($receiver $_) GetTotalStakeOfPool(ctx sdk.Context, poolId uint64) (totalStake uint64) {
  $$$
}'
ast-grep --pattern 'func ($receiver $_) GetValidatorPoolStakes(ctx sdk.Context, poolId uint64, mustIncludeStakers ...string) map[string]uint64 {
  $$$
}'
ast-grep --pattern 'func ($receiver $_) IsVotingPowerTooHigh(ctx sdk.Context, poolId uint64) bool {
  $$$
}'
```

Length of output: 391

Script:

```shell
#!/bin/bash
# Let's try a different approach using ripgrep to find implementations
# Search for the method names with some context
echo "=== Searching for GetTotalStakeOfPool ==="
rg -A 5 "func.*GetTotalStakeOfPool"
echo -e "\n=== Searching for GetValidatorPoolStakes ==="
rg -A 5 "func.*GetValidatorPoolStakes"
echo -e "\n=== Searching for IsVotingPowerTooHigh ==="
rg -A 5 "func.*IsVotingPowerTooHigh"
# Also search for the keeper implementation file
echo -e "\n=== Looking for keeper implementation files ==="
fd "keeper.*\.go$"
```

Length of output: 3181
proto/kyve/stakers/v1beta1/events.proto (2)
99-101: LGTM! Improved field documentation clarity.
The updated comments provide better clarity about the purpose of these fields.

103-108: LGTM! Well-structured addition of stake fraction tracking.
The new `stake_fraction` field with the `LegacyDec` type is appropriate for representing percentage values, and the documentation clearly explains its purpose.

Let's verify the usage of this new field:

✅ Verification successful
LGTM! The stake_fraction field is properly implemented and used.
The verification confirms that the `stake_fraction` field in `EventSlash` is correctly used in the codebase:
- In `x/stakers/keeper/exported_functions.go`, the `Slash` function properly calculates and sets the `stake_fraction` when emitting the event:

```go
stakeFraction := math.LegacyNewDec(int64(k.GetValidatorPoolStake(ctx, staker, poolId))).QuoInt(validator.GetBondedTokens())
...
_ = ctx.EventManager().EmitTypedEvent(&stakertypes.EventSlash{
	PoolId:        poolId,
	Staker:        staker,
	Amount:        amount.Uint64(),
	SlashType:     slashType,
	StakeFraction: stakeFraction,
})
```

The implementation correctly:
- Calculates the stake fraction based on the validator's effective stake in the pool
- Uses it to determine the actual slash amount
- Includes it in the emitted event for tracking purposes
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for EventSlash emission points to ensure stake_fraction is properly set
rg -A 5 'EmitEvent\(.*EventSlash'
```

Length of output: 35

Script:

```shell
#!/bin/bash
# Let's try a broader search for EventSlash usage and related code
rg -l "EventSlash"
# Also search for potential slash-related code
rg -A 5 "Slash.*stake.*fraction"
# Check for any proto imports of this event
rg "events\.EventSlash"
```

Length of output: 2594

Script:

```shell
#!/bin/bash
# Let's examine the slash implementation in the stakers keeper
cat x/stakers/keeper/exported_functions.go
# Also check the delegation keeper's slash implementation
cat x/delegation/keeper/exported_functions.go
# And check the proto definition in delegation module for comparison
cat proto/kyve/delegation/v1beta1/events.proto
```

Length of output: 29554
proto/kyve/query/v1beta1/stakers.proto (2)
59-66: LGTM! Enhanced staker status granularity.
The new status enum values provide better distinction between protocol-level and chain-level activity states, which is valuable for status tracking.

Let's verify the usage of these new status values:

✅ Verification successful
New status values are properly integrated and used correctly
The verification shows the new status values are well integrated across the codebase:
- Generated protobuf code in `x/query/types/stakers.pb.go` correctly defines the enum values
- Proper usage in business logic (`x/stakers/keeper/exported_functions.go`) with meaningful conditions:
  - Protocol active/inactive checks valaccount existence
  - Chain active/inactive checks validator bonding status
- OpenAPI documentation updated accordingly
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for usage of the new status values
rg -A 3 'STAKER_STATUS_(PROTOCOL|CHAIN)_(ACTIVE|INACTIVE)'
```

Length of output: 4929
106-106: LGTM! Consistent response structure.
Using the `FullStaker` type in `QueryStakersByPoolResponse` maintains consistency with other query responses in the file.

Let's verify the FullStaker type usage:

Also applies to: 110-110

✅ Verification successful
LGTM! The FullStaker type usage is consistent across the codebase
The verification confirms that:
- The `FullStaker` type is properly defined in the proto files and is used consistently across multiple response types
- It's used in the same way in `QueryStakersByPoolResponse` as in other similar query responses like `QueryStakersResponse`, `QueryStakerResponse`, etc.
- The implementation in the keeper layer correctly handles the type with proper initialization and population of data
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for FullStaker type definition and usage
ast-grep --pattern 'message FullStaker {
  $$$
}'
```

Length of output: 51

Script:

```shell
#!/bin/bash
# Let's try a different approach to find FullStaker definition and usage
# First, let's use ripgrep to find any mentions of FullStaker
rg "FullStaker" -A 5 -B 5
# Also search for the proto file containing the message definition
fd "stakers.proto" --type f
```

Length of output: 38742
x/bundles/keeper/logic_round_robin.go (1)
78-86: LGTM! Consistent terminology change.
The renaming from 'delegation' to 'stake' is applied consistently and maintains the original logic.
x/stakers/keeper/msg_server_leave_pool_test.go (2)
46-48: LGTM! Good test coverage for max voting power.
Setting the MaxVotingPowerPerPool parameter ensures proper testing of voting power limits.

93-93: LGTM! Consistent terminology updates.
The test cases have been properly updated to use GetTotalStakeOfPool instead of GetDelegationOfPool, maintaining consistency with the new terminology.
Also applies to: 117-117, 166-166, 190-190, 271-271, 293-293
CHANGELOG.md (1)
23-23: LGTM! Changelog entry follows the standard format.
The entry correctly indicates a state machine breaking change and provides a clear description of the feature.
x/stakers/keeper/msg_server_update_stake_fraction_test.go (2)
53-55: LGTM! Proper test setup for maximum voting power.
The test correctly initializes the MaxVotingPowerPerPool parameter, which is essential for testing the new shared staking feature.

78-78: LGTM! Consistent method updates across test cases.
The changes consistently replace GetDelegationOfPool with GetTotalStakeOfPool across all test assertions, maintaining test coverage while adapting to the new staking mechanism.
Also applies to: 94-94, 118-118, 134-134, 143-143, 159-159, 175-175, 191-191, 207-207, 230-230, 239-239, 262-262, 271-271, 285-285, 300-300, 309-309, 351-351, 356-356, 365-365, 370-370, 378-378, 387-387, 395-395, 409-409
go.mod (1)
164-164: LGTM! Safe dependency management change.
Moving gogo/protobuf to indirect dependencies is appropriate as it's a transitive dependency.
testutil/integration/checks.go (1)
104-104: LGTM: Consistent with the shift from delegation to stake terminology.
The change from `GetDelegationOfPool` to `GetTotalStakeOfPool` aligns with the broader changes in the codebase.
x/bundles/keeper/keeper_suite_stakers_leave_test.go (1)
143-143: LGTM: Test assertions updated for stake terminology.
The test assertions have been consistently updated to use `GetTotalStakeOfPool` instead of `GetDelegationOfPool`, maintaining alignment with the codebase changes.
Also applies to: 231-231, 347-347, 447-447, 539-539, 634-634
x/bundles/keeper/logic_bundles.go (3)
39-43: LGTM: Added validation for voting power constraints.
The new check ensures that the pool has enough stakers to achieve the required voting power distribution, preventing scenarios where voting power requirements cannot be met.

46-46: LGTM: Updated minimum stake check.
Consistent change from delegation to stake terminology while maintaining the same functional check against the pool's minimum requirement.

500-519: Verify the voting power calculation logic.
The voting power calculation has been significantly refactored to use stakes directly. While the change looks correct, let's verify:
- The stakes map contains all necessary stakers
- The voting power distribution remains accurate
✅ Verification successful
Let me gather more information about the stakes calculation to verify its completeness.
Let me gather one final piece of information about the stake calculation implementation.
Based on the gathered information, I can now generate the final response:

Voting power calculation is correctly implemented and secure
The voting power calculation has been verified to be accurate and complete:
- `GetValidatorPoolStakes` returns a complete map of all staker stakes for a pool, including any required stakers that might have been kicked out
- `GetAllStakerAddressesOfPool` correctly retrieves all stakers from the pool's valaccounts
- The total voting power calculation includes all stakers in the pool, ensuring no votes are missed
- The distribution across Valid/Invalid/Abstain votes is correctly summed from the respective voter lists using the same stake map
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify the completeness of stakes map and voting power calculation
# Check if there are any direct references to delegation in voting power calculations
rg -A 5 "votingPower|delegation.*pool" --type go
# Check for any potential edge cases in stake calculation
ast-grep --pattern 'func $_ {
  $$$
  stakes := $_.GetValidatorPoolStakes($_, $_)
  $$$
}'
```

Length of output: 4332

Script:

```shell
#!/bin/bash
# Check the implementation of GetValidatorPoolStakes
ast-grep --pattern 'func (k $_) GetValidatorPoolStakes($_, $_) map[string]uint64 {
  $$$
}'
# Check how staker addresses are managed for a pool
ast-grep --pattern 'func (k $_) GetAllStakerAddressesOfPool($_, $_) []string {
  $$$
}'
# Look for any stake-related error handling
rg -A 3 "Error.*stake" --type go
```

Length of output: 17311

Script:

```shell
#!/bin/bash
# Check the implementation of stake-related functions in the staker keeper
rg -A 10 "func.*GetValidatorPoolStakes|func.*GetAllStakerAddressesOfPool" --type go
# Look for any stake-related state management
rg -A 5 "SetStake|UpdateStake|RemoveStake" --type go
```

Length of output: 48205
x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (4)
27-27: Test case description updated to reflect new behavior
The test case description has been updated from checking that the "next uploader gets removed" to checking that it does not get removed, indicating a change in the expected behavior when a pool has one node with more than 50% voting power.

Line range hint 284-313: New test case validates max voting power per pool constraints
The test case verifies that when validators exceed the maximum voting power per pool (set to 20%), the next uploader is removed and the validator pool stakes are zeroed. This helps ensure the decentralization of the network by preventing any single validator from having too much influence.

402-402: Method name updated to reflect stake-based terminology
The method name has been updated from `GetDelegationOfPool` to `GetTotalStakeOfPool` across multiple test cases. This change aligns with the shift from delegation-based to stake-based terminology in the KYVE network.
Also applies to: 579-579, 642-642, 740-740, 870-870, 1031-1031, 1119-1119, 1202-1202, 1368-1368
Line range hint 1-1374: Overall test coverage is comprehensive
The test file provides thorough coverage of bundle upload timeout scenarios, including:
- Pool funding conditions
- Validator voting power constraints
- Bundle proposal states
- Upload timeout handling
- Slashing conditions
- Multiple pool interactions

The implementation is well-structured and maintains good test practices.
x/bundles/keeper/keeper_suite_zero_delegation_test.go (1)
294-294: Verify the impact of switching to GetTotalStakeOfPool.
The change from `GetDelegationOfPool` to `GetTotalStakeOfPool` aligns with the broader changes in the staking mechanism. However, we should verify that all callers of this method understand the semantic difference between delegation and total stake.

Also applies to: 622-622

✅ Verification successful
Let me analyze the results. The search shows:
- `GetDelegationOfPool` is only used in:
  - Query module for account funding checks
  - Delegation module's exported functions and specs
- `GetTotalStakeOfPool` is widely used in:
  - Stakers module tests and core logic
  - Query module for pool information
  - Bundles module for validation and tests
- All test cases properly handle the semantic difference

Let me generate the final response:

The switch to GetTotalStakeOfPool is consistently implemented
The codebase shows a clear separation where `GetDelegationOfPool` remains in the delegation module for specific delegation queries, while `GetTotalStakeOfPool` is correctly used in staking and bundle validation contexts. The semantic difference is well-understood and properly handled across all callers.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Find all usages of GetDelegationOfPool and GetTotalStakeOfPool to ensure consistent migration
# Search for any remaining uses of GetDelegationOfPool
echo "Searching for remaining GetDelegationOfPool usage..."
rg "GetDelegationOfPool"
# Search for new GetTotalStakeOfPool usage
echo "Searching for new GetTotalStakeOfPool usage..."
rg "GetTotalStakeOfPool"
```

Length of output: 14077
x/bundles/keeper/logic_bundles_test.go (1)
245-245: Verify the implications of allowing high voting power.
The test changes indicate that pools can now operate with nodes having high voting power, which is a significant behavioral change. This could have security implications and affect the network's decentralization.
Please confirm if this change is intentional and if additional security measures have been implemented to prevent centralization risks.
Also applies to: 369-369
✅ Verification successful
Based on the search results, I can now generate the final response since I have enough context about how voting power is handled in the codebase:

The test changes align with the intended voting power behavior
The changes to expect nil errors instead of errors are correct because:
- The codebase has a configurable `MaxVotingPowerPerPool` parameter (default 0.5, i.e., 50%) that controls the maximum allowed voting power per pool
- When voting power exceeds the limit, the system automatically adjusts stakes to keep validators below the maximum threshold rather than returning errors
- The `IsVotingPowerTooHigh()` check is used to determine pool status (`POOL_STATUS_VOTING_POWER_TOO_HIGH`) rather than failing operations
- There are comprehensive test cases covering various voting power scenarios (0%, 20%, 35%, 50%, 100%) showing this behavior

The changes reflect the system's design where high voting power is handled through stake adjustments rather than errors, maintaining decentralization without failing operations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Analyze the impact of allowing high voting power
# Search for voting power related code and configurations
echo "Searching for voting power related code..."
rg -A 5 "VotingPower|voting_power"
# Search for related parameters and their usage
echo "Searching for max voting power parameter..."
rg -A 5 "MaxVotingPowerPerPool"
# Search for security measures around voting power
echo "Searching for voting power security checks..."
ast-grep --pattern 'if $COND {
  $$$
  VotingPower
  $$$
}'
```

Length of output: 64776
Actionable comments posted: 1
🧹 Nitpick comments (3)
proto/kyve/query/v1beta1/query.proto (1)
76-80: Review validator integration and stake calculation
The integration with `cosmos.staking.v1beta1.Validator` and the addition of `total_pool_stake` represent a significant architectural change:
- Validator data is now sourced from the Cosmos SDK
- Total stake is now tracked across all pools

This change improves standardization but requires careful handling of validator state.
Consider implementing a caching mechanism for validator data to optimize query performance, especially when dealing with multiple pools.

x/query/keeper/grpc_query_stakers_by_pool_count.go (1)
x/query/keeper/grpc_query_stakers_by_pool_count.go (1)
22-27
: Review accumulator function implementationThe accumulator function uses a closure to append FullStaker data. While functional, consider these aspects:
- Memory efficiency with large datasets
- Error handling in GetFullStaker
Consider adding error handling:
accumulator := func(address string, accumulate bool) bool { if accumulate { - data = append(data, *k.GetFullStaker(ctx, address)) + staker, err := k.GetFullStaker(ctx, address) + if err != nil { + return false + } + data = append(data, *staker) } return true }x/stakers/keeper/exported_functions.go (1)
201-320: Well-documented implementation, needs test coverage.
The `GetValidatorPoolStakes` implementation is thorough and handles complex scenarios like:
- Voting power redistribution
- Stake scaling
- Edge cases (zero stakes, maximum voting power)
However, given the complexity, comprehensive unit tests would be valuable.
Would you like me to help create unit tests for various scenarios including:
- Maximum voting power exceeded
- Zero stake validators
- Redistribution calculations
- Scale factor computations
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- x/query/types/query.pb.go is excluded by `!**/*.pb.go`
- x/query/types/stakers.pb.go is excluded by `!**/*.pb.go`

📒 Files selected for processing (5)
- proto/kyve/query/v1beta1/query.proto (4 hunks)
- proto/kyve/query/v1beta1/stakers.proto (2 hunks)
- testutil/integration/checks.go (4 hunks)
- x/query/keeper/grpc_query_stakers_by_pool_count.go (2 hunks)
- x/stakers/keeper/exported_functions.go (9 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- testutil/integration/checks.go
🔇 Additional comments (8)
proto/kyve/query/v1beta1/query.proto (2)
61-62: Verify total_stake calculation across the codebase
The field rename from `total_delegation` to `total_stake` reflects the terminology shift. This change requires verification of all calculations and queries using this field.

144-176: Validate commission and stake fraction constraints
The new commission and stake fraction fields introduce complex state management:
- Commission changes require time locks
- Stake fractions affect voting power distribution
- Pool stake may be limited by max voting power

Ensure that:
- Commission changes cannot be bypassed
- Stake fraction total across pools cannot exceed 100%
- Max voting power limit is properly enforced
✅ Verification successful
Let's search for validation patterns using more flexible search terms to find relevant code sections. The previous ast-grep patterns might have been too specific, so I'm using ripgrep with broader patterns to catch validation logic implemented in different ways.
Let's search for the actual validation implementations to verify how these constraints are enforced. The previous results showed the existence of commission change queues and max voting power limits, but we need to see the actual validation logic.
Let's try searching in more specific locations and with broader patterns to find the validation logic. The previous search might have been too restrictive.
Let's search with broader patterns and in the correct paths to find the validation implementations. The previous search had some path issues and didn't show us the complete message handlers.
Based on the search results, I can now provide a final response:

Constraints are properly enforced through validation and time-locks
The codebase implements comprehensive validation and safety mechanisms:
- Commission changes are protected:
  - Percentage validation through `ValidatePercentage` in message handlers
  - Time-locked changes via commission change queue
  - Only active pool validators can request changes
- Stake fraction constraints are enforced:
  - Percentage validation through `ValidatePercentage`
  - Decreases require time-lock via stake fraction change queue
  - Increases are checked against max voting power
- Max voting power limit is properly implemented:
  - Validated as a percentage parameter
  - Enforced through `GetMaxVotingPowerPerPool`
  - Zero max voting power case handled explicitly
  - Voting power adjustments applied when limit exceeded

The code shows robust implementation of all the mentioned constraints with proper validation, time-locks, and voting power adjustments. The commission and stake fraction changes cannot be bypassed due to the queue system, and max voting power is enforced at the protocol level.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for commission validation logic
ast-grep --pattern 'func $_(commission string) {
  $$$
}'
# Search for stake fraction validation
ast-grep --pattern 'func $_(stake_fraction string) {
  $$$
}'
```

Length of output: 123

Script:

```shell
#!/bin/bash
# Search for commission and stake fraction validation patterns in Go files
rg -U "func.*validateCommission|func.*validateStakeFraction|func.*ValidateCommission|func.*ValidateStakeFraction" -A 10
# Search for commission and stake fraction related messages and validations
rg -U "commission.*validate|stake.*fraction.*validate" -A 5
# Look for max voting power validation
rg -U "maxVotingPower|MaxVotingPower|max_voting_power" -A 5
# Search for commission change related code
rg -U "CommissionChange|commission_change" -A 5
```

Length of output: 67336

Script:

```shell
#!/bin/bash
# Search for commission validation in Go files
rg -U "func.*validate.*commission|func.*Validate.*Commission" -A 10
# Search for stake fraction validation in Go files
rg -U "func.*validate.*stake.*fraction|func.*Validate.*Stake.*Fraction" -A 10
# Search for total stake fraction validation
rg -U "total.*stake.*fraction|validateTotalStakeFraction" -A 5
# Look for voting power validation implementation
rg -U "validateVotingPower|validate.*voting.*power" -A 5
```

Length of output: 273

Script:

```shell
#!/bin/bash
# Search for commission validation in msg_server files
rg -U "commission.*validate|commission.*check" x/stakers/keeper/msg_server_*.go -A 5
# Search for stake fraction validation in msg_server files
rg -U "stake.*fraction.*validate|stake.*fraction.*check" x/stakers/keeper/msg_server_*.go -A 5
# Look for validation in util package
rg -U "ValidatePercentage|ValidateNumber" x/*/util/*.go -A 5
# Search for max voting power checks in keeper
rg -U "maxVotingPower.*check|maxVotingPower.*validate|GetMaxVotingPowerPerPool" x/stakers/keeper/*.go -A 5
```

Length of output: 3253

Script:

```shell
#!/bin/bash
# Search for validation in util package with correct path
rg -U "ValidatePercentage|ValidateNumber" -A 5
# Search for commission validation in msg_server_update_commission
rg -U "func.*UpdateCommission.*|msg.*UpdateCommission" x/stakers/keeper/msg_server_update_commission.go -A 15
# Search for stake fraction validation in msg_server_update_stake_fraction
rg -U "func.*UpdateStakeFraction.*|msg.*UpdateStakeFraction" x/stakers/keeper/msg_server_update_stake_fraction.go -A 15
# Look for max voting power implementation
rg -U "func.*GetMaxVotingPowerPerPool|func.*validateMaxVotingPower" -A 10
```

Length of output: 9987
proto/kyve/query/v1beta1/stakers.proto (2)
105-105
: Verify response structure compatibility
The change from StakerPoolResponse
to FullStaker
in responses requires verification of all consumers.
58-65
: Review staker status state machine
The expanded StakerStatus enum introduces protocol-level and chain-level states:
- PROTOCOL_ACTIVE/INACTIVE: Protocol participation status
- CHAIN_ACTIVE/INACTIVE: Chain validator status
✅ Verification successful
The search results show proper validation and handling of the new staker status states in the codebase. Let me verify the state transitions by checking for any state machine or transition logic.
Let me try one more search to find any staker status transitions or validations in message handlers.
Based on the search results from all executed scripts, I can now provide a final response:
Staker status implementation verified with proper validation
The expanded StakerStatus enum is properly implemented with clear validation logic in the codebase:
- Protocol status (ACTIVE/INACTIVE) is validated against valaccount existence
- Chain status (ACTIVE/INACTIVE) is validated against validator bonding state
- Status transitions are handled in GetPaginatedStakersByPoolStake with appropriate filtering
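For illustration, here is a minimal Go sketch of that filtering model. The types and field names are simplified stand-ins (the real module works on generated protobuf enums, valaccounts, and the staking keeper's validator objects), not the module's actual code:

```go
package stakers

// StakerStatus mirrors the four query states described above; these Go
// names are illustrative stand-ins for the generated enum values.
type StakerStatus int

const (
	StatusProtocolActive StakerStatus = iota
	StatusProtocolInactive
	StatusChainActive
	StatusChainInactive
)

// staker is a simplified stand-in for the data the keeper derives from
// valaccounts and the chain validator set.
type staker struct {
	valaccounts int  // pools the staker has joined
	bonded      bool // whether the chain validator is bonded
}

// matches sketches how a status filter could decide membership:
// protocol status hinges on valaccount existence, chain status on the
// validator's bonding state.
func matches(s staker, status StakerStatus) bool {
	switch status {
	case StatusProtocolActive:
		return s.valaccounts > 0
	case StatusProtocolInactive:
		return s.valaccounts == 0
	case StatusChainActive:
		return s.bonded
	case StatusChainInactive:
		return !s.bonded
	default:
		return false
	}
}
```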
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for status transition handling
ast-grep --pattern 'func $_(status StakerStatus) {
$$$
}'
# Check for proper status validation
rg "STAKER_STATUS_" --type go -A 5
Length of output: 3982
Script:
#!/bin/bash
# Search for state transition logic
rg "SetStakerStatus|UpdateStakerStatus" --type go -A 5
# Look for any methods that might modify staker status
ast-grep --pattern 'func $_($_ $_, $_ StakerStatus) {
$$$
}'
Length of output: 121
Script:
#!/bin/bash
# Search for message handlers or methods that might handle staker status
rg "func.*Staker.*Status" --type go -A 5
# Look for validation rules or constraints
rg "validateStaker|ValidateStaker" --type go -A 5
Length of output: 1554
x/query/keeper/grpc_query_stakers_by_pool_count.go (1)
32-36
: Verify pagination implementation
The pagination implementation needs verification for:
- Large dataset handling
- Proper error propagation
- Resource consumption
✅ Verification successful
Let me gather more information about the pagination implementation.
Let me check one more aspect of the pagination implementation to understand how the array pagination is handled.
Pagination implementation is properly handled
The implementation shows a well-structured pagination system with:
- Proper error handling for unsupported key-based pagination
- Resource-efficient array-based pagination with bounds checking
- Correct total count tracking and next key management
- Appropriate error propagation throughout the chain
The arrayPaginationAccumulator implementation ensures safe handling of large datasets by:
- Using offset-based pagination with page and limit parameters
- Computing proper index bounds to avoid out-of-range access
- Maintaining an accurate total count
- Implementing proper error checks at each step
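As a reference for the pattern, here is a minimal, self-contained sketch of offset-based array pagination with bounds checking. The function name and signature are assumptions for illustration, not the accumulator's actual API:

```go
package pagination

// paginate returns the requested page of items plus the total count,
// clamping the index bounds so out-of-range access is impossible.
// page is 1-based; limit is the page size.
func paginate[T any](items []T, page, limit int) (pageItems []T, total int) {
	total = len(items)
	if page < 1 || limit < 1 {
		return nil, total
	}
	start := (page - 1) * limit
	if start >= total {
		return nil, total
	}
	end := start + limit
	if end > total {
		end = total
	}
	return items[start:end], total
}
```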
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for pagination implementations
ast-grep --pattern 'func $_($_ context.Context, pagination *query.PageRequest) {
$$$
}'
# Check error handling patterns
rg "GetPaginatedStakersByPoolCount.*error" -A 5
Length of output: 666
Script:
#!/bin/bash
# Check the implementation of GetBondedValidatorsByPower
ast-grep --pattern 'func ($_ $_) GetBondedValidatorsByPower(ctx $_) ($_, error) {
$$$
}'
# Look for pagination-related code in the stakers module
rg "pagination.*PageRequest" -A 5 x/stakers/
# Check for any custom pagination implementations
rg "func.*Paginate" x/stakers/
Length of output: 2828
Script:
#!/bin/bash
# Check the array pagination implementation
ast-grep --pattern 'func arrayPaginationAccumulator($_, $_, $_) ($_, error) {
$$$
}'
# Look for ParsePagination usage and implementation
rg "ParsePagination" -A 5
Length of output: 4923
x/stakers/keeper/exported_functions.go (3)
Line range hint 5-21
: LGTM! Import changes align with new functionality.
The addition of sorting and query types imports supports the new staking and voting power calculation features.
34-36
: LGTM! Event type change is consistent.
The update to use stakertypes.EventLeavePool aligns with the module's new type system.
384-392
: Verify edge case: zero bonded tokens handling.
While the code handles zero bonded tokens, consider the implications when:
- A validator's bonded tokens become zero after the stake calculation
- The validator is slashed in multiple pools simultaneously
✅ Verification successful
Let me gather more context about the slashing mechanism and token handling.
Let me gather more information about how the slashing mechanism handles multiple pools.
Based on the comprehensive analysis of the codebase, I can now provide a final response regarding the zero bonded tokens edge case.
Zero bonded tokens handling is properly implemented and safe.
The code correctly handles the zero bonded tokens case by:
- Setting stakeFraction to zero when bonded tokens are zero, preventing any division by zero
- The slashing mechanism properly accounts for this through GetValidatorPoolStake, which maintains pool-specific stake tracking
- Multiple pool slashing is handled safely as each pool's stake is calculated independently using the validator's effective stake in that specific pool
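A minimal sketch of that guard, using cosmossdk.io/math; the function name and signature are assumptions for illustration, not the keeper's actual code:

```go
package stakers

import "cosmossdk.io/math"

// stakeFractionOf sketches the zero-bonded-tokens guard: if a
// validator's bonded tokens are zero the fraction is zero, otherwise
// it is effectiveStake / bondedTokens, so no division by zero occurs.
func stakeFractionOf(effectiveStake, bondedTokens math.Int) math.LegacyDec {
	if bondedTokens.IsZero() {
		return math.LegacyZeroDec()
	}
	return math.LegacyNewDecFromInt(effectiveStake).
		Quo(math.LegacyNewDecFromInt(bondedTokens))
}
```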
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for other zero token handling in the codebase
rg "GetBondedTokens\(\).IsZero\(\)" -A 5
Length of output: 586
Script:
#!/bin/bash
# Check for slashing related code and token handling
rg "func.*Slash" -A 10
# Check for any other validator token checks
rg "validator.*[Tt]oken" -A 5
# Look for any slash fraction calculations
ast-grep --pattern 'stakeFraction := $_'
Length of output: 41017
Script:
#!/bin/bash
# Check for pool-related slashing logic
rg "Slash.*pool" -A 5
# Look for multiple pool handling
rg "GetValidatorPoolStake" -A 5
# Check for validator stake calculations
ast-grep --pattern 'GetBondedTokens().$$$'
Length of output: 54614
Actionable comments posted: 0
🧹 Nitpick comments (4)
x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (1)
Line range hint
284-311
: Verify the max voting power validation logic
The new test case validates that validators with too much voting power (>20%) are properly handled. The test:
- Sets max voting power to 20%
- Verifies that validators exceeding this limit have their stake set to zero
- Confirms the next uploader is removed
A few suggestions to enhance the test:
- Add assertions to verify the actual voting power percentage
- Consider testing edge cases around the 20% threshold
- Add a test case for when validators are just below the threshold
It("Next uploader gets removed due to pool having validators with too much voting power", func() { + // Calculate and verify actual voting power percentages + totalStake := s.App().StakersKeeper.GetTotalStakeOfPool(s.Ctx(), 0) + staker0Stake := s.App().StakersKeeper.GetValidatorPoolStake(s.Ctx(), i.STAKER_0, 0) + votingPower := math.LegacyNewDec(int64(staker0Stake)).Quo(math.LegacyNewDec(int64(totalStake))) + Expect(votingPower.GT(math.LegacyMustNewDecFromStr("0.2"))).To(BeTrue())testutil/integration/helpers.go (1)
90-94
: Consider adding input validation and error handling
While the implementation is functional for testing purposes, it could benefit from some improvements:

- The use of LegacyMustNewDecFromStr could panic with invalid input. Consider using NewDecFromStr instead and handling potential errors.
- Add validation for negative values or other invalid inputs.
Here's a suggested improvement:
```diff
-func (suite *KeeperTestSuite) SetMaxVotingPower(maxVotingPower string) {
+func (suite *KeeperTestSuite) SetMaxVotingPower(maxVotingPower string) error {
+	dec, err := math.LegacyNewDecFromStr(maxVotingPower)
+	if err != nil {
+		return fmt.Errorf("invalid max voting power: %w", err)
+	}
+	if dec.IsNegative() {
+		return fmt.Errorf("max voting power cannot be negative")
+	}
 	params := suite.App().PoolKeeper.GetParams(suite.Ctx())
-	params.MaxVotingPowerPerPool = math.LegacyMustNewDecFromStr(maxVotingPower)
+	params.MaxVotingPowerPerPool = dec
 	suite.App().PoolKeeper.SetParams(suite.Ctx(), params)
+	return nil
 }
```
x/bundles/keeper/logic_round_robin_test.go (1)
268-308
: Consider adding edge cases to max voting power test
While the current test case validates the basic functionality of the max voting power cap, consider adding edge cases:
- Multiple validators exceeding the cap
- All validators at exactly the cap
- Cap value at extremes (very close to 0 or 1)
Example test case structure:
+It("Frequency analysis with multiple validators exceeding max voting power cap", func() { + // ARRANGE + s.SetMaxVotingPower("0.3") + + // Multiple validators exceeding 30% voting power + joinDummy(s, 0, 35) + joinDummy(s, 1, 35) + joinDummy(s, 2, 30) + + // ACT & ASSERT frequency distribution +})x/bundles/keeper/logic_bundles_test.go (1)
310-310
: Review the new helper methods for setting max voting power
The introduction of SetMaxVotingPower helper methods improves test maintainability, but the values (0.4 and 1.0) should be defined as constants.
Consider defining these magic numbers as constants:

```diff
+const (
+	DEFAULT_MAX_VOTING_POWER     = "0.4"
+	SINGLE_NODE_MAX_VOTING_POWER = "1"
+)
```

Then use these constants in the test cases:

```diff
-s.SetMaxVotingPower("0.4")
+s.SetMaxVotingPower(DEFAULT_MAX_VOTING_POWER)

-s.SetMaxVotingPower("1")
+s.SetMaxVotingPower(SINGLE_NODE_MAX_VOTING_POWER)
```

Also applies to: 372-372
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- testutil/integration/helpers.go (1 hunks)
- x/bundles/keeper/keeper_suite_invalid_bundles_test.go (17 hunks)
- x/bundles/keeper/keeper_suite_valid_bundles_test.go (3 hunks)
- x/bundles/keeper/logic_bundles_test.go (3 hunks)
- x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (13 hunks)
- x/bundles/keeper/logic_round_robin_test.go (5 hunks)
- x/stakers/keeper/keeper_suite_effective_stake_test.go (1 hunks)
- x/stakers/keeper/msg_server_join_pool_test.go (20 hunks)
- x/stakers/keeper/msg_server_leave_pool_test.go (7 hunks)
- x/stakers/keeper/msg_server_update_stake_fraction_test.go (21 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- x/stakers/keeper/msg_server_update_stake_fraction_test.go
- x/stakers/keeper/keeper_suite_effective_stake_test.go
🔇 Additional comments (20)
x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (2)
27-27
: Test case description updated to reflect new behavior
The test case description has been updated to indicate that nodes with more than 50% voting power are now handled differently, which aligns with the new shared staking mechanism.
400-400
: Method name updated for clarity
The method name change from GetDelegationOfPool to GetTotalStakeOfPool better reflects its purpose of retrieving the total stake. This improves code readability and maintainability.
Also applies to: 577-577, 640-640, 696-696, 738-738, 868-868, 1029-1029, 1117-1117, 1200-1200, 1366-1366
testutil/integration/helpers.go (1)
90-94
: Verify test coverage and usage patterns
Let's ensure this new helper method is properly utilized in the test suite.
✅ Verification successful
Helper method is well integrated and extensively used across test suites
The SetMaxVotingPower helper method is properly utilized across multiple test suites with various test scenarios:
- Used in stakers tests for join/leave pool operations and stake fraction updates
- Extensively used in effective stake tests with different voting power thresholds (0%, 35%, 50%, 100%)
- Integrated in bundles tests for validation scenarios and round-robin tests
- Test coverage includes edge cases and boundary conditions
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how the new SetMaxVotingPower method is being used in tests
# Search for test files using this method
echo "Files using SetMaxVotingPower:"
rg -l "SetMaxVotingPower" --type go
# Show usage context
echo -e "\nUsage patterns:"
rg -B 2 -A 2 "SetMaxVotingPower" --type go
Length of output: 7517
x/bundles/keeper/logic_round_robin_test.go (4)
Line range hint 37-49
: LGTM: Well-structured helper function addition
The new joinDummyWithStakeFraction function is a good abstraction that makes the existing joinDummy function more flexible by allowing customization of stake fractions. The default value of math.LegacyOneDec() maintains backward compatibility.
230-266
: LGTM: Comprehensive stake fraction test coverage
The test case effectively validates the round-robin selection with different stake fractions:
- Zero stake (0)
- Partial stake (0.5)
- Full stake (1)
The frequency analysis across different sample sizes (10, 100, 100000) provides good confidence in the distribution fairness.
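For intuition on the expected distribution: with equal base stakes and stake fractions 0, 0.5 and 1, the effective stakes stand in ratio 0 : 1 : 2, so the long-run selection shares should approach roughly 0%, 33% and 67%. A tiny back-of-the-envelope sketch (the base stake value is an assumption, not necessarily the test's exact number):

```go
package main

import "fmt"

func main() {
	base := 100.0
	fractions := []float64{0, 0.5, 1}

	total := 0.0
	effective := make([]float64, len(fractions))
	for i, f := range fractions {
		effective[i] = base * f // effective stake = base stake * fraction
		total += effective[i]
	}
	// Prints 0.0%, 33.3% and 66.7% for dummies 0, 1 and 2.
	for i, e := range effective {
		fmt.Printf("dummy %d: expected %.1f%% of selections\n", i, 100*e/total)
	}
}
```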
25-26
: Update test case list in comments
The test case list in the file header comments has been updated to include the new test scenarios. This is good practice for maintaining documentation.
103-104
: Verify default max voting power value
The test setup initializes MaxVotingPower to "1". Ensure this matches the default value in production configuration.
x/bundles/keeper/logic_bundles_test.go (2)
Line range hint 1-1024
: Test coverage appears comprehensive
The test file has good coverage of various scenarios including:
- Pool operation conditions
- Voting scenarios
- Proposal validation
- Error cases
245-245
: Verify the removal of voting power restrictions
The changes remove previous restrictions on voting power, allowing nodes to have higher voting power than before. This could potentially impact the decentralization of the network.
Let's verify the implications of this change:
Please confirm if this change is intentional and document the rationale behind removing these safety checks.
Also applies to: 367-367
x/stakers/keeper/msg_server_leave_pool_test.go (2)
46-47
: LGTM! MaxVotingPower initialization is consistent.
The initialization of MaxVotingPower to "1" in the test setup aligns with the changes across other test files.
91-91
: LGTM! Consistent replacement of delegation checks with total stake checks.
The changes correctly replace all instances of GetDelegationOfPool with GetTotalStakeOfPool, maintaining the same logical assertions while reflecting the new stake-based pool management approach.
Also applies to: 129-131, 183-187, 238-238, 669-669
x/stakers/keeper/msg_server_join_pool_test.go (2)
74-75
: LGTM! MaxVotingPower initialization is consistent.
The initialization of MaxVotingPower to "1" in the test setup aligns with the changes across other test files.
Line range hint 81-89
: Verify initial balance updates.
The initial balances are reset after funding the pool, which is important for accurate test assertions. Ensure that all subsequent balance checks use these updated values.
✅ Verification successful
Let me gather more context about how these balances are being used in the test setup.
Let me check the complete test file to understand the flow of balance checks and assertions.
Based on the complete test file, I can now generate the final response:
Initial balance updates are correctly handled and verified in tests.
The test file shows proper balance tracking:
- Initial balances are captured in the BeforeEach block after validator creation
- All balance checks use these initial values as reference points
- Balance assertions verify both staker and valaddress balances
- Multiple test cases consistently use these initial values for accurate delta calculations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that initial balances are used correctly after reset
# Test: Search for any usage of initialBalance variables
rg "initialBalance(Staker|Valaddress)\d" x/stakers/keeper/msg_server_join_pool_test.go
Length of output: 1479
Script:
#!/bin/bash
# Check the test setup and BeforeEach block to understand balance initialization
rg -A 5 "BeforeEach" x/stakers/keeper/msg_server_join_pool_test.go
Length of output: 190
Script:
#!/bin/bash
# Get the complete test file to understand the balance initialization and verification flow
cat x/stakers/keeper/msg_server_join_pool_test.go
Length of output: 35938
x/bundles/keeper/keeper_suite_invalid_bundles_test.go (4)
67-68
: LGTM! MaxVotingPower initialization is consistent.
The initialization of MaxVotingPower to "1" in the test setup aligns with the changes across other test files.
715-890
: LGTM! New test case for stake fractions.
The new test case "Produce an invalid bundle with multiple validators and stake fractions" properly validates the behavior when validators have different stake fractions.
892-1075
: LGTM! New test case for stake fractions with foreign delegations.
The new test case "Produce an invalid bundle with multiple validators and stake fractions and foreign delegations" properly validates the behavior when validators have both different stake fractions and foreign delegations.
81-89
: Verify balance tracking across test stages.
Multiple balance reset points are added to track the state accurately before key operations. Ensure that all balance checks use the correct reset point as reference.
Also applies to: 143-150, 338-345, 554-561, 763-770, 946-953
✅ Verification successful
Let me analyze the test file further to understand the balance checks in their full context.
Let me check the test structure and balance assertions to understand the complete flow.
Based on the analysis of the test file, I can now provide the final response:
✅ Balance tracking implementation is correct and consistent
The balance reset points are properly implemented before key operations in the test flow:
- Initial balances are captured in BeforeEach for a clean state
- Balances are reset before each significant action (validator creation, voting)
- All balance checks consistently use the correct reference points with proper Expect assertions
The multiple balance reset points are intentional and necessary to track state changes across different test stages, ensuring accurate validation of the protocol's economic behavior.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify balance reset points and their usage
# Test: Search for balance resets and subsequent balance checks
rg "initialBalance.*=.*GetBalanceFromAddress|balanceAfter.*=.*GetBalanceFromAddress|Expect\(.*Balance.*\)" x/bundles/keeper/keeper_suite_invalid_bundles_test.go
Length of output: 4982
Script:
#!/bin/bash
# Get more context around the balance checks to understand the test stages
rg -B 5 -A 5 "initialBalance.*=.*GetBalanceFromAddress" x/bundles/keeper/keeper_suite_invalid_bundles_test.go
Length of output: 4823
Script:
#!/bin/bash
# Get the test descriptions and structure
rg -B 2 -A 2 "Describe|Context|It|When" x/bundles/keeper/keeper_suite_invalid_bundles_test.go
Length of output: 1236
x/bundles/keeper/keeper_suite_valid_bundles_test.go (3)
967-1164
: LGTM: Maximum voting power test implementation looks correct
The test case thoroughly validates the maximum voting power per pool functionality by:
- Setting max voting power to 0.4 (40%)
- Testing stake calculations with multiple validators
- Verifying correct slashing behavior with partial stakes
- Confirming proper reward distribution under voting power constraints
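For reference, one way such a cap can be realized is to scale an over-weight validator's effective stake down until its share equals the cap. This is a hedged sketch of the arithmetic only, not necessarily the module's exact capping algorithm (which may, for example, cap several validators iteratively):

```go
package stakers

// capStake solves stake' / (stake' + totalOthers) = maxShare for stake'
// and applies it only when the validator exceeds the cap. maxShare is
// given as the ratio num/den (e.g. 2/5 for 0.4) to stay in integers.
func capStake(stake, totalOthers, num, den int64) int64 {
	if den <= num {
		return stake // a cap of 100% or more never binds
	}
	// stake' = totalOthers * max / (1 - max) = totalOthers * num / (den - num)
	capped := totalOthers * num / (den - num)
	if stake > capped {
		return capped
	}
	return stake
}
```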
926-926
: LGTM: Updated stake calculation
The change from GetDelegationOfPool to GetTotalStakeOfPool aligns with the terminology updates and correctly reflects the total stake calculation.
1052-1052
: Address TODO comment regarding uploader selection
The TODO comment questions why staker 2 is selected as the next uploader. This needs investigation to ensure the uploader selection logic is working as intended.
Let's verify the uploader selection logic:
✅ Verification successful
Based on the code context gathered, I can now provide a final response about the uploader selection:
The next uploader selection is deterministic based on round-robin validator set
The selection of staker 2 as the next uploader is determined by the round-robin validator set implementation, which:
- Maintains a progress map for each validator in the pool
- Uses a fair selection algorithm where new validators start with a penalty (-1.125 * total_pool_delegation) to put them at the end of the queue
- Selects the next uploader based on the validator's progress in the round-robin set
The TODO comment can be removed as the selection of staker 2 follows the designed round-robin mechanism to ensure fair distribution of upload opportunities among validators.
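A condensed sketch of that mechanism under simplified assumptions (plain int64 stakes, no tie-breaking rules): only the -1.125 join penalty is taken from the description above; everything else, including whether the penalty base includes the joining stake, is illustrative.

```go
package bundles

type roundRobin struct {
	stakes   map[string]int64 // staker address -> effective pool stake
	progress map[string]int64 // staker address -> round-robin progress
}

// join places a new validator at the end of the queue via the
// -1.125 * total_pool_stake penalty (integer approximation).
func (rr *roundRobin) join(address string, stake int64) {
	total := stake
	for _, s := range rr.stakes {
		total += s
	}
	rr.stakes[address] = stake
	rr.progress[address] = -total * 9 / 8 // -1.125 * total
}

// next advances every validator by its stake, picks the one with the
// highest progress, and pushes the winner back by the total stake.
func (rr *roundRobin) next() (best string) {
	total := int64(0)
	for addr, s := range rr.stakes {
		rr.progress[addr] += s
		total += s
	}
	for addr, p := range rr.progress {
		if best == "" || p > rr.progress[best] {
			best = addr
		}
	}
	rr.progress[best] -= total
	return best
}
```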
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the uploader selection logic implementation
# Look for the function that determines the next uploader
ast-grep --pattern 'func $_(ctx $_, poolId $_, $$$) $_ {
$$$
nextUploader := $_
$$$
}'
Length of output: 94
Script:
#!/bin/bash
# Let's try a broader search for uploader selection logic
rg -A 10 "nextUploader"
Length of output: 21053
Script:
#!/bin/bash
# Also search for the test context around the TODO comment
rg -B 10 -A 10 "why is staker 2 selected as next uploader"
Length of output: 1759
Script:
#!/bin/bash
# Look for any function that might be related to uploader selection
ast-grep --pattern 'func $_($$$) {
$$$
uploader$_ := $_
$$$
}'
Length of output: 70
Script:
#!/bin/bash
# Let's look at the round robin validator set implementation
rg -A 20 "type RoundRobinValidatorSet struct"
Length of output: 1922
Script:
#!/bin/bash
# Let's also look at the test setup before the TODO comment
rg -B 30 "why is staker 2 selected as next uploader"
Length of output: 2572
# Conflicts:
#	x/bundles/keeper/keeper_suite_invalid_bundles_test.go
#	x/bundles/keeper/keeper_suite_valid_bundles_test.go
#	x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go
#	x/bundles/keeper/logic_round_robin_test.go
#	x/query/keeper/helper.go
#	x/stakers/keeper/exported_functions.go
#	x/stakers/keeper/keeper_suite_effective_stake_test.go
#	x/stakers/keeper/msg_server_join_pool_test.go
#	x/stakers/keeper/msg_server_leave_pool_test.go
#	x/stakers/keeper/msg_server_update_stake_fraction_test.go
Actionable comments posted: 1
🧹 Nitpick comments (3)
x/stakers/keeper/exported_functions.go (1)
95-116
: Consider handling pagination before sorting to improve efficiency.
Sorting all validators by pool count before pagination can cause scalability issues for large validator sets. If feasible, consider partial sorting or paginating first, then sorting. This ensures only the relevant subset must be sorted, reducing overhead.
x/query/keeper/grpc_query_stakers_by_pool.go (2)
21-21
: Improve error details.
Returning ErrKeyNotFound is correct, but consider providing a more descriptive message to clarify which poolKey was not found.
34-37
: Sorting before pagination may be costly.
As previously discussed, consider partial or paginated sorting to avoid unnecessary overhead for large data sets.
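One way to realize the partial-sorting idea is a bounded min-heap that keeps only the top k entries by stake, giving O(n log k) instead of a full O(n log n) sort. This is a sketch of the technique only, not code from the module:

```go
package query

import "container/heap"

type entry struct {
	address string
	stake   uint64
}

// minHeap orders entries by ascending stake so the smallest of the
// kept top-k entries sits at the root and can be evicted cheaply.
type minHeap []entry

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].stake < h[j].stake }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x any)        { *h = append(*h, x.(entry)) }
func (h *minHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// topKByStake returns the k highest-stake entries in descending order
// without sorting the full slice.
func topKByStake(entries []entry, k int) []entry {
	h := &minHeap{}
	for _, e := range entries {
		heap.Push(h, e)
		if h.Len() > k {
			heap.Pop(h)
		}
	}
	out := make([]entry, h.Len())
	for i := len(out) - 1; i >= 0; i-- {
		out[i] = heap.Pop(h).(entry)
	}
	return out
}
```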
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- x/query/keeper/grpc_query_stakers_by_pool.go (2 hunks)
- x/stakers/keeper/exported_functions.go (3 hunks)
- x/stakers/keeper/keeper_suite_effective_stake_test.go (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- x/stakers/keeper/keeper_suite_effective_stake_test.go
🧰 Additional context used
📓 Learnings (1)
x/stakers/keeper/exported_functions.go (1)
Learnt from: troykessler
PR: KYVENetwork/chain#211
File: x/stakers/keeper/exported_functions.go:334-342
Timestamp: 2024-12-20T08:52:36.443Z
Learning: The `GetValidatorPoolStakes()` method ensures that the effective stake cannot exceed the validator's total bonded tokens, so no additional check is necessary.
🔇 Additional comments (5)
x/stakers/keeper/exported_functions.go (2)
6-8
: No issues with gRPC imports.
Imports for "grpc/codes" and "grpc/status" are routine for returning structured gRPC errors.
190-198
: Consistent with GetValidatorPoolStakes logic.
This function aligns with the staker logic and effectively sums stakes. The retrieved learning notes that GetValidatorPoolStakes() already ensures no stake exceeds the validator's total bonded tokens, so no extra checks are needed here.
x/query/keeper/grpc_query_stakers_by_pool.go (3)
5-5
: Approved import of sort.
The use of "sort" is necessary here and poses no issues.
25-29
: Straightforward initialization and append.
Creating the stakers slice and appending FullStaker objects is clear and concise.
32-32
: Potential caching improvement.
You retrieve validator stakes for the entire pool here, which may be impactful if called frequently in a large network.
Actionable comments posted: 1
🧹 Nitpick comments (5)
x/stakers/keeper/exported_functions.go (4)
61-63
: Check for inefficiency in repeated calls to “GetValaccountsFromStaker”.
Inside the loop, each iteration calls “GetValaccountsFromStaker”. For large networks, repeated queries can be costly. Consider caching staker valaccounts or using a single precomputed map keyed by staker.
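A small sketch of that precomputed map, assuming a hypothetical Valaccount stand-in type and a one-shot fetch of all valaccounts before the loop:

```go
package stakers

// Valaccount is a simplified stand-in for the module's valaccount type.
type Valaccount struct {
	Staker string
	PoolId uint64
}

// buildValaccountIndex groups all valaccounts by staker address once,
// so the loop can do map lookups instead of repeated keeper queries.
func buildValaccountIndex(all []Valaccount) map[string][]Valaccount {
	index := make(map[string][]Valaccount, len(all))
	for _, va := range all {
		index[va.Staker] = append(index[va.Staker], va)
	}
	return index
}
```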
80-85
: Sorting all addresses by total pool stake can be costly.
The TODO comment suggests optimization. Sorting every staker can be computationally expensive, especially for large sets. Consider a more targeted sorting approach or a data structure that maintains sorted stakes.
95-116
: Method “GetPaginatedStakersByPoolCount” logic is nearly identical.
Similar patterns as in “GetPaginatedStakersByPoolStake.” Consider consolidating common code if possible (e.g., retrieving validators).
190-198
: “GetValidatorTotalPoolStake” returning zero if staker is unregistered anywhere.
The method sums stakes across all valaccounts. The logic seems correct, but watch out for potential stale or removed valaccounts. A cautionary note to verify stale records might be prudent.
x/query/keeper/grpc_query_stakers_by_pool.go (1)
32-37
: Sorting stakers by stake.
Sorting is straightforward; the approach is correct for producing descending order. As an optimization, consider if staking data is already sorted or if partial updates are feasible.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- x/query/keeper/grpc_query_stakers_by_pool.go (2 hunks)
- x/stakers/keeper/exported_functions.go (3 hunks)
- x/stakers/keeper/keeper_suite_effective_stake_test.go (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- x/stakers/keeper/keeper_suite_effective_stake_test.go
🔇 Additional comments (7)
x/stakers/keeper/exported_functions.go (4)
6-9
: No issues with the added imports.
The newly imported packages (querytypes, grpc/codes, grpc/status) look appropriate for the introduced gRPC error handling and custom query types.
65-68
: Verify correctness of “protocol inactive” condition.
The loop continues past any staker that has a valaccount, so such stakers are skipped when "protocol inactive" is requested. Ensure this aligns with the intended definition of "protocol inactive."
69-76
: Bonded logic for “chain active/inactive” status.
This code depends on “validator.IsBonded()” to decide chain-level activity. Confirm that stakers in unbonding or jailed states are handled properly if those states differ from “bonded.”
87-93
: Pagination accumulator usage looks correct.
The approach of array-based pagination accumulation is fine, but ensure large staker sets are handled efficiently. Otherwise, the logic is good.
x/query/keeper/grpc_query_stakers_by_pool.go (3)
5-5
: Importing “sort” recognized.
No issues with adding the “sort” package.
21-21
: Pool existence check.
Returning ErrKeyNotFound is appropriate when the pool is not found. This fosters clarity and immediate error feedback.
25-29
: Accumulating FullStaker objects without existence checks.
Skipping any “GetValidator” checks might be deliberate for performance, but ensure that stale or invalid stakers won't break the logic.
Actionable comments posted: 0
🧹 Nitpick comments (2)
testutil/integration/checks.go (2)
127-137
: LGTM! Consider pre-allocating the slice capacity.
The changes look good:

- Type change to FullStaker aligns with the new stake-based approach
- Added validation to ensure staker exists
- Sorting provides deterministic ordering based on stake amount

Consider pre-allocating the slice capacity to improve performance:

```diff
-stakersByPoolState := make([]querytypes.FullStaker, 0)
+stakersByPoolState := make([]querytypes.FullStaker, 0, len(valaccounts))
```
Line range hint
379-401
: Address TODO after API Query rework.
There's a commented-out block of test validations waiting for the API Query rework. This might affect test coverage of staker metadata and commission-related functionality.
Would you like me to help implement these validations once the API Query rework is complete? I can create a GitHub issue to track this task.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests
Chores