
Refactor: test from Jest to Vitest #2688

Conversation

@DevyanshuNegi commented Dec 19, 2024

Refactored src/screens/UserPortal/Volunteer/Groups/Groups.test.tsx to src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx

Issue: #2584 (Refactor: src/screens/UserPortal/Volunteer/Groups from Jest to Vitest)

Changes Implemented:

  • Refactored the testing framework from Jest to Vitest in Groups.test.tsx.
  • Updated import statements, mocking methods, and assertions to align with Vitest conventions (see the sketch below).
  • Verified compatibility with the existing codebase using Vitest.
  • Renamed the test file from Groups.test.tsx to Groups.spec.tsx to follow the Vitest naming convention.
  • Ran all tests successfully under the Vitest environment.
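
For reference, a minimal before/after sketch of the mocking change (the Jest version is reconstructed for illustration; the Vitest version mirrors the react-router-dom mock quoted in the review excerpts further down):

// Before (Jest), in Groups.test.tsx (reconstructed example)
jest.mock('react-router-dom', () => ({
  ...jest.requireActual('react-router-dom'),
  useParams: () => ({ orgId: 'orgId' }),
}));

// After (Vitest), in Groups.spec.tsx
import { vi } from 'vitest';

vi.mock('react-router-dom', async () => {
  const actual = await vi.importActual('react-router-dom');
  return {
    ...actual,
    useParams: () => ({ orgId: 'orgId' }),
  };
});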

Screenshot from 2024-12-11 17-27-28
Screenshot from 2024-12-11 17-29-15

Other information
I have read the previous refactor PR and tried to keep things as uniform as possible.
Please suggest any other changes if required.

Summary by CodeRabbit

  • Tests
    • Enhanced test suite for the Groups component with additional scenarios.
    • Added utility functions for debounce behavior and rendering components with Apollo link.
    • Expanded coverage for rendering, sorting, searching, and modal functionality.
    • Improved documentation with detailed comments for clarity.


coderabbitai bot commented Dec 19, 2024

Walkthrough

The pull request focuses on migrating the test suite for the Groups component from Jest to Vitest. The changes involve updating the testing framework, mocking strategies, and test cases to utilize Vitest's syntax and features. New utility functions like debounceWait and renderGroups are introduced to support more comprehensive testing of the component's functionality, including rendering, sorting, searching, and modal interactions.

Changes

File changed: src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx

Change summary:
  • Replaced Jest mocking with Vitest mocking for react-router-dom
  • Added a debounceWait utility function for simulating debounce behavior
  • Added a renderGroups function for rendering the component with an Apollo link
  • Enhanced test cases with more comprehensive coverage (usage sketch below)
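
A condensed usage sketch of the helpers described above, pieced together from identifiers quoted in the review excerpts below (renderGroups, debounceWait, the searchBy test id, the link1 mock link); the group name typed into the search box is an assumed fixture value, not taken from the actual spec file:

it('filters groups after the search input settles', async () => {
  renderGroups(link1); // happy-path StaticMockLink, as used elsewhere in the suite

  const searchInput = await screen.findByTestId('searchBy');
  userEvent.type(searchInput, 'Group 1'); // 'Group 1' is an assumed fixture name

  // Flush the component's debounce before asserting on the filtered list
  await debounceWait();

  await waitFor(() => {
    expect(screen.getByText('Group 1')).toBeInTheDocument();
  });
});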


Suggested reviewers

  • palisadoes

Poem

🐰 Hopping through code with glee,
Vitest replaces Jest, can't you see?
Groups tests now shine so bright,
Mocking and coverage, pure delight!
A rabbit's test suite, clean and free! 🧪


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR. (Beta)
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


Our Pull Request Approval Process

Thanks for contributing!

Testing Your Code

Remember, your PRs won't be reviewed until these criteria are met:

  1. We don't merge PRs with poor code quality.
    1. Follow coding best practices such that CodeRabbit.ai approves your PR.
  2. We don't merge PRs with failed tests.
    1. When tests fail, click on the Details link to learn more.
    2. Write sufficient tests for your changes (CodeCov Patch Test). Your testing level must be better than the target threshold of the repository.
    3. Tests may fail if you edit sensitive files. Ask to add the ignore-sensitive-files-pr label if the edits are necessary.
  3. We cannot merge PRs with conflicting files. These must be fixed.

Our policies make our code better.

Reviewers

Do not assign reviewers. Our Queue Monitors will review your PR and assign them.
When your PR has been assigned reviewers, contact them to get your code reviewed and approved via:

  1. comments in this PR or
  2. our slack channel

Reviewing Your Code

Your reviewer(s) will have the following roles:

  1. arbitrators of future discussions with other contributors about the validity of your changes
  2. point of contact for evaluating the validity of your work
  3. person who verifies matching issues by others that should be closed
  4. person who gives general guidance in fixing your tests

CONTRIBUTING.md

Read our CONTRIBUTING.md file. Most importantly:

  1. PRs with issues not assigned to you will be closed by the reviewer
  2. Fix the first comment in the PR so that each issue listed automatically closes

Other

  1. 🎯 Please be considerate of our volunteers' time. Contacting the person who assigned the reviewers is not advised unless they ask for your input; do not @ them otherwise.
  2. Read the CONTRIBUTING.md file.


coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (1)
src/screens/UserPortal/Volunteer/Groups/Groups.test.tsx (1)

Line range hint 229-239: Enhance error handling test coverage

The error handling test could be more comprehensive by testing different error scenarios.

   it('Error while fetching groups data', async () => {
     renderGroups(link2);
 
     await waitFor(() => {
       expect(screen.getByTestId('errorMsg')).toBeInTheDocument();
+      // Verify error message content
+      expect(screen.getByTestId('errorMsg')).toHaveTextContent(/error/i);
+      // Verify UI state during error
+      expect(screen.queryByTestId('groupName')).not.toBeInTheDocument();
     });
+
+    // Verify user can retry
+    const retryButton = screen.getByRole('button', { name: /retry/i });
+    expect(retryButton).toBeInTheDocument();
+    await userEvent.click(retryButton);
   });
🧹 Nitpick comments (4)
src/screens/UserPortal/Volunteer/Groups/Groups.test.tsx (4)

Line range hint 1-1: Rename file extension to align with PR objectives

According to the PR objectives, the file should be renamed from .test.tsx to .spec.tsx to align with Vitest conventions.


Line range hint 42-49: Consider adding parameter validation to debounceWait

The function is well-implemented, but consider adding parameter validation for better type safety and error handling.

-const debounceWait = async (ms = 300): Promise<void> => {
+const debounceWait = async (ms: number = 300): Promise<void> => {
+  if (ms < 0) {
+    throw new Error('Delay must be a non-negative number');
+  }
   await act(() => {
     return new Promise((resolve) => {
       setTimeout(resolve, ms);
     });
   });
 };

Line range hint 51-77: Consider enhancing renderGroups with error boundary

The utility function is well-structured, but consider adding an error boundary to catch and handle rendering errors gracefully during testing.

+class TestErrorBoundary extends React.Component {
+  componentDidCatch(error: Error) {
+    console.error('Rendering error in test:', error);
+  }
+  render() {
+    return this.props.children;
+  }
+}

 const renderGroups = (link: ApolloLink): RenderResult => {
   return render(
+    <TestErrorBoundary>
     <MockedProvider addTypename={false} link={link}>
       {/* ... existing JSX ... */}
     </MockedProvider>
+    </TestErrorBoundary>
   );
 };

82-89: Enhance type safety in router mocking

Consider adding type safety to the router mock implementation.

-    vi.mock('react-router-dom', async () => {
+    vi.mock('react-router-dom', async (): Promise<typeof import('react-router-dom')> => {
       const actual = await vi.importActual('react-router-dom');
       return {
         ...actual,
-        useParams: () => ({ orgId: 'orgId' }),
+        useParams: (): { orgId: string } => ({ orgId: 'orgId' }),
       };
     });
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4b45d85 and 6344642.

📒 Files selected for processing (1)
  • src/screens/UserPortal/Volunteer/Groups/Groups.test.tsx (11 hunks)

@palisadoes

We have a policy of unassigning contributors who close PRs without getting validation from our reviewer team. This is because:

  1. We start looking for people to review PRs when you submit them.
  2. We often contact them and link to the PR. If the PR is closed the whole effort is wasted.
  3. The historical thread of reviewer comments is broken when the work is spread across multiple PRs. The quality of our code is affected negatively.

Please be considerate of our volunteers' limited time and our desire to improve our code base.

This policy is stated as a pinned post in all our Talawa repositories. Our YouTube videos explain why this practice is not acceptable to our Community.

In most cases you don’t have to close the PR to trigger the GitHub workflow to run again. Making a new commit and pushing it to your GitHub account will normally be sufficient.

Unfortunately, if this continues we will have to close the offending PR and unassign you from the issue.

@DevyanshuNegi reopened this Dec 22, 2024

coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (1)

Migration to Vitest is incomplete - Jest configuration still present

The file under review has been properly migrated to use Vitest (using vi from 'vitest' for mocking), but there are conflicting test configurations in the project:

  1. Both Jest and Vitest configurations are present:

    • vitest.config.ts exists with proper Vitest setup
    • jest.config.js is still present
    • Package.json has both Jest and Vitest test scripts
  2. The project is in a transitional state with:

    • Vitest scripts: test:vitest, test:vitest:watch, test:vitest:coverage
    • Jest script: test still using Jest configuration

Recommendation:

  • Remove Jest configuration and dependencies
  • Update main test script to use Vitest
  • Remove custom Jest test environment at scripts/custom-test-env.js
🔗 Analysis chain

Line range hint 1-266: Verify complete migration from Jest to Vitest

Let's ensure all Jest-specific patterns have been properly migrated to Vitest.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for remaining Jest-specific patterns that should be migrated to Vitest

# Check for Jest-specific imports and patterns
echo "Checking for Jest-specific patterns..."
rg -i "jest\." --type=typescript --type=tsx

# Check for common Jest globals that should be replaced with Vitest equivalents
echo "Checking for Jest globals..."
rg -i "describe|it|test|expect|mock|spy|jest" --type=typescript --type=tsx

# Check for Jest configuration files that should be migrated
echo "Checking for Jest config files..."
fd "jest.config" -t f

Length of output: 460


Script:

#!/bin/bash
# Let's try a different approach without file type filters since they're not recognized
# First, check for Jest-specific patterns
echo "Checking for Jest-specific patterns..."
rg -i "jest\."

# Check for Jest configuration imports or requires
echo "Checking for Jest configuration imports..."
rg "require.*jest|import.*jest"

# Check content of jest.config.js
echo "Checking jest.config.js content..."
cat jest.config.js

# Look for testing-library/jest-dom imports
echo "Checking for @testing-library/jest-dom imports..."
rg "@testing-library/jest-dom"

Length of output: 81108


Script:

#!/bin/bash
# Let's check the specific file under review and its test configuration
echo "Checking the specific file under review..."
cat src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx

# Check for vitest configuration
echo "Checking for vitest configuration..."
cat vitest.config.ts 2>/dev/null || cat vitest.config.js 2>/dev/null || echo "No vitest config found"

# Check package.json for test-related dependencies
echo "Checking package.json for test dependencies..."
cat package.json | grep -A 5 -B 5 '"test":'

Length of output: 10617

🧹 Nitpick comments (4)
src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (4)

18-19: Consider cleaning up test-related imports

While the Vitest-specific imports have been added correctly, consider reviewing if all other test-related imports are still necessary after the migration. For example, some Jest-specific imports might be redundant.


Line range hint 36-48: Consider moving debounceWait to shared test utilities

The debounceWait utility function is well-implemented and documented, but since it's a generic testing helper, consider moving it to a shared test utilities file for reuse across other test suites.
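
A sketch of what such a shared helper might look like, assuming a hypothetical src/utils/test-utils/debounce.ts module (the path, export name, and the act import source are illustrative; the body mirrors the debounceWait implementation already in the spec file):

// src/utils/test-utils/debounce.ts (hypothetical location)
import { act } from '@testing-library/react';

/**
 * Waits out a debounce interval inside act() so that pending state
 * updates are flushed before assertions run.
 */
export const debounceWait = async (ms = 300): Promise<void> => {
  await act(() => {
    return new Promise((resolve) => {
      setTimeout(resolve, ms);
    });
  });
};

// Spec files would then import it instead of redefining it locally:
// import { debounceWait } from 'utils/test-utils/debounce';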


82-97: Consider adding afterEach cleanup

While the test setup is good, consider adding an afterEach block to reset any side effects that might occur during individual tests (e.g., resetting localStorage, clearing mocked function calls). This helps ensure test isolation.

Example:

afterEach(() => {
  vi.resetAllMocks();
  localStorage.clear();
});

Line range hint 100-266: Consider adding more edge cases and error scenarios

The test suite has good coverage of happy paths, but consider adding tests for:

  • Network timeout scenarios
  • Invalid search inputs
  • Boundary conditions for sorting (e.g., equal member counts)
  • Modal interaction edge cases (e.g., rapid open/close)
  • Keyboard navigation accessibility

Example test case:

it('should handle network timeout gracefully', async () => {
  const timeoutLink = new StaticMockLink([
    {
      ...MOCKS[0],
      delay: 5000 // Simulate timeout
    }
  ]);
  
  renderGroups(timeoutLink);
  
  await waitFor(() => {
    expect(screen.getByTestId('errorMsg')).toHaveTextContent(/timeout/i);
  });
});
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6344642 and 7e8f99d.

📒 Files selected for processing (1)
  • src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (11 hunks)
🔇 Additional comments (1)
src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (1)

Line range hint 51-76: LGTM! Well-structured test setup utility

The renderGroups function is well-implemented with:

  • Comprehensive provider wrapping
  • Clear type definitions
  • Good JSDoc documentation


codecov bot commented Dec 22, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 87.59%. Comparing base (aff65c9) to head (71d0823).

Additional details and impacted files
@@                  Coverage Diff                  @@
##           develop-postgres    #2688       +/-   ##
=====================================================
+ Coverage             68.61%   87.59%   +18.97%     
=====================================================
  Files                   296      313       +17     
  Lines                  7358     8204      +846     
  Branches               1606     1848      +242     
=====================================================
+ Hits                   5049     7186     +2137     
+ Misses                 2068      824     -1244     
+ Partials                241      194       -47     


@palisadoes

Please fix the failing tests. Our test code coverage system will fail if any of these conditions occur:

  1. The overall code coverage drops below the target threshold of the repository
  2. Any file in the pull request has code coverage levels below the repository threshold
  3. The code does not comply with our linting check or other related formatting requirements

Here are some steps to take:

  1. Click on the Details link for the tests to get an explanation of the cause(s)
  2. You may have to fix a test that is not related to this PR, as in some rare cases we merge faulty test code. If this is the case, fix the test in this PR. Do not open another issue.

We do this to ensure the overall reliability of the code base is maintained. This is a prerequisite for getting your PR approved. Assigned reviewers regularly review the PR queue and tend to focus on PRs that are passing.

Additional notes:

  1. Ensure that CodeRabbit.ai approves your work
  2. Please fix any conflicting files


coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (4)

6-12: Consider consolidating test utility imports

The testing utilities are imported individually, which could be simplified. Also, since Vitest includes Jest DOM matchers by default, the separate import of '@testing-library/jest-dom' might be unnecessary.

Consider this optimization:

-import {
-  cleanup,
-  fireEvent,
-  render,
-  screen,
-  waitFor,
-} from '@testing-library/react';
+import * as rtl from '@testing-library/react';
 import { vi } from 'vitest';
-import '@testing-library/jest-dom';

Also applies to: 24-25


Line range hint 57-83: Consider enhancing renderGroups utility

While the implementation is solid, consider these improvements:

  1. Add error boundary wrapper for better error handling in tests
  2. Consider making the initial route path configurable

Here's a suggested enhancement:

-const renderGroups = (link: ApolloLink): RenderResult => {
+interface RenderGroupsOptions {
+  link: ApolloLink;
+  initialPath?: string;
+}
+
+const renderGroups = ({ 
+  link, 
+  initialPath = '/user/volunteer/orgId' 
+}: RenderGroupsOptions): RenderResult => {
   return render(
     <MockedProvider addTypename={false} link={link}>
-      <MemoryRouter initialEntries={['/user/volunteer/orgId']}>
+      <MemoryRouter initialEntries={[initialPath]}>
         <Provider store={store}>
           <LocalizationProvider dateAdapter={AdapterDayjs}>
             <I18nextProvider i18n={i18n}>
-              <Routes>
-                <Route path="/user/volunteer/:orgId" element={<Groups />} />
-                <Route
-                  path="/"
-                  element={<div data-testid="paramsError"></div>}
-                />
-              </Routes>
+              <ErrorBoundary fallback={<div>Something went wrong</div>}>
+                <Routes>
+                  <Route path="/user/volunteer/:orgId" element={<Groups />} />
+                  <Route
+                    path="/"
+                    element={<div data-testid="paramsError"></div>}
+                  />
+                </Routes>
+              </ErrorBoundary>
             </I18nextProvider>
           </LocalizationProvider>
         </Provider>
      </MemoryRouter>
     </MockedProvider>
   );
 };

88-104: Enhance test setup robustness

While the setup is good, consider these improvements:

  1. Add type safety to the mocked useParams
  2. Consider restoring all mocks after all tests

Here's a suggested enhancement:

 beforeAll(() => {
   vi.mock('react-router-dom', async () => {
     const actual = await vi.importActual('react-router-dom');
     return {
       ...actual,
-      useParams: () => ({ orgId: 'orgId' }),
+      useParams: vi.fn().mockReturnValue({ orgId: 'orgId' } as { orgId: string }),
     };
   });
 });

+afterAll(() => {
+  vi.restoreAllMocks();
+});

 afterEach(() => {
   vi.resetAllMocks();
   cleanup();
 });

Line range hint 136-276: Enhance test assertions and error handling

The test cases are comprehensive, but consider these improvements:

  1. Add more specific assertions for error cases
  2. Add timeout handling for async operations
  3. Add negative test cases for search functionality

Here's an example enhancement for the error test case:

 it('Error while fetching groups data', async () => {
   renderGroups(link2);

-  await waitFor(() => {
-    expect(screen.getByTestId('errorMsg')).toBeInTheDocument();
-  });
+  await waitFor(
+    () => {
+      const errorElement = screen.getByTestId('errorMsg');
+      expect(errorElement).toBeInTheDocument();
+      expect(errorElement).toHaveTextContent(/error fetching groups/i);
+    },
+    { timeout: 2000 }
+  );
 });

+it('Search by Group with no results', async () => {
+  renderGroups(link1);
+  const searchInput = await screen.findByTestId('searchBy');
+  
+  userEvent.type(searchInput, 'NonexistentGroup');
+  await debounceWait();
+  
+  expect(screen.getByText(t.noVolunteerGroups)).toBeInTheDocument();
+});
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7e8f99d and 71d0823.

📒 Files selected for processing (1)
  • src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (12 hunks)
🔇 Additional comments (1)
src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (1)

Line range hint 42-56: Well-implemented debounce utility!

The debounceWait utility is well-documented and correctly implements async behavior with act() wrapper, which is crucial for handling state updates in tests.

coderabbitai bot previously approved these changes Dec 24, 2024

@palisadoes left a comment


See comment


coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (3)

Line range hint 41-82: LGTM: Well-structured utility functions with comprehensive setup

The utility functions are well-documented and implement best practices. The renderGroups function provides a clean wrapper for consistent component rendering across tests.

Consider adding error handling to debounceWait for negative timeout values:

 const debounceWait = async (ms = 300): Promise<void> => {
+  if (ms < 0) {
+    throw new Error('Timeout value must be non-negative');
+  }
   await act(() => {
     return new Promise((resolve) => {
       setTimeout(resolve, ms);
     });
   });
 };

101-103: Consider adding cleanup for localStorage in afterEach

While the current cleanup is good, consider also clearing localStorage to prevent potential test interference.

 afterEach(() => {
   vi.resetAllMocks();
   cleanup(); // from @testing-library/react
+  localStorage.clear(); // Clean up localStorage after each test
 });

Line range hint 106-276: LGTM: Comprehensive test coverage with clear organization

The test suite demonstrates:

  • Clear test case organization with descriptive JSDoc comments
  • Proper async/await usage with waitFor
  • Good coverage of different scenarios (happy path, error cases, empty states)
  • Consistent use of test utilities

Consider adding test cases for (a sketch of one such case follows this list):

  1. Search with special characters
  2. Edge cases for group names (empty, very long)
  3. Concurrent sorting and searching operations
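
For instance, a sketch of the special-characters case, reusing the helpers and test ids quoted earlier in this review (the empty-state lookup via t.noVolunteerGroups follows the earlier NonexistentGroup suggestion and is an assumption about the component's empty state):

it('Search with special characters shows the empty state', async () => {
  renderGroups(link1);

  const searchInput = await screen.findByTestId('searchBy');
  userEvent.type(searchInput, '%$#@!');
  await debounceWait();

  // Assumes the component falls back to its "no volunteer groups" message
  expect(screen.getByText(t.noVolunteerGroups)).toBeInTheDocument();
});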
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71d0823 and 5bf0fd2.

📒 Files selected for processing (1)
  • src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (12 hunks)
🔇 Additional comments (2)
src/screens/UserPortal/Volunteer/Groups/Groups.spec.tsx (2)

Line range hint 1-24: LGTM: Clean imports aligned with Vitest

The imports are properly organized and include all necessary testing utilities while correctly transitioning to Vitest.


87-94: LGTM: Clean Vitest mocking implementation

The mocking setup correctly uses Vitest's vi.mock with async imports, maintaining the actual router functionality while mocking only the necessary parts.

@palisadoes

Minor PR

@palisadoes merged commit c217676 into PalisadoesFoundation:develop-postgres Dec 24, 2024
13 checks passed