
Refactor: Adding Vitest to Request Page #2652

Conversation

@shivasankaran18 commented Dec 13, 2024

Refactor: Adding Vitest to Request Page

Issue Number: #2569

Did you add tests for your changes? Yes

[Screenshot: 2024-12-13 175144]

Summary
Migrated the testing framework to Vitest.
Updated all test files and configurations to be compatible with Vitest's syntax and features.

Have you read the contributing guide? Yes
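Since the summary above mentions updating configurations for Vitest, the snippet below is a hedged sketch of what a minimal Vitest configuration for a React testing setup might look like; the file name, environment, and setup-file path are assumptions, not taken from this PR:

// vitest.config.ts (illustrative only)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // jsdom gives tests a browser-like DOM, as React Testing Library expects.
    environment: 'jsdom',
    // Expose describe/it/expect globally, mirroring Jest's defaults.
    globals: true,
    // Hypothetical setup file for shared mocks and matchers.
    setupFiles: ['./vitest.setup.ts'],
  },
});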

Summary by CodeRabbit

  • Bug Fixes

    • Enhanced mocking strategy for local storage and window location in tests.
  • Tests

    • Updated test suite to utilize new mocking methods while maintaining existing logic and assertions.


coderabbitai bot commented Dec 13, 2024

Walkthrough

The changes in this pull request focus on refactoring the test file Requests.spec.tsx to utilize the Vitest testing framework instead of Jest. This includes the introduction of new mocking strategies for the localStorage and window.location global objects. The test structure remains intact, with updates to the setup and teardown processes to accommodate the new mocks. The logic and assertions within the tests are preserved, ensuring that the functionality of the Requests component continues to be validated under various conditions.
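As a rough illustration of the mocking strategy described in the walkthrough, a Jest-to-Vitest migration of this kind typically replaces the jest.* helpers with their vi.* equivalents. The sketch below is illustrative only, not the exact contents of Requests.spec.tsx:

import { vi, beforeEach, afterEach } from 'vitest';

beforeEach(() => {
  // jest.fn() becomes vi.fn(); globals such as localStorage can be stubbed with vi.stubGlobal.
  vi.stubGlobal('localStorage', {
    getItem: vi.fn(),
    setItem: vi.fn(),
    removeItem: vi.fn(),
    clear: vi.fn(),
  });
});

afterEach(() => {
  // Restore stubbed globals and reset recorded mock calls between tests.
  vi.unstubAllGlobals();
  vi.clearAllMocks();
});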

Changes

File Path: src/screens/Requests/Requests.spec.tsx
Change Summary: Refactored to use Vitest for mocking localStorage and window.location; updated test setup and teardown.


Suggested labels

refactor

Suggested reviewers

  • palisadoes
  • varshith257

Poem

In a world of tests so bright,
We hopped from Jest to Vitest light.
With mocks that play and stubs that cheer,
Our Requests shine, the path is clear!
Hooray for code, let’s give a cheer,
For every test, we hold so dear! 🐰✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ebf0d09 and 09b4d38.

📒 Files selected for processing (1)
  • src/screens/Requests/Requests.spec.tsx (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/screens/Requests/Requests.spec.tsx


Our Pull Request Approval Process

Thanks for contributing!

Testing Your Code

Remember, your PRs won't be reviewed until these criteria are met:

  1. We don't merge PRs with poor code quality.
    1. Follow coding best practices such that CodeRabbit.ai approves your PR.
  2. We don't merge PRs with failed tests.
    1. When tests fail, click on the Details link to learn more.
    2. Write sufficient tests for your changes (CodeCov Patch Test). Your test coverage must exceed the repository's target threshold.
    3. Tests may fail if you edit sensitive files. Ask to add the ignore-sensitive-files-pr label if the edits are necessary.
  3. We cannot merge PRs with conflicting files. These must be fixed.

Our policies make our code better.

Reviewers

Do not assign reviewers. Our Queue Monitors will review your PR and assign them.
When your PR has been assigned reviewers, contact them to get your code reviewed and approved via:

  1. comments in this PR or
  2. our Slack channel

Reviewing Your Code

Your reviewer(s) will have the following roles:

  1. arbitrators of future discussions with other contributors about the validity of your changes
  2. point of contact for evaluating the validity of your work
  3. person who verifies whether matching issues opened by others should be closed
  4. person who gives general guidance on fixing your tests

CONTRIBUTING.md

Read our CONTRIBUTING.md file. Most importantly:

  1. PRs with issues not assigned to you will be closed by the reviewer
  2. Fix the first comment in the PR so that each issue listed automatically closes

Other

  1. 🎯 Please be considerate of our volunteers' time. Contacting the person who assigned the reviewers is not advised unless they ask for your input. Otherwise, do not @ the person who did the assignment.
  2. Read the CONTRIBUTING.md file.

@github-actions bot left a comment

Congratulations on making your first PR! 🎊 If you haven't already, check out our Contributing Guidelines and PR Reporting Guidelines to ensure that you are following our guidelines for contributing and creating PR.

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (5)
src/screens/Requests/Requests.spec.tsx (5)

32-34: Enhance window.location mock with additional properties

The window.location mock is missing some commonly used properties that might be needed in other tests. Consider adding pathname, search, and hash.

 Object.defineProperty(window, 'location', {
-  value: { href: 'http://localhost/', assign: vi.fn(), reload: vi.fn() },
+  value: {
+    href: 'http://localhost/',
+    pathname: '/',
+    search: '',
+    hash: '',
+    assign: vi.fn(),
+    reload: vi.fn()
+  },
 });

25-34: Consider moving mock setup to a shared test utility file

The localStorage and window.location mocks could be useful across multiple test files. Consider extracting them to a shared test utility file (e.g., testUtils.ts) to promote reusability and consistency.

Example structure:

// src/utils/testUtils.ts
import { vi } from 'vitest';

export function setupStorageMock() {
  vi.stubGlobal('localStorage', {
    getItem: vi.fn(),
    setItem: vi.fn(),
    clear: vi.fn(),
    removeItem: vi.fn(),
  });
}

export function setupLocationMock() {
  Object.defineProperty(window, 'location', {
    value: {
      href: 'http://localhost/',
      pathname: '/',
      search: '',
      hash: '',
      assign: vi.fn(),
      reload: vi.fn()
    },
  });
}

Line range hint 56-58: Reset mock function state in afterEach

While localStorage.clear() cleans the storage, it doesn't reset the mock function state. Consider adding vi.clearAllMocks() to ensure mock function calls are reset between tests.

 afterEach(() => {
   localStorage.clear();
+  vi.clearAllMocks();
 });

Line range hint 47-53: Replace custom wait function with Vitest utilities

Instead of using a custom wait function, consider using Vitest's built-in timer utilities for better test reliability and maintainability.

-async function wait(ms = 100): Promise<void> {
-  await act(() => {
-    return new Promise((resolve) => {
-      setTimeout(resolve, ms);
-    });
-  });
-}

Replace wait() calls with:

await vi.advanceTimersByTimeAsync(100);

Don't forget to add vi.useFakeTimers() in beforeEach and vi.useRealTimers() in afterEach.
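A minimal sketch of that fake-timer wiring (hook placement and test name are assumed here, not copied from the PR):

import { vi, it, beforeEach, afterEach } from 'vitest';

beforeEach(() => {
  vi.useFakeTimers();
});

afterEach(() => {
  vi.useRealTimers();
});

it('advances timers instead of using the custom wait() helper', async () => {
  // Fast-forward pending timers deterministically rather than sleeping.
  await vi.advanceTimersByTimeAsync(100);
});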


Line range hint 142-167: Enhance search functionality test assertions

The search functionality test could be improved with more specific assertions to verify the expected behavior after each search operation.

 test('Testing Search requests functionality', async () => {
   render(
     // ... render code ...
   );

   await wait();
   const searchBtn = screen.getByTestId('searchButton');
   const search1 = 'John';
   userEvent.type(screen.getByTestId(/searchByName/i), search1);
   userEvent.click(searchBtn);
   await wait();
+  expect(screen.getByText('John')).toBeInTheDocument();

   const search2 = 'Pete{backspace}{backspace}{backspace}{backspace}';
   userEvent.type(screen.getByTestId(/searchByName/i), search2);
+  await wait();
+  expect(screen.queryByText('John')).not.toBeInTheDocument();

   // ... rest of the test ...
 });
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a5c9d97 and ebf0d09.

📒 Files selected for processing (1)
  • src/screens/Requests/Requests.spec.tsx (1 hunks)

coderabbitai bot previously approved these changes Dec 13, 2024

@Cioppolo14 commented

@shivasankaran18 Please fix the first comment so that each issue listed automatically closes. The PR_GUIDELINES.md file has details. Please also fix the failed tests.
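For reference, the auto-close behaviour comes from GitHub's closing keywords: a line such as "Fixes #2569" in the PR description (matching the issue number already listed there) closes that issue when the PR is merged.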

@shivasankaran18 (Author) commented

@palisadoes What changes should I make so that the CI tests pass? Could you please help me out?


codecov bot commented Dec 13, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 83.78%. Comparing base (a5c9d97) to head (09b4d38).

Additional details and impacted files
@@                  Coverage Diff                  @@
##           develop-postgres    #2652       +/-   ##
=====================================================
- Coverage             94.55%   83.78%   -10.77%     
=====================================================
  Files                   295      312       +17     
  Lines                  7036     8118     +1082     
  Branches               1516     1773      +257     
=====================================================
+ Hits                   6653     6802      +149     
- Misses                  177     1171      +994     
+ Partials                206      145       -61     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@palisadoes (Contributor) commented

We have a policy of unassigning contributors who close PRs without getting validation from our reviewer team. This is because:

  1. We start looking for people to review PRs when you submit them.
  2. We often contact them and link to the PR. If the PR is closed, the whole effort is wasted.
  3. The historical thread of reviewer comments is broken when the work is spread across multiple PRs, which negatively affects the quality of our code.

Please be considerate of our volunteers' limited time and our desire to improve our code base.

This policy is stated as a pinned post in all our Talawa repositories. Our YouTube videos explain why this practice is not acceptable to our Community.

Unfortunately, if this continues we will have to close the offending PR and unassign you from the issue.

