
tests: add sdm unit tests from source-declarative-manifest connector directory #82

Merged
merged 7 commits into main from aj/sdm/add-unit-tests on Nov 21, 2024

Conversation

@aaronsteers (Contributor) commented Nov 21, 2024

Import unit_tests folder from source-declarative-manifest directory in the airbyte repo.

Drive-by CI updates:

  • Always run pytest-fast workflow, even if no code changes are detected.
  • Run the full pytest matrix whenever tests are changed. (Previously this was skipped because tests were not in the primary source code folder.)
  • Allow the tests scope in PR Semantic Title review.

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced new YAML and JSON configuration files for both valid and invalid scenarios related to the Pokémon API.
    • Added pytest fixtures to streamline testing with various configuration files.
  • Workflow Enhancements

    • Updated GitHub Actions workflows to improve responsiveness to changes and streamline testing processes.
    • Enhanced automation for labeling commits related to testing.
  • Tests

    • Added comprehensive unit tests for the declarative source manifest functionality, covering both success and error scenarios.

@aaronsteers marked this pull request as ready for review November 21, 2024 22:44
@aaronsteers changed the title from "tests: add sdm unit tests from source-declarative-manifest connector directory" to "test: add sdm unit tests from source-declarative-manifest connector directory" Nov 21, 2024
@aaronsteers changed the title from "test: add sdm unit tests from source-declarative-manifest connector directory" to "tests: add sdm unit tests from source-declarative-manifest connector directory" Nov 21, 2024
coderabbitai bot commented Nov 21, 2024

Walkthrough

This pull request introduces several changes across multiple configuration files and test scripts. Key modifications include the addition of a "testing" label in the release drafter configuration, updates to GitHub Actions workflows to enhance trigger conditions for testing, and the introduction of new YAML and JSON configuration files for testing purposes. Additionally, new test files have been created to validate the functionality of the declarative source manifest, covering various scenarios for both valid and invalid configurations.

Changes

  • .github/release-drafter.yml: Added "testing" label under the "Under the Hood ⚙️" category and updated the autolabeler to match commit messages starting with "tests".
  • .github/workflows/pytest_fast.yml: Removed path filters for push and pull_request, allowing the workflow to trigger on any changes to the main branch.
  • .github/workflows/pytest_matrix.yml: Added path unit_tests/** to trigger the workflow on changes in the unit_tests directory.
  • .github/workflows/semantic_pr_check.yml: Updated allowed types in pull request title validation from test and Test to tests and Tests.
  • unit_tests/source_declarative_manifest/conftest.py: Introduced pytest fixtures for various configuration file paths to streamline testing.
  • unit_tests/source_declarative_manifest/resources/invalid_local_manifest.yaml: Added new YAML file defining a declarative source configuration with a check for stream names.
  • unit_tests/source_declarative_manifest/resources/invalid_local_pokeapi_config.json: Added new empty JSON file.
  • unit_tests/source_declarative_manifest/resources/invalid_remote_config.json: Added new empty JSON file.
  • unit_tests/source_declarative_manifest/resources/valid_local_manifest.yaml: Added new YAML file defining a comprehensive declarative source configuration for Pokémon API integration.
  • unit_tests/source_declarative_manifest/resources/valid_local_pokeapi_config.json: Added new JSON file with a configuration for a specific Pokémon.
  • unit_tests/source_declarative_manifest/resources/valid_remote_config.json: Added new JSON file with a structured declarative manifest for remote configuration.
  • unit_tests/source_declarative_manifest/test_source_declarative_local_manifest.py: Introduced new test file with unit tests for command handling in local manifest.
  • unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py: Introduced new test file with unit tests for the declarative source manifest functionality.

Suggested labels

enhancement

Warning

Rate limit exceeded

@aaronsteers has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 9 minutes and 0 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between f44234e and 11a3ace.

📒 Files selected for processing (1)
  • unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py


@aaronsteers

@coderabbitai review

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (18)
unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py (3)

15-19: The spec test looks good! Would you like to add a docstring for clarity? 🤔

The test effectively validates the spec command output. Consider adding a docstring to explain the purpose and expected behavior, wdyt?

 def test_spec_does_not_raise_value_error(capsys):
+    """
+    Test that the spec command executes successfully and returns the expected manifest configuration.
+    """
     handle_command(["spec"])

21-24: Great negative test! Consider adding more specific error validation?

The test effectively checks for ValueError, but we could make it even more robust. What do you think about validating the error message content too?

 def test_given_no_injected_declarative_manifest_then_raise_value_error(invalid_remote_config):
-    with pytest.raises(ValueError):
+    with pytest.raises(ValueError, match="__injected_declarative_manifest is required"):
         create_declarative_source(["check", "--config", str(invalid_remote_config)])

26-28: Good positive test! Want to add more assertions? 🤔

The instance check is good, but we could make the test more comprehensive. Maybe we could also verify some properties of the created source? For example:

 def test_given_injected_declarative_manifest_then_return_declarative_manifest(valid_remote_config):
     source = create_declarative_source(["check", "--config", str(valid_remote_config)])
     assert isinstance(source, ManifestDeclarativeSource)
+    # Additional assertions to verify source properties
+    assert source.get_manifest() is not None
.github/release-drafter.yml (1)

42-44: The autolabeler pattern looks good! Quick thought though...

The regex pattern '/^tests(\(.*\))?\:/i' will catch commits starting with "tests:". Would it be helpful to also catch variations like "test:" or "testing:" for more flexibility? wdyt?

Here's a possible enhancement if you're interested:

   - label: "testing"
     title:
-      - '/^tests(\(.*\))?\:/i'
+      - '/^test(s|ing)?(\(.*\))?\:/i'
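As a quick sanity check on the two patterns, here is a small Python sketch using the re module. Release-drafter evaluates these as JavaScript regexes, so Python's re only approximates the behavior, though the two flavors agree on patterns this simple.

import re

# Current autolabeler pattern vs. the broader variant suggested above.
current = re.compile(r"^tests(\(.*\))?:", re.IGNORECASE)
proposed = re.compile(r"^test(s|ing)?(\(.*\))?:", re.IGNORECASE)

titles = ["tests: add sdm unit tests", "test(ci): tweak matrix", "testing: new fixtures"]
for title in titles:
    print(f"{title!r}: current={bool(current.match(title))} proposed={bool(proposed.match(title))}")
# Only the proposed pattern also matches the "test(ci):" and "testing:" titles.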
unit_tests/source_declarative_manifest/conftest.py (3)

10-12: How about adding some type hints and documentation? 🤔

The function looks great! Would you consider adding type hints and a docstring to make it even more maintainable? Here's what I'm thinking, wdyt?

-def get_fixture_path(file_name):
+def get_fixture_path(file_name: str) -> str:
+    """
+    Constructs an absolute path to a fixture file relative to this module.
+    
+    Args:
+        file_name: The name of the fixture file
+    
+    Returns:
+        Absolute path to the fixture file
+    """
     return os.path.join(os.path.dirname(__file__), file_name)

34-44: How about adding some error handling for the YAML loading? 💭

The YAML loading looks secure with safe_load! Would you consider adding some error handling to make it more robust? Here's a suggestion:

 @pytest.fixture
 def valid_local_manifest_yaml(valid_local_manifest):
     with open(valid_local_manifest, "r") as file:
-        return yaml.safe_load(file)
+        try:
+            return yaml.safe_load(file)
+        except yaml.YAMLError as e:
+            pytest.fail(f"Failed to parse YAML file {valid_local_manifest}: {e}")

46-53: Should we make these config fixtures more generic? 🤔

The fixtures work great, but I noticed they're specifically named for pokeapi. If we plan to test with different APIs in the future, would it make sense to make them more generic? Something like:

-def valid_local_config_file():
+def valid_local_config_file(api_name: str = "pokeapi"):
-    return get_fixture_path("resources/valid_local_pokeapi_config.json")
+    return get_fixture_path(f"resources/valid_local_{api_name}_config.json")

This would make the fixtures more reusable for future API tests, wdyt?
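One caveat on the sketch above: it drops the @pytest.fixture decorator, and pytest fixtures cannot take plain function arguments like api_name. A minimal alternative sketch using fixture params, assuming the same conftest.py layout shown earlier:

import os

import pytest

def get_fixture_path(file_name: str) -> str:
    return os.path.join(os.path.dirname(__file__), file_name)

# Add new API names to params as matching resource files land in resources/.
@pytest.fixture(params=["pokeapi"])
def valid_local_config_file(request):
    return get_fixture_path(f"resources/valid_local_{request.param}_config.json")

Each test requesting valid_local_config_file would then run once per API name in params.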

unit_tests/source_declarative_manifest/test_source_declarative_local_manifest.py (6)

5-10: Consider adding type hints to improve IDE support and documentation

The imports look clean and well-organized! Would you consider adding type hints to improve IDE support and make the code more maintainable? For example:

-from unittest.mock import patch
+from unittest.mock import patch
+from typing import Generator, Any

wdyt? 🤔


12-14: Consider using structured JSON comparison instead of substring matching

The current approach using substrings might be fragile if the JSON structure changes. How about using proper JSON objects for comparison? Something like:

-POKEAPI_JSON_SPEC_SUBSTRING = '"required":["pokemon_name"]'
+POKEAPI_SPEC_REQUIRED_FIELDS = {"required": ["pokemon_name"]}

This would make the tests more robust against formatting changes. What do you think? 🤔
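A hedged sketch of how the structured comparison might be asserted, assuming the spec command prints an Airbyte SPEC message with the JSON schema nested under spec.connectionSpecification (as in the Airbyte protocol); the test name and exact nesting are illustrative:

import json

from airbyte_cdk.cli.source_declarative_manifest import _run

POKEAPI_SPEC_REQUIRED_FIELDS = {"required": ["pokemon_name"]}

def test_spec_requires_pokemon_name(capsys):
    _run.handle_command(["spec"])
    message = json.loads(capsys.readouterr().out)
    # Assumption: the schema lives under spec.connectionSpecification.
    schema = message["spec"]["connectionSpecification"]
    assert schema.get("required") == POKEAPI_SPEC_REQUIRED_FIELDS["required"]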


17-28: Add docstring and type hints to the fixture

The fixture looks good! Would you consider adding a docstring and type hints to make it more maintainable? Something like:

 @pytest.fixture(autouse=True)
-def setup(valid_local_manifest_yaml):
+def setup(valid_local_manifest_yaml: dict) -> Generator[None, None, None]:
+    """Setup fixture that patches manifest command checks and YAML parsing.
+    
+    Args:
+        valid_local_manifest_yaml: A valid manifest configuration for testing
+    """
     with patch(

This would make the fixture's purpose and return type clearer. Thoughts? 🤔
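Since the diff above is truncated at the with patch( line, here is a sketch of what the fully annotated fixture might look like; it patches only the YAML-parsing hook that appears later in this thread, so any second patch target from the original setup is omitted:

from typing import Generator
from unittest.mock import patch

import pytest

@pytest.fixture(autouse=True)
def setup(valid_local_manifest_yaml: dict) -> Generator[None, None, None]:
    """Patch YAML parsing so every test in this module sees the valid manifest.

    Args:
        valid_local_manifest_yaml: A valid manifest configuration for testing.
    """
    with patch(
        "airbyte_cdk.cli.source_declarative_manifest._run.YamlDeclarativeSource._read_and_parse_yaml_file",
        return_value=valid_local_manifest_yaml,
    ):
        yield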


30-34: Enhance spec test with more specific assertions

The test looks good! Consider making the assertion more descriptive by validating the complete spec structure instead of just a substring. Maybe something like:

 def test_spec_is_poke_api(capsys):
     _run.handle_command(["spec"])
     stdout = capsys.readouterr()
-    assert POKEAPI_JSON_SPEC_SUBSTRING in stdout.out
+    spec = json.loads(stdout.out)
+    assert "pokemon_name" in spec.get("required", []), "Spec should require pokemon_name field"

What are your thoughts on this approach? 🤔


36-43: Validate error message in invalid YAML test

Good test! Would you consider capturing and validating the error message to ensure it's user-friendly? Something like:

 def test_invalid_yaml_throws(capsys, invalid_local_manifest_yaml):
     with patch(
         "airbyte_cdk.cli.source_declarative_manifest._run.YamlDeclarativeSource._read_and_parse_yaml_file",
         return_value=invalid_local_manifest_yaml,
     ):
-        with pytest.raises(ValidationError):
+        with pytest.raises(ValidationError) as exc_info:
             _run.handle_command(["spec"])
+        assert "schema validation failed" in str(exc_info.value)

wdyt? 🤔


45-54: Consider parametrizing the config tests

The last two tests follow a similar pattern. Would you consider using pytest's parametrize to make them more maintainable? Something like:

@pytest.mark.parametrize(
    "config_file,expected_status",
    [
        ("invalid_local_config_file", "FAILED"),
        ("valid_local_config_file", "SUCCEEDED"),
    ],
)
def test_check_command(capsys, config_file, expected_status, request):
    config = request.getfixturevalue(config_file)
    _run.handle_command(["check", "--config", str(config)])
    stdout = capsys.readouterr()
    assert json.loads(stdout.out).get("connectionStatus").get("status") == expected_status

This would reduce code duplication and make it easier to add more test cases. What do you think? 🤔

.github/workflows/pytest_matrix.yml (1)

Line range hint 35-39: Should we create a tracking issue for Python 3.12 support? 🤔

I notice there's a comment about Python 3.12 being blocked by Pendulum. Would it be helpful to create a GitHub issue to track this limitation and ensure we don't forget about it? This could help us upgrade once Pendulum adds support, wdyt?

Would you like me to help create a GitHub issue to track this?

unit_tests/source_declarative_manifest/resources/valid_local_manifest.yaml (4)

5-5: Consider enhancing the description for better documentation.

The current description "This is just a test" could be more informative. Would you consider adding details about the Pokemon API integration and its purpose? For example: "Declarative source configuration for Pokemon API integration that fetches detailed Pokemon data." wdyt?


7-11: Consider adding more comprehensive health checks.

The current check only verifies the presence of the 'pokemon' stream. Would you consider adding:

  1. Connection validation to verify API accessibility?
  2. Rate limit checks to ensure API quotas?
  3. Schema validation to verify response format?
    wdyt?

41-959: Consider externalizing the Pokemon names enum.

The current configuration includes a long list of Pokemon names directly in the YAML. Would you consider moving this to a separate configuration file for better maintainability? This would also make it easier to update the list when new Pokemon are added. wdyt?

Example:

pokemon_name:
  type: string
  description: Pokemon requested from the API.
  $ref: "#/definitions/pokemon_names"

And create a separate pokemon_names.yaml:

definitions:
  pokemon_names:
    enum:
      - bulbasaur
      - ivysaur
      # ... rest of the names

972-1368: Consider adding schema property descriptions.

The schema is well-structured but lacks descriptions for its properties. Would you consider adding descriptions to help users understand what each field represents? For example:

id:
  type: ["null", "integer"]
  description: "Unique identifier for the Pokemon"
name:
  type: ["null", "string"]
  description: "The Pokemon's name"

wdyt?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 249fa23 and f44234e.

📒 Files selected for processing (13)
  • .github/release-drafter.yml (2 hunks)
  • .github/workflows/pytest_fast.yml (0 hunks)
  • .github/workflows/pytest_matrix.yml (1 hunks)
  • .github/workflows/semantic_pr_check.yml (1 hunks)
  • unit_tests/source_declarative_manifest/conftest.py (1 hunks)
  • unit_tests/source_declarative_manifest/resources/invalid_local_manifest.yaml (1 hunks)
  • unit_tests/source_declarative_manifest/resources/invalid_local_pokeapi_config.json (1 hunks)
  • unit_tests/source_declarative_manifest/resources/invalid_remote_config.json (1 hunks)
  • unit_tests/source_declarative_manifest/resources/valid_local_manifest.yaml (1 hunks)
  • unit_tests/source_declarative_manifest/resources/valid_local_pokeapi_config.json (1 hunks)
  • unit_tests/source_declarative_manifest/resources/valid_remote_config.json (1 hunks)
  • unit_tests/source_declarative_manifest/test_source_declarative_local_manifest.py (1 hunks)
  • unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py (1 hunks)
💤 Files with no reviewable changes (1)
  • .github/workflows/pytest_fast.yml
✅ Files skipped from review due to trivial changes (5)
  • unit_tests/source_declarative_manifest/resources/invalid_local_manifest.yaml
  • unit_tests/source_declarative_manifest/resources/invalid_local_pokeapi_config.json
  • unit_tests/source_declarative_manifest/resources/invalid_remote_config.json
  • unit_tests/source_declarative_manifest/resources/valid_local_pokeapi_config.json
  • unit_tests/source_declarative_manifest/resources/valid_remote_config.json
🔇 Additional comments (11)
.github/workflows/semantic_pr_check.yml (2)

40-41: LGTM! The addition of "tests" variants aligns well with semantic conventions.

The change to use "tests" (plural) instead of "test" (singular) is consistent with common semantic commit conventions and matches the casing pattern used for other types. Nice work on maintaining consistency!


40-41: Shall we verify consistency across related files? 🤔

The change looks good, but we might want to ensure this naming convention is consistently applied across other configuration files. What do you think about running a quick check?

✅ Verification successful

All good! The test-related naming is consistent across the codebase 👍

Looking at the workflow files, the plural form "tests" is consistently used in semantic configurations, and other test-related terms (like pytest, test commands) follow their respective conventions. No inconsistencies found that would need attention.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining instances of singular "test" in semantic configs
# and related workflow files to ensure consistency

echo "Checking for semantic commit type configurations..."
rg -l --type yaml '(types:|type:).*test[^s]' .github/

echo "Checking for other potential semantic validation configs..."
fd -e yml -e yaml -e json semantic

Length of output: 334


Script:

#!/bin/bash
# Let's try a more comprehensive search for test-related patterns in workflow files
echo "Checking all workflow files for test-related patterns..."
rg -i "test" .github/workflows/ -A 2 -B 2

echo "Checking semantic configuration in package.json if it exists..."
fd "package.json" --exec cat {} | jq -r '.commitlint // .config // .release'

echo "Looking for any commitlint config files..."
fd -e js -e json -e yaml -e yml "commitlint"

Length of output: 19905

unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py (1)

21-28: Verify fixture definitions

The tests use valid_remote_config and invalid_remote_config fixtures. Let's make sure they're properly defined.

✅ Verification successful

Let's also check the content of the test file itself to understand the context better:

And let's look for any fixture-related imports in this file:


Fixtures are properly defined in conftest.py ✨

The fixtures valid_remote_config and invalid_remote_config are well-defined in the conftest.py file. They return paths to JSON config files from the resources directory. The test file correctly uses these fixtures, and the setup looks clean and organized. No issues to report here!

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Look for fixture definitions in conftest.py or other test files

# Search for fixture definitions
rg -A 5 "@pytest.fixture.*(?:valid|invalid)_remote_config"

Length of output: 60


Script:

#!/bin/bash
# Let's try a broader search for these fixtures in test files
fd "test.*\.py$" | xargs rg -l "(?:valid|invalid)_remote_config"

# Also search for any conftest.py files that might contain these fixtures
fd "conftest\.py$" | xargs rg -l "(?:valid|invalid)_remote_config"

Length of output: 319


Script:

cat unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py

Length of output: 1095


Script:

rg "^from.*import.*(?:valid|invalid)_remote_config" unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py

Length of output: 135


Script:

#!/bin/bash
# Let's check the conftest.py file where we found the fixture definitions
cat unit_tests/source_declarative_manifest/conftest.py

Length of output: 1275

.github/release-drafter.yml (1)

18-18: LGTM! Good categorization choice.

Adding the "testing" label under "Under the Hood ⚙️" makes perfect sense since test improvements are indeed internal enhancements.

unit_tests/source_declarative_manifest/conftest.py (2)

1-8: Looking good! Clean and well-organized imports.

The imports are appropriate and follow standard conventions.


14-32: Should we verify that all these resource files exist? 🤔

The fixtures look well-organized! Let's make sure all the referenced resource files are present.

✅ Verification successful

All resource files are present and accounted for! ✅

The verification shows that all four resource files exist at their expected locations:

  • valid_remote_config.json
  • invalid_remote_config.json
  • valid_local_manifest.yaml
  • invalid_local_manifest.yaml
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify existence of all referenced resource files

echo "Checking resource files..."
for file in "valid_remote_config.json" "invalid_remote_config.json" "valid_local_manifest.yaml" "invalid_local_manifest.yaml"; do
  if ! fd -p "unit_tests/source_declarative_manifest/resources/$file" ; then
    echo "⚠️ Warning: Missing resource file: $file"
  fi
done

Length of output: 1210

unit_tests/source_declarative_manifest/test_source_declarative_local_manifest.py (1)

1-54: Overall: Well-structured tests with good coverage! 🎉

The test file looks great! It has good coverage of both success and failure cases, and follows testing best practices. The suggestions above are optional improvements that could make the tests even more robust and maintainable.

.github/workflows/pytest_matrix.yml (2)

16-16: LGTM! The path trigger changes look good! 🎯

Adding unit_tests/** to both push and pull_request triggers ensures that the pytest matrix runs whenever test files are modified. This aligns perfectly with the PR's objective of importing unit tests from the source-declarative-manifest connector.

Also applies to: 22-22


Line range hint 35-39: Let's check existing issues about Python 3.12 support 🔍

✅ Verification successful

Found an existing issue about Python 3.12 compatibility (#44) 🎯

I noticed there's already an open issue (#44) tracking Python 3.12 compatibility, created on November 19, 2024. Additionally, there are several related issues in the Pendulum repository, including an open issue about DeprecationWarning in Python 3.12 (#830).

Would you like to add a reference to these issues in the comment? Something like # Currently blocked by Pendulum (see #44) wdyt? 🤔

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for existing issues about Python 3.12 or Pendulum support
gh issue list --search "Python 3.12 Pendulum in:title,body" --state all

# Also check if Pendulum has an existing issue for Python 3.12
gh api graphql -f query='
{
  search(query: "repo:sdispater/pendulum Python 3.12 in:title,body type:issue", type: ISSUE, first: 5) {
    nodes {
      ... on Issue {
        title
        url
        state
      }
    }
  }
}'

Length of output: 1162

unit_tests/source_declarative_manifest/resources/valid_local_manifest.yaml (2)

38-40: LGTM!

The stream reference is correctly configured.


960-971: LGTM!

The metadata configuration is comprehensive and well-structured.

@aaronsteers enabled auto-merge (squash) November 21, 2024 23:06
@aaronsteers merged commit 2368b59 into main Nov 21, 2024
23 of 24 checks passed
@aaronsteers deleted the aj/sdm/add-unit-tests branch November 21, 2024 23:21