tests: add sdm unit tests from source-declarative-manifest connector directory #82
Conversation
📝 Walkthrough

This pull request introduces several changes across multiple configuration files and test scripts. Key modifications include the addition of a "testing" label in the release drafter configuration, updates to GitHub Actions workflows to enhance trigger conditions for testing, and the introduction of new YAML and JSON configuration files for testing purposes. Additionally, new test files have been created to validate the functionality of the declarative source manifest, covering various scenarios for both valid and invalid configurations.
@coderabbitai review
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (18)
unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py (3)
15-19: The spec test looks good! Would you like to add a docstring for clarity? 🤔

The test effectively validates the spec command output. Consider adding a docstring to explain the purpose and expected behavior, wdyt?
```diff
 def test_spec_does_not_raise_value_error(capsys):
+    """
+    Test that the spec command executes successfully and returns the expected manifest configuration.
+    """
     handle_command(["spec"])
```
21-24: Great negative test! Consider adding more specific error validation?

The test effectively checks for ValueError, but we could make it even more robust. What do you think about validating the error message content too?
```diff
 def test_given_no_injected_declarative_manifest_then_raise_value_error(invalid_remote_config):
-    with pytest.raises(ValueError):
+    with pytest.raises(ValueError, match="__injected_declarative_manifest is required"):
         create_declarative_source(["check", "--config", str(invalid_remote_config)])
```
26-28: Good positive test! Want to add more assertions? 🤔

The instance check is good, but we could make the test more comprehensive. Maybe we could also verify some properties of the created source? For example:
```diff
 def test_given_injected_declarative_manifest_then_return_declarative_manifest(valid_remote_config):
     source = create_declarative_source(["check", "--config", str(valid_remote_config)])
     assert isinstance(source, ManifestDeclarativeSource)
+    # Additional assertions to verify source properties
+    assert source.get_manifest() is not None
```

.github/release-drafter.yml (1)
42-44: The autolabeler pattern looks good! Quick thought though...

The regex pattern `/^tests(\(.*\))?\:/i` will catch commits starting with "tests:". Would it be helpful to also catch variations like "test:" or "testing:" for more flexibility? wdyt?
Here's a possible enhancement if you're interested:
```diff
   - label: "testing"
     title:
-      - '/^tests(\(.*\))?\:/i'
+      - '/^test(s|ing)?(\(.*\))?\:/i'
```

unit_tests/source_declarative_manifest/conftest.py (3)
10-12: How about adding some type hints and documentation? 🤔

The function looks great! Would you consider adding type hints and a docstring to make it even more maintainable? Here's what I'm thinking, wdyt?
```diff
-def get_fixture_path(file_name):
+def get_fixture_path(file_name: str) -> str:
+    """
+    Constructs an absolute path to a fixture file relative to this module.
+
+    Args:
+        file_name: The name of the fixture file
+
+    Returns:
+        Absolute path to the fixture file
+    """
     return os.path.join(os.path.dirname(__file__), file_name)
```
34-44: How about adding some error handling for the YAML loading? 💭

The YAML loading looks secure with `safe_load`! Would you consider adding some error handling to make it more robust? Here's a suggestion:

```diff
 @pytest.fixture
 def valid_local_manifest_yaml(valid_local_manifest):
     with open(valid_local_manifest, "r") as file:
-        return yaml.safe_load(file)
+        try:
+            return yaml.safe_load(file)
+        except yaml.YAMLError as e:
+            pytest.fail(f"Failed to parse YAML file {valid_local_manifest}: {e}")
```
46-53: Should we make these config fixtures more generic? 🤔

The fixtures work great, but I noticed they're specifically named for pokeapi. If we plan to test with different APIs in the future, would it make sense to make them more generic? Something like:

```diff
-def valid_local_config_file():
-    return get_fixture_path("resources/valid_local_pokeapi_config.json")
+def valid_local_config_file(api_name: str = "pokeapi"):
+    return get_fixture_path(f"resources/valid_local_{api_name}_config.json")
```

This would make the fixtures more reusable for future API tests, wdyt?
unit_tests/source_declarative_manifest/test_source_declarative_local_manifest.py (6)
5-10: Consider adding type hints to improve IDE support and documentation

The imports look clean and well-organized! Would you consider adding type hints to improve IDE support and make the code more maintainable? For example:

```diff
-from unittest.mock import patch
+from unittest.mock import patch
+from typing import Generator, Any
```

wdyt? 🤔
12-14: Consider using structured JSON comparison instead of substring matching

The current approach using substrings might be fragile if the JSON structure changes. How about using proper JSON objects for comparison? Something like:

```diff
-POKEAPI_JSON_SPEC_SUBSTRING = '"required":["pokemon_name"]'
+POKEAPI_SPEC_REQUIRED_FIELDS = {"required": ["pokemon_name"]}
```

This would make the tests more robust against formatting changes. What do you think? 🤔
17-28: Add docstring and type hints to the fixture

The fixture looks good! Would you consider adding a docstring and type hints to make it more maintainable? Something like:

```diff
 @pytest.fixture(autouse=True)
-def setup(valid_local_manifest_yaml):
+def setup(valid_local_manifest_yaml: dict) -> Generator[None, None, None]:
+    """Setup fixture that patches manifest command checks and YAML parsing.
+
+    Args:
+        valid_local_manifest_yaml: A valid manifest configuration for testing
+    """
     with patch(
```

This would make the fixture's purpose and return type clearer. Thoughts? 🤔
30-34: Enhance spec test with more specific assertions

The test looks good! Consider making the assertion more descriptive by validating the complete spec structure instead of just a substring. Maybe something like:

```diff
 def test_spec_is_poke_api(capsys):
     _run.handle_command(["spec"])
     stdout = capsys.readouterr()
-    assert POKEAPI_JSON_SPEC_SUBSTRING in stdout.out
+    spec = json.loads(stdout.out)
+    assert "pokemon_name" in spec.get("required", []), "Spec should require pokemon_name field"
```

What are your thoughts on this approach? 🤔
36-43: Validate error message in invalid YAML test

Good test! Would you consider capturing and validating the error message to ensure it's user-friendly? Something like:

```diff
 def test_invalid_yaml_throws(capsys, invalid_local_manifest_yaml):
     with patch(
         "airbyte_cdk.cli.source_declarative_manifest._run.YamlDeclarativeSource._read_and_parse_yaml_file",
         return_value=invalid_local_manifest_yaml,
     ):
-        with pytest.raises(ValidationError):
+        with pytest.raises(ValidationError) as exc_info:
             _run.handle_command(["spec"])
+    assert "schema validation failed" in str(exc_info.value)
```

wdyt? 🤔
45-54: Consider parametrizing the config tests

The last two tests follow a similar pattern. Would you consider using pytest's parametrize to make them more maintainable? Something like:

```python
@pytest.mark.parametrize(
    "config_file,expected_status",
    [
        ("invalid_local_config_file", "FAILED"),
        ("valid_local_config_file", "SUCCEEDED"),
    ],
)
def test_check_command(capsys, config_file, expected_status, request):
    config = request.getfixturevalue(config_file)
    _run.handle_command(["check", "--config", str(config)])
    stdout = capsys.readouterr()
    assert json.loads(stdout.out).get("connectionStatus").get("status") == expected_status
```

This would reduce code duplication and make it easier to add more test cases. What do you think? 🤔
.github/workflows/pytest_matrix.yml (1)
Line range hint 35-39: Should we create a tracking issue for Python 3.12 support? 🤔

I notice there's a comment about Python 3.12 being blocked by Pendulum. Would it be helpful to create a GitHub issue to track this limitation and ensure we don't forget about it? This could help us upgrade once Pendulum adds support, wdyt?
Would you like me to help create a GitHub issue to track this?
unit_tests/source_declarative_manifest/resources/valid_local_manifest.yaml (4)
5-5: Consider enhancing the description for better documentation.

The current description "This is just a test" could be more informative. Would you consider adding details about the Pokemon API integration and its purpose? For example: "Declarative source configuration for Pokemon API integration that fetches detailed Pokemon data." wdyt?
7-11: Consider adding more comprehensive health checks.

The current check only verifies the presence of the 'pokemon' stream. Would you consider adding (see the sketch after this list):
- Connection validation to verify API accessibility?
- Rate limit checks to ensure API quotas?
- Schema validation to verify response format?
wdyt?
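Not a full solution, but a minimal sketch of the direction, assuming the manifest's `CheckStream` component is given multiple stream names (it reads from each listed stream, which exercises connectivity and response parsing in one pass; the second stream below is hypothetical and not in the current manifest):

```yaml
# Hypothetical sketch -- probe more than one stream during `check`.
# Only the `pokemon` stream exists in this manifest today.
check:
  type: CheckStream
  stream_names:
    - pokemon
    - pokemon_species  # hypothetical second stream for broader coverage
```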
41-959: Consider externalizing the Pokemon names enum.

The current configuration includes a long list of Pokemon names directly in the YAML. Would you consider moving this to a separate configuration file for better maintainability? This would also make it easier to update the list when new Pokemon are added. wdyt?
Example:

```yaml
pokemon_name:
  type: string
  description: Pokemon requested from the API.
  $ref: "#/definitions/pokemon_names"
```

And create a separate `pokemon_names.yaml`:

```yaml
definitions:
  pokemon_names:
    enum:
      - bulbasaur
      - ivysaur
      # ... rest of the names
```
972-1368: Consider adding schema property descriptions.

The schema is well-structured but lacks descriptions for its properties. Would you consider adding descriptions to help users understand what each field represents? For example:

```yaml
id:
  type: ["null", "integer"]
  description: "Unique identifier for the Pokemon"
name:
  type: ["null", "string"]
  description: "The Pokemon's name"
```

wdyt?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (13)
- .github/release-drafter.yml (2 hunks)
- .github/workflows/pytest_fast.yml (0 hunks)
- .github/workflows/pytest_matrix.yml (1 hunks)
- .github/workflows/semantic_pr_check.yml (1 hunks)
- unit_tests/source_declarative_manifest/conftest.py (1 hunks)
- unit_tests/source_declarative_manifest/resources/invalid_local_manifest.yaml (1 hunks)
- unit_tests/source_declarative_manifest/resources/invalid_local_pokeapi_config.json (1 hunks)
- unit_tests/source_declarative_manifest/resources/invalid_remote_config.json (1 hunks)
- unit_tests/source_declarative_manifest/resources/valid_local_manifest.yaml (1 hunks)
- unit_tests/source_declarative_manifest/resources/valid_local_pokeapi_config.json (1 hunks)
- unit_tests/source_declarative_manifest/resources/valid_remote_config.json (1 hunks)
- unit_tests/source_declarative_manifest/test_source_declarative_local_manifest.py (1 hunks)
- unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py (1 hunks)
💤 Files with no reviewable changes (1)
- .github/workflows/pytest_fast.yml
✅ Files skipped from review due to trivial changes (5)
- unit_tests/source_declarative_manifest/resources/invalid_local_manifest.yaml
- unit_tests/source_declarative_manifest/resources/invalid_local_pokeapi_config.json
- unit_tests/source_declarative_manifest/resources/invalid_remote_config.json
- unit_tests/source_declarative_manifest/resources/valid_local_pokeapi_config.json
- unit_tests/source_declarative_manifest/resources/valid_remote_config.json
🔇 Additional comments (11)
.github/workflows/semantic_pr_check.yml (2)
40-41: LGTM! The addition of "tests" variants aligns well with semantic conventions.
The change to use "tests" (plural) instead of "test" (singular) is consistent with common semantic commit conventions and matches the casing pattern used for other types. Nice work on maintaining consistency!
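For context, a hedged sketch of what the allowed-types list might look like after this change, assuming the workflow uses a standard semantic-PR-title action with a `types` input (every entry except `tests`/`Tests` is an assumption, not taken from the diff):

```yaml
# Hypothetical excerpt of .github/workflows/semantic_pr_check.yml.
# Only the "tests"/"Tests" entries are confirmed by this PR.
types: |
  feat
  Feat
  fix
  Fix
  tests
  Tests
```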
40-41: Shall we verify consistency across related files? 🤔
The change looks good, but we might want to ensure this naming convention is consistently applied across other configuration files. What do you think about running a quick check?
✅ Verification successful
All good! The test-related naming is consistent across the codebase 👍
Looking at the workflow files, the plural form "tests" is consistently used in semantic configurations, and other test-related terms (like pytest, test commands) follow their respective conventions. No inconsistencies found that would need attention.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Check for any remaining instances of singular "test" in semantic configs
# and related workflow files to ensure consistency
echo "Checking for semantic commit type configurations..."
rg -l --type yaml '(types:|type:).*test[^s]' .github/
echo "Checking for other potential semantic validation configs..."
fd -e yml -e yaml -e json semantic
```
Length of output: 334
Script:

```bash
#!/bin/bash
# Let's try a more comprehensive search for test-related patterns in workflow files
echo "Checking all workflow files for test-related patterns..."
rg -i "test" .github/workflows/ -A 2 -B 2
echo "Checking semantic configuration in package.json if it exists..."
fd "package.json" --exec cat {} | jq -r '.commitlint // .config // .release'
echo "Looking for any commitlint config files..."
fd -e js -e json -e yaml -e yml "commitlint"
```
Length of output: 19905
unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py (1)
21-28: Verify fixture definitions

The tests use `valid_remote_config` and `invalid_remote_config` fixtures. Let's make sure they're properly defined.
✅ Verification successful
Let's also check the content of the test file itself to understand the context better:
And let's look for any fixture-related imports in this file:
Fixtures are properly defined in conftest.py ✨

The fixtures `valid_remote_config` and `invalid_remote_config` are well-defined in the conftest.py file. They return paths to JSON config files from the resources directory. The test file correctly uses these fixtures, and the setup looks clean and organized. No issues to report here!
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Look for fixture definitions in conftest.py or other test files
# Search for fixture definitions
rg -A 5 "@pytest.fixture.*(?:valid|invalid)_remote_config"
```
Length of output: 60
Script:

```bash
#!/bin/bash
# Let's try a broader search for these fixtures in test files
fd "test.*\.py$" | xargs rg -l "(?:valid|invalid)_remote_config"
# Also search for any conftest.py files that might contain these fixtures
fd "conftest\.py$" | xargs rg -l "(?:valid|invalid)_remote_config"
```
Length of output: 319
Script:

```bash
cat unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py
```
Length of output: 1095
Script:

```bash
rg "^from.*import.*(?:valid|invalid)_remote_config" unit_tests/source_declarative_manifest/test_source_declarative_remote_manifest.py
```
Length of output: 135
Script:

```bash
#!/bin/bash
# Let's check the conftest.py file where we found the fixture definitions
cat unit_tests/source_declarative_manifest/conftest.py
```
Length of output: 1275
.github/release-drafter.yml (1)
18-18: LGTM! Good categorization choice.
Adding the "testing" label under "Under the Hood ⚙️" makes perfect sense since test improvements are indeed internal enhancements.
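As a rough sketch, the relevant release-drafter block might now look like the following; the category title comes from this review, while the sibling labels are assumptions for illustration only:

```yaml
# Hypothetical excerpt of .github/release-drafter.yml.
# Only the "testing" label is confirmed here; the others are assumed.
categories:
  - title: "Under the Hood ⚙️"
    labels:
      - "chore"
      - "refactor"
      - "testing"
```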
unit_tests/source_declarative_manifest/conftest.py (2)
1-8: Looking good! Clean and well-organized imports.
The imports are appropriate and follow standard conventions.
14-32: Should we verify that all these resource files exist? 🤔
The fixtures look well-organized! Let's make sure all the referenced resource files are present.
✅ Verification successful
All resource files are present and accounted for! ✅
The verification shows that all four resource files exist at their expected locations:
- valid_remote_config.json
- invalid_remote_config.json
- valid_local_manifest.yaml
- invalid_local_manifest.yaml
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify existence of all referenced resource files
echo "Checking resource files..."
for file in "valid_remote_config.json" "invalid_remote_config.json" "valid_local_manifest.yaml" "invalid_local_manifest.yaml"; do
  if ! fd -p "unit_tests/source_declarative_manifest/resources/$file" ; then
    echo "⚠️ Warning: Missing resource file: $file"
  fi
done
```
Length of output: 1210
unit_tests/source_declarative_manifest/test_source_declarative_local_manifest.py (1)
1-54: Overall: Well-structured tests with good coverage! 🎉
The test file looks great! It has good coverage of both success and failure cases, and follows testing best practices. The suggestions above are optional improvements that could make the tests even more robust and maintainable.
.github/workflows/pytest_matrix.yml (2)
16-16: LGTM! The path trigger changes look good! 🎯

Adding `unit_tests/**` to both push and pull_request triggers ensures that the pytest matrix runs whenever test files are modified. This aligns perfectly with the PR's objective of importing unit tests from the source-declarative-manifest connector.
Also applies to: 22-22
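For illustration, a minimal sketch of the trigger block being described, assuming a conventional paths filter (only the `unit_tests/**` entries are confirmed by this review; the other path is an assumption):

```yaml
# Hypothetical excerpt of .github/workflows/pytest_matrix.yml.
# Only the `unit_tests/**` lines are confirmed by this PR review.
on:
  push:
    paths:
      - "airbyte_cdk/**"
      - "unit_tests/**"
  pull_request:
    paths:
      - "airbyte_cdk/**"
      - "unit_tests/**"
```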
Line range hint 35-39: Let's check existing issues about Python 3.12 support 🔍
✅ Verification successful
Found an existing issue about Python 3.12 compatibility (#44) 🎯
I noticed there's already an open issue (#44) tracking Python 3.12 compatibility, created on November 19, 2024. Additionally, there are several related issues in the Pendulum repository, including an open issue about DeprecationWarning in Python 3.12 (#830).
- Issue Compatibility with Python 3.12 #44: "Compatibility with Python 3.12" (open)
- Pendulum #830: "DeprecationWarning about datetime.utcfromtimestamp in Python-3.12" (open)
Would you like to add a reference to these issues in the comment? Something like `# Currently blocked by Pendulum (see #44)`, wdyt? 🤔
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for existing issues about Python 3.12 or Pendulum support
gh issue list --search "Python 3.12 Pendulum in:title,body" --state all
# Also check if Pendulum has an existing issue for Python 3.12
gh api graphql -f query='
{
  search(query: "repo:sdispater/pendulum Python 3.12 in:title,body type:issue", type: ISSUE, first: 5) {
    nodes {
      ... on Issue {
        title
        url
        state
      }
    }
  }
}'
```
Length of output: 1162
unit_tests/source_declarative_manifest/resources/valid_local_manifest.yaml (2)
38-40: LGTM!
The stream reference is correctly configured.
960-971: LGTM!
The metadata configuration is comprehensive and well-structured.
Import `unit_tests` folder from `source-declarative-manifest` directory in the `airbyte` repo.

Drive-by CI updates:
- Run the `pytest-fast` workflow, even if no code changes are detected.
- Add `tests` scope in PR Semantic Title review.

Summary by CodeRabbit
Release Notes

- New Features
- Workflow Enhancements
- Tests