Feat: Add sync method for bank accounts, dimensions and dimension_values #173
base: master
Conversation
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (3)
tests/test_business_central/fixtures.py (3)
84-142: Consider improving test data quality
- Replace the real email address with a dummy one (e.g., "[email protected]")
- Use Python's None instead of string "None" in custom properties for consistency

-    "employee_email": "[email protected]",
+    "employee_email": "[email protected]",
     ...
     "custom_properties": {
-        "Custom Expense Field": "None",
+        "Custom Expense Field": None,
     },
159-220: Consider adding multi-currency test cases
All bank accounts currently use INR currency. Consider adding accounts with different currencies (e.g., USD, EUR) to improve test coverage for international scenarios.

     {
         "id": "bdbb303f-4319-ee11-9cc4-6045bdc8dcac",
         "number": "NBL",
         "displayName": "New Bank of London",
         "lastModifiedDateTime": "2023-07-03T01:45:45.597Z",
         "bankAccountNumber": "78-66-345",
         "blocked": False,
-        "currencyCode": "INR",
+        "currencyCode": "GBP",
         "currencyId": "00000000-0000-0000-0000-000000000000",
221-229: Consider adding more dimension types
Currently only the "AREA" dimension is defined. Consider adding more dimension types (e.g., "DEPARTMENT", "REGION") to improve test coverage for the sync functionality; a sketch follows below.
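For illustration, a hypothetical DEPARTMENT entry that could be appended to the dimensions fixture is sketched below; the id and timestamp values are placeholders, not real test data:

# Hypothetical additional fixture entry; field names mirror the existing dimension fixtures.
{
    "id": "00000000-0000-0000-0000-000000000001",
    "code": "DEPARTMENT",
    "displayName": "Department",
    "lastModifiedDateTime": "2023-07-03T01:45:45.000Z",
},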
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- apps/business_central/utils.py (1 hunks)
- requirements.txt (1 hunks)
- tests/test_business_central/fixtures.py (4 hunks)
- tests/test_business_central/test_utils.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- requirements.txt
🔇 Additional comments (7)
tests/test_business_central/test_utils.py (1)
4-4: LGTM!
The import statement is properly placed and follows Python's import style guidelines.
tests/test_business_central/fixtures.py (4)
13-13: LGTM: Expense description fields properly structured
The added fields enhance the test data structure with realistic values.
Also applies to: 18-18
37-37: LGTM: Export settings aligned with sync functionality
The auto_map_employees setting supports the new sync features.
47-47: LGTM: Advanced settings properly configured
The auto_create_vendor setting provides a good default for testing.
143-158: LGTM: Well-structured field configuration
The expense_fields and included_fields provide a clear configuration structure for field visibility.
apps/business_central/utils.py (2)
117-117: LGTM: Improved destination ID handling for bank accounts
The conditional logic for determining the destination ID is well-thought-out, using the 'id' field for bank accounts while maintaining the existing behavior for other types.
124-166: Verify integration with existing sync workflow
Let's verify how these new sync methods are integrated with the existing sync workflow.
✅ Verification successful
Integration verified and properly implemented
The new sync methods are well-integrated into the existing codebase:
- The methods are part of a consistent sync pattern alongside other sync methods (sync_companies, sync_accounts, sync_vendors, etc.) in the utils.py file
- sync_dimensions is properly integrated into the workflow:
  - Called from business_central/helpers.py in a dedicated helper function
  - Used in serializers.py during credential setup
  - Integrated into journal entry and purchase invoice export queues
  - Covered by unit tests in test_utils.py, test_helpers.py, and test_views.py
- sync_bank_accounts follows the same pattern as other sync methods and has test coverage in test_utils.py
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how these new sync methods are used in the codebase
# Check for sync method calls
echo "Checking for sync method usage..."
rg -l "sync_bank_accounts|sync_dimensions" --type python
# Check for potential task definitions
echo "Checking for celery tasks..."
rg -l "sync_bank_accounts|sync_dimensions" --type python | grep "tasks.py"
# Check for API endpoints
echo "Checking for API endpoints..."
rg -l "sync_bank_accounts|sync_dimensions" --type python | grep "views.py"
Length of output: 514
Script:
#!/bin/bash
# Let's try a different approach without file type filtering
# Check for sync method usage
echo "Checking for sync method usage..."
rg "sync_bank_accounts|sync_dimensions"
# Find Python files that might contain related code
echo -e "\nFinding relevant Python files..."
fd "tasks.py|views.py|sync.*\.py" --extension py
# Look for other sync methods in the utils file
echo -e "\nChecking other sync methods in utils.py..."
rg "def sync_" apps/business_central/utils.py
Length of output: 4073
def test_sync_bank_accounts(mocker, db, create_business_central_connection):
    workspace_id = 1
    business_central_connection = create_business_central_connection

    mocker.patch.object(
        business_central_connection.connection.bank_accounts,
        'get_all',
        return_value=data['bank_accounts']
    )

    bank_account_count = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="BANK_ACCOUNT"
    ).count()

    assert bank_account_count == 0
    business_central_connection.sync_bank_accounts()

    bank_accounts = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="BANK_ACCOUNT"
    )

    assert bank_accounts.count() == 4
🛠️ Refactor suggestion
Enhance test coverage and robustness.
While the basic happy path is covered, consider the following improvements:
- Add cleanup in setUp or before the test to ensure a clean state
- Verify the actual attributes of synced bank accounts, not just the count
- Add negative test cases (e.g., API failures, malformed data)
- Use workspace fixture instead of hardcoded workspace_id
Here's a suggested improvement:
def test_sync_bank_accounts(mocker, db, create_business_central_connection):
- workspace_id = 1
+ workspace = create_business_central_connection.workspace
+
+ # Cleanup
+ DestinationAttribute.objects.filter(
+ workspace_id=workspace.id,
+ attribute_type="BANK_ACCOUNT"
+ ).delete()
+
business_central_connection = create_business_central_connection
mocker.patch.object(
business_central_connection.connection.bank_accounts,
'get_all',
return_value=data['bank_accounts']
)
bank_account_count = DestinationAttribute.objects.filter(
- workspace_id=workspace_id,
+ workspace_id=workspace.id,
attribute_type="BANK_ACCOUNT"
).count()
assert bank_account_count == 0
business_central_connection.sync_bank_accounts()
bank_accounts = DestinationAttribute.objects.filter(
- workspace_id=workspace_id,
+ workspace_id=workspace.id,
attribute_type="BANK_ACCOUNT"
)
assert bank_accounts.count() == 4
+
+ # Verify attributes of synced bank accounts
+ expected_attributes = ['id', 'name', 'code']
+ for bank_account in bank_accounts:
+ for attr in expected_attributes:
+ assert getattr(bank_account, attr) == data['bank_accounts'][0][attr]
Consider adding these additional test cases:
def test_sync_bank_accounts_api_failure(mocker, db, create_business_central_connection):
business_central_connection = create_business_central_connection
mocker.patch.object(
business_central_connection.connection.bank_accounts,
'get_all',
side_effect=Exception("API Error")
)
with pytest.raises(Exception):
business_central_connection.sync_bank_accounts()
def test_sync_dimension(mocker, db, create_business_central_connection):

    workspace_id = 1
    business_central_connection = create_business_central_connection

    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimensions',
        return_value=data['dimensions']
    )

    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimension_values',
        return_value=data['dimension_values']
    )

    area_count = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="AREA"
    ).count()

    assert area_count == 0
    business_central_connection.sync_dimensions()

    areas = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="AREA"
    )
    assert areas.count() == 3
🛠️ Refactor suggestion
Improve test coverage and add documentation.
The test needs similar improvements as test_sync_bank_accounts, plus some specific to dimensions:
- Add docstring explaining the relationship between dimensions and areas
- Verify both dimensions and their values
- Add cleanup before test
- Use workspace fixture instead of hardcoded ID
Here's a suggested improvement:
def test_sync_dimension(mocker, db, create_business_central_connection):
+ """
+ Test synchronization of dimensions and dimension values.
+
+ Dimensions in Business Central are mapped to Areas in our system.
+ Each dimension can have multiple dimension values.
+ """
- workspace_id = 1
+ workspace = create_business_central_connection.workspace
+
+ # Cleanup
+ DestinationAttribute.objects.filter(
+ workspace_id=workspace.id,
+ attribute_type__in=["AREA", "DIMENSION_VALUE"]
+ ).delete()
+
business_central_connection = create_business_central_connection
mocker.patch.object(
business_central_connection.connection.dimensions,
'get_all_dimensions',
return_value=data['dimensions']
)
mocker.patch.object(
business_central_connection.connection.dimensions,
'get_all_dimension_values',
return_value=data['dimension_values']
)
area_count = DestinationAttribute.objects.filter(
- workspace_id=workspace_id,
+ workspace_id=workspace.id,
attribute_type="AREA"
).count()
assert area_count == 0
business_central_connection.sync_dimensions()
areas = DestinationAttribute.objects.filter(
- workspace_id=workspace_id,
+ workspace_id=workspace.id,
attribute_type="AREA"
)
assert areas.count() == 3
+
+ # Verify dimension values
+ dimension_values = DestinationAttribute.objects.filter(
+ workspace_id=workspace.id,
+ attribute_type="DIMENSION_VALUE"
+ )
+ assert dimension_values.count() == len(data['dimension_values'])
+
+ # Verify attributes
+ for area in areas:
+ dimension = next(d for d in data['dimensions'] if d['id'] == area.destination_id)
+ assert area.value == dimension['name']
+
+ # Verify related dimension values
+ related_values = dimension_values.filter(parent_id=area.id)
+ expected_values = [v for v in data['dimension_values'] if v['dimension_id'] == dimension['id']]
+ assert related_values.count() == len(expected_values)
Consider adding these test cases:
def test_sync_dimensions_api_failure(mocker, db, create_business_central_connection):
business_central_connection = create_business_central_connection
mocker.patch.object(
business_central_connection.connection.dimensions,
'get_all_dimensions',
side_effect=Exception("API Error")
)
with pytest.raises(Exception):
business_central_connection.sync_dimensions()
def test_sync_dimensions_with_empty_values(mocker, db, create_business_central_connection):
business_central_connection = create_business_central_connection
mocker.patch.object(
business_central_connection.connection.dimensions,
'get_all_dimensions',
return_value=data['dimensions']
)
mocker.patch.object(
business_central_connection.connection.dimensions,
'get_all_dimension_values',
return_value=[] # Empty dimension values
)
business_central_connection.sync_dimensions()
areas = DestinationAttribute.objects.filter(
workspace_id=business_central_connection.workspace.id,
attribute_type="AREA"
)
assert areas.count() == 3 # Dimensions should still be synced
dimension_values = DestinationAttribute.objects.filter(
workspace_id=business_central_connection.workspace.id,
attribute_type="DIMENSION_VALUE"
)
assert dimension_values.count() == 0
"dimension_values": [ | ||
{ | ||
"id": "1b4a2945-4319-ee11-9cc4-6045bdc8dcac", | ||
"code": "LARGE", | ||
"dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac", | ||
"displayName": "Large Business", | ||
"consolidationCode": "", | ||
"lastModifiedDateTime": "2023-07-03T01:46:02.817Z", | ||
}, | ||
{ | ||
"id": "1c4a2945-4319-ee11-9cc4-6045bdc8dcac", | ||
"code": "MEDIUM", | ||
"dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac", | ||
"displayName": "Medium Business", | ||
"consolidationCode": "", | ||
"lastModifiedDateTime": "2023-07-03T01:46:02.82Z", | ||
}, | ||
{ | ||
"id": "1d4a2945-4319-ee11-9cc4-6045bdc8dcac", | ||
"code": "SMALL", | ||
"dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac", | ||
"displayName": "Small Business", | ||
"consolidationCode": "", | ||
"lastModifiedDateTime": "2023-07-03T01:46:02.823Z", | ||
}, |
Fix inconsistent dimensionId reference
The dimension values reference dimensionId "5dbc303f-4319-ee11-9cc4-6045bdc8dcac", but the AREA dimension has id "5bbc303f-4319-ee11-9cc4-6045bdc8dcac". This mismatch could cause issues in tests.
{
"id": "1b4a2945-4319-ee11-9cc4-6045bdc8dcac",
"code": "LARGE",
- "dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac",
+ "dimensionId": "5bbc303f-4319-ee11-9cc4-6045bdc8dcac",
"displayName": "Large Business",
Committable suggestion skipped: line range outside the PR's diff.
apps/business_central/utils.py
Outdated
def sync_dimensions(self):
    """
    sync business central dimensions
    """

    dimensions = self.connection.dimensions.get_all_dimensions()
    for dimension in dimensions:
        dimension_attributes = []
        dimension_id = dimension["id"]
        dimension_name = dimension["code"]
        dimension_values = self.connection.dimensions.get_all_dimension_values(
            dimension_id
        )

        for value in dimension_values:
            detail = {"dimension_id": dimension_id}
            dimension_attributes.append(
                {
                    "attribute_type": dimension_name,
                    "display_name": dimension["displayName"],
                    "value": value["displayName"],
                    "destination_id": value["id"],
                    "detail": detail,
                    "active": True,
                }
            )

        DestinationAttribute.bulk_create_or_update_destination_attributes(
            dimension_attributes, dimension_name, self.workspace_id
        )

    return []
Add error handling and optimize performance
The sync_dimensions method needs several improvements:
- Add error handling for API calls
- Optimize performance for nested API calls
- Add data validation
- Add upper limit check
- Consider memory optimization for large datasets
Here's a suggested implementation:
def sync_dimensions(self):
"""
sync business central dimensions
"""
+ try:
+ dimensions = self.connection.dimensions.get_all_dimensions()
+ if not dimensions:
+ logger.info('No dimensions found for workspace %s', self.workspace_id)
+ return
- dimensions = self.connection.dimensions.get_all_dimensions()
for dimension in dimensions:
+ try:
+ dimension_id = dimension.get('id')
+ dimension_name = dimension.get('code')
+
+ if not dimension_id or not dimension_name:
+ logger.warning(
+ 'Skipping dimension with missing id/code for workspace %s',
+ self.workspace_id
+ )
+ continue
+
+ dimension_values = self.connection.dimensions.get_all_dimension_values(
+ dimension_id
+ )
+
+ if not dimension_values:
+ logger.info(
+ 'No dimension values found for dimension %s in workspace %s',
+ dimension_name,
+ self.workspace_id
+ )
+ continue
+
dimension_attributes = []
- dimension_id = dimension["id"]
- dimension_name = dimension["code"]
- dimension_values = self.connection.dimensions.get_all_dimension_values(
- dimension_id
- )
for value in dimension_values:
+ if not value.get('displayName'):
+ continue
+
detail = {"dimension_id": dimension_id}
dimension_attributes.append(
{
"attribute_type": dimension_name,
- "display_name": dimension["displayName"],
- "value": value["displayName"],
- "destination_id": value["id"],
+ "display_name": dimension.get('displayName', dimension_name),
+ "value": value.get('displayName'),
+ "destination_id": value.get('id'),
"detail": detail,
"active": True,
}
)
+ if dimension_attributes:
+ DestinationAttribute.bulk_create_or_update_destination_attributes(
+ dimension_attributes, dimension_name, self.workspace_id
+ )
+ except Exception as e:
+ logger.error(
+ 'Error processing dimension %s for workspace %s: %s',
+ dimension.get('code', 'UNKNOWN'),
+ self.workspace_id,
+ str(e)
+ )
+ continue
- DestinationAttribute.bulk_create_or_update_destination_attributes(
- dimension_attributes, dimension_name, self.workspace_id
- )
-
- return []
+ return dimensions
+ except Exception as e:
+ logger.error(
+ 'Error syncing dimensions for workspace %s: %s',
+ self.workspace_id,
+ str(e)
+ )
+ raise
Additionally, consider implementing batch processing for large datasets to optimize memory usage:
def _process_dimension_values_in_batches(self, dimension_values, batch_size=1000):
"""Process dimension values in batches to optimize memory usage"""
for i in range(0, len(dimension_values), batch_size):
yield dimension_values[i:i + batch_size]
apps/business_central/utils.py
Outdated
def sync_bank_accounts(self):
    """
    sync business central bank accounts
    """

    bank_accounts = self.connection.bank_accounts.get_all()
    field_names = ['currencyCode', 'intercompanyEnabled', 'number']

    self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
    return []
Add error handling and consistency improvements
The sync_bank_accounts method needs several improvements:
- Add error handling for the API call to get_all()
- Add validation for the bank_accounts data
- Add upper limit check similar to other sync methods
- Clarify the purpose of returning an empty list
Here's a suggested implementation:
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
+ try:
+ attribute_count = self.connection.bank_accounts.count()
+ if not self.is_sync_allowed(attribute_type='accounts', attribute_count=attribute_count):
+ logger.info(
+ 'Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit',
+ self.workspace_id,
+ attribute_count
+ )
+ return
- bank_accounts = self.connection.bank_accounts.get_all()
+ bank_accounts = self.connection.bank_accounts.get_all()
+ if not bank_accounts:
+ logger.info('No bank accounts found for workspace %s', self.workspace_id)
+ return
+
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
- return []
+ return bank_accounts
+ except Exception as e:
+ logger.error(
+ 'Error syncing bank accounts for workspace %s: %s',
+ self.workspace_id,
+ str(e)
+ )
+ raise
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def sync_bank_accounts(self):
    """
    sync business central bank accounts
    """
    try:
        attribute_count = self.connection.bank_accounts.count()
        if not self.is_sync_allowed(attribute_type='accounts', attribute_count=attribute_count):
            logger.info(
                'Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit',
                self.workspace_id,
                attribute_count
            )
            return
        bank_accounts = self.connection.bank_accounts.get_all()
        if not bank_accounts:
            logger.info('No bank accounts found for workspace %s', self.workspace_id)
            return

        field_names = ['currencyCode', 'intercompanyEnabled', 'number']
        self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
        return bank_accounts
    except Exception as e:
        logger.error(
            'Error syncing bank accounts for workspace %s: %s',
            self.workspace_id,
            str(e)
        )
        raise
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
apps/business_central/helpers.py (2)
Line range hint 52-57: Enhance error handling for sync failures.
The current implementation has several concerns:
- Using INFO level for exceptions is not appropriate - sync failures should be logged at ERROR level
- Silently catching all exceptions might hide critical failures
- No distinction between different types of failures
Consider this improved implementation:
 for dimension in dimensions:
     try:
         # Dynamically call the sync method based on the dimension
         sync = getattr(business_central_connection, 'sync_{}'.format(dimension))
         sync()
     except AttributeError as e:
-        # Log any exceptions that occur during synchronization
-        logger.info(exception)
+        # Log when sync method doesn't exist
+        logger.error('Sync method not found for dimension %s: %s', dimension, str(e))
+        raise
+    except Exception as e:
+        # Log other sync failures
+        logger.error('Failed to sync dimension %s: %s', dimension, str(e))
+        raise
Line range hint 22-27: Fix potential UnboundLocalError in sync interval check.
There's a logical error in the interval checking code. The time_interval variable is used in the if condition but it's only defined if workspace.destination_synced_at exists.
Apply this fix:
-    if workspace.destination_synced_at:
-        # Calculate the time interval since the last destination sync
-        time_interval = datetime.now(timezone.utc) - workspace.destination_synced_at
-
-    if workspace.destination_synced_at is None or time_interval.days > 0:
+    if workspace.destination_synced_at is None:
+        sync_dimensions(business_central_credential, workspace.id)
+        return True
+
+    time_interval = datetime.now(timezone.utc) - workspace.destination_synced_at
+    if time_interval.days > 0:
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- apps/business_central/helpers.py (1 hunks)
- apps/business_central/utils.py (1 hunks)
🔇 Additional comments (3)
apps/business_central/helpers.py (1)
51-51: Verify the existence of sync methods for new dimensions.
The addition of 'bank_accounts' and 'dimensions' to the dimensions list aligns with the PR objectives. However, we need to ensure that the corresponding sync methods exist in the BusinessCentralConnector class.
Let's verify the existence of these methods:
✅ Verification successful
Let me try a different approach to search for these methods, as they might be defined with different naming patterns or in specific files.
Both sync methods for new dimensions exist and are properly implemented
Based on the search results, I can confirm that both required sync methods exist in the codebase:
- sync_bank_accounts() method is implemented in apps/business_central/utils.py
- sync_dimensions() method is implemented in apps/business_central/utils.py
Both methods are part of the BusinessCentralConnector class and are being used in tests and the main application code. The addition of these dimensions to the list is consistent with the existing implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if sync_bank_accounts and sync_dimensions methods exist in BusinessCentralConnector
# Search for the sync method implementations
echo "Checking for sync_bank_accounts method:"
ast-grep --pattern 'def sync_bank_accounts($$$)'
echo -e "\nChecking for sync_dimensions method:"
ast-grep --pattern 'def sync_dimensions($$$)'
Length of output: 276
Script:
#!/bin/bash
# Search for any method related to bank accounts or dimensions synchronization
echo "Searching for bank account sync related methods:"
rg -i "bank.*account.*sync|sync.*bank.*account" -g "*.py" -A 2
echo -e "\nSearching for dimension sync related methods:"
rg -i "dimension.*sync|sync.*dimension" -g "*.py" -A 2
echo -e "\nChecking BusinessCentralConnector class definition:"
ast-grep --pattern 'class BusinessCentralConnector {
$$$
}'
Length of output: 11954
apps/business_central/utils.py (2)
117-117: LGTM: Appropriate handling of destination_id for bank accounts
The modification correctly ensures that bank accounts use their 'id' field as the destination_id, while maintaining the existing behavior for other types.
124-133: Critical: Add error handling and sync limit validation
The method needs several essential improvements:
- Add sync limit validation using is_sync_allowed
- Add error handling for API calls
- Add validation for bank accounts data
- Consider returning the bank accounts instead of an empty list for consistency
Apply this diff to address these issues:
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
+ try:
+ attribute_count = self.connection.bank_accounts.count()
+ if not self.is_sync_allowed(attribute_type='accounts', attribute_count=attribute_count):
+ logger.info(
+ 'Skipping sync of bank accounts for workspace %s as it has %s accounts',
+ self.workspace_id,
+ attribute_count
+ )
+ return
- bank_accounts = self.connection.bank_accounts.get_all()
+ bank_accounts = self.connection.bank_accounts.get_all()
+ if not bank_accounts:
+ logger.info('No bank accounts found for workspace %s', self.workspace_id)
+ return
+
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
- return []
+ return bank_accounts
+ except Exception as e:
+ logger.error(
+ 'Error syncing bank accounts for workspace %s: %s',
+ self.workspace_id,
+ str(e)
+ )
+ raise
Likely invalid or redundant comment.
apps/business_central/utils.py
Outdated
def sync_dimensions(self):
    """
    sync business central dimensions
    """

    dimensions = self.connection.dimensions.get_all_dimensions()
    for dimension in dimensions:
        dimension_attributes = []
        dimension_id = dimension['id']
        dimension_name = dimension['code']
        dimension_values = self.connection.dimensions.get_all_dimension_values(
            dimension_id
        )

        for value in dimension_values:
            detail = {'dimension_id': dimension_id, 'code': value['code']}
            dimension_attributes.append(
                {
                    'attribute_type': dimension_name,
                    'display_name': dimension['displayName'],
                    'value': value['displayName'],
                    'destination_id': value['id'],
                    'detail': detail,
                    'active': True,
                }
            )

        DestinationAttribute.bulk_create_or_update_destination_attributes(
            dimension_attributes, dimension_name, self.workspace_id
        )

    return []
Critical: Add error handling, validation, and performance optimizations
The method needs several critical improvements:
- Add error handling for API calls
- Add validation for dimensions data
- Add sync limit validation
- Implement batch processing for large datasets
- Add validation for required fields
Apply this diff to address these issues:
def sync_dimensions(self):
"""
sync business central dimensions
"""
+ try:
+ dimensions = self.connection.dimensions.get_all_dimensions()
+ if not dimensions:
+ logger.info('No dimensions found for workspace %s', self.workspace_id)
+ return
- dimensions = self.connection.dimensions.get_all_dimensions()
for dimension in dimensions:
- dimension_attributes = []
- dimension_id = dimension['id']
- dimension_name = dimension['code']
- dimension_values = self.connection.dimensions.get_all_dimension_values(
- dimension_id
- )
+ try:
+ if not dimension.get('id') or not dimension.get('code'):
+ logger.warning(
+ 'Skipping dimension with missing id/code for workspace %s',
+ self.workspace_id
+ )
+ continue
- for value in dimension_values:
- detail = {'dimension_id': dimension_id, 'code': value['code']}
- dimension_attributes.append(
- {
- 'attribute_type': dimension_name,
- 'display_name': dimension['displayName'],
- 'value': value['displayName'],
- 'destination_id': value['id'],
- 'detail': detail,
- 'active': True,
- }
+ dimension_id = dimension['id']
+ dimension_name = dimension['code']
+
+ dimension_values = self.connection.dimensions.get_all_dimension_values(
+ dimension_id
)
+ if not dimension_values:
+ logger.info(
+ 'No dimension values found for dimension %s in workspace %s',
+ dimension_name,
+ self.workspace_id
+ )
+ continue
- DestinationAttribute.bulk_create_or_update_destination_attributes(
- dimension_attributes, dimension_name, self.workspace_id
- )
+ # Process dimension values in batches
+ batch_size = 1000
+ for i in range(0, len(dimension_values), batch_size):
+ batch = dimension_values[i:i + batch_size]
+ dimension_attributes = []
- return []
+ for value in batch:
+ if not value.get('displayName') or not value.get('id'):
+ continue
+
+ detail = {
+ 'dimension_id': dimension_id,
+ 'code': value.get('code', '')
+ }
+ dimension_attributes.append({
+ 'attribute_type': dimension_name,
+ 'display_name': dimension.get('displayName', dimension_name),
+ 'value': value['displayName'],
+ 'destination_id': value['id'],
+ 'detail': detail,
+ 'active': True,
+ })
+
+ if dimension_attributes:
+ DestinationAttribute.bulk_create_or_update_destination_attributes(
+ dimension_attributes,
+ dimension_name,
+ self.workspace_id
+ )
+ except Exception as e:
+ logger.error(
+ 'Error processing dimension %s for workspace %s: %s',
+ dimension.get('code', 'UNKNOWN'),
+ self.workspace_id,
+ str(e)
+ )
+ continue
+
+ return dimensions
+ except Exception as e:
+ logger.error(
+ 'Error syncing dimensions for workspace %s: %s',
+ self.workspace_id,
+ str(e)
+ )
+ raise
Additionally, consider implementing a helper method for batch processing:
def _process_in_batches(self, items, batch_size=1000):
"""Process items in batches to optimize memory usage"""
for i in range(0, len(items), batch_size):
yield items[i:i + batch_size]
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (2)
apps/business_central/utils.py (2)
20-20: Fix dictionary spacing
There's inconsistent spacing around the colon in the dictionary.
-    'bank_accounts':2000,
+    'bank_accounts': 2000,
120-120: Improve readability of destination_id logic
The condition could be more readable with proper parentheses and a clearer variable name (one option is sketched below):
    item['number'] if (item.get('number') and attribute_type != 'BANK_ACCOUNT') else item['id'],
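One possible way to apply that suggestion (a sketch only; the helper variable name is illustrative and not taken from the codebase):

# Sketch: extract the condition into a named variable for readability.
use_number = item.get('number') and attribute_type != 'BANK_ACCOUNT'
destination_id = item['number'] if use_number else item['id']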
apps/business_central/utils.py
Outdated
@@ -136,7 +195,7 @@ def sync_accounts(self):
         Synchronize accounts from MS Dynamics SDK to your application
         """
         attribute_count = self.connection.accounts.count()
-        if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
+        if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
Fix incorrect attribute_type in is_sync_allowed call
The attribute_type should be 'accounts' instead of 'bank_accounts' to match the SYNC_UPPER_LIMIT dictionary.
- if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
+ if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
apps/business_central/utils.py
Outdated
def sync_bank_accounts(self):
    """
    sync business central bank accounts
    """
    attribute_count = self.connection.bank_accounts.count()
    if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
        logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
        return
    bank_accounts = self.connection.bank_accounts.get_all()
    field_names = ['currencyCode', 'intercompanyEnabled', 'number']

    self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
    return []
🛠️ Refactor suggestion
Add error handling and return meaningful data
The method needs several improvements:
- Add try-catch block for error handling
- Return the bank_accounts data instead of an empty list
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
+ try:
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
- return []
+ return bank_accounts
+ except Exception as e:
+ logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
+ raise
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def sync_bank_accounts(self):
    """
    sync business central bank accounts
    """
    try:
        attribute_count = self.connection.bank_accounts.count()
        if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
            logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
            return
        bank_accounts = self.connection.bank_accounts.get_all()
        field_names = ['currencyCode', 'intercompanyEnabled', 'number']
        self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
        return bank_accounts
    except Exception as e:
        logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
        raise
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
tests/test_business_central/test_utils.py (2)
236-258: Fix inconsistent indentation in mock setup.
The indentation in the mock setup blocks is inconsistent with the rest of the file. Ensure consistent indentation throughout.
199-270: Consider adding integration test.
Consider adding a test that verifies the interaction between bank accounts and dimensions when both are synced. This would help ensure that syncing one doesn't affect the other and that they work correctly together.
Example:
def test_sync_bank_accounts_and_dimensions_integration(mocker, db, create_business_central_connection):
    """Test that bank accounts and dimensions can be synced together without interference."""
    business_central_connection = create_business_central_connection
    workspace = business_central_connection.workspace

    # Mock both services
    mocker.patch.object(
        business_central_connection.connection.bank_accounts,
        'get_all',
        return_value=data['bank_accounts']
    )
    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimensions',
        return_value=data['dimensions']
    )

    # Sync both
    business_central_connection.sync_bank_accounts()
    business_central_connection.sync_dimensions()

    # Verify both exist correctly
    bank_accounts = DestinationAttribute.objects.filter(
        workspace_id=workspace.id, attribute_type="BANK_ACCOUNT"
    )
    areas = DestinationAttribute.objects.filter(
        workspace_id=workspace.id, attribute_type="AREA"
    )
    assert bank_accounts.count() == 4
    assert areas.count() == 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- tests/test_business_central/test_utils.py (2 hunks)
🔇 Additional comments (2)
tests/test_business_central/test_utils.py (2)
201-228: Enhance test coverage and robustness for bank account sync.
The test needs several improvements to ensure reliability and completeness:
- Use workspace from fixture instead of hardcoded ID
- Add cleanup before test
- Verify actual bank account attributes
- Add negative test cases
The improvements suggested in the past review comment are still valid and should be implemented.
231-270: Improve test coverage and documentation for dimension sync.
The test needs several improvements:
- Use workspace from fixture instead of hardcoded ID
- Add cleanup before test
- Verify both dimensions and their values
- Add negative test cases
- Add docstring explaining dimension-area relationship
The improvements suggested in the past review comment are still valid and should be implemented.
apps/business_central/utils.py
Outdated
@@ -136,7 +195,7 @@ def sync_accounts(self):
         Synchronize accounts from MS Dynamics SDK to your application
         """
         attribute_count = self.connection.accounts.count()
-        if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
+        if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
account type should be accounts
apps/business_central/utils.py
Outdated
    sync business central bank accounts
    """
    attribute_count = self.connection.bank_accounts.count()
    if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
attribute type should be bank_accounts
…Invoice (#174)
* Feat: Support for dimension during export Journal Entry and purchase Invoice
* comment resolved
* added dimension support for JE and bug fix
* implementation of exporting dimensions for JE and PI
* bug fixed
* comment resolved
* flake fixed
* Feat: Adding new Journal Entry changes (#175)
* Feat: Adding new Journal Entry changes
* comment resolved
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (1)
apps/business_central/utils.py (1)
20-20: Fix spacing after colon in dictionary entry
There's a missing space after the colon in the bank_accounts entry.
-    'bank_accounts':2000,
+    'bank_accounts': 2000,
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- apps/business_central/utils.py (2 hunks)
🔇 Additional comments (1)
apps/business_central/utils.py (1)
120-120: LGTM! Correct handling of destination_id for bank accounts
The modification correctly ensures that bank accounts use 'id' instead of 'number' for the destination_id.
    sync business central bank accounts
    """
    attribute_count = self.connection.bank_accounts.count()
    if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
Fix attribute_type in is_sync_allowed call
According to previous discussions, the attribute_type should be 'accounts' to match the SYNC_UPPER_LIMIT dictionary.
- if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
+ if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
def sync_bank_accounts(self):
    """
    sync business central bank accounts
    """
    attribute_count = self.connection.bank_accounts.count()
    if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
        logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
        return
    bank_accounts = self.connection.bank_accounts.get_all()
    field_names = ['currencyCode', 'intercompanyEnabled', 'number']

    self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
    return []
🛠️ Refactor suggestion
Add error handling and return meaningful data
The method needs error handling and should return the bank_accounts data instead of an empty list.
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
+ try:
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
- return []
+ return bank_accounts
+ except Exception as e:
+ logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
+ raise
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def sync_bank_accounts(self):
    """
    sync business central bank accounts
    """
    try:
        attribute_count = self.connection.bank_accounts.count()
        if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
            logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
            return
        bank_accounts = self.connection.bank_accounts.get_all()
        field_names = ['currencyCode', 'intercompanyEnabled', 'number']
        self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
        return bank_accounts
    except Exception as e:
        logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
        raise
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
apps/fyle/models.py (1)
Line range hint 106-171: Consider sync functionality impact on expense creation.
The create_expense_objects method might need to be updated to handle any new relationships or validations introduced by the bank account and dimension sync functionality. Specifically:
- Consider adding validation for bank account references
- Ensure dimension values in custom_properties are validated against synced dimensions
- Review the handling of posted_at and payment_number in relation to synced bank accounts
Consider implementing a post-save signal or hook to trigger necessary synchronization when expenses are created or updated; a sketch follows below.
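A minimal sketch of such a hook, assuming Django's signal framework and the existing Expense model; the receiver name and the validation/enqueue logic are illustrative only, not part of the current codebase:

from django.db.models.signals import post_save
from django.dispatch import receiver

from apps.fyle.models import Expense


@receiver(post_save, sender=Expense)
def validate_expense_against_synced_dimensions(sender, instance, created, **kwargs):
    # Illustrative only: when a new expense is saved, check its custom_properties
    # against synced dimension values (or enqueue a task that does so).
    if created and instance.custom_properties:
        pass  # e.g. enqueue a validation/sync task here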
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- apps/fyle/models.py (1 hunks)
🔇 Additional comments (2)
apps/fyle/models.py (2)
82-82: Verify the impact of commenting out the is_posted_at_null field.
Instead of commenting out the field, it should be properly removed through Django's migration process. This change could break existing code that relies on this field.
Let's verify the field usage in the codebase:
To properly remove this field:
- Create a migration to remove the field using python manage.py makemigrations
- Update any code that references this field
- Apply the migration using python manage.py migrate
✅ Verification successful
Field removal is safe - no active usage found
The field is_posted_at_null appears to be unused in the codebase:
- No references in any Python files except the commented line in models.py
- No usage in database migrations
- No usage in queries or filters
Since this is a commented-out field with no active usage, removing the commented line is safe and doesn't require a migration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any references to is_posted_at_null field
# Search for field references in Python files
echo "Searching for field references in code..."
rg "is_posted_at_null" --type py
# Search for field in migrations
echo "Searching for field in migrations..."
fd ".*migrations.*\.py$" -x rg "is_posted_at_null" {}
# Search for field usage in queries/filters
echo "Searching for field in queries..."
rg "filter.*is_posted_at_null|is_posted_at_null.*filter" --type py
Length of output: 543
Line range hint 73-74: Verify field handling in sync implementation.
The following fields in the Expense model might be affected by the new sync functionality for bank accounts and dimensions:
- posted_at: Tracks when money is taken from bank
- payment_number: Links to payment details
- custom_properties: May store dimension values
Please ensure that the new sync methods properly handle these fields and maintain data consistency.
Also applies to: 85-86, 90-90
✅ Verification successful
Based on the search results, I can now generate the final response since I have enough information about how these fields are handled in the codebase.
Fields are properly handled in sync and export implementations
The codebase shows these fields are correctly integrated:
- posted_at is used in base export model for date handling (apps/business_central/exports/base_model.py)
- payment_number is properly mapped in expense creation (apps/fyle/models.py)
- custom_properties sync is handled through the dimension sync implementations (apps/business_central/helpers.py and apps/business_central/utils.py)
The sync functionality properly maintains data consistency through:
- Regular dimension syncs triggered by sync_dimensions()
- Automated sync checks via check_interval_and_sync_dimension()
- Export handlers that ensure field values are properly processed
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how these fields are handled in the sync implementation
# Search for field usage in sync methods
echo "Searching for field usage in sync methods..."
rg "sync.*bank_accounts|sync.*dimensions" -A 10 -B 10 --type py
# Check for any existing field validation or processing
echo "Checking field processing..."
rg "posted_at|payment_number|custom_properties.*sync" --type py
Length of output: 43925
Actionable comments posted: 7
🧹 Outside diff range and nitpick comments (10)
apps/business_central/exports/purchase_invoice/tasks.py (2)
105-109: Add return type hint to the method signature.
The method's signature should include a return type hint for better type safety and documentation.
Apply this diff:
-    def construct_dimension_set_line_payload(self, dimensions: list, exported_response: dict):
+    def construct_dimension_set_line_payload(self, dimensions: list, exported_response: dict) -> List[Dict]:
113-117: Improve readability of document mapping construction.
The dictionary comprehension is complex and could be hard to maintain. Consider breaking it down into more readable steps.
Apply this diff:
-        document_mapping = {
-            response['body']['description2']: response['body']['id']
-            for response in exported_response
-            if response.get('body') and 'description2' in response['body'] and 'id' in response['body']
-        }
+        document_mapping = {}
+        for response in exported_response:
+            body = response.get('body', {})
+            if not body:
+                continue
+
+            description2 = body.get('description2')
+            doc_id = body.get('id')
+            if description2 and doc_id:
+                document_mapping[description2] = doc_id

tests/test_business_central/test_tasks.py (2)
9-12: Consider adjusting the logging configuration for tests.
The current logging setup has a few concerns:
- Setting logging level to INFO in tests may produce verbose output
- Logger configuration should ideally be centralized in test configuration
Consider moving this to a centralized test configuration and using a more appropriate level:
-import logging
-
-logger = logging.getLogger(__name__)
-logger.level = logging.INFO
+from tests.conftest import logger  # Assuming centralized logger configuration
44-44: Improve logging clarity and test assertions.
The logging statement lacks context about what's being logged, and the test doesn't verify the ignored dimension value.
Consider these improvements:
-payload_list, _ = export_journal_entry._ExportJournalEntry__construct_journal_entry(journal_entry, lineitems) - -logger.info(payload_list) +payload_list, dimensions = export_journal_entry._ExportJournalEntry__construct_journal_entry(journal_entry, lineitems) + +logger.info('Constructed journal entry payload: %s', payload_list) + +# Add assertions for dimensions +assert dimensions is not None, "Dimensions should not be None"Also applies to: 46-46
Line range hint
168-173
: Improve code readability and test coverage.The line continuation makes the code less readable, and the test doesn't verify the ignored dimension value.
Consider these improvements:
- payload, batch_payload, _ = export_purchase_invoice.\ - _ExportPurchaseInvoice__construct_purchase_invoice( - purchase_invoice, - lineitems - ) + ( + payload, + batch_payload, + dimensions + ) = export_purchase_invoice._ExportPurchaseInvoice__construct_purchase_invoice( + purchase_invoice, + lineitems + ) + + # Add assertions for dimensions + assert dimensions is not None, "Dimensions should not be None"
Based on the gathered information, I can now provide the final response:
Tests needed for dimension handling in export tasks
The implementation shows extensive dimension handling in both journal entry and purchase invoice exports, but the test coverage is insufficient:
- The
__construct_journal_entry
test ignores the dimensions return value (using_
placeholder)- The
__construct_purchase_invoice
test ignores the dimensions return value (using_
placeholder)- No tests verify the
construct_dimension_set_line_payload
method in either class- No assertions check if dimensions are properly collected from line items
- No verification of dimension payload construction and posting in the
post
methodsRequired test coverage:
- Test dimension collection from line items in both export classes
- Verify dimension payload construction for both journal entries and purchase invoices
- Assert proper dimension posting through the BusinessCentralConnector
- Test error cases in dimension handling
🔗 Analysis chain
Line range hint 1-268: Verify test coverage for dimension synchronization.
While the tests have been updated to handle the new dimension values, they don't thoroughly verify the dimension synchronization functionality mentioned in the PR objectives.
Let's check the test coverage:
Let me gather more information about the dimension-related code and tests.
Let me check the actual dimension handling in the export tasks and their corresponding test coverage.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check test coverage for dimension-related code

# Look for dimension-related tests
rg -l "test.*dimension" "tests/"

# Look for dimension-related assertions
rg "assert.*dimension" "tests/"

Length of output: 468
Script:
#!/bin/bash
# Check dimension-related code in the test files
rg -A 5 "test.*dimension" "tests/test_business_central/"

# Check dimension-related code in the implementation
rg -l "dimension" "apps/business_central/"

# Look for dimension handling in export tasks
ast-grep --pattern 'def $_(self, $$$dimension$$$)'

Length of output: 3176
Script:
#!/bin/bash
# Check dimension handling in export tasks
rg -A 10 "dimension" "apps/business_central/exports/journal_entry/tasks.py" "apps/business_central/exports/purchase_invoice/tasks.py"

# Check test coverage for the export tasks
rg -B 5 -A 10 "__construct_(journal_entry|purchase_invoice)" "tests/test_business_central/test_tasks.py"

# Check if there are any dimension-related fixtures
rg "dimension" "tests/test_business_central/conftest.py"

Length of output: 20330
tests/test_business_central/test_models.py (4)
Line range hint 155-187: Add missing add_fyle_credentials fixture to test_create_or_update_purchase_invoice

The add_fyle_credentials fixture has been added to similar test functions but is missing here. For consistency, consider adding it to maintain parity with other test functions that create or update business central objects.

Apply this diff:

 def test_create_or_update_purchase_invoice(
     db,
     create_temp_workspace,
     create_export_settings,
     add_advanced_settings,
     create_accounting_export_expenses,
-    create_employee_mapping_with_employee
+    create_employee_mapping_with_employee,
+    add_fyle_credentials
 ):
Line range hint 1-479: Standardize fixture ordering across test functions

For better maintainability and readability, consider standardizing the order of fixtures across all test functions following this pattern:

- db
- Workspace-related fixtures (e.g., create_temp_workspace)
- Settings-related fixtures (e.g., create_export_settings, add_advanced_settings)
- Data-related fixtures (e.g., create_accounting_export_expenses)
- Mapping-related fixtures (e.g., create_employee_mapping_with_employee)
- Credentials-related fixtures (e.g., add_fyle_credentials)
Line range hint 1-479: Add test coverage for new sync functionality

The PR objectives mention adding sync methods for bank accounts and dimensions, but there are no corresponding test cases in this file. Consider adding test cases to cover the following (a hedged example for the first item is sketched after this comment):
- Bank account synchronization
- Dimension synchronization
- Error scenarios for sync operations
- Edge cases in sync operations
Would you like me to help generate test cases for the new sync functionality?
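For the bank account case, a sketch along these lines could work. Only BusinessCentralConnector and the fixture data are visible in this PR; the credentials model and fixture names, the constructor signature, the connection.bank_accounts.get_all patch target, the fixture key, and the BANK_ACCOUNT attribute type are all assumptions that must be aligned with the actual implementation.

```python
from fyle_accounting_mappings.models import DestinationAttribute  # assumed source of DestinationAttribute

from apps.business_central.utils import BusinessCentralConnector
from apps.workspaces.models import BusinessCentralCredentials  # assumed credentials model
from tests.test_business_central.fixtures import data  # assumed fixture module layout


def test_sync_bank_accounts(db, create_temp_workspace, add_business_central_creds, mocker):
    workspace_id = 1
    credentials = BusinessCentralCredentials.objects.get(workspace_id=workspace_id)
    connector = BusinessCentralConnector(credentials, workspace_id=workspace_id)  # assumed signature

    # Stub the SDK call sync_bank_accounts is expected to make (patch target is a placeholder)
    mocker.patch.object(
        connector.connection.bank_accounts,
        'get_all',
        return_value=data['bank_accounts']
    )

    connector.sync_bank_accounts()

    synced_accounts = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type='BANK_ACCOUNT'
    )
    assert synced_accounts.count() == len(data['bank_accounts'])
```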
Line range hint 1-479: Enhance test assertions and error handling coverage

The current test assertions primarily focus on basic checks. Consider enhancing the test suite with:
- Negative test cases
- Boundary value testing
- Error scenario validations
- State verification before and after operations
- Mock response validation
Example areas to improve:
- Test invalid credential scenarios
- Verify error handling for network issues
- Validate state transitions during sync operations
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (5)
- apps/business_central/exports/journal_entry/models.py (4 hunks)
- apps/business_central/exports/purchase_invoice/tasks.py (3 hunks)
- tests/test_business_central/conftest.py (1 hunks)
- tests/test_business_central/test_models.py (4 hunks)
- tests/test_business_central/test_tasks.py (3 hunks)
🧰 Additional context used
🪛 Ruff (0.7.0)
apps/business_central/exports/journal_entry/models.py
109-109: Comparison to None should be cond is None
Replace with cond is None
(E711)
🔇 Additional comments (4)
apps/business_central/exports/journal_entry/models.py (3)
2-2: LGTM: New JSON fields are properly defined.
The new dimension-related fields are well-structured with appropriate defaults and help text.
Also applies to: 90-92
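For readers without the diff open, JSON fields of this kind on a Django model usually look like the sketch below. The field names, defaults, and help text here are illustrative only and do not claim to mirror lines 90-92 of the actual model.

```python
from django.db import models


class ExampleExportLineItem(models.Model):
    """Illustrative model only; the real fields live in journal_entry/models.py."""
    dimensions = models.JSONField(default=list, help_text='Business Central dimension set lines')
    dimension_error_log = models.JSONField(null=True, help_text='Errors returned while posting dimensions')
    dimension_success_log = models.JSONField(null=True, help_text='Successful dimension post responses')

    class Meta:
        abstract = True  # keeps this sketch from registering as a real table
```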
44-56: Add error handling for missing expense.
The code assumes an expense will always exist. Add proper error handling to prevent potential runtime errors.
Apply this diff:
-        expense = accounting_export.expenses.first()
+        expense = accounting_export.expenses.first()
+        if not expense:
+            raise ValueError('No expense found for this accounting export')
         document_number = expense.expense_number
109-109: Fix None comparison style.
Use Python's identity comparison for None.
Apply this diff:
- category = lineitem.category if (lineitem.category == lineitem.sub_category or lineitem.sub_category == None) else '{0} / {1}'.format(lineitem.category, lineitem.sub_category)
+ category = lineitem.category if (lineitem.category == lineitem.sub_category or lineitem.sub_category is None) else '{0} / {1}'.format(lineitem.category, lineitem.sub_category)
🧰 Tools
🪛 Ruff (0.7.0)
109-109: Comparison to None should be cond is None
Replace with cond is None
(E711)
tests/test_business_central/conftest.py (1)
164-165: Verify fixture dependencies
Let's verify if all the fixtures are properly defined and imported:
Also applies to: 183-183
✅ Verification successful
Fixture dependencies are properly defined and used
The add_fyle_credentials fixture is:

- Defined in tests/conftest.py
- Imported and used extensively across multiple test files
- Properly referenced in the test files where it's being used as a test dependency
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for the definition or import of add_fyle_credentials fixture
# Search for add_fyle_credentials definition in test files
echo "Searching for add_fyle_credentials fixture definition..."
rg -l "def add_fyle_credentials" tests/
# Search for potential imports of the fixture
echo "Searching for potential imports of add_fyle_credentials..."
rg -l "from .* import .*add_fyle_credentials" tests/
# Check if the fixture is used elsewhere
echo "Checking other usages of add_fyle_credentials..."
rg "add_fyle_credentials" tests/
Length of output: 6550
        lineitem = accounting_export.expenses.first()
        journal_entry = JournalEntry.objects.get(accounting_export=accounting_export)

        journal_entry_lineitems = []

        for lineitem in expenses:
            category = lineitem.category if (lineitem.category == lineitem.sub_category or lineitem.sub_category == None) else '{0} / {1}'.format(lineitem.category, lineitem.sub_category)

            account = CategoryMapping.objects.filter(
                source_category__value=category,
                workspace_id=accounting_export.workspace_id
            ).first()

            comment = self.get_expense_comment(accounting_export.workspace_id, lineitem, lineitem.category, advance_setting)

            invoice_date = self.get_invoice_date(accounting_export=accounting_export)

            account_type, account_id = self.get_account_id_type(accounting_export=accounting_export, export_settings=export_settings, merchant=lineitem.vendor)

            document_number = accounting_export.description['claim_number'] if accounting_export.description and accounting_export.description.get('claim_number') else accounting_export.description['expense_number']

            journal_entry_lineitems_object, _ = JournalEntryLineItems.objects.update_or_create(
                journal_entry_id = journal_entry.id,
                expense_id=lineitem.id,
                defaults={
                    'amount': lineitem.amount * -1,
                    'account_id': account_id,
                    'account_type': account_type,
                    'document_number': document_number,
                    'accounts_payable_account_id': account.destination_account.destination_id,
                    'comment': comment,
                    'workspace_id': accounting_export.workspace_id,
                    'invoice_date': invoice_date,
                    'description': lineitem.purpose if lineitem.purpose else None
                }
            )
            journal_entry_lineitems.append(journal_entry_lineitems_object)

        category = lineitem.category if (lineitem.category == lineitem.sub_category or lineitem.sub_category == None) else '{0} / {1}'.format(lineitem.category, lineitem.sub_category)

        account = CategoryMapping.objects.filter(
            source_category__value=category,
            workspace_id=accounting_export.workspace_id
        ).first()
🛠️ Refactor suggestion
Optimize database queries.
The current implementation makes multiple separate database queries which could lead to N+1 query issues.
Consider these optimizations:
- Prefetch related fields when getting the expense:
lineitem = accounting_export.expenses.select_related('expense_number', 'category', 'sub_category', 'vendor').first()
- Add appropriate indexes to the CategoryMapping model:
class CategoryMapping(models.Model):
    class Meta:
        indexes = [
            models.Index(fields=['source_category', 'workspace_id']),
        ]
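One way to act on both points at once is to fetch the workspace's mappings a single time and resolve categories in memory. A rough sketch follows; it assumes CategoryMapping is the fyle_accounting_mappings model with the source_category and destination_account relations used in the loop above.

```python
from fyle_accounting_mappings.models import CategoryMapping  # assumed import path


def build_category_mapping_index(workspace_id: int) -> dict:
    """Load all category mappings for a workspace once and index them by source category value."""
    mappings = CategoryMapping.objects.filter(workspace_id=workspace_id).select_related(
        'source_category', 'destination_account'
    )
    return {mapping.source_category.value: mapping for mapping in mappings}


# Inside the loop above, the per-line-item query then becomes a dict lookup:
# account = mapping_index.get(category)
```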
🧰 Tools
🪛 Ruff (0.7.0)
109-109: Comparison to None should be cond is None
Replace with cond is None
(E711)
            dimensions = self.get_dimension_object(accounting_export, lineitem)
Add error handling for dimension retrieval.
The get_dimension_object call could potentially fail, but there's no error handling in place.

Apply this diff:

-        dimensions = self.get_dimension_object(accounting_export, lineitem)
+        try:
+            dimensions = self.get_dimension_object(accounting_export, lineitem)
+        except Exception as e:
+            dimensions = []
+            # Log the error or handle it appropriately
+            logger.error(f'Failed to get dimensions for expense {lineitem.id}: {str(e)}')
Committable suggestion skipped: line range outside the PR's diff.
@@ -41,6 +41,7 @@ def __construct_purchase_invoice(self, body: PurchaseInvoice, lineitems: List[Pu
         :return: constructed expense_report
         '''
         batch_purchase_invoice_lineitem_payload = []
+        dimensions = []
Update return type annotation to match implementation.
The method's return type annotation shows Dict, but it's returning a tuple of three elements. Update the type annotation to match the actual return value.
Apply this diff:
-def __construct_purchase_invoice(self, body: PurchaseInvoice, lineitems: List[PurchaseInvoiceLineitems]) -> Dict:
+def __construct_purchase_invoice(self, body: PurchaseInvoice, lineitems: List[PurchaseInvoiceLineitems]) -> Tuple[Dict, List, List]:
Don't forget to add Tuple to the imports:
-from typing import Dict, List
+from typing import Dict, List, Tuple
Also applies to: 70-70
                'description': lineitem.description if lineitem.description else '',
                'description2': lineitem.expense.expense_number
Add error handling for accessing expense attribute.
The code assumes that lineitem.expense exists and has an expense_number attribute. Add error handling to gracefully handle cases where these might be None.
Apply this diff:
- 'description2': lineitem.expense.expense_number
+ 'description2': lineitem.expense.expense_number if getattr(lineitem, 'expense', None) else ''
Committable suggestion skipped: line range outside the PR's diff.
        for dimension in dimensions:
            expense_number = dimension.get('expense_number')
            parent_id = document_mapping.get(expense_number)

            if parent_id:
                dimension_payload.append({
                    "id": dimension['id'],
                    "code": dimension['code'],
                    "parentId": parent_id,
                    "valueId": dimension['valueId'],
                    "valueCode": dimension['valueCode'],
                    "exported_module_id": dimension['exported_module_id']
                })
Add validation for required dimension fields.
The code assumes all required fields exist in the dimension dictionary. Add validation to ensure required fields are present.
Apply this diff:
 for dimension in dimensions:
+    required_fields = ['id', 'code', 'valueId', 'valueCode', 'expense_number', 'exported_module_id']
+    if not all(field in dimension for field in required_fields):
+        logger.error('Missing required fields in dimension: %s', dimension)
+        continue
+
     expense_number = dimension.get('expense_number')
     parent_id = document_mapping.get(expense_number)

     if parent_id:
         dimension_payload.append({
             "id": dimension['id'],
             "code": dimension['code'],
             "parentId": parent_id,
             "valueId": dimension['valueId'],
             "valueCode": dimension['valueCode'],
             "exported_module_id": dimension['exported_module_id']
         })
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
for dimension in dimensions:
    required_fields = ['id', 'code', 'valueId', 'valueCode', 'expense_number', 'exported_module_id']
    if not all(field in dimension for field in required_fields):
        logger.error('Missing required fields in dimension: %s', dimension)
        continue

    expense_number = dimension.get('expense_number')
    parent_id = document_mapping.get(expense_number)

    if parent_id:
        dimension_payload.append({
            "id": dimension['id'],
            "code": dimension['code'],
            "parentId": parent_id,
            "valueId": dimension['valueId'],
            "valueCode": dimension['valueCode'],
            "exported_module_id": dimension['exported_module_id']
        })
        if dimensions:
            dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
            logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads))
            dimension_line_responses = business_central_connection.post_dimension_lines(
                dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
            )
            response['dimension_line_responses'] = dimension_line_responses
Add error handling for dimension posting.
The dimension posting logic should handle potential failures gracefully to prevent partial updates.
Apply this diff:
 if dimensions:
-    dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
-    logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads))
-    dimension_line_responses = business_central_connection.post_dimension_lines(
-        dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
-    )
-    response['dimension_line_responses'] = dimension_line_responses
+    try:
+        dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
+        logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(
+            accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads
+        ))
+        dimension_line_responses = business_central_connection.post_dimension_lines(
+            dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
+        )
+        response['dimension_line_responses'] = dimension_line_responses
+    except Exception as e:
+        logger.error('Failed to post dimensions - WORKSPACE_ID: %s, ACCOUNTING_EXPORT_ID: %s, ERROR: %s',
+            accounting_export.workspace_id, accounting_export.id, str(e)
+        )
+        raise
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if dimensions:
    try:
        dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
        logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(
            accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads
        ))
        dimension_line_responses = business_central_connection.post_dimension_lines(
            dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
        )
        response['dimension_line_responses'] = dimension_line_responses
    except Exception as e:
        logger.error('Failed to post dimensions - WORKSPACE_ID: %s, ACCOUNTING_EXPORT_ID: %s, ERROR: %s',
            accounting_export.workspace_id, accounting_export.id, str(e)
        )
        raise
    create_employee_mapping_with_employee,
    add_fyle_credentials
Critical: Missing add_fyle_credentials fixture definition

The changes add add_fyle_credentials as a dependency to multiple fixtures, but the fixture itself is not defined in this file. This will cause test failures.
Please add the missing fixture definition or ensure it's imported from another test configuration file.
Also applies to: 183-183
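If a local definition does turn out to be needed, a minimal version of the fixture would look roughly like the sketch below. The FyleCredential model name, its import path, and the workspace ids are assumptions; per the verification above, the canonical definition already lives in tests/conftest.py.

```python
import os

import pytest

from apps.workspaces.models import FyleCredential  # assumed model and import path


@pytest.fixture
def add_fyle_credentials(db, create_temp_workspace):
    """Illustrative fixture: attach dummy Fyle credentials to the test workspaces."""
    for workspace_id in [1, 2, 3]:
        FyleCredential.objects.update_or_create(
            workspace_id=workspace_id,
            defaults={
                'refresh_token': os.environ.get('FYLE_REFRESH_TOKEN', 'dummy_refresh_token'),
                'cluster_domain': 'https://dummy-cluster.fyle.tech'
            }
        )
```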
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
tests/test_business_central/fixtures.py (2)
84-141: LGTM! Consider adding field descriptions

The expense object structure is well-defined and comprehensive. However, adding comments to document the purpose and expected format of critical fields would improve test maintainability.
160-221: Enhance test data coverage for bank accounts

Consider improving the test fixtures in the following ways (a sample entry is sketched after this list):
- Including accounts with different currencies to test currency handling
- Adding test cases with all required fields populated
- Including a blocked account to test account status handling
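For instance, adding one more entry along these lines to the bank account fixture would cover both a non-INR currency and the blocked flag. The values below are invented; only the keys mirror the existing fixture entries.

```python
{
    "id": "9f9f9f9f-0000-0000-0000-000000000001",  # invented test id
    "number": "USD-OPS",
    "displayName": "US Operations Account",
    "lastModifiedDateTime": "2023-07-03T01:45:45.597Z",
    "bankAccountNumber": "12-34-567",
    "blocked": True,          # exercises the blocked-account path
    "currencyCode": "USD",    # exercises non-INR currency handling
    "currencyId": "00000000-0000-0000-0000-000000000000",
},
```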
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- apps/fyle/models.py (1 hunks)
- requirements.txt (1 hunks)
- tests/test_business_central/fixtures.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/fyle/models.py
- requirements.txt
🔇 Additional comments (2)
tests/test_business_central/fixtures.py (2)
145-159
: LGTM! Field configuration is well structured
The expense fields configuration provides a clear mapping between enabled fields and their inclusion status, which is essential for testing the sync functionality.
231-255: Fix inconsistent dimensionId references

The dimension values reference dimensionId: "5dbc303f-4319-ee11-9cc4-6045bdc8dcac", but the AREA dimension has id: "5bbc303f-4319-ee11-9cc4-6045bdc8dcac". This mismatch will cause issues in tests.
Apply this fix to all dimension values:
{
"id": "1b4a2945-4319-ee11-9cc4-6045bdc8dcac",
"code": "LARGE",
- "dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac",
+ "dimensionId": "5bbc303f-4319-ee11-9cc4-6045bdc8dcac",
"displayName": "Large Business",
"consolidationCode": "",
"lastModifiedDateTime": "2023-07-03T01:46:02.817Z",
},
Likely invalid or redundant comment.
Description
Please add PR description here, add screenshots if needed
Clickup
Please add link here
https://app.clickup.com/1864988/v/l/li/901604887942
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
Chores