
Feat: Add sync method for bank accounts, dimensions and dimension_values #173

Open · wants to merge 14 commits into base: master
Conversation

@Ashutosh619-sudo (Contributor) commented Nov 15, 2024

Description

Please add PR description here, add screenshots if needed

Clickup

Please add link here
https://app.clickup.com/1864988/v/l/li/901604887942

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced methods for synchronizing bank accounts and dimensions with Microsoft Dynamics.
    • Expanded synchronization capabilities to include additional dimensions such as 'bank_accounts' and 'dimensions'.
    • Added new logging fields for tracking dimension synchronization success and errors.
    • Enhanced test suite with new tests for bank accounts and dimensions synchronization.
  • Bug Fixes

    • Updated synchronization logic to enhance reliability and accuracy.
  • Documentation

    • Improved data structure in test fixtures for better clarity and usability.
  • Chores

    • Updated package dependencies to the latest version for improved performance and compatibility.


coderabbitai bot commented Nov 15, 2024

Walkthrough

The pull request introduces enhancements to the BusinessCentralConnector class by adding two new methods: sync_bank_accounts and sync_dimensions, which facilitate the synchronization of bank accounts and dimensions with Microsoft Dynamics. Additionally, the _sync_data method is modified to adjust the logic for determining destination attributes. The requirements.txt file is updated to reflect a new version of the ms-dynamics-business-central-sdk package. Furthermore, the test suite is expanded with new tests for the synchronization methods and improvements to the data structure in the test fixtures.

Changes

  • apps/business_central/utils.py: Added methods sync_bank_accounts and sync_dimensions. Modified _sync_data logic. Updated SYNC_UPPER_LIMIT.
  • requirements.txt: Updated ms-dynamics-business-central-sdk version from 1.5.2 to 1.6.0.
  • tests/test_business_central/fixtures.py: Restructured data with new keys: expense_fields, included_fields, bank_accounts, dimensions, dimension_values. Enhanced expenses key formatting.
  • tests/test_business_central/test_utils.py: Added tests test_sync_bank_accounts and test_sync_dimension. Modified test_skip_sync_attributes for cleanup.
  • apps/business_central/helpers.py: Expanded dimensions list in sync_dimensions function to include 'bank_accounts' and 'dimensions'.
  • apps/business_central/exports/base_model.py: Added method get_dimension_object for retrieving dimension data.
  • apps/business_central/exports/journal_entry/models.py: Modified create_or_update_object methods to handle single expense and line items; added new fields for dimensions.
  • apps/business_central/exports/journal_entry/tasks.py: Updated ExportJournalEntry to include dimension handling in journal entry exports.
  • apps/business_central/exports/purchase_invoice/models.py: Added fields for dimensions and logs in PurchaseInvoiceLineitems.
  • apps/business_central/exports/purchase_invoice/tasks.py: Enhanced dimension handling in purchase invoice exports.
  • apps/business_central/migrations/0002_journalentrylineitems_dimensions_and_more.py: Added dimensions field to models via migration.
  • apps/business_central/migrations/0003_journalentrylineitems_dimension_error_log_and_more.py: Added dimension_error_log field to models via migration.
  • apps/business_central/migrations/0004_journalentrylineitems_dimension_success_log_and_more.py: Added dimension_success_log field and updated dimensions field definition via migration.
  • apps/mappings/imports/modules/base.py: Modified sync_destination_attributes to ensure a default sync method is available.
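The base.py change ("ensure a default sync method is available") suggests a getattr-with-fallback dispatch. A minimal sketch of that pattern, where the class and method names (FakeConnector, sync_expense_fields) are hypothetical stand-ins rather than the PR's actual code:

```python
# Hypothetical sketch of dynamic sync dispatch with a default fallback.
class FakeConnector:
    def sync_accounts(self):
        return 'accounts synced'

    def sync_expense_fields(self):
        # Generic fallback used when no attribute-specific method exists.
        return 'generic sync'


def run_sync(connector, attribute_type: str):
    """Call sync_<attribute_type> if defined, else fall back to a default."""
    method = getattr(connector, f'sync_{attribute_type.lower()}', None)
    if method is None:
        method = connector.sync_expense_fields
    return method()


print(run_sync(FakeConnector(), 'ACCOUNTS'))     # accounts synced
print(run_sync(FakeConnector(), 'COST_CENTER'))  # generic sync
```

The fallback keeps dynamic dispatch from raising AttributeError for attribute types that have no dedicated sync method.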

Possibly related PRs

Suggested reviewers

  • ruuushhh

🐇✨ In the garden where data flows,
New methods bloom, as everyone knows.
Syncing accounts, dimensions in sight,
With tests in place, everything's right!
Let's hop along, with joy we sing,
For the changes made, oh what they bring! 🌼


@github-actions github-actions bot added the size/L Large PR label Nov 15, 2024

Tests: 125 | Skipped: 0 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 9.041s ⏱️

@coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (3)
tests/test_business_central/fixtures.py (3)

84-142: Consider improving test data quality

  1. Replace the real email address with a dummy one (e.g., "[email protected]")
  2. Use Python's None instead of string "None" in custom properties for consistency
-            "employee_email": "[email protected]",
+            "employee_email": "[email protected]",
             ...
             "custom_properties": {
-                "Custom Expense Field": "None",
+                "Custom Expense Field": None,
             },

159-220: Consider adding multi-currency test cases

All bank accounts currently use INR currency. Consider adding accounts with different currencies (e.g., USD, EUR) to improve test coverage for international scenarios.

         {
             "id": "bdbb303f-4319-ee11-9cc4-6045bdc8dcac",
             "number": "NBL",
             "displayName": "New Bank of London",
             "lastModifiedDateTime": "2023-07-03T01:45:45.597Z",
             "bankAccountNumber": "78-66-345",
             "blocked": False,
-            "currencyCode": "INR",
+            "currencyCode": "GBP",
             "currencyId": "00000000-0000-0000-0000-000000000000",

221-229: Consider adding more dimension types

Currently only the "AREA" dimension is defined. Consider adding more dimension types (e.g., "DEPARTMENT", "REGION") to improve test coverage for the sync functionality.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d44c72f and 44c18a3.

📒 Files selected for processing (4)
  • apps/business_central/utils.py (1 hunks)
  • requirements.txt (1 hunks)
  • tests/test_business_central/fixtures.py (4 hunks)
  • tests/test_business_central/test_utils.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • requirements.txt
🔇 Additional comments (7)
tests/test_business_central/test_utils.py (1)

4-4: LGTM!

The import statement is properly placed and follows Python's import style guidelines.

tests/test_business_central/fixtures.py (4)

13-13: LGTM: Expense description fields properly structured

The added fields enhance the test data structure with realistic values.

Also applies to: 18-18


37-37: LGTM: Export settings aligned with sync functionality

The auto_map_employees setting supports the new sync features.


47-47: LGTM: Advanced settings properly configured

The auto_create_vendor setting provides a good default for testing.


143-158: LGTM: Well-structured field configuration

The expense_fields and included_fields provide a clear configuration structure for field visibility.

apps/business_central/utils.py (2)

117-117: LGTM: Improved destination ID handling for bank accounts

The conditional logic for determining the destination ID is well-thought-out, using the 'id' field for bank accounts while maintaining the existing behavior for other types.
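A sketch of the conditional described above; the exact field names and the fallback to 'number' for other attribute types are assumptions based on this summary, not the PR's code:

```python
# Hypothetical helper mirroring the described logic: bank accounts use the
# API's 'id' field as destination_id, while other types keep a prior key
# ('number' here is an assumption for illustration).
def pick_destination_id(item: dict, attribute_type: str) -> str:
    if attribute_type == 'BANK_ACCOUNT':
        return item['id']
    return item['number']


bank = {'id': 'bdbb303f-4319', 'number': 'NBL'}
print(pick_destination_id(bank, 'BANK_ACCOUNT'))  # bdbb303f-4319
print(pick_destination_id(bank, 'ACCOUNT'))       # NBL
```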


124-166: Verify integration with existing sync workflow

Let's verify how these new sync methods are integrated with the existing sync workflow.

✅ Verification successful

Integration verified and properly implemented

The new sync methods are well-integrated into the existing codebase:

  • The methods are part of a consistent sync pattern alongside other sync methods (sync_companies, sync_accounts, sync_vendors, etc.) in the utils.py file
  • sync_dimensions is properly integrated into the workflow:
    • Called from business_central/helpers.py in a dedicated helper function
    • Used in serializers.py during credential setup
    • Integrated into journal entry and purchase invoice export queues
    • Covered by unit tests in test_utils.py, test_helpers.py, and test_views.py
  • sync_bank_accounts follows the same pattern as other sync methods and has test coverage in test_utils.py
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check how these new sync methods are used in the codebase

# Check for sync method calls
echo "Checking for sync method usage..."
rg -l "sync_bank_accounts|sync_dimensions" --type python

# Check for potential task definitions
echo "Checking for celery tasks..."
rg -l "sync_bank_accounts|sync_dimensions" --type python | grep "tasks.py"

# Check for API endpoints
echo "Checking for API endpoints..."
rg -l "sync_bank_accounts|sync_dimensions" --type python | grep "views.py"

Length of output: 514


Script:

#!/bin/bash
# Let's try a different approach without file type filtering

# Check for sync method usage
echo "Checking for sync method usage..."
rg "sync_bank_accounts|sync_dimensions"

# Find Python files that might contain related code
echo -e "\nFinding relevant Python files..."
fd "tasks.py|views.py|sync.*\.py" --extension py

# Look for other sync methods in the utils file
echo -e "\nChecking other sync methods in utils.py..."
rg "def sync_" apps/business_central/utils.py

Length of output: 4073

Comment on lines 201 to 223
def test_sync_bank_accounts(mocker, db, create_business_central_connection):
    workspace_id = 1
    business_central_connection = create_business_central_connection

    mocker.patch.object(
        business_central_connection.connection.bank_accounts,
        'get_all',
        return_value=data['bank_accounts']
    )

    bank_account_count = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="BANK_ACCOUNT"
    ).count()

    assert bank_account_count == 0
    business_central_connection.sync_bank_accounts()

    bank_accounts = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="BANK_ACCOUNT"
    )

    assert bank_accounts.count() == 4

🛠️ Refactor suggestion

Enhance test coverage and robustness.

While the basic happy path is covered, consider the following improvements:

  1. Add cleanup in setUp or before the test to ensure a clean state
  2. Verify the actual attributes of synced bank accounts, not just the count
  3. Add negative test cases (e.g., API failures, malformed data)
  4. Use workspace fixture instead of hardcoded workspace_id

Here's a suggested improvement:

 def test_sync_bank_accounts(mocker, db, create_business_central_connection):
-    workspace_id = 1
+    workspace = create_business_central_connection.workspace
+    
+    # Cleanup
+    DestinationAttribute.objects.filter(
+        workspace_id=workspace.id,
+        attribute_type="BANK_ACCOUNT"
+    ).delete()
+
     business_central_connection = create_business_central_connection

     mocker.patch.object(
         business_central_connection.connection.bank_accounts,
         'get_all',
         return_value=data['bank_accounts']
     )

     bank_account_count = DestinationAttribute.objects.filter(
-        workspace_id=workspace_id,
+        workspace_id=workspace.id,
         attribute_type="BANK_ACCOUNT"
     ).count()

     assert bank_account_count == 0
     business_central_connection.sync_bank_accounts()

     bank_accounts = DestinationAttribute.objects.filter(
-        workspace_id=workspace_id,
+        workspace_id=workspace.id,
         attribute_type="BANK_ACCOUNT"
     )

     assert bank_accounts.count() == 4
+    
+    # Verify attributes of synced bank accounts
+    expected_attributes = ['id', 'name', 'code']
+    for bank_account in bank_accounts:
+        for attr in expected_attributes:
+            assert getattr(bank_account, attr) == data['bank_accounts'][0][attr]

Consider adding these additional test cases:

def test_sync_bank_accounts_api_failure(mocker, db, create_business_central_connection):
    business_central_connection = create_business_central_connection
    
    mocker.patch.object(
        business_central_connection.connection.bank_accounts,
        'get_all',
        side_effect=Exception("API Error")
    )
    
    with pytest.raises(Exception):
        business_central_connection.sync_bank_accounts()

Comment on lines 225 to 252
def test_sync_dimension(mocker, db, create_business_central_connection):

    workspace_id = 1
    business_central_connection = create_business_central_connection

    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimensions',
        return_value=data['dimensions']
    )

    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimension_values',
        return_value=data['dimension_values']
    )

    area_count = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="AREA"
    ).count()

    assert area_count == 0
    business_central_connection.sync_dimensions()

    areas = DestinationAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type="AREA"
    )
    assert areas.count() == 3
🛠️ Refactor suggestion

Improve test coverage and add documentation.

The test needs similar improvements as test_sync_bank_accounts, plus some specific to dimensions:

  1. Add docstring explaining the relationship between dimensions and areas
  2. Verify both dimensions and their values
  3. Add cleanup before test
  4. Use workspace fixture instead of hardcoded ID

Here's a suggested improvement:

 def test_sync_dimension(mocker, db, create_business_central_connection):
+    """
+    Test synchronization of dimensions and dimension values.
+    
+    Dimensions in Business Central are mapped to Areas in our system.
+    Each dimension can have multiple dimension values.
+    """
-    workspace_id = 1
+    workspace = create_business_central_connection.workspace
+    
+    # Cleanup
+    DestinationAttribute.objects.filter(
+        workspace_id=workspace.id,
+        attribute_type__in=["AREA", "DIMENSION_VALUE"]
+    ).delete()
+
     business_central_connection = create_business_central_connection

     mocker.patch.object(
         business_central_connection.connection.dimensions,
         'get_all_dimensions',
         return_value=data['dimensions']
     )

     mocker.patch.object(
         business_central_connection.connection.dimensions,
         'get_all_dimension_values',
         return_value=data['dimension_values']
     )

     area_count = DestinationAttribute.objects.filter(
-        workspace_id=workspace_id,
+        workspace_id=workspace.id,
         attribute_type="AREA"
     ).count()

     assert area_count == 0
     business_central_connection.sync_dimensions()

     areas = DestinationAttribute.objects.filter(
-        workspace_id=workspace_id,
+        workspace_id=workspace.id,
         attribute_type="AREA"
     )
     assert areas.count() == 3
+    
+    # Verify dimension values
+    dimension_values = DestinationAttribute.objects.filter(
+        workspace_id=workspace.id,
+        attribute_type="DIMENSION_VALUE"
+    )
+    assert dimension_values.count() == len(data['dimension_values'])
+    
+    # Verify attributes
+    for area in areas:
+        dimension = next(d for d in data['dimensions'] if d['id'] == area.destination_id)
+        assert area.value == dimension['name']
+        
+        # Verify related dimension values
+        related_values = dimension_values.filter(parent_id=area.id)
+        expected_values = [v for v in data['dimension_values'] if v['dimension_id'] == dimension['id']]
+        assert related_values.count() == len(expected_values)

Consider adding these test cases:

def test_sync_dimensions_api_failure(mocker, db, create_business_central_connection):
    business_central_connection = create_business_central_connection
    
    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimensions',
        side_effect=Exception("API Error")
    )
    
    with pytest.raises(Exception):
        business_central_connection.sync_dimensions()

def test_sync_dimensions_with_empty_values(mocker, db, create_business_central_connection):
    business_central_connection = create_business_central_connection
    
    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimensions',
        return_value=data['dimensions']
    )
    
    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimension_values',
        return_value=[]  # Empty dimension values
    )
    
    business_central_connection.sync_dimensions()
    
    areas = DestinationAttribute.objects.filter(
        workspace_id=business_central_connection.workspace.id,
        attribute_type="AREA"
    )
    assert areas.count() == 3  # Dimensions should still be synced
    
    dimension_values = DestinationAttribute.objects.filter(
        workspace_id=business_central_connection.workspace.id,
        attribute_type="DIMENSION_VALUE"
    )
    assert dimension_values.count() == 0

Comment on lines +230 to +254
"dimension_values": [
{
"id": "1b4a2945-4319-ee11-9cc4-6045bdc8dcac",
"code": "LARGE",
"dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac",
"displayName": "Large Business",
"consolidationCode": "",
"lastModifiedDateTime": "2023-07-03T01:46:02.817Z",
},
{
"id": "1c4a2945-4319-ee11-9cc4-6045bdc8dcac",
"code": "MEDIUM",
"dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac",
"displayName": "Medium Business",
"consolidationCode": "",
"lastModifiedDateTime": "2023-07-03T01:46:02.82Z",
},
{
"id": "1d4a2945-4319-ee11-9cc4-6045bdc8dcac",
"code": "SMALL",
"dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac",
"displayName": "Small Business",
"consolidationCode": "",
"lastModifiedDateTime": "2023-07-03T01:46:02.823Z",
},
⚠️ Potential issue

Fix inconsistent dimensionId reference

The dimension values reference dimensionId "5dbc303f-4319-ee11-9cc4-6045bdc8dcac", but the AREA dimension has id "5bbc303f-4319-ee11-9cc4-6045bdc8dcac". This mismatch could cause issues in tests.

         {
             "id": "1b4a2945-4319-ee11-9cc4-6045bdc8dcac",
             "code": "LARGE",
-            "dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac",
+            "dimensionId": "5bbc303f-4319-ee11-9cc4-6045bdc8dcac",
             "displayName": "Large Business",

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 135 to 166
def sync_dimensions(self):
    """
    sync business central dimensions
    """

    dimensions = self.connection.dimensions.get_all_dimensions()
    for dimension in dimensions:
        dimension_attributes = []
        dimension_id = dimension["id"]
        dimension_name = dimension["code"]
        dimension_values = self.connection.dimensions.get_all_dimension_values(
            dimension_id
        )

        for value in dimension_values:
            detail = {"dimension_id": dimension_id}
            dimension_attributes.append(
                {
                    "attribute_type": dimension_name,
                    "display_name": dimension["displayName"],
                    "value": value["displayName"],
                    "destination_id": value["id"],
                    "detail": detail,
                    "active": True,
                }
            )

        DestinationAttribute.bulk_create_or_update_destination_attributes(
            dimension_attributes, dimension_name, self.workspace_id
        )

    return []
⚠️ Potential issue

Add error handling and optimize performance

The sync_dimensions method needs several improvements:

  1. Add error handling for API calls
  2. Optimize performance for nested API calls
  3. Add data validation
  4. Add upper limit check
  5. Consider memory optimization for large datasets

Here's a suggested implementation:

 def sync_dimensions(self):
     """
     sync business central dimensions
     """
+    try:
+        dimensions = self.connection.dimensions.get_all_dimensions()
+        if not dimensions:
+            logger.info('No dimensions found for workspace %s', self.workspace_id)
+            return
 
-        dimensions = self.connection.dimensions.get_all_dimensions()
         for dimension in dimensions:
+            try:
+                dimension_id = dimension.get('id')
+                dimension_name = dimension.get('code')
+                
+                if not dimension_id or not dimension_name:
+                    logger.warning(
+                        'Skipping dimension with missing id/code for workspace %s',
+                        self.workspace_id
+                    )
+                    continue
+
+                dimension_values = self.connection.dimensions.get_all_dimension_values(
+                    dimension_id
+                )
+                
+                if not dimension_values:
+                    logger.info(
+                        'No dimension values found for dimension %s in workspace %s',
+                        dimension_name,
+                        self.workspace_id
+                    )
+                    continue
+
                 dimension_attributes = []
-                dimension_id = dimension["id"]
-                dimension_name = dimension["code"]
-                dimension_values = self.connection.dimensions.get_all_dimension_values(
-                    dimension_id
-                )
 
                 for value in dimension_values:
+                    if not value.get('displayName'):
+                        continue
+
                     detail = {"dimension_id": dimension_id}
                     dimension_attributes.append(
                         {
                             "attribute_type": dimension_name,
-                            "display_name": dimension["displayName"],
-                            "value": value["displayName"],
-                            "destination_id": value["id"],
+                            "display_name": dimension.get('displayName', dimension_name),
+                            "value": value.get('displayName'),
+                            "destination_id": value.get('id'),
                             "detail": detail,
                             "active": True,
                         }
                     )
 
+                if dimension_attributes:
+                    DestinationAttribute.bulk_create_or_update_destination_attributes(
+                        dimension_attributes, dimension_name, self.workspace_id
+                    )
+            except Exception as e:
+                logger.error(
+                    'Error processing dimension %s for workspace %s: %s',
+                    dimension.get('code', 'UNKNOWN'),
+                    self.workspace_id,
+                    str(e)
+                )
+                continue
 
-                DestinationAttribute.bulk_create_or_update_destination_attributes(
-                    dimension_attributes, dimension_name, self.workspace_id
-                )
-
-        return []
+        return dimensions
+    except Exception as e:
+        logger.error(
+            'Error syncing dimensions for workspace %s: %s',
+            self.workspace_id,
+            str(e)
+        )
+        raise

Additionally, consider implementing batch processing for large datasets to optimize memory usage:

def _process_dimension_values_in_batches(self, dimension_values, batch_size=1000):
    """Process dimension values in batches to optimize memory usage"""
    for i in range(0, len(dimension_values), batch_size):
        yield dimension_values[i:i + batch_size]
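The batching idea above is easy to verify in isolation; a standalone version with a quick check of the slice sizes:

```python
def process_in_batches(values, batch_size=1000):
    """Yield successive slices of `values`, each at most `batch_size` long."""
    for i in range(0, len(values), batch_size):
        yield values[i:i + batch_size]


batches = list(process_in_batches(list(range(2500)), batch_size=1000))
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Because it is a generator, only one batch's worth of attribute dicts needs to be materialized for the bulk upsert at a time.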

Comment on lines 124 to 133
def sync_bank_accounts(self):
    """
    sync business central bank accounts
    """

    bank_accounts = self.connection.bank_accounts.get_all()
    field_names = ['currencyCode', 'intercompanyEnabled', 'number']

    self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
    return []
⚠️ Potential issue

Add error handling and consistency improvements

The sync_bank_accounts method needs several improvements:

  1. Add error handling for the API call to get_all()
  2. Add validation for the bank_accounts data
  3. Add upper limit check similar to other sync methods
  4. Clarify the purpose of returning an empty list

Here's a suggested implementation:

 def sync_bank_accounts(self):
     """
     sync business central bank accounts
     """
+    try:
+        attribute_count = self.connection.bank_accounts.count()
+        if not self.is_sync_allowed(attribute_type='accounts', attribute_count=attribute_count):
+            logger.info(
+                'Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit',
+                self.workspace_id,
+                attribute_count
+            )
+            return
 
-        bank_accounts = self.connection.bank_accounts.get_all()
+        bank_accounts = self.connection.bank_accounts.get_all()
+        if not bank_accounts:
+            logger.info('No bank accounts found for workspace %s', self.workspace_id)
+            return
+
         field_names = ['currencyCode', 'intercompanyEnabled', 'number']
 
         self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
-        return []
+        return bank_accounts
+    except Exception as e:
+        logger.error(
+            'Error syncing bank accounts for workspace %s: %s',
+            self.workspace_id,
+            str(e)
+        )
+        raise

Tests: 125 | Skipped: 0 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 8.599s ⏱️

@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
apps/business_central/helpers.py (2)

Line range hint 52-57: Enhance error handling for sync failures.

The current implementation has several concerns:

  1. Using INFO level for exceptions is not appropriate - sync failures should be logged at ERROR level
  2. Silently catching all exceptions might hide critical failures
  3. No distinction between different types of failures

Consider this improved implementation:

     for dimension in dimensions:
         try:
             # Dynamically call the sync method based on the dimension
             sync = getattr(business_central_connection, 'sync_{}'.format(dimension))
             sync()
         except AttributeError as e:
-            # Log any exceptions that occur during synchronization
-            logger.info(exception)
+            # Log when sync method doesn't exist
+            logger.error('Sync method not found for dimension %s: %s', dimension, str(e))
+            raise
+        except Exception as e:
+            # Log other sync failures
+            logger.error('Failed to sync dimension %s: %s', dimension, str(e))
+            raise

Line range hint 22-27: Fix fragile interval check that risks UnboundLocalError.

The interval check is fragile: time_interval is only assigned when workspace.destination_synced_at exists, and the second condition avoids referencing it unbound only because the `or` short-circuits on None. Restructure the check so time_interval is never referenced before it is assigned.

Apply this fix:

-    if workspace.destination_synced_at:
-        # Calculate the time interval since the last destination sync
-        time_interval = datetime.now(timezone.utc) - workspace.destination_synced_at
-
-    if workspace.destination_synced_at is None or time_interval.days > 0:
+    if workspace.destination_synced_at is None:
+        sync_dimensions(business_central_credential, workspace.id)
+        return True
+
+    time_interval = datetime.now(timezone.utc) - workspace.destination_synced_at
+    if time_interval.days > 0:
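The restructured check can be exercised in isolation. A minimal sketch — the function and field names here are illustrative stand-ins, not the app's actual helpers:

```python
from datetime import datetime, timedelta, timezone

def needs_destination_sync(destination_synced_at):
    """Return True when dimensions were never synced or the last sync is over a day old."""
    if destination_synced_at is None:
        return True
    time_interval = datetime.now(timezone.utc) - destination_synced_at
    return time_interval.days > 0

# Never-synced workspaces always sync; a sync within the last day is skipped.
print(needs_destination_sync(None))                                            # True
print(needs_destination_sync(datetime.now(timezone.utc)))                      # False
print(needs_destination_sync(datetime.now(timezone.utc) - timedelta(days=2)))  # True
```

Handling the None case up front means time_interval is assigned on every path that reads it, regardless of how the conditions are later reordered.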
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 44c18a3 and 83ede99.

📒 Files selected for processing (2)
  • apps/business_central/helpers.py (1 hunks)
  • apps/business_central/utils.py (1 hunks)
🔇 Additional comments (3)
apps/business_central/helpers.py (1)

51-51: Verify the existence of sync methods for new dimensions.

The addition of 'bank_accounts' and 'dimensions' to the dimensions list aligns with the PR objectives. However, we need to ensure that the corresponding sync methods exist in the BusinessCentralConnector class.

Let's verify the existence of these methods:

✅ Verification successful

Let me try a different approach to search for these methods, as they might be defined with different naming patterns or in specific files.


Both sync methods for new dimensions exist and are properly implemented

Based on the search results, I can confirm that both required sync methods exist in the codebase:

  1. sync_bank_accounts() method is implemented in apps/business_central/utils.py
  2. sync_dimensions() method is implemented in apps/business_central/utils.py

Both methods are part of the BusinessCentralConnector class and are being used in tests and the main application code. The addition of these dimensions to the list is consistent with the existing implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if sync_bank_accounts and sync_dimensions methods exist in BusinessCentralConnector

# Search for the sync method implementations
echo "Checking for sync_bank_accounts method:"
ast-grep --pattern 'def sync_bank_accounts($$$)'

echo -e "\nChecking for sync_dimensions method:"
ast-grep --pattern 'def sync_dimensions($$$)'

Length of output: 276


Script:

#!/bin/bash
# Search for any method related to bank accounts or dimensions synchronization
echo "Searching for bank account sync related methods:"
rg -i "bank.*account.*sync|sync.*bank.*account" -g "*.py" -A 2

echo -e "\nSearching for dimension sync related methods:"
rg -i "dimension.*sync|sync.*dimension" -g "*.py" -A 2

echo -e "\nChecking BusinessCentralConnector class definition:"
ast-grep --pattern 'class BusinessCentralConnector {
  $$$
}'

Length of output: 11954

apps/business_central/utils.py (2)

117-117: LGTM: Appropriate handling of destination_id for bank accounts

The modification correctly ensures that bank accounts use their 'id' field as the destination_id, while maintaining the existing behavior for other types.


124-133: ⚠️ Potential issue

Critical: Add error handling and sync limit validation

The method needs several essential improvements:

  1. Add sync limit validation using is_sync_allowed
  2. Add error handling for API calls
  3. Add validation for bank accounts data
  4. Consider returning the bank accounts instead of an empty list for consistency

Apply this diff to address these issues:

 def sync_bank_accounts(self):
     """
     sync business central bank accounts
     """
+    try:
+        attribute_count = self.connection.bank_accounts.count()
+        if not self.is_sync_allowed(attribute_type='accounts', attribute_count=attribute_count):
+            logger.info(
+                'Skipping sync of bank accounts for workspace %s as it has %s accounts',
+                self.workspace_id,
+                attribute_count
+            )
+            return
 
-        bank_accounts = self.connection.bank_accounts.get_all()
+        bank_accounts = self.connection.bank_accounts.get_all()
+        if not bank_accounts:
+            logger.info('No bank accounts found for workspace %s', self.workspace_id)
+            return
+
         field_names = ['currencyCode', 'intercompanyEnabled', 'number']
 
         self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
-        return []
+        return bank_accounts
+    except Exception as e:
+        logger.error(
+            'Error syncing bank accounts for workspace %s: %s',
+            self.workspace_id,
+            str(e)
+        )
+        raise

Likely invalid or redundant comment.

Comment on lines 135 to 166
def sync_dimensions(self):
"""
sync business central dimensions
"""

dimensions = self.connection.dimensions.get_all_dimensions()
for dimension in dimensions:
dimension_attributes = []
dimension_id = dimension['id']
dimension_name = dimension['code']
dimension_values = self.connection.dimensions.get_all_dimension_values(
dimension_id
)

for value in dimension_values:
detail = {'dimension_id': dimension_id, 'code': value['code']}
dimension_attributes.append(
{
'attribute_type': dimension_name,
'display_name': dimension['displayName'],
'value': value['displayName'],
'destination_id': value['id'],
'detail': detail,
'active': True,
}
)

DestinationAttribute.bulk_create_or_update_destination_attributes(
dimension_attributes, dimension_name, self.workspace_id
)

return []


⚠️ Potential issue

Critical: Add error handling, validation, and performance optimizations

The method needs several critical improvements:

  1. Add error handling for API calls
  2. Add validation for dimensions data
  3. Add sync limit validation
  4. Implement batch processing for large datasets
  5. Add validation for required fields

Apply this diff to address these issues:

 def sync_dimensions(self):
     """
     sync business central dimensions
     """
+    try:
+        dimensions = self.connection.dimensions.get_all_dimensions()
+        if not dimensions:
+            logger.info('No dimensions found for workspace %s', self.workspace_id)
+            return
 
-        dimensions = self.connection.dimensions.get_all_dimensions()
         for dimension in dimensions:
-            dimension_attributes = []
-            dimension_id = dimension['id']
-            dimension_name = dimension['code']
-            dimension_values = self.connection.dimensions.get_all_dimension_values(
-                dimension_id
-            )
+            try:
+                if not dimension.get('id') or not dimension.get('code'):
+                    logger.warning(
+                        'Skipping dimension with missing id/code for workspace %s',
+                        self.workspace_id
+                    )
+                    continue
 
-            for value in dimension_values:
-                detail = {'dimension_id': dimension_id, 'code': value['code']}
-                dimension_attributes.append(
-                    {
-                        'attribute_type': dimension_name,
-                        'display_name': dimension['displayName'],
-                        'value': value['displayName'],
-                        'destination_id': value['id'],
-                        'detail': detail,
-                        'active': True,
-                    }
+                dimension_id = dimension['id']
+                dimension_name = dimension['code']
+                
+                dimension_values = self.connection.dimensions.get_all_dimension_values(
+                    dimension_id
                 )
+                if not dimension_values:
+                    logger.info(
+                        'No dimension values found for dimension %s in workspace %s',
+                        dimension_name,
+                        self.workspace_id
+                    )
+                    continue
 
-            DestinationAttribute.bulk_create_or_update_destination_attributes(
-                dimension_attributes, dimension_name, self.workspace_id
-            )
+                # Process dimension values in batches
+                batch_size = 1000
+                for i in range(0, len(dimension_values), batch_size):
+                    batch = dimension_values[i:i + batch_size]
+                    dimension_attributes = []
 
-        return []
+                    for value in batch:
+                        if not value.get('displayName') or not value.get('id'):
+                            continue
+
+                        detail = {
+                            'dimension_id': dimension_id,
+                            'code': value.get('code', '')
+                        }
+                        dimension_attributes.append({
+                            'attribute_type': dimension_name,
+                            'display_name': dimension.get('displayName', dimension_name),
+                            'value': value['displayName'],
+                            'destination_id': value['id'],
+                            'detail': detail,
+                            'active': True,
+                        })
+
+                    if dimension_attributes:
+                        DestinationAttribute.bulk_create_or_update_destination_attributes(
+                            dimension_attributes,
+                            dimension_name,
+                            self.workspace_id
+                        )
+            except Exception as e:
+                logger.error(
+                    'Error processing dimension %s for workspace %s: %s',
+                    dimension.get('code', 'UNKNOWN'),
+                    self.workspace_id,
+                    str(e)
+                )
+                continue
+
+        return dimensions
+    except Exception as e:
+        logger.error(
+            'Error syncing dimensions for workspace %s: %s',
+            self.workspace_id,
+            str(e)
+        )
+        raise

Additionally, consider implementing a helper method for batch processing:

def _process_in_batches(self, items, batch_size=1000):
    """Process items in batches to optimize memory usage"""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
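As a sanity check, the batching helper behaves like this on a toy list (the data and batch_size are arbitrary; dimension values in the real sync would be dicts from the API):

```python
def _process_in_batches(items, batch_size=1000):
    """Yield successive slices of `items`, each at most `batch_size` long."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

dimension_values = list(range(2500))
batches = list(_process_in_batches(dimension_values, batch_size=1000))
print([len(batch) for batch in batches])  # [1000, 1000, 500]
```

Because it is a generator, only one slice is materialized per bulk upsert, which keeps memory bounded for dimensions with many values.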


Tests Skipped Failures Errors Time
125 0 💤 2 ❌ 0 🔥 8.411s ⏱️


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (2)
apps/business_central/utils.py (2)

20-20: Fix dictionary spacing

There's inconsistent spacing around the colon in the dictionary.

-    'bank_accounts':2000,
+    'bank_accounts': 2000,

120-120: Improve readability of destination_id logic

The condition could be more readable with proper parentheses and a clearer variable name.

-                item['number'] if item.get('number') and attribute_type != 'BANK_ACCOUNT' else item['id'],
+                item['number'] if (item.get('number') and attribute_type != 'BANK_ACCOUNT') else item['id'],
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 83ede99 and 085d86c.

📒 Files selected for processing (1)
  • apps/business_central/utils.py (3 hunks)

@@ -136,7 +195,7 @@ def sync_accounts(self):
Synchronize accounts from MS Dynamics SDK to your application
"""
attribute_count = self.connection.accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
Copy link


⚠️ Potential issue

Fix incorrect attribute_type in is_sync_allowed call

The attribute_type should be 'accounts' instead of 'bank_accounts' to match the SYNC_UPPER_LIMIT dictionary.

-        if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
+        if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):

Comment on lines 127 to 139
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']

self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
return []


🛠️ Refactor suggestion

Add error handling and return meaningful data

The method needs several improvements:

  1. Add try-catch block for error handling
  2. Return the bank_accounts data instead of an empty list
 def sync_bank_accounts(self):
     """
     sync business central bank accounts
     """
+    try:
         attribute_count = self.connection.bank_accounts.count()
         if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
             logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
             return
         bank_accounts = self.connection.bank_accounts.get_all()
         field_names = ['currencyCode', 'intercompanyEnabled', 'number']

         self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
-        return []
+        return bank_accounts
+    except Exception as e:
+        logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
+        raise
📝 Committable suggestion


Suggested change
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
return []
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
try:
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
return bank_accounts
except Exception as e:
logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
raise


Tests Skipped Failures Errors Time
125 0 💤 0 ❌ 0 🔥 8.734s ⏱️


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
tests/test_business_central/test_utils.py (2)

236-258: Fix inconsistent indentation in mock setup.

The indentation in the mock setup blocks is inconsistent with the rest of the file. Ensure consistent indentation throughout.


199-270: Consider adding integration test.

Consider adding a test that verifies the interaction between bank accounts and dimensions when both are synced. This would help ensure that syncing one doesn't affect the other and that they work correctly together.

Example:

def test_sync_bank_accounts_and_dimensions_integration(mocker, db, create_business_central_connection):
    """Test that bank accounts and dimensions can be synced together without interference."""
    business_central_connection = create_business_central_connection
    workspace = business_central_connection.workspace
    
    # Mock both services
    mocker.patch.object(
        business_central_connection.connection.bank_accounts,
        'get_all',
        return_value=data['bank_accounts']
    )
    mocker.patch.object(
        business_central_connection.connection.dimensions,
        'get_all_dimensions',
        return_value=data['dimensions']
    )
    
    # Sync both
    business_central_connection.sync_bank_accounts()
    business_central_connection.sync_dimensions()
    
    # Verify both exist correctly
    bank_accounts = DestinationAttribute.objects.filter(
        workspace_id=workspace.id,
        attribute_type="BANK_ACCOUNT"
    )
    areas = DestinationAttribute.objects.filter(
        workspace_id=workspace.id,
        attribute_type="AREA"
    )
    
    assert bank_accounts.count() == 4
    assert areas.count() == 3
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 085d86c and c638418.

📒 Files selected for processing (1)
  • tests/test_business_central/test_utils.py (2 hunks)
🔇 Additional comments (2)
tests/test_business_central/test_utils.py (2)

201-228: Enhance test coverage and robustness for bank account sync.

The test needs several improvements to ensure reliability and completeness:

  1. Use workspace from fixture instead of hardcoded ID
  2. Add cleanup before test
  3. Verify actual bank account attributes
  4. Add negative test cases

The improvements suggested in the past review comment are still valid and should be implemented.


231-270: Improve test coverage and documentation for dimension sync.

The test needs several improvements:

  1. Use workspace from fixture instead of hardcoded ID
  2. Add cleanup before test
  3. Verify both dimensions and their values
  4. Add negative test cases
  5. Add docstring explaining dimension-area relationship

The improvements suggested in the past review comment are still valid and should be implemented.

@@ -136,7 +195,7 @@ def sync_accounts(self):
Synchronize accounts from MS Dynamics SDK to your application
"""
attribute_count = self.connection.accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
Contributor


account type should be accounts

sync business central bank accounts
"""
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
Contributor


attribute type should be bank_accounts


Tests Skipped Failures Errors Time
125 0 💤 0 ❌ 0 🔥 8.241s ⏱️

…Invoice (#174)

* Feat: Support for dimension during export Journal Entry and purchase Invoice

* comment resolved

* added dimension support for JE and bug fix

* implementation of exporting dimensions for JE and PI

* bug fixed

* comment resolved

* flake fixed

* Feat: Adding new Journal Entry changes (#175)

* Feat: Adding new Journal Entry changes

* comment resolved

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (1)
apps/business_central/utils.py (1)

20-20: Fix spacing after colon in dictionary entry

There's a missing space after the colon in the bank_accounts entry.

-    'bank_accounts':2000,
+    'bank_accounts': 2000,
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between c638418 and 382a3a4.

📒 Files selected for processing (1)
  • apps/business_central/utils.py (2 hunks)
🔇 Additional comments (1)
apps/business_central/utils.py (1)

120-120: LGTM! Correct handling of destination_id for bank accounts

The modification correctly ensures that bank accounts use 'id' instead of 'number' for the destination_id.

sync business central bank accounts
"""
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):


⚠️ Potential issue

Fix attribute_type in is_sync_allowed call

According to previous discussions, the attribute_type should be 'accounts' to match the SYNC_UPPER_LIMIT dictionary.

-        if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
+        if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
📝 Committable suggestion


Suggested change
if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):

Comment on lines +127 to +139
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']

self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
return []


🛠️ Refactor suggestion

Add error handling and return meaningful data

The method needs error handling and should return the bank_accounts data instead of an empty list.

 def sync_bank_accounts(self):
     """
     sync business central bank accounts
     """
+    try:
         attribute_count = self.connection.bank_accounts.count()
         if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
             logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
             return
         bank_accounts = self.connection.bank_accounts.get_all()
         field_names = ['currencyCode', 'intercompanyEnabled', 'number']

         self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
-        return []
+        return bank_accounts
+    except Exception as e:
+        logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
+        raise
📝 Committable suggestion


Suggested change
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'bank_accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
return []
def sync_bank_accounts(self):
"""
sync business central bank accounts
"""
try:
attribute_count = self.connection.bank_accounts.count()
if not self.is_sync_allowed(attribute_type = 'accounts', attribute_count=attribute_count):
logger.info('Skipping sync of bank accounts for workspace %s as it has %s counts which is over the limit', self.workspace_id, attribute_count)
return
bank_accounts = self.connection.bank_accounts.get_all()
field_names = ['currencyCode', 'intercompanyEnabled', 'number']
self._sync_data(bank_accounts, 'BANK_ACCOUNT', 'bank_account', self.workspace_id, field_names)
return bank_accounts
except Exception as e:
logger.error('Error syncing bank accounts for workspace %s: %s', self.workspace_id, str(e))
raise


Tests Skipped Failures Errors Time
125 0 💤 5 ❌ 1 🔥 9.181s ⏱️


Tests Skipped Failures Errors Time
125 0 💤 5 ❌ 1 🔥 8.994s ⏱️

@Ashutosh619-sudo Ashutosh619-sudo added deploy deploy to staging environment and removed deploy deploy to staging environment labels Nov 22, 2024

Tests Skipped Failures Errors Time
125 0 💤 5 ❌ 1 🔥 9.071s ⏱️


Tests Skipped Failures Errors Time
125 0 💤 5 ❌ 1 🔥 8.781s ⏱️


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
apps/fyle/models.py (1)

Line range hint 106-171: Consider sync functionality impact on expense creation.

The create_expense_objects method might need to be updated to handle any new relationships or validations introduced by the bank account and dimension sync functionality. Specifically:

  1. Consider adding validation for bank account references
  2. Ensure dimension values in custom_properties are validated against synced dimensions
  3. Review the handling of posted_at and payment_number in relation to synced bank accounts

Consider implementing a post-save signal or hook to trigger necessary synchronization when expenses are created or updated.
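In Django this would typically be a `post_save` signal receiver on the Expense model. A framework-agnostic sketch of the same hook pattern — every name below is illustrative, not the app's real models or sync functions:

```python
_post_save_hooks = []

def register_post_save(hook):
    """Register a callback to run after every save (stand-in for Django's post_save signal)."""
    _post_save_hooks.append(hook)
    return hook

class Expense:
    def __init__(self, workspace_id, custom_properties=None):
        self.workspace_id = workspace_id
        self.custom_properties = custom_properties or {}

    def save(self):
        # ...persist the expense here, then notify hooks,
        # mirroring Django's signal dispatch after a successful save.
        for hook in _post_save_hooks:
            hook(self)

synced_workspaces = []

@register_post_save
def trigger_dimension_sync(expense):
    # The real receiver would call the connector's sync methods for this workspace.
    synced_workspaces.append(expense.workspace_id)

Expense(workspace_id=7).save()
print(synced_workspaces)  # [7]
```

In the actual app, the receiver would be connected via `django.db.models.signals.post_save` and would likely enqueue the sync as a background task rather than running it inline.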

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 70ba7b8 and c2ecbed.

📒 Files selected for processing (1)
  • apps/fyle/models.py (1 hunks)
🔇 Additional comments (2)
apps/fyle/models.py (2)

82-82: ⚠️ Potential issue

Verify the impact of commenting out the is_posted_at_null field.

Instead of commenting out the field, it should be properly removed through Django's migration process. This change could break existing code that relies on this field.

Let's verify the field usage in the codebase:

To properly remove this field:

  1. Create a migration to remove the field using python manage.py makemigrations
  2. Update any code that references this field
  3. Apply the migration using python manage.py migrate
✅ Verification successful

Field removal is safe - no active usage found

The field is_posted_at_null appears to be unused in the codebase:

  • No references in any Python files except the commented line in models.py
  • No usage in database migrations
  • No usage in queries or filters

Since this is a commented-out field with no active usage, removing the commented line is safe and doesn't require a migration.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any references to is_posted_at_null field

# Search for field references in Python files
echo "Searching for field references in code..."
rg "is_posted_at_null" --type py

# Search for field in migrations
echo "Searching for field in migrations..."
fd ".*migrations.*\.py$" -x rg "is_posted_at_null" {}

# Search for field usage in queries/filters
echo "Searching for field in queries..."
rg "filter.*is_posted_at_null|is_posted_at_null.*filter" --type py

Length of output: 543


Line range hint 73-74: Verify field handling in sync implementation.

The following fields in the Expense model might be affected by the new sync functionality for bank accounts and dimensions:

  • posted_at: Tracks when money is taken from bank
  • payment_number: Links to payment details
  • custom_properties: May store dimension values

Please ensure that the new sync methods properly handle these fields and maintain data consistency.

Also applies to: 85-86, 90-90

✅ Verification successful

Based on the search results, I can now generate the final response since I have enough information about how these fields are handled in the codebase.

Fields are properly handled in sync and export implementations

The codebase shows these fields are correctly integrated:

  • posted_at is used in base export model for date handling (apps/business_central/exports/base_model.py)
  • payment_number is properly mapped in expense creation (apps/fyle/models.py)
  • custom_properties sync is handled through the dimension sync implementations (apps/business_central/helpers.py and apps/business_central/utils.py)

The sync functionality properly maintains data consistency through:

  1. Regular dimension syncs triggered by sync_dimensions()
  2. Automated sync checks via check_interval_and_sync_dimension()
  3. Export handlers that ensure field values are properly processed
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check how these fields are handled in the sync implementation

# Search for field usage in sync methods
echo "Searching for field usage in sync methods..."
rg "sync.*bank_accounts|sync.*dimensions" -A 10 -B 10 --type py

# Check for any existing field validation or processing
echo "Checking field processing..."
rg "posted_at|payment_number|custom_properties.*sync" --type py

Length of output: 43925

Copy link

Tests: 125 · Skipped: 0 💤 · Failures: 0 ❌ · Errors: 0 🔥 · Time: 8.755s ⏱️

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🧹 Outside diff range and nitpick comments (10)
apps/business_central/exports/purchase_invoice/tasks.py (2)

105-109: Add return type hint to the method signature.

The method's signature should include a return type hint for better type safety and documentation.

Apply this diff:

-    def construct_dimension_set_line_payload(self, dimensions: list, exported_response: dict):
+    def construct_dimension_set_line_payload(self, dimensions: list, exported_response: dict) -> List[Dict]:

113-117: Improve readability of document mapping construction.

The dictionary comprehension is complex and could be hard to maintain. Consider breaking it down into more readable steps.

Apply this diff:

-        document_mapping = {
-            response['body']['description2']: response['body']['id']
-            for response in exported_response
-            if response.get('body') and 'description2' in response['body'] and 'id' in response['body']
-        }
+        document_mapping = {}
+        for response in exported_response:
+            body = response.get('body', {})
+            if not body:
+                continue
+            
+            description2 = body.get('description2')
+            doc_id = body.get('id')
+            if description2 and doc_id:
+                document_mapping[description2] = doc_id
tests/test_business_central/test_tasks.py (4)

9-12: Consider adjusting the logging configuration for tests.

The current logging setup has a few concerns:

  1. Setting logging level to INFO in tests may produce verbose output
  2. Logger configuration should ideally be centralized in test configuration

Consider moving this to a centralized test configuration and using a more appropriate level:

-import logging
-
-logger = logging.getLogger(__name__)
-logger.level = logging.INFO
+from tests.conftest import logger  # Assuming centralized logger configuration

44-44: Improve logging clarity and test assertions.

The logging statement lacks context about what's being logged, and the test doesn't verify the ignored dimension value.

Consider these improvements:

-payload_list, _ = export_journal_entry._ExportJournalEntry__construct_journal_entry(journal_entry, lineitems)
-
-logger.info(payload_list)
+payload_list, dimensions = export_journal_entry._ExportJournalEntry__construct_journal_entry(journal_entry, lineitems)
+
+logger.info('Constructed journal entry payload: %s', payload_list)
+
+# Add assertions for dimensions
+assert dimensions is not None, "Dimensions should not be None"

Also applies to: 46-46


Line range hint 168-173: Improve code readability and test coverage.

The line continuation makes the code less readable, and the test doesn't verify the ignored dimension value.

Consider these improvements:

-    payload, batch_payload, _ = export_purchase_invoice.\
-        _ExportPurchaseInvoice__construct_purchase_invoice(
-            purchase_invoice,
-            lineitems
-        )
+    (
+        payload,
+        batch_payload,
+        dimensions
+    ) = export_purchase_invoice._ExportPurchaseInvoice__construct_purchase_invoice(
+        purchase_invoice,
+        lineitems
+    )
+
+    # Add assertions for dimensions
+    assert dimensions is not None, "Dimensions should not be None"

Based on the gathered information, I can now provide the final response:

Tests needed for dimension handling in export tasks

The implementation shows extensive dimension handling in both journal entry and purchase invoice exports, but the test coverage is insufficient:

  • The __construct_journal_entry test ignores the dimensions return value (using _ placeholder)
  • The __construct_purchase_invoice test ignores the dimensions return value (using _ placeholder)
  • No tests verify the construct_dimension_set_line_payload method in either class
  • No assertions check if dimensions are properly collected from line items
  • No verification of dimension payload construction and posting in the post methods

Required test coverage:

  • Test dimension collection from line items in both export classes
  • Verify dimension payload construction for both journal entries and purchase invoices
  • Assert proper dimension posting through the BusinessCentralConnector
  • Test error cases in dimension handling
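
The payload-mapping step that the required coverage above would exercise can be isolated into a plain function and asserted on directly. The sketch below mirrors the logic of construct_dimension_set_line_payload as quoted in this review; the sample expense numbers and ids are invented.

```python
def construct_dimension_set_line_payload(dimensions, exported_response):
    # Map each exported document's description2 (the expense number) to its id.
    document_mapping = {
        response['body']['description2']: response['body']['id']
        for response in exported_response
        if response.get('body') and 'description2' in response['body'] and 'id' in response['body']
    }
    payload = []
    for dimension in dimensions:
        parent_id = document_mapping.get(dimension.get('expense_number'))
        if parent_id:  # dimensions without a matching document are dropped
            payload.append({
                'id': dimension['id'],
                'code': dimension['code'],
                'parentId': parent_id,
                'valueId': dimension['valueId'],
                'valueCode': dimension['valueCode'],
                'exported_module_id': dimension['exported_module_id'],
            })
    return payload

exported = [{'body': {'description2': 'E/2024/01/T/1', 'id': 'doc-1'}}]
dimensions = [
    {'id': 'dim-1', 'code': 'AREA', 'valueId': 'val-1', 'valueCode': 'WEST',
     'expense_number': 'E/2024/01/T/1', 'exported_module_id': 10},
    {'id': 'dim-2', 'code': 'AREA', 'valueId': 'val-2', 'valueCode': 'EAST',
     'expense_number': 'E/2024/01/T/99', 'exported_module_id': 11},  # no matching doc
]
result = construct_dimension_set_line_payload(dimensions, exported)
print(len(result))            # 1
print(result[0]['parentId'])  # doc-1
```

A test built this way verifies both the happy path and that unmatched dimensions are silently dropped.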
🔗 Analysis chain

Line range hint 1-268: Verify test coverage for dimension synchronization.

While the tests have been updated to handle the new dimension values, they don't thoroughly verify the dimension synchronization functionality mentioned in the PR objectives.

Let's check the test coverage:


Let me gather more information about the dimension-related code and tests.


Let me check the actual dimension handling in the export tasks and their corresponding test coverage.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check test coverage for dimension-related code

# Look for dimension-related tests
rg -l "test.*dimension" "tests/"

# Look for dimension-related assertions
rg "assert.*dimension" "tests/"

Length of output: 468


Script:

#!/bin/bash
# Check dimension-related code in the test files
rg -A 5 "test.*dimension" "tests/test_business_central/"

# Check dimension-related code in the implementation
rg -l "dimension" "apps/business_central/"

# Look for dimension handling in export tasks
ast-grep --pattern 'def $_(self, $$$dimension$$$)'

Length of output: 3176


Script:

#!/bin/bash
# Check dimension handling in export tasks
rg -A 10 "dimension" "apps/business_central/exports/journal_entry/tasks.py" "apps/business_central/exports/purchase_invoice/tasks.py"

# Check test coverage for the export tasks
rg -B 5 -A 10 "__construct_(journal_entry|purchase_invoice)" "tests/test_business_central/test_tasks.py"

# Check if there are any dimension-related fixtures
rg "dimension" "tests/test_business_central/conftest.py"

Length of output: 20330

tests/test_business_central/test_models.py (4)

Line range hint 155-187: Add missing add_fyle_credentials fixture to test_create_or_update_purchase_invoice

The add_fyle_credentials fixture has been added to similar test functions but is missing here. For consistency, consider adding it to maintain parity with other test functions that create or update business central objects.

Apply this diff:

 def test_create_or_update_purchase_invoice(
         db,
         create_temp_workspace,
         create_export_settings,
         add_advanced_settings,
         create_accounting_export_expenses,
-        create_employee_mapping_with_employee
+        create_employee_mapping_with_employee,
+        add_fyle_credentials
 ):

Line range hint 1-479: Standardize fixture ordering across test functions

For better maintainability and readability, consider standardizing the order of fixtures across all test functions following this pattern:

  1. db
  2. Workspace-related fixtures (e.g., create_temp_workspace)
  3. Settings-related fixtures (e.g., create_export_settings, add_advanced_settings)
  4. Data-related fixtures (e.g., create_accounting_export_expenses)
  5. Mapping-related fixtures (e.g., create_employee_mapping_with_employee)
  6. Credentials-related fixtures (e.g., add_fyle_credentials)

Line range hint 1-479: Add test coverage for new sync functionality

The PR objectives mention adding sync methods for bank accounts and dimensions, but there are no corresponding test cases in this file. Consider adding test cases to cover:

  • Bank account synchronization
  • Dimension synchronization
  • Error scenarios for sync operations
  • Edge cases in sync operations

Would you like me to help generate test cases for the new sync functionality?
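
A minimal, mock-based sketch of the first two items could look like the following. The connector method name, the SDK attribute (connection.bank_accounts), and the BANK_ACCOUNT attribute type are assumptions modeled on the PR description, not the project's verified API.

```python
from unittest.mock import MagicMock

def sync_bank_accounts(connection, upsert):
    """Fetch bank accounts from the SDK connection and upsert them
    as destination attributes (hypothetical shape)."""
    accounts = connection.bank_accounts.get_all()
    attributes = [
        {
            'attribute_type': 'BANK_ACCOUNT',
            'destination_id': account['number'],
            'value': account['displayName'],
        }
        for account in accounts
    ]
    upsert(attributes)
    return attributes

# Stub out the SDK connection so the test runs without network access.
connection = MagicMock()
connection.bank_accounts.get_all.return_value = [
    {'number': 'B010', 'displayName': 'Checking'},
    {'number': 'B020', 'displayName': 'Savings'},
]

captured = []
sync_bank_accounts(connection, captured.extend)
print(len(captured))                  # 2
print(captured[0]['destination_id'])  # B010
```

The same pattern extends to error scenarios by making `get_all` raise (e.g. `connection.bank_accounts.get_all.side_effect = RuntimeError`) and asserting on the handling.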


Line range hint 1-479: Enhance test assertions and error handling coverage

The current test assertions primarily focus on basic checks. Consider enhancing the test suite with:

  1. Negative test cases
  2. Boundary value testing
  3. Error scenario validations
  4. State verification before and after operations
  5. Mock response validation

Example areas to improve:

  • Test invalid credential scenarios
  • Verify error handling for network issues
  • Validate state transitions during sync operations
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between c2ecbed and a028652.

📒 Files selected for processing (5)
  • apps/business_central/exports/journal_entry/models.py (4 hunks)
  • apps/business_central/exports/purchase_invoice/tasks.py (3 hunks)
  • tests/test_business_central/conftest.py (1 hunks)
  • tests/test_business_central/test_models.py (4 hunks)
  • tests/test_business_central/test_tasks.py (3 hunks)
🧰 Additional context used
🪛 Ruff (0.7.0)
apps/business_central/exports/journal_entry/models.py

109-109: Comparison to None should be cond is None

Replace with cond is None

(E711)

🔇 Additional comments (4)
apps/business_central/exports/journal_entry/models.py (3)

2-2: LGTM: New JSON fields are properly defined.

The new dimension-related fields are well-structured with appropriate defaults and help text.

Also applies to: 90-92


44-56: Add error handling for missing expense.

The code assumes an expense will always exist. Add proper error handling to prevent potential runtime errors.

Apply this diff:

- expense = accounting_export.expenses.first()
+ expense = accounting_export.expenses.first()
+ if not expense:
+     raise ValueError('No expense found for this accounting export')

  document_number = expense.expense_number

109-109: Fix None comparison style.

Use Python's identity comparison for None.

Apply this diff:

- category = lineitem.category if (lineitem.category == lineitem.sub_category or lineitem.sub_category == None) else '{0} / {1}'.format(lineitem.category, lineitem.sub_category)
+ category = lineitem.category if (lineitem.category == lineitem.sub_category or lineitem.sub_category is None) else '{0} / {1}'.format(lineitem.category, lineitem.sub_category)
🧰 Tools
🪛 Ruff (0.7.0)

109-109: Comparison to None should be cond is None

Replace with cond is None

(E711)

tests/test_business_central/conftest.py (1)

164-165: Verify fixture dependencies

Let's verify if all the fixtures are properly defined and imported:

Also applies to: 183-183

✅ Verification successful

Fixture dependencies are properly defined and used

The add_fyle_credentials fixture is:

  • Defined in tests/conftest.py
  • Imported and used extensively across multiple test files
  • Properly referenced in the test files where it's being used as a test dependency
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for the definition or import of add_fyle_credentials fixture

# Search for add_fyle_credentials definition in test files
echo "Searching for add_fyle_credentials fixture definition..."
rg -l "def add_fyle_credentials" tests/

# Search for potential imports of the fixture
echo "Searching for potential imports of add_fyle_credentials..."
rg -l "from .* import .*add_fyle_credentials" tests/

# Check if the fixture is used elsewhere
echo "Checking other usages of add_fyle_credentials..."
rg "add_fyle_credentials" tests/

Length of output: 6550

Comment on lines +104 to +114
lineitem = accounting_export.expenses.first()
journal_entry = JournalEntry.objects.get(accounting_export=accounting_export)

journal_entry_lineitems = []

for lineitem in expenses:
    category = lineitem.category if (lineitem.category == lineitem.sub_category or lineitem.sub_category == None) else '{0} / {1}'.format(lineitem.category, lineitem.sub_category)

    account = CategoryMapping.objects.filter(
        source_category__value=category,
        workspace_id=accounting_export.workspace_id
    ).first()

    comment = self.get_expense_comment(accounting_export.workspace_id, lineitem, lineitem.category, advance_setting)

    invoice_date = self.get_invoice_date(accounting_export=accounting_export)

    account_type, account_id = self.get_account_id_type(accounting_export=accounting_export, export_settings=export_settings, merchant=lineitem.vendor)

    document_number = accounting_export.description['claim_number'] if accounting_export.description and accounting_export.description.get('claim_number') else accounting_export.description['expense_number']

    journal_entry_lineitems_object, _ = JournalEntryLineItems.objects.update_or_create(
        journal_entry_id=journal_entry.id,
        expense_id=lineitem.id,
        defaults={
            'amount': lineitem.amount * -1,
            'account_id': account_id,
            'account_type': account_type,
            'document_number': document_number,
            'accounts_payable_account_id': account.destination_account.destination_id,
            'comment': comment,
            'workspace_id': accounting_export.workspace_id,
            'invoice_date': invoice_date,
            'description': lineitem.purpose if lineitem.purpose else None
        }
    )
    journal_entry_lineitems.append(journal_entry_lineitems_object)
🛠️ Refactor suggestion

Optimize database queries.

The current implementation makes multiple separate database queries which could lead to N+1 query issues.

Consider these optimizations:

  1. Prefetch related fields when getting the expense:
lineitem = accounting_export.expenses.select_related('expense_number', 'category', 'sub_category', 'vendor').first()
  2. Add appropriate indexes to the CategoryMapping model:
class CategoryMapping(models.Model):
    class Meta:
        indexes = [
            models.Index(fields=['source_category', 'workspace_id']),
        ]
🧰 Tools
🪛 Ruff (0.7.0)

109-109: Comparison to None should be cond is None

Replace with cond is None

(E711)

Comment on lines +124 to +125
dimensions = self.get_dimension_object(accounting_export, lineitem)

⚠️ Potential issue

Add error handling for dimension retrieval.

The get_dimension_object call could potentially fail, but there's no error handling in place.

Apply this diff:

- dimensions = self.get_dimension_object(accounting_export, lineitem)
+ try:
+     dimensions = self.get_dimension_object(accounting_export, lineitem)
+ except Exception as e:
+     dimensions = []
+     # Log the error or handle it appropriately
+     logger.error(f'Failed to get dimensions for expense {lineitem.id}: {str(e)}')

Committable suggestion skipped: line range outside the PR's diff.

@@ -41,6 +41,7 @@ def __construct_purchase_invoice(self, body: PurchaseInvoice, lineitems: List[Pu
:return: constructed expense_report
'''
batch_purchase_invoice_lineitem_payload = []
dimensions = []
⚠️ Potential issue

Update return type annotation to match implementation.

The method's return type annotation shows Dict but it's returning a tuple of three elements. Update the type annotation to match the actual return value.

Apply this diff:

-def __construct_purchase_invoice(self, body: PurchaseInvoice, lineitems: List[PurchaseInvoiceLineitems]) -> Dict:
+def __construct_purchase_invoice(self, body: PurchaseInvoice, lineitems: List[PurchaseInvoiceLineitems]) -> Tuple[Dict, List, List]:

Don't forget to add Tuple to the imports:

-from typing import Dict, List
+from typing import Dict, List, Tuple

Also applies to: 70-70

Comment on lines +62 to +63
'description': lineitem.description if lineitem.description else '',
'description2': lineitem.expense.expense_number
⚠️ Potential issue

Add error handling for accessing expense attribute.

The code assumes that lineitem.expense exists and has an expense_number attribute. Add error handling to gracefully handle cases where these might be None.

Apply this diff:

-                'description2': lineitem.expense.expense_number
+                'description2': lineitem.expense.expense_number if getattr(lineitem, 'expense', None) else ''

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +119 to +131
for dimension in dimensions:
    expense_number = dimension.get('expense_number')
    parent_id = document_mapping.get(expense_number)

    if parent_id:
        dimension_payload.append({
            "id": dimension['id'],
            "code": dimension['code'],
            "parentId": parent_id,
            "valueId": dimension['valueId'],
            "valueCode": dimension['valueCode'],
            "exported_module_id": dimension['exported_module_id']
        })
⚠️ Potential issue

Add validation for required dimension fields.

The code assumes all required fields exist in the dimension dictionary. Add validation to ensure required fields are present.

Apply this diff:

         for dimension in dimensions:
+            required_fields = ['id', 'code', 'valueId', 'valueCode', 'expense_number', 'exported_module_id']
+            if not all(field in dimension for field in required_fields):
+                logger.error('Missing required fields in dimension: %s', dimension)
+                continue
+
             expense_number = dimension.get('expense_number')
             parent_id = document_mapping.get(expense_number)

             if parent_id:
                 dimension_payload.append({
                     "id": dimension['id'],
                     "code": dimension['code'],
                     "parentId": parent_id,
                     "valueId": dimension['valueId'],
                     "valueCode": dimension['valueCode'],
                     "exported_module_id": dimension['exported_module_id']
                 })
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-        for dimension in dimensions:
-            expense_number = dimension.get('expense_number')
-            parent_id = document_mapping.get(expense_number)
-
-            if parent_id:
-                dimension_payload.append({
-                    "id": dimension['id'],
-                    "code": dimension['code'],
-                    "parentId": parent_id,
-                    "valueId": dimension['valueId'],
-                    "valueCode": dimension['valueCode'],
-                    "exported_module_id": dimension['exported_module_id']
-                })
+        for dimension in dimensions:
+            required_fields = ['id', 'code', 'valueId', 'valueCode', 'expense_number', 'exported_module_id']
+            if not all(field in dimension for field in required_fields):
+                logger.error('Missing required fields in dimension: %s', dimension)
+                continue
+
+            expense_number = dimension.get('expense_number')
+            parent_id = document_mapping.get(expense_number)
+
+            if parent_id:
+                dimension_payload.append({
+                    "id": dimension['id'],
+                    "code": dimension['code'],
+                    "parentId": parent_id,
+                    "valueId": dimension['valueId'],
+                    "valueCode": dimension['valueCode'],
+                    "exported_module_id": dimension['exported_module_id']
+                })

Comment on lines +84 to +91
if dimensions:
    dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
    logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads))
    dimension_line_responses = business_central_connection.post_dimension_lines(
        dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
    )
    response['dimension_line_responses'] = dimension_line_responses

⚠️ Potential issue

Add error handling for dimension posting.

The dimension posting logic should handle potential failures gracefully to prevent partial updates.

Apply this diff:

     if dimensions:
-            dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
-            logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads))
-            dimension_line_responses = business_central_connection.post_dimension_lines(
-                dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
-            )
-            response['dimension_line_responses'] = dimension_line_responses
+            try:
+                dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
+                logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(
+                    accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads
+                ))
+                dimension_line_responses = business_central_connection.post_dimension_lines(
+                    dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
+                )
+                response['dimension_line_responses'] = dimension_line_responses
+            except Exception as e:
+                logger.error('Failed to post dimensions - WORKSPACE_ID: %s, ACCOUNTING_EXPORT_ID: %s, ERROR: %s',
+                    accounting_export.workspace_id, accounting_export.id, str(e)
+                )
+                raise
📝 Committable suggestion


Suggested change

-        if dimensions:
-            dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
-            logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads))
-            dimension_line_responses = business_central_connection.post_dimension_lines(
-                dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
-            )
-            response['dimension_line_responses'] = dimension_line_responses
+        if dimensions:
+            try:
+                dimension_set_line_payloads = self.construct_dimension_set_line_payload(dimensions, response['bulk_post_response']['responses'])
+                logger.info('WORKSPACE_ID: {0}, ACCOUNTING_EXPORT_ID: {1}, DIMENSION_SET_LINE_PAYLOADS: {2}'.format(
+                    accounting_export.workspace_id, accounting_export.id, dimension_set_line_payloads
+                ))
+                dimension_line_responses = business_central_connection.post_dimension_lines(
+                    dimension_set_line_payloads, 'PURCHASE_INVOICE', item.id
+                )
+                response['dimension_line_responses'] = dimension_line_responses
+            except Exception as e:
+                logger.error('Failed to post dimensions - WORKSPACE_ID: %s, ACCOUNTING_EXPORT_ID: %s, ERROR: %s',
+                    accounting_export.workspace_id, accounting_export.id, str(e)
+                )
+                raise

Comment on lines +164 to +165
create_employee_mapping_with_employee,
add_fyle_credentials
⚠️ Potential issue

Critical: Missing add_fyle_credentials fixture definition

The changes add add_fyle_credentials as a dependency to multiple fixtures, but the fixture itself is not defined in this file. This will cause test failures.

Please add the missing fixture definition or ensure it's imported from another test configuration file.

Also applies to: 183-183


Tests: 125 · Skipped: 0 💤 · Failures: 2 ❌ · Errors: 41 🔥 · Time: 15.416s ⏱️

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
tests/test_business_central/fixtures.py (2)

84-141: LGTM! Consider adding field descriptions

The expense object structure is well-defined and comprehensive. However, adding comments to document the purpose and expected format of critical fields would improve test maintainability.


160-221: Enhance test data coverage for bank accounts

Consider improving the test fixtures by:

  1. Including accounts with different currencies to test currency handling
  2. Adding test cases with all required fields populated
  3. Including a blocked account to test account status handling
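
The three suggestions could be realized with a small set of fixture variants. The field names below mirror the bank-account shape used in these fixtures, while the specific numbers, names, and currency codes are invented for illustration.

```python
# Hypothetical fixture variants: a USD account, a non-USD account to exercise
# currency handling, and a blocked account to exercise status handling.
BANK_ACCOUNT_VARIANTS = [
    {'number': 'B100', 'displayName': 'USD Operating', 'currencyCode': 'USD', 'blocked': False},
    {'number': 'B200', 'displayName': 'EUR Payroll', 'currencyCode': 'EUR', 'blocked': False},
    {'number': 'B300', 'displayName': 'Dormant Account', 'currencyCode': 'USD', 'blocked': True},
]

# Tests can then assert on both dimensions of coverage:
currencies = {account['currencyCode'] for account in BANK_ACCOUNT_VARIANTS}
active = [account for account in BANK_ACCOUNT_VARIANTS if not account['blocked']]
print(sorted(currencies))  # ['EUR', 'USD']
print(len(active))         # 2
```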
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between a028652 and 280e1d4.

📒 Files selected for processing (3)
  • apps/fyle/models.py (1 hunks)
  • requirements.txt (1 hunks)
  • tests/test_business_central/fixtures.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/fyle/models.py
  • requirements.txt
🔇 Additional comments (2)
tests/test_business_central/fixtures.py (2)

145-159: LGTM! Field configuration is well structured

The expense fields configuration provides a clear mapping between enabled fields and their inclusion status, which is essential for testing the sync functionality.


231-255: ⚠️ Potential issue

Fix inconsistent dimensionId references

The dimension values reference dimensionId: "5dbc303f-4319-ee11-9cc4-6045bdc8dcac", but the AREA dimension has id: "5bbc303f-4319-ee11-9cc4-6045bdc8dcac". This mismatch will cause issues in tests.

Apply this fix to all dimension values:

        {
            "id": "1b4a2945-4319-ee11-9cc4-6045bdc8dcac",
            "code": "LARGE",
-           "dimensionId": "5dbc303f-4319-ee11-9cc4-6045bdc8dcac",
+           "dimensionId": "5bbc303f-4319-ee11-9cc4-6045bdc8dcac",
            "displayName": "Large Business",
            "consolidationCode": "",
            "lastModifiedDateTime": "2023-07-03T01:46:02.817Z",
        },

Likely invalid or redundant comment.

@Ashutosh619-sudo Ashutosh619-sudo added deploy deploy to staging environment and removed deploy deploy to staging environment labels Nov 22, 2024

Tests: 125 · Skipped: 0 💤 · Failures: 0 ❌ · Errors: 0 🔥 · Time: 8.652s ⏱️


Tests: 125 · Skipped: 0 💤 · Failures: 0 ❌ · Errors: 0 🔥 · Time: 8.591s ⏱️
