Commit

Merge pull request #57 from iresolis/main
Added Feature to Check API results
walter-iriusrisk authored Oct 7, 2024
2 parents cdaacf3 + 1edfd6c commit 1d5297c
Showing 6 changed files with 377 additions and 19 deletions.
42 changes: 24 additions & 18 deletions Integrations/API Utility/README.MD
@@ -24,7 +24,7 @@ Run the install script:
"./IriusRisk-Central/Integrations/API Utility/install_irtool_reqs"
#Command End

Copy the ir_api_util folder to your home directory:
#Command Start
cp -r "./IriusRisk-Central/Integrations/API Utility/ir_api_util/" ~
#Command End
@@ -39,29 +39,35 @@ Follow the on-screen instructions to utilize the utility's features.
Features

Feature: Get Project List (1)
This feature will return a list of the projects in your instance via a JSON file.

Feature: Export IR Project Status (2)
This feature will export the status of your project to CSV and Excel, focusing on countermeasure status. It will prompt you for your IR project reference ID.
In addition, it will include the countermeasure status of associated projects by tag.
For example, if your project contains a project component tagged with the reference ID of its actual project, the countermeasure data for that project component will be included along with the data for your target project.

Feature: User Access Report (8)
This feature generates a report that displays active or inactive users over a specified period in days.

Feature: Business Unit Reports (9)
This feature offers two options. The first option generates a CSV report for a single Business Unit by either Name or UUID. The second option generates the same report for all Business Units.

Feature: Audit Log Report (10)
This feature generates an Excel report focusing on Project Activity and User Activity, sourced from audit log events for up to 180 days.

Feature: API Query Checker (12)
This feature allows users to validate API queries by running checks against expected outputs. It provides two options:

1. Run API Query Checker
This option executes the API Query Checker to validate queries against sample output files. It checks if the API responses match the expected results.

2. Add New Query to be Checked
This option allows users to add a new API query for validation. You will be prompted to provide the following details:
- Friendly Name: A descriptive name for the API query (e.g., "v1 GET Project Details").
- HTTP Method: The type of request (GET, POST, PUT, DELETE).
- API URL Endpoint: The API call endpoint (e.g., /v1/projects/{reference-id}).
- Sample Output File: The path to a JSON file with the expected output.

Once a query is added, it will be included in future checks performed by the API Query Checker.
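
For illustration only (hypothetical endpoint fields and values, not taken from the IriusRisk API), a sample output file is reduced to a type structure before it is stored, so later checks compare the shape of a response rather than its exact values:

# A hypothetical sample response for /v1/projects/{reference-id}, shown here as a Python dict:
sample_response = {"ref": "my-project", "name": "My Project", "tags": ["pci"], "archived": False}
# addEndPoint.py's parse_sample_response() records only the value types; this becomes the
# "expected_response" stored in apiChecker.json for the new query:
expected_response = {"ref": "string", "name": "string", "tags": ["string"], "archived": "bool"}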

Additional features may be added as needed in the future.
1 change: 1 addition & 0 deletions Integrations/API Utility/install_irtool_reqs
@@ -8,3 +8,4 @@ sudo apt-get install python3-pip -y
pip3 install pandas
pip3 install openpyxl
pip3 install pyarrow
pip3 install deepdiff
131 changes: 131 additions & 0 deletions Integrations/API Utility/ir_api_util/addEndPoint.py
@@ -0,0 +1,131 @@
import sys
import json
import os


def parse_sample_response(sample):
    if isinstance(sample, dict):
        parsed = {}
        for key, value in sample.items():
            if value is None:
                parsed[key] = None  # Allow None values
            elif isinstance(value, bool):
                parsed[key] = "bool"
            elif isinstance(value, str):
                parsed[key] = "string"
            elif isinstance(value, int):
                parsed[key] = "int"
            elif isinstance(value, float):
                parsed[key] = "float"
            elif isinstance(value, list):
                if not value:
                    parsed[key] = []  # Empty list
                else:
                    if isinstance(value[0], dict):
                        parsed[key] = [parse_sample_response(value[0])]
                    elif isinstance(value[0], str):
                        parsed[key] = ["string"]  # Handle lists of strings
                    else:
                        parsed[key] = ["unknown"]
            elif isinstance(value, dict):
                parsed[key] = parse_sample_response(value)
            else:
                parsed[key] = "unknown"
        return parsed
    elif isinstance(sample, list):
        if not sample:
            return []
        elif isinstance(sample[0], dict):
            return [parse_sample_response(sample[0])]
        else:
            return ["string"] if isinstance(sample[0], str) else ["unknown"]
    else:
        return "unknown"


def read_credentials(api_token_path='~/ir/.ir_user_token', instance_domain_path='~/ir/ir_instance_domain'):
    try:
        with open(os.path.expanduser(instance_domain_path), 'r') as domain_file:
            instance_domain = domain_file.read().strip()
        return instance_domain
    except FileNotFoundError as e:
        print(f"Error: {e}. Make sure the paths are correct.")
        sys.exit(1)  # Exit if credentials cannot be read


def add_endpoint_to_queries(name, method, url, sample_structure, instance_domain, filename='apiChecker.json'):
    parsed_structure = parse_sample_response(sample_structure)

    if not url.startswith("https://") and not url.startswith("http://"):
        url = f"https://{instance_domain}.iriusrisk.com{url}"

    # Detect if it's v1 or v2 based on the URL pattern
    if "/v2/" in url:
        accept_header = "application/hal+json"
    else:
        accept_header = "application/json"

    new_endpoint = {
        "name": name,  # Use the provided name
        "method": method.upper(),
        "url": url,
        "headers": {
            "Accept": accept_header  # Use the correct Accept header based on v1 or v2
        },
        "expected_status": 200,
        "expected_response": parsed_structure
    }

    try:
        if not os.path.exists(filename):
            print(f"{filename} not found, creating a new one.")
            data = {"endpoints": [new_endpoint]}
            with open(filename, 'w') as file:
                json.dump(data, file, indent=4)
        else:
            with open(filename, 'r+') as file:
                try:
                    data = json.load(file)
                    data['endpoints'].append(new_endpoint)
                except json.JSONDecodeError:
                    data = {"endpoints": [new_endpoint]}
                file.seek(0)
                json.dump(data, file, indent=4)
        print(f"Successfully added the endpoint {name} to {filename}.")
    except (FileNotFoundError, json.JSONDecodeError) as e:
        print(f"Error handling {filename}: {e}")


def main():
    if len(sys.argv) != 5:
        print("Usage: python3 addEndPoint.py <Name> <HTTP Method> <URL> <sample_output_file>")
        sys.exit(1)

    name = sys.argv[1]  # Take the name as an argument
    method = sys.argv[2].upper()
    url = sys.argv[3]
    sample_output_file = sys.argv[4]

    valid_methods = ["GET", "POST", "PUT", "DELETE"]
    if method not in valid_methods:
        print(f"Error: Unsupported HTTP method '{method}'. Supported methods are {', '.join(valid_methods)}.")
        sys.exit(1)

    try:
        with open(sample_output_file, 'r') as f:
            sample_structure = json.load(f)
    except json.JSONDecodeError as e:
        print(f"Error parsing sample output: {e}")
        sys.exit(1)

    instance_domain = read_credentials()

    add_endpoint_to_queries(name, method, url, sample_structure, instance_domain)


if __name__ == "__main__":
    main()
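
A quick usage sketch of parse_sample_response above, using hypothetical nested values (not taken from the IriusRisk API), to show how nested objects and lists of objects are reduced to type names:

from addEndPoint import parse_sample_response  # assumes the ir_api_util directory is on the import path

sample = {
    "id": 42,
    "owner": None,
    "components": [{"ref": "web-ui", "risk": 3.5}],
}
print(parse_sample_response(sample))
# -> {'id': 'int', 'owner': None, 'components': [{'ref': 'string', 'risk': 'float'}]}
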
175 changes: 175 additions & 0 deletions Integrations/API Utility/ir_api_util/apiChecker.py
@@ -0,0 +1,175 @@
import sys
import requests
import os
import json
from deepdiff import DeepDiff
from auth import Auth

# Function to load queries from the JSON file
def load_queries(filename):
try:
with open(filename, 'r') as file:
return json.load(file)
except (FileNotFoundError, json.JSONDecodeError) as e:
print("")
print("The apiChecker.json file was not found. Try using the Add feature to create a new query to be checked.")
print("")
return None
#sys.exit(1)

# Read config file for output path and page size
def read_config(config_path):
try:
with open(config_path, 'r') as config_file:
config = json.load(config_file)
output_path = os.path.expanduser(config.get('output_path', '~/'))
os.makedirs(output_path, exist_ok=True)
page_size = config.get('page_size', 2000)
return output_path, page_size
except (FileNotFoundError, json.JSONDecodeError) as e:
print(f"Error reading config file: {e}. Defaulting to home directory.")
output_path = os.path.expanduser('~/')
os.makedirs(output_path, exist_ok=True)
return output_path, 2000

# Function to compare types
def compare_types(expected, actual):
# If the expected value is None, allow any actual value
if expected is None:
return True, None # Null means any value is valid

# If the actual value is None, allow it for any expected type
if actual is None:
return True, None # Null is acceptable for any expected type

if isinstance(expected, dict) and isinstance(actual, dict):
for key in expected:
if key not in actual:
return False, f"Missing key: {key}"
match, error = compare_types(expected[key], actual[key])
if not match:
return False, error
elif isinstance(expected, list) and isinstance(actual, list):
if len(expected) == 0 or len(actual) == 0:
return True, None # Allow empty lists
return compare_types(expected[0], actual[0])
else:
# For string types
if expected == "string" and isinstance(actual, str):
return True, None
# For int types (ensuring bools are not mistaken for ints)
elif expected == "int" and isinstance(actual, int) and not isinstance(actual, bool):
return True, None
# For bool types
elif expected == "bool" and isinstance(actual, bool):
return True, None
# For float types
elif expected == "float" and isinstance(actual, float):
return True, None
# For list types
elif expected == "list" and isinstance(actual, list):
return True, None
# For dict types
elif expected == "dict" and isinstance(actual, dict):
return True, None
else:
return False, f"Type mismatch: expected {expected}, got {type(actual).__name__}"

return True, None


# Class to handle API checking
class APIChecker:
def __init__(self, api_token_path='~/ir/.ir_user_token', instance_domain_path='~/ir/ir_instance_domain'):
self.auth = Auth()
self.api_token_path = os.path.expanduser(api_token_path)
self.instance_domain_path = os.path.expanduser(instance_domain_path)
self.auth.check_user_instance_file(self.instance_domain_path)
self.auth.check_user_token_file(self.api_token_path)
self.api_token, self.instance_domain = self.read_credentials()
script_dir = os.path.dirname(os.path.realpath(__file__))
self.output_path, self.page_size = read_config(os.path.join(script_dir, 'config.json'))

def read_credentials(self):
try:
with open(self.api_token_path, 'r') as token_file:
api_token = token_file.read().strip()
with open(self.instance_domain_path, 'r') as domain_file:
instance_domain = domain_file.read().strip()
return api_token, instance_domain
except FileNotFoundError as e:
print(f"Error: {e}. Make sure the paths are correct.")
sys.exit(1) # Exit if credentials cannot be read

def test_endpoint(self, endpoint):
method = endpoint.get("method", "GET").upper()
relative_url = endpoint["url"]

# Ensure the URL is properly formatted
if not relative_url.startswith("http"):
url = f"https://{self.instance_domain}.iriusrisk.com{relative_url}"
else:
url = relative_url

headers = endpoint.get("headers", {})
headers['api-token'] = self.api_token # Use the stored API token
expected_status = endpoint["expected_status"]
expected_response = endpoint["expected_response"]

try:
response = requests.request(method, url, headers=headers)
except requests.exceptions.RequestException as e:
print(f"Error fetching {url}: {e}")
return False

status_code = response.status_code
try:
response_json = response.json()
except json.JSONDecodeError:
print(f"Invalid JSON response from {url}")
return False

print(f"Testing {endpoint['name']} - {method} {url}")
print(f"Expected Status: {expected_status}, Actual Status: {status_code}")

if status_code != expected_status:
print(f"Status Code Mismatch! Expected {expected_status}, got {status_code}")
return False

# Compare response JSON structure with expected structure
if isinstance(response_json, list) and isinstance(expected_response, list):
for i, item in enumerate(response_json):
match, error = compare_types(expected_response[0], item)
if not match:
print(f"Response Mismatch at item {i}! {error}")
print(f"Expected: {expected_response[0]}\nGot: {item}")
return False
else:
match, error = compare_types(expected_response, response_json)
if not match:
print(f"Response Mismatch Found! {error}")
print(f"Expected: {expected_response}\nGot: {response_json}")
return False

print("Test Passed!")
return True

def run_tests(self, queries):
print(f"Found {len(queries['endpoints'])} endpoints to test.")
for i, endpoint in enumerate(queries['endpoints']):
print(f"Running test {i + 1} of {len(queries['endpoints'])}...")
success = self.test_endpoint(endpoint)
if not success:
print(f"Test Failed for {endpoint['name']}!\n")
else:
print(f"Test Succeeded for {endpoint['name']}!\n")

def main():
    script_dir = os.path.dirname(os.path.realpath(__file__))
    api_checker = APIChecker()
    queries = load_queries(os.path.join(script_dir, 'apiChecker.json'))
    if queries is None:
        return  # Nothing to check yet; load_queries already printed guidance
    api_checker.run_tests(queries)

# Proper entry point check
if __name__ == "__main__":
    main()
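
A brief sketch of how compare_types above treats a few hypothetical expected/actual pairs: None on the actual side is accepted for any expected type, while a boolean is rejected where an int is expected (assumes auth.py and deepdiff are importable alongside this file):

from apiChecker import compare_types

# None in the actual response is accepted for any expected type:
print(compare_types({"name": "string", "count": "int"}, {"name": "demo", "count": None}))
# -> (True, None)

# A bool where an int is expected is reported as a mismatch:
print(compare_types("int", True))
# -> (False, 'Type mismatch: expected int, got bool')
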
3 changes: 2 additions & 1 deletion Integrations/API Utility/ir_api_util/config.json
@@ -1,3 +1,4 @@
{
    "output_path": "~/reports_ir",
    "page_size": 2000
}