
Zip/tar modes will not process -wal or -journal files for sqlite databases #14

Open
ydkhatri opened this issue Mar 10, 2020 · 9 comments

Comments

@ydkhatri
Collaborator

The regexes that target a particular database only extract that db from a zip/tar, ignoring the accompanying -wal or -journal file, which results in missing data. Currently only the Wellbeing (wellbeing.py) module handles this correctly.

All other modules need their regexes tweaked similarly and their code adjusted to account for this.
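
A minimal sketch of what a widened pattern could look like; the path and the wellbeing example are illustrative, not ALEAPP's actual search patterns:

import re

# One pattern that matches the database and both sidecar files in a single pass.
DB_PATTERN = re.compile(r'.*/databases/wellbeing\.db(-wal|-journal)?$')

candidates = [
    'data/data/com.google.android.apps.wellbeing/databases/wellbeing.db',
    'data/data/com.google.android.apps.wellbeing/databases/wellbeing.db-wal',
    'data/data/com.google.android.apps.wellbeing/databases/wellbeing.db-journal',
]

matches = [path for path in candidates if DB_PATTERN.match(path)]
print(matches)  # all three files are selected, so no WAL data is lost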

@ydkhatri
Collaborator Author

Or have the search functions tweaked to always look for -wal and -journal files. This is probably easier.
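
A minimal sketch of that idea, assuming a seeker object that exposes a search(pattern) method returning matching archive paths; the method name and suffix handling are assumptions, not the actual ALEAPP seeker API:

def search_with_sidecars(seeker, pattern):
    """Search for a pattern and also pull any -wal/-journal sidecar files."""
    found = list(seeker.search(pattern))
    extras = []
    for path in found:
        if path.endswith('.db') or path.endswith('.sqlite'):
            for suffix in ('-wal', '-journal'):
                # exact-path search for the sidecar; returns nothing if absent
                extras.extend(seeker.search(path + suffix))
    return found + extras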

@abrignoni
Owner

I'll try to change the regex over the weekend. Will do a PR so we can test.

@ydkhatri
Collaborator Author

I may be able to get to it before then.
I'm going to try the second approach first, so we don't have to modify every single artifact module.

ydkhatri added a commit that referenced this issue Apr 29, 2020
@ydkhatri
Collaborator Author

After giving it some thought, it would be best to handle this in the regex and have every module take care of it. I will keep this open, as I am not sure all modules are doing this correctly.
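
A minimal sketch of the per-module approach, assuming the module's paths tuple ends in a wildcard so the -wal/-journal files get extracted alongside the database; the path and helper below are hypothetical:

paths = ('*/com.example.app/databases/example.db*',)

def pick_main_db(files_found):
    """Return the main database path, ignoring the extracted sidecar files."""
    for path in files_found:
        if path.endswith('.db'):
            return path
    return None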

@abrignoni
Owner

abrignoni commented May 23, 2020 via email

@ydkhatri
Collaborator Author

There are several plugins that only process the first file found (files_found[0]); those need to be checked too.
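
A minimal sketch of how such a plugin could iterate over everything it receives instead of only files_found[0]; the function body is illustrative only:

def get_example_data(files_found, report_folder, seeker, wrap_text):
    # Skip the sidecar files themselves; they only need to sit on disk
    # next to the database for sqlite to pick them up when it opens the db.
    db_files = [f for f in files_found
                if not f.endswith('-wal') and not f.endswith('-journal')]
    for file_found in db_files:
        print(f'Processing {file_found}')  # parse each database here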

JamieSharpe pushed a commit to JamieSharpe/ALEAPP that referenced this issue Mar 31, 2021
abrignoni pushed a commit that referenced this issue Apr 18, 2021
@Alecsande

artifacts_v7 = {
    "cool_artifact_1": {
        "name": "Cool Artifact 1",
        "description": "Extracts cool data from database files",
        "author": "@username",  # Replace with the actual author's username or name
        "version": "0.1",  # Version number
        "date": "2026-10-25",  # Date of the latest version
        "requirements": "none",
        "category": "Really cool artifacts",
        "notes": "",
        "paths": ('/com.android.cooldata/databases/database.db',),
        "function": "get_cool_data1"
    }
}

import datetime
from scripts.artifact_report import ArtifactHtmlReport
import scripts.ilapfuncs

def get_cool_data1(files_found, report_folder, seeker, wrap_text):
    # let's pretend we actually got this data from somewhere:
    rows = [
        (datetime.datetime.now(), "Cool data col 1, value 1", "Cool data col 1, value 2", "Cool data col 1, value 3"),
        (datetime.datetime.now(), "Cool data col 2, value 1", "Cool data col 2, value 2", "Cool data col 2, value 3"),
    ]

    headers = ["Timestamp", "Data 1", "Data 2", "Data 3"]

    # HTML output:
    report = ArtifactHtmlReport("Cool stuff")
    report_name = "Cool DFIR Data"
    report.start_artifact_report(report_folder, report_name)
    report.add_script()
    report.write_artifact_data_table(headers, rows, files_found[0])  # assuming only the first file was processed
    report.end_artifact_report()

    # TSV output:
    scripts.ilapfuncs.tsv(report_folder, headers, rows, report_name, files_found[0])  # assuming first file only

    # Timeline:
    scripts.ilapfuncs.timeline(report_folder, report_name, rows, headers)

@abrignoni
Owner

abrignoni commented Jul 1, 2024 via email

@abrignoni
Owner

abrignoni commented Jul 1, 2024 via email
