Zip/tar modes will not process -wal or -journal files for sqlite databases #14
Or have the search functions tweaked to always look for -wal and -journal files. This is probably easier. |
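The suggestion above can be sketched with `fnmatch`, which behaves like the wildcard path matching used when searching an extraction: a pattern ending in `.db` pulls only the main database, while a trailing wildcard also pulls the `-wal` and `-journal` sidecars. The entry paths below are illustrative, not taken from a real image.

```python
from fnmatch import fnmatch

# Hypothetical entries inside a zip/tar image:
entries = [
    "data/com.android.cooldata/databases/database1.db",
    "data/com.android.cooldata/databases/database1.db-wal",
    "data/com.android.cooldata/databases/database1.db-journal",
]

narrow = "*/databases/database*.db"   # matches only the main db file
wide = "*/databases/database*.db*"    # also matches -wal and -journal

narrow_hits = [e for e in entries if fnmatch(e, narrow)]
wide_hits = [e for e in entries if fnmatch(e, wide)]
```

Here `narrow_hits` contains only the `.db` file, while `wide_hits` contains all three, so a one-character pattern change is enough to stop losing the sidecar data.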
I'll try to change the regex over the weekend. Will do a PR so we can test. |
I may be able to get to it before then. |
After giving it some thought, it would be best to handle this in regex and every module should take care of it. I will keep this open as I am not sure if all modules are doing this correctly. |
I agree. I was planning on addressing it as you described.
|
There are several plugins that just process the first file found. |
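A minimal sketch of the fix for those plugins: loop over every entry in `files_found` instead of touching only `files_found[0]`, skipping the sidecars (which sqlite reads on its own when the main database is opened). `process_database` is a hypothetical per-file handler, not a project function.

```python
# Sketch: process every matched file, not just files_found[0].
def process_all(files_found, process_database):
    results = []
    for file_path in files_found:
        path = str(file_path)
        # Skip sidecars; sqlite reads them itself when the main db is opened.
        if path.endswith("-wal") or path.endswith("-journal"):
            continue
        results.append(process_database(path))
    return results
```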
update to latest
artifacts_v7 = {
    "cool_artifact_1": {
        "name": "Cool Artifact 1",
        "description": "Extracts cool data from database files",
        "author": "***@***.***",  # Replace with the actual author's username or name
        "version": "0.1",  # Version number
        "date": "2026-10-25",  # Date of the latest version
        "requirements": "none",
        "category": "Really cool artifacts",
        "notes": "",
        "paths": ('*/com.android.cooldata/databases/database*.db',),
        "function": "get_cool_data1"
    }
}

import datetime
from scripts.artifact_report import ArtifactHtmlReport
import scripts.ilapfuncs

def get_cool_data1(files_found, report_folder, seeker, wrap_text):
    # let's pretend we actually got this data from somewhere:
    rows = [
        (datetime.datetime.now(), "Cool data col 1, value 1", "Cool data col 1, value 2", "Cool data col 1, value 3"),
        (datetime.datetime.now(), "Cool data col 2, value 1", "Cool data col 2, value 2", "Cool data col 2, value 3"),
    ]

    headers = ["Timestamp", "Data 1", "Data 2", "Data 3"]

    # HTML output:
    report = ArtifactHtmlReport("Cool stuff")
    report_name = "Cool DFIR Data"
    report.start_artifact_report(report_folder, report_name)
    report.add_script()
    report.write_artifact_data_table(headers, rows, files_found[0])  # assuming only the first file was processed
    report.end_artifact_report()

    # TSV output:
    scripts.ilapfuncs.tsv(report_folder, headers, rows, report_name, files_found[0])  # assuming first file only

    # Timeline:
    scripts.ilapfuncs.timeline(report_folder, report_name, rows, headers)
|
On the paths do: get_cool_data*
Make sure that you iterate files_found and execute the query on the database. The Python library will take the WAL file into account since it was pulled in the paths section as well.
You can check other SQLite artifacts to see examples of the above. |
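The advice above can be sketched as follows: iterate `files_found`, open each main `.db` with the standard `sqlite3` module, and run the module's query. As long as the `-wal` / `-journal` sidecars were pulled into the same extraction folder by the paths pattern, sqlite reads them automatically when the main database is opened, so un-checkpointed rows are included. `query_found_databases` is an illustrative helper, not a project function.

```python
import sqlite3

def query_found_databases(files_found, sql):
    """Run sql against every main .db file in files_found.

    Sidecar -wal / -journal files are skipped here; sqlite opens them
    itself, provided they sit next to the .db in the same folder.
    """
    all_rows = []
    for file_path in files_found:
        path = str(file_path)
        if not path.endswith(".db"):
            continue  # skip -wal/-journal entries
        conn = sqlite3.connect(path)
        try:
            all_rows.extend(conn.execute(sql).fetchall())
        finally:
            conn.close()
    return all_rows
```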
get_cool_data1.db*
|
The regexes that target a particular database will only extract that db from a zip/tar, ignoring the accompanying -wal or -journal file, which results in missed data. Currently only the Wellbeing (wellbeing.py) module does it correctly.
All other modules need their regexes tweaked similarly and their code adjusted to account for this.
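A minimal sketch of the regex tweak being asked for, assuming a module matches paths with `re` against a target database: allow an optional `-wal` / `-journal` suffix before the end anchor. The `wellbeing.db` paths below are illustrative, not taken from the actual module.

```python
import re

# A pattern like r".*/databases/wellbeing\.db$" pulls only the main file.
# Allowing an optional suffix before the anchor also pulls the sidecars:
pattern = re.compile(r".*/databases/wellbeing\.db(-wal|-journal)?$")

paths = [
    "data/com.google.android.apps.wellbeing/databases/wellbeing.db",
    "data/com.google.android.apps.wellbeing/databases/wellbeing.db-wal",
    "data/com.google.android.apps.wellbeing/databases/wellbeing.db-journal",
    "data/com.google.android.apps.wellbeing/databases/other.db",
]
matches = [p for p in paths if pattern.match(p)]
```

With this pattern, `matches` contains the database and both sidecars but still excludes unrelated databases in the same folder.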