Merge branch '4.9.0' into 254-extend-event-generator-tool
AlexRuiz7 authored Jul 8, 2024
2 parents 29fb9b1 + e43828f commit c5209ce
Showing 11 changed files with 159 additions and 19 deletions.
10 changes: 5 additions & 5 deletions integrations/amazon-security-lake/CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -5,18 +5,18 @@
A demo of the integration can be started using the content of this folder and Docker. Open a terminal in the `wazuh-indexer/integrations` folder and start the environment.

```console
docker compose -f ./docker/amazon-security-lake.yml up -d
docker compose -f ./docker/compose.amazon-security-lake.yml up -d
```

This Docker Compose project will bring up these services:

- a _wazuh-indexer_ node
- a _wazuh-dashboard_ node
- a _logstash_ node
- our [events generator](./tools/events-generator/README.md)
- our [events generator](../tools/events-generator/README.md)
- an AWS Lambda Python container.

On the one hand, the event generator will push events constantly to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](./tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to output configured in the pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`.
On the one hand, the event generator will constantly push events to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](../tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to the output configured in the pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`.

The `indexer-to-s3` pipeline is the method used by the integration. This pipeline delivers the data to an S3 bucket, where it is processed by a Lambda function and finally sent to the Amazon Security Lake bucket in Parquet format.

@@ -33,13 +33,13 @@ After 5 minutes, the first batch of data will show up in http://localhost:9444/u
bash amazon-security-lake/src/invoke-lambda.sh <file>
```

Processed data will be uploaded to http://localhost:9444/ui/wazuh-aws-security-lake-parquet. Click on any file to download it, and check it's content using `parquet-tools`. Just make sure of installing the virtual environment first, through [requirements.txt](./amazon-security-lake/).
Processed data will be uploaded to http://localhost:9444/ui/wazuh-aws-security-lake-parquet. Click on any file to download it, and check its content using `parquet-tools`. Make sure to install the virtual environment first, using [requirements.txt](./requirements.txt).

```bash
parquet-tools show <parquet-file>
```

If the `S3_BUCKET_OCSF` variable is set in the container running the AWS Lambda function, intermediate data in OCSF and JSON format will be written to a dedicated bucket. This is enabled by default, writing to the `wazuh-aws-security-lake-ocsf` bucket. Bucket names and additional environment variables can be configured editing the [amazon-security-lake.yml](./docker/amazon-security-lake.yml) file.
If the `S3_BUCKET_OCSF` variable is set in the container running the AWS Lambda function, intermediate data in OCSF and JSON format will be written to a dedicated bucket. This is enabled by default, writing to the `wazuh-aws-security-lake-ocsf` bucket. Bucket names and additional environment variables can be configured by editing the [compose.amazon-security-lake.yml](../docker/compose.amazon-security-lake.yml) file.
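As a sketch, such an override could be added to the Lambda service's environment in the Compose file (the service name matches the one defined in this diff; the bucket value below is a hypothetical example):

```yaml
services:
  wazuh.integration.security.lake:
    environment:
      # Hypothetical override: write intermediate OCSF/JSON data
      # to a custom bucket instead of the default one.
      S3_BUCKET_OCSF: my-custom-ocsf-bucket
```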

For development or debugging purposes, you may want to enable hot-reload, or test and debug these files, using the `--config.reload.automatic`, `--config.test_and_exit` or `--debug` flags, respectively.
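One way to wire such a flag in is to override the Logstash service's command in the Compose file — a sketch only, since the service name and pipeline path below are assumptions, not taken from this diff:

```yaml
services:
  logstash:
    # Hypothetical override: reload the pipeline automatically when
    # its configuration file changes on disk.
    command: logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --config.reload.automatic
```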

@@ -65,8 +65,17 @@ services:
- 5601:5601 # Map host port 5601 to container port 5601
expose:
- "5601" # Expose port 5601 for web access to OpenSearch Dashboards
volumes:
- ./certs/:/usr/share/opensearch-dashboards/config/certs/
- ./certs/wazuh.dashboard-key.pem:/usr/share/opensearch-dashboards/config/certs/opensearch.key
- ./certs/wazuh.dashboard.pem:/usr/share/opensearch-dashboards/config/certs/opensearch.pem
- ./certs/root-ca.pem:/usr/share/opensearch-dashboards/config/certs/root-ca.pem
environment:
OPENSEARCH_HOSTS: '["https://wazuh.indexer:9200"]' # Define the OpenSearch nodes that OpenSearch Dashboards will query
SERVER_SSL_ENABLED: 'true'
SERVER_SSL_KEY: '/usr/share/opensearch-dashboards/config/certs/opensearch.key'
SERVER_SSL_CERTIFICATE: '/usr/share/opensearch-dashboards/config/certs/opensearch.pem'
OPENSEARCH_SSL_CERTIFICATEAUTHORITIES: '/usr/share/opensearch-dashboards/config/certs/root-ca.pem'

wazuh.integration.security.lake:
image: wazuh/indexer-security-lake-integration
@@ -128,10 +137,33 @@ services:
- ../amazon-security-lake/src:/var/task
ports:
- "9000:8080"

generate-certs-config:
image: alpine:latest
volumes:
- ./config:/config
command: |
sh -c "
echo '
nodes:
indexer:
- name: wazuh.indexer
ip: \"wazuh.indexer\"
server:
- name: wazuh.manager
ip: \"wazuh.manager\"
dashboard:
- name: wazuh.dashboard
ip: \"wazuh.dashboard\"
' > /config/certs.yml
"
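The one-shot Alpine container above only materializes a `certs.yml` describing the nodes for the certificates generator. The same file can be produced and inspected locally — a minimal sketch using a temporary directory:

```shell
# Recreate the certs.yml that the helper container writes to /config.
config=$(mktemp -d)
cat > "$config/certs.yml" <<'EOF'
nodes:
  indexer:
    - name: wazuh.indexer
      ip: "wazuh.indexer"
  server:
    - name: wazuh.manager
      ip: "wazuh.manager"
  dashboard:
    - name: wazuh.dashboard
      ip: "wazuh.dashboard"
EOF
# One entry per node type, each using the container's DNS name as its IP.
grep -c 'name:' "$config/certs.yml"   # prints 3
```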
wazuh-certs-generator:
image: wazuh/wazuh-certs-generator:0.0.1
hostname: wazuh-certs-generator
depends_on:
generate-certs-config:
condition: service_completed_successfully
container_name: wazuh-certs-generator
entrypoint: sh -c "/entrypoint.sh; chown -R 1000:999 /certificates; chmod 740 /certificates; chmod 440 /certificates/*"
volumes:
@@ -59,12 +59,44 @@ services:
- 5601:5601 # Map host port 5601 to container port 5601
expose:
- "5601" # Expose port 5601 for web access to OpenSearch Dashboards
volumes:
- ./certs/:/usr/share/opensearch-dashboards/config/certs/
- ./certs/wazuh.dashboard-key.pem:/usr/share/opensearch-dashboards/config/certs/opensearch.key
- ./certs/wazuh.dashboard.pem:/usr/share/opensearch-dashboards/config/certs/opensearch.pem
- ./certs/root-ca.pem:/usr/share/opensearch-dashboards/config/certs/root-ca.pem
environment:
OPENSEARCH_HOSTS: '["https://wazuh.indexer:9200"]' # Define the OpenSearch nodes that OpenSearch Dashboards will query
SERVER_SSL_ENABLED: 'true'
SERVER_SSL_KEY: '/usr/share/opensearch-dashboards/config/certs/opensearch.key'
SERVER_SSL_CERTIFICATE: '/usr/share/opensearch-dashboards/config/certs/opensearch.pem'
OPENSEARCH_SSL_CERTIFICATEAUTHORITIES: '/usr/share/opensearch-dashboards/config/certs/root-ca.pem'

generate-certs-config:
image: alpine:latest
volumes:
- ./config:/config
command: |
sh -c "
echo '
nodes:
indexer:
- name: wazuh.indexer
ip: \"wazuh.indexer\"
server:
- name: wazuh.manager
ip: \"wazuh.manager\"
dashboard:
- name: wazuh.dashboard
ip: \"wazuh.dashboard\"
' > /config/certs.yml
"
wazuh-certs-generator:
image: wazuh/wazuh-certs-generator:0.0.1
hostname: wazuh-certs-generator
depends_on:
generate-certs-config:
condition: service_completed_successfully
entrypoint: sh -c "/entrypoint.sh; chown -R 1000:999 /certificates; chmod 740 /certificates; chmod 440 /certificates/*"
volumes:
- ./certs/:/certificates/
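The overridden entrypoint above runs the stock `/entrypoint.sh` and then locks down the generated certificates (directory `740`, files `440`, owned by uid 1000 / gid 999 so the indexer can read them). The permission part can be sketched locally — the `chown` is omitted here, as it requires root:

```shell
# Reproduce the generator's permission scheme on a scratch directory.
certs=$(mktemp -d)
touch "$certs/root-ca.pem" "$certs/wazuh.indexer.pem"
chmod 740 "$certs"    # owner: rwx, group: r--, others: ---
chmod 440 "$certs"/*  # certificate files become read-only
ls -ld "$certs"
```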
@@ -105,6 +137,12 @@ services:
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
@@ -181,12 +219,15 @@ services:
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- SERVER_SSL_ENABLED=true
- SERVER_SSL_KEY=/usr/share/kibana/config/certs/kibana/kibana.key
- SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/certs/kibana/kibana.crt
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
'CMD-SHELL',
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
"curl -s -I https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
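Since `SERVER_SSL_ENABLED` is now true, the probe must request `https://`; Docker marks the container healthy when the pipeline exits 0, i.e. when `grep -q` finds the 302 redirect in the response headers. The matching step in isolation (note that against a live self-signed endpoint, `curl` would additionally need `--cacert` or `-k`):

```shell
# Simulate the healthcheck's match on a canned response status line.
printf 'HTTP/1.1 302 Found\r\n' | grep -q 'HTTP/1.1 302 Found' && echo healthy
```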
@@ -72,10 +72,36 @@ services:
SERVER_SSL_CERTIFICATE: '/usr/share/opensearch-dashboards/config/certs/opensearch.pem'
OPENSEARCH_SSL_CERTIFICATEAUTHORITIES: '/usr/share/opensearch-dashboards/config/certs/root-ca.pem'

generate-certs-config:
image: alpine:latest
volumes:
- ./config:/config
command: |
sh -c "
echo '
nodes:
indexer:
- name: wazuh.indexer
ip: \"wazuh.indexer\"
- name: opensearch.node
ip: \"opensearch.node\"
server:
- name: wazuh.manager
ip: \"wazuh.manager\"
dashboard:
- name: wazuh.dashboard
ip: \"wazuh.dashboard\"
- name: opensearch.dashboards
ip: \"opensearch.dashboards\"
' > /config/certs.yml
"
wazuh-certs-generator:
image: wazuh/wazuh-certs-generator:0.0.1
hostname: wazuh-certs-generator
depends_on:
generate-certs-config:
condition: service_completed_successfully
entrypoint: sh -c "/entrypoint.sh; chown -R 1000:999 /certificates; chmod 740 /certificates; chmod 440 /certificates/*"
volumes:
- ./certs/:/certificates/
@@ -59,12 +59,44 @@ services:
- 5601:5601 # Map host port 5601 to container port 5601
expose:
- "5601" # Expose port 5601 for web access to OpenSearch Dashboards
volumes:
- ./certs/:/usr/share/opensearch-dashboards/config/certs/
- ./certs/wazuh.dashboard-key.pem:/usr/share/opensearch-dashboards/config/certs/opensearch.key
- ./certs/wazuh.dashboard.pem:/usr/share/opensearch-dashboards/config/certs/opensearch.pem
- ./certs/root-ca.pem:/usr/share/opensearch-dashboards/config/certs/root-ca.pem
environment:
OPENSEARCH_HOSTS: '["https://wazuh.indexer:9200"]' # Define the OpenSearch nodes that OpenSearch Dashboards will query
SERVER_SSL_ENABLED: 'true'
SERVER_SSL_KEY: '/usr/share/opensearch-dashboards/config/certs/opensearch.key'
SERVER_SSL_CERTIFICATE: '/usr/share/opensearch-dashboards/config/certs/opensearch.pem'
OPENSEARCH_SSL_CERTIFICATEAUTHORITIES: '/usr/share/opensearch-dashboards/config/certs/root-ca.pem'

generate-certs-config:
image: alpine:latest
volumes:
- ./config:/config
command: |
sh -c "
echo '
nodes:
indexer:
- name: wazuh.indexer
ip: \"wazuh.indexer\"
server:
- name: wazuh.manager
ip: \"wazuh.manager\"
dashboard:
- name: wazuh.dashboard
ip: \"wazuh.dashboard\"
' > /config/certs.yml
"
wazuh-certs-generator:
image: wazuh/wazuh-certs-generator:0.0.1
hostname: wazuh-certs-generator
depends_on:
generate-certs-config:
condition: service_completed_successfully
entrypoint: sh -c "/entrypoint.sh; chown -R 1000:999 /certificates; chmod 740 /certificates; chmod 440 /certificates/*"
volumes:
- ./certs/:/certificates/
@@ -150,6 +150,12 @@ services:
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
@@ -226,12 +232,15 @@ services:
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- SERVER_SSL_ENABLED=true
- SERVER_SSL_KEY=/usr/share/kibana/config/certs/kibana/kibana.key
- SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/certs/kibana/kibana.crt
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
'CMD-SHELL',
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
"curl -s -I https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
File renamed without changes.
File renamed without changes.
8 changes: 4 additions & 4 deletions integrations/elastic/README.md
@@ -11,11 +11,11 @@ This document describes how to prepare a Docker Compose environment to test the
1. Clone the Wazuh repository and navigate to the `integrations/` folder.
2. Run the following command to start the environment:
```bash
docker compose -f ./docker/elastic.yml up -d
docker compose -f ./docker/compose.indexer-elastic.yml up -d
```
3. If you prefer, you can start the integration with the Wazuh Manager as data source:
```bash
docker compose -f ./docker/manager-elastic.yml up -d
docker compose -f ./docker/compose.manager-elastic.yml up -d
```

The Docker Compose project will bring up the following services:
@@ -29,12 +29,12 @@ The Docker Compose project will bring up the following services:

For custom configurations, you may need to modify these files:

- [docker/elastic.yml](../docker/elastic.yml): Docker Compose file.
- [docker/compose.indexer-elastic.yml](../docker/compose.indexer-elastic.yml): Docker Compose file.
- [docker/.env](../docker/.env): Environment variables file.
- [elastic/logstash/pipeline/indexer-to-elastic.conf](./logstash/pipeline/indexer-to-elastic.conf): Logstash Pipeline configuration file.

If you opted to start the integration with the Wazuh Manager, you can modify the following files:
- [docker/manager-elastic.yml](../docker/manager-elastic.yml): Docker Compose file.
- [docker/compose.manager-elastic.yml](../docker/compose.manager-elastic.yml): Docker Compose file.
- [elastic/logstash/pipeline/manager-to-elastic.conf](./logstash/pipeline/manager-to-elastic.conf): Logstash Pipeline configuration file.

Check the files above for **credentials**, ports, and other configurations.
8 changes: 4 additions & 4 deletions integrations/opensearch/README.md
@@ -11,11 +11,11 @@ This document describes how to prepare a Docker Compose environment to test the
1. Clone the Wazuh repository and navigate to the `integrations/` folder.
2. Run the following command to start the environment:
```bash
docker compose -f ./docker/opensearch.yml up -d
docker compose -f ./docker/compose.indexer-opensearch.yml up -d
```
3. If you prefer, you can start the integration with the Wazuh Manager as data source:
```bash
docker compose -f ./docker/manager-opensearch.yml up -d
docker compose -f ./docker/compose.manager-opensearch.yml up -d
```

The Docker Compose project will bring up the following services:
@@ -29,12 +29,12 @@ The Docker Compose project will bring up the following services:

For custom configurations, you may need to modify these files:

- [docker/opensearch.yml](../docker/opensearch.yml): Docker Compose file.
- [docker/compose.indexer-opensearch.yml](../docker/compose.indexer-opensearch.yml): Docker Compose file.
- [docker/.env](../docker/.env): Environment variables file.
- [opensearch/logstash/pipeline/indexer-to-opensearch.conf](./logstash/pipeline/indexer-to-opensearch.conf): Logstash Pipeline configuration file.

If you opted to start the integration with the Wazuh Manager, you can modify the following files:
- [docker/manager-opensearch.yml](../docker/manager-opensearch.yml): Docker Compose file.
- [docker/compose.manager-opensearch.yml](../docker/compose.manager-opensearch.yml): Docker Compose file.
- [opensearch/logstash/pipeline/manager-to-opensearch.conf](./logstash/pipeline/manager-to-opensearch.conf): Logstash Pipeline configuration file.

Check the files above for **credentials**, ports, and other configurations.
8 changes: 4 additions & 4 deletions integrations/splunk/README.md
@@ -11,11 +11,11 @@ This document describes how to prepare a Docker Compose environment to test the
1. Clone the Wazuh repository and navigate to the `integrations/` folder.
2. Run the following command to start the environment:
```bash
docker compose -f ./docker/splunk.yml up -d
docker compose -f ./docker/compose.indexer-splunk.yml up -d
```
3. If you prefer, you can start the integration with the Wazuh Manager as data source:
```bash
docker compose -f ./docker/manager-splunk.yml up -d
docker compose -f ./docker/compose.manager-splunk.yml up -d
```

The Docker Compose project will bring up the following services:
@@ -28,12 +28,12 @@ The Docker Compose project will bring up the following services:

For custom configurations, you may need to modify these files:

- [docker/splunk.yml](../docker/splunk.yml): Docker Compose file.
- [docker/compose.indexer-splunk.yml](../docker/compose.indexer-splunk.yml): Docker Compose file.
- [docker/.env](../docker/.env): Environment variables file.
- [splunk/logstash/pipeline/indexer-to-splunk.conf](./logstash/pipeline/indexer-to-splunk.conf): Logstash Pipeline configuration file.

If you opted to start the integration with the Wazuh Manager, you can modify the following files:
- [docker/manager-splunk.yml](../docker/manager-splunk.yml): Docker Compose file.
- [docker/compose.manager-splunk.yml](../docker/compose.manager-splunk.yml): Docker Compose file.
- [splunk/logstash/pipeline/manager-to-splunk.conf](./logstash/pipeline/manager-to-splunk.conf): Logstash Pipeline configuration file.

Check the files above for **credentials**, ports, and other configurations.
