docs: update variable description #377

Merged (1 commit, Sep 20, 2024)
12 changes: 6 additions & 6 deletions .github/workflows/main.yml
@@ -39,7 +39,7 @@ jobs:
env:
FOSSA_API_KEY: ${{ secrets.FOSSA_API_KEY }}
- name: upload THIRDPARTY file
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: THIRDPARTY
path: /tmp/THIRDPARTY
@@ -158,7 +158,7 @@ jobs:
run: |
make testall

- uses: actions/upload-artifact@v3
- uses: actions/upload-artifact@v4
with:
name: splunk-firehose-nozzle
path: splunk-firehose-nozzle
@@ -198,7 +198,7 @@ jobs:
ruby-version: ${{ env.RUBY_VERSION }}
- run: ruby -v

- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
with:
name: splunk-firehose-nozzle

@@ -253,7 +253,7 @@ jobs:
go-version: ${{ env.GO_VERSION }}
- run: go version

- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
with:
name: splunk-firehose-nozzle

@@ -267,7 +267,7 @@ jobs:
echo "tile_name=$(ls tile/product | grep ".pivotal")" >> "$GITHUB_ENV"

- name: Upload tile
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: ${{ env.tile_name }}
path: tile/product/${{ env.tile_name }}
@@ -312,7 +312,7 @@ jobs:
ruby-version: ${{ env.RUBY_VERSION }}
- run: ruby -v

- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
with:
name: splunk-firehose-nozzle

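The workflow changes above move every artifact step to actions/upload-artifact@v4 and actions/download-artifact@v4. Upload and download have to be bumped together, since artifacts produced by v4 cannot be consumed by the older v2/v3 download action. A minimal sketch of a matched pair (not part of this diff), reusing the artifact name and path already defined in main.yml:

```
# Sketch only: a v4 upload step and its matching v4 download step.
- uses: actions/upload-artifact@v4
  with:
    name: splunk-firehose-nozzle
    path: splunk-firehose-nozzle

# ...later, in a downstream job:
- uses: actions/download-artifact@v4
  with:
    name: splunk-firehose-nozzle
```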
20 changes: 10 additions & 10 deletions README.md
@@ -88,32 +88,32 @@ __Advanced Configuration Features:__
This is recommended for dev environments only. (Default: false)
* `SKIP_SSL_VALIDATION_SPLUNK`: Skips SSL certificate validation for connection to Splunk. Secure communications will not check SSL certificates against a trusted certificate authority. (Default: false)
This is recommended for dev environments only.
* `FIREHOSE_SUBSCRIPTION_ID`: Tags nozzle events with a Firehose subscription id. See https://docs.pivotal.io/pivotalcf/1-11/loggregator/log-ops-guide.html. (Default: splunk-firehose)
* `FIREHOSE_SUBSCRIPTION_ID`: Tags nozzle events with a Firehose subscription id. See https://docs.vmware.com/en/VMware-Tanzu-Application-Service/6.0/tas-for-vms/log-ops-guide.html. (Default: splunk-firehose)
* `FIREHOSE_KEEP_ALIVE`: Keep alive duration for the Firehose consumer. (Default: 25s)
* `ADD_APP_INFO`: Enrich raw data with app info. A comma separated list of app metadata (AppName,OrgName,OrgGuid,SpaceName,SpaceGuid). (Default: "")
* `ADD_TAGS`: Add additional tags from envelope to splunk event. (Default: false)
(Please note: Adding tags / Enabling this feature may slightly impact the performance due to the increased event size)
(Please note: Enabling this feature may slightly impact the performance due to the increased event size)
* `IGNORE_MISSING_APP`: If the application is missing, then stop repeatedly querying application info from Cloud Foundry. (Default: true)
* `MISSING_APP_CACHE_INVALIDATE_TTL`: How frequently the missing app info cache invalidates (in s/m/h. For example, 3600s or 60m or 1h). (Default: 0s) (see below for more details)
* `APP_CACHE_INVALIDATE_TTL`: How frequently the app info local cache invalidates (in s/m/h. For example, 3600s or 60m or 1h). (Default: 0s) (see below for more details)
* `ORG_SPACE_CACHE_INVALIDATE_TTL`: How frequently the org and space cache invalidates (in s/m/h. For example, 3600s or 60m or 1h). (Default: 72h)
* `APP_LIMITS`: Restrict to APP_LIMITS the most updated apps per request when populating the app metadata cache. keep it 0 to update all the apps. (Default: 0)
* `BOLTDB_PATH`: Bolt database path. (Default: cache.db)
* `EVENTS`: A comma separated list of events to include. It is a required field. Possible values: ValueMetric,CounterEvent,Error,LogMessage,HttpStartStop,ContainerMetric. If no eventtype is selected, nozzle will automatically select LogMessage to keep the nozzle running. (Default: "ValueMetric,CounterEvent,ContainerMetric")
* `EVENTS`: A comma separated list of events to include. It is a required field. Possible values: ValueMetric,CounterEvent,Error,LogMessage,HttpStartStop,ContainerMetric. If no event type is selected, nozzle will automatically select LogMessage to keep the nozzle running. (Default: "ValueMetric,CounterEvent,ContainerMetric")
* `EXTRA_FIELDS`: Extra fields to annotate your events with (format is key:value,key:value). (Default: "")
* `FLUSH_INTERVAL`: Time interval (in s/m/h. For example, 3600s or 60m or 1h) for flushing queue to Splunk regardless of CONSUMER_QUEUE_SIZE. Protects against stale events in low throughput systems. (Default: 5s)
* `FLUSH_INTERVAL`: Time interval (in s/m/h. For example, 3600s or 60m or 1h) for flushing queue to Splunk regardless of `CONSUMER_QUEUE_SIZE`. Protects against stale events in low throughput systems. (Default: 5s)
* `CONSUMER_QUEUE_SIZE`: Sets the internal consumer queue buffer size. Events will be pushed to Splunk after queue is full. (Default: 10000)
* `HEC_BATCH_SIZE`: Set the batch size for the events to push to HEC (Splunk HTTP Event Collector). (Default: 100)
* `HEC_RETRIES`: Retry count for sending events to Splunk. After expiring, events will begin dropping causing data loss. (Default: 5)
* `HEC_WORKERS`: Set the amount of Splunk HEC workers to increase concurrency while ingesting in Splunk. (Default: 8)
* `ENABLE_EVENT_TRACING`: Enables event trace logging. Splunk events will now contain a UUID, Splunk Nozzle Event Counts, and a Subscription-ID for Splunk correlation searches. (Default: false)
* `SPLUNK_LOGGING_INDEX`: The Splunk index where logs from the nozzle of the sourcetype `cf:splunknozzle` will be sent to. Warning: Setting an invalid index will cause events to be lost. This index must match one of the selected indexes for the Splunk HTTP event collector token used for the SPLUNK_TOKEN parameter. When not provided, all logging events will be forwarded to the default SPLUNK_INDEX. The default value is `""`
* `STATUS_MONITOR_INTERVAL`: Time interval (in s/m/h. For example, 3600s or 60m or 1h) for Enabling Monitoring (Metric data of insights with in the connectors). Default is 0s (Disabled).
* `SPLUNK_LOGGING_INDEX`: The Splunk index where logs from the nozzle of the sourcetype `cf:splunknozzle` will be sent to. Warning: Setting an invalid index will cause events to be lost. This index must match one of the selected indexes for the Splunk HTTP event collector token used for the SPLUNK_TOKEN parameter. When not provided, all logging events will be forwarded to the default SPLUNK_INDEX. (Default: "")
* `STATUS_MONITOR_INTERVAL`: Time interval (in s/m/h. For example, 3600s or 60m or 1h) to enable monitoring of metric data within the connector. (This increases CPU load and should be used only for insights purposes. Default: 0s).
* `SPLUNK_METRIC_INDEX`: Index in which metric data will be ingested when monitoring module is enabled
* `SELECTED_MONITORING_METRICS`: Name of the metrics that you want to monitor and add using comma seprated values. List of the metrics that are supported in the metrics modules are given below
* `REFRESH_SPLUNK_CONNECTION`: If set to true, PCF will periodically refresh connection to Splunk (how often depends on KEEP_ALIVE_TIMER value). If set to false connection will be kept alive and reused. (Default: false)
* `KEEP_ALIVE_TIMER`: Time after which connection to Splunk will be refreshed, if REFRESH_SPLUNK_CONNECTION is set to true (in s/m/h. For example, 3600s or 60m or 1h). (Default: 30s)
* `MEMORY_BALLAST_SIZE`: Size of memory allocated to reduce GC cycles. Default is 0, Size should be less than the total memory.
* `KEEP_ALIVE_TIMER`: Time after which connection to Splunk will be refreshed, if `REFRESH_SPLUNK_CONNECTION` is set to true (in s/m/h. For example, 3600s or 60m or 1h). (Default: 30s)
* `MEMORY_BALLAST_SIZE`: Size of memory allocated to reduce GC cycles. Size should be less than the total memory. (Default: 0).

__About app cache params:__

@@ -418,7 +418,7 @@ A correct setup logs a start message with configuration parameters of the Nozzle
skip-ssl: true
splunk-host: http://localhost:8088
splunk-index: atomic
subscription-id: splunk-firehose
firehose-subscription-id: splunk-firehose
trace-logging: true
status-monitor-interval: 0s
version:
@@ -487,7 +487,7 @@ sourcetype="cf:counterevent"

### 7. Nozzle is not collecting any data with 'websocket' (bad handshake) error

If the nozzle reports below error, then check if the configured "subscription-id" has '#' as a prefix. Please remove the prefix or prepend any other character than '#' to fix this issue.
If the nozzle reports below error, then check if the configured "firehose-subscription-id" has '#' as a prefix. Please remove the prefix or prepend any other character than '#' to fix this issue.
```
Error dialing trafficcontroller server: websocket: bad handshake.\nPlease ask your Cloud Foundry Operator to check the platform configuration (trafficcontroller is wss://****:443).
```
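The variables documented in the README diff above are plain environment variables, so they can be set by whatever mechanism deploys the nozzle. As an illustration only (this snippet is not part of the PR, and the values are examples), a few of them in a Cloud Foundry app manifest:

```
# Hypothetical manifest snippet; variable names and defaults come from the
# README above, the values here are examples only.
applications:
- name: splunk-firehose-nozzle
  env:
    FIREHOSE_SUBSCRIPTION_ID: splunk-firehose   # must not start with '#'
    EVENTS: ValueMetric,CounterEvent,LogMessage
    FLUSH_INTERVAL: 5s
    ADD_APP_INFO: AppName,OrgName,SpaceName
    SPLUNK_LOGGING_INDEX: ""                    # empty falls back to SPLUNK_INDEX
```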
2 changes: 1 addition & 1 deletion eventsink/splunk.go
@@ -245,7 +245,7 @@ func (s *Splunk) buildEvent(fields map[string]interface{}) map[string]interface{

if s.config.TraceLogging {
extraFields["nozzle-event-counter"] = strconv.FormatUint(atomic.AddUint64(&s.eventCount, 1), 10)
extraFields["subscription-id"] = s.config.SubscriptionID
extraFields["firehose-subscription-id"] = s.config.SubscriptionID
extraFields["uuid"] = s.config.UUID
}
for k, v := range s.config.ExtraFields {
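With this rename, events sent while ENABLE_EVENT_TRACING is true carry the correlation field under the new firehose-subscription-id key. Illustratively (a sketch only, with made-up values), the extra fields attached to each event by the snippet above look like:

```
# Hypothetical values; the keys match the buildEvent snippet above.
nozzle-event-counter: "1042"
firehose-subscription-id: splunk-firehose
uuid: 3f2a9c4e-8d1b-4f6a-9c0d-5e7b2a1f4d8c
```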
6 changes: 3 additions & 3 deletions splunknozzle/config.go
@@ -27,7 +27,7 @@ type Config struct {

SkipSSLCF bool `json:"skip-ssl-cf"`
SkipSSLSplunk bool `json:"skip-ssl-splunk"`
SubscriptionID string `json:"subscription-id"`
SubscriptionID string `json:"firehose-subscription-id"`
KeepAlive time.Duration `json:"keep-alive"`

AddAppInfo string `json:"add-app-info"`
@@ -99,7 +99,7 @@ func NewConfigFromCmdFlags(version, branch, commit, buildos string) *Config {
OverrideDefaultFromEnvar("SKIP_SSL_VALIDATION_CF").Default("false").BoolVar(&c.SkipSSLCF)
kingpin.Flag("skip-ssl-validation-splunk", "Skip cert validation (for dev environments").
OverrideDefaultFromEnvar("SKIP_SSL_VALIDATION_SPLUNK").Default("false").BoolVar(&c.SkipSSLSplunk)
kingpin.Flag("subscription-id", "Id for the subscription.").
kingpin.Flag("firehose-subscription-id", "Id for the subscription.").
OverrideDefaultFromEnvar("FIREHOSE_SUBSCRIPTION_ID").Default("splunk-firehose").StringVar(&c.SubscriptionID)
kingpin.Flag("firehose-keep-alive", "Keep Alive duration for the firehose consumer").
OverrideDefaultFromEnvar("FIREHOSE_KEEP_ALIVE").Default("25s").DurationVar(&c.KeepAlive)
@@ -141,7 +141,7 @@ func NewConfigFromCmdFlags(version, branch, commit, buildos string) *Config {
kingpin.Flag("keep-alive-timer", "Interval used to close and refresh connection to Splunk").
OverrideDefaultFromEnvar("KEEP_ALIVE_TIMER").Default("30s").DurationVar(&c.KeepAliveTimer)

kingpin.Flag("enable-event-tracing", "Enable event trace logging: Adds splunk trace logging fields to events. uuid, subscription-id, nozzle event counter").
kingpin.Flag("enable-event-tracing", "Enable event trace logging: Adds splunk trace logging fields to events. uuid, firehose-subscription-id, nozzle event counter").
OverrideDefaultFromEnvar("ENABLE_EVENT_TRACING").Default("false").BoolVar(&c.TraceLogging)
kingpin.Flag("debug", "Enable debug mode: forward to standard out instead of splunk").
OverrideDefaultFromEnvar("DEBUG").Default("false").BoolVar(&c.Debug)
2 changes: 1 addition & 1 deletion splunknozzle/config_test.go
@@ -171,7 +171,7 @@ var _ = Describe("Config", func() {
"--job-host=nozzle.example.comc",
"--skip-ssl-validation-cf",
"--skip-ssl-validation-splunk",
"--subscription-id=my-nozzlec",
"--firehose-subscription-id=my-nozzlec",
"--firehose-keep-alive=24s",
"--add-app-info=OrgName",
"--ignore-missing-app",
6 changes: 3 additions & 3 deletions testing/integration/testcases/test_nozzle_configurations.py
@@ -21,7 +21,7 @@ def test_search_event_on_splunk_is_not_empty(self, test_env, splunk_logger):
@pytest.mark.Critical
@pytest.mark.parametrize("query_input", [
"index={} cf_app_name=data_gen nozzle-event-counter>0", # nozzle-event-counter should be searchable
"index={} cf_app_name=data_gen subscription-id::splunk-ci", # subscription-id should be searchable
"index={} cf_app_name=data_gen firehose-subscription-id::splunk-ci", # subscription-id should be searchable
"index={} cf_app_name=data_gen uuid::*" # uuid should be searchable
])
def test_enable_event_tracing_is_true(self, test_env, splunk_logger, query_input):
Expand Down Expand Up @@ -109,7 +109,7 @@ def test_search_by_wrong_extra_fields(self, test_env, splunk_logger, query_input

@pytest.mark.Critical
@pytest.mark.parametrize("query_input", [
"index={} cf_app_name=data_gen subscription-id::* event_type=LogMessage"
"index={} cf_app_name=data_gen firehose-subscription-id::* event_type=LogMessage"
])
def test_fields_and_values_in_splunk_event(self, test_env, splunk_logger, query_input):
self.splunk_api = SplunkApi(test_env, splunk_logger)
@@ -125,7 +125,7 @@ def test_fields_and_values_in_splunk_event(self, test_env, splunk_logger, query_
'index': test_env['splunk_index'],
'source': 'compute',
'sourcetype': 'cf:logmessage',
'subscription-id': 'splunk-ci'
'firehose-subscription-id': 'splunk-ci'
}

assert_json_contains(expect_content, last_event, "Event raw data results mismatch")
@@ -71,7 +71,7 @@ def test_search_by_extra_fields(self, query_input, is_result_empty, test_env, sp
@pytest.mark.Critical
@pytest.mark.parametrize("query_input", [
"index={0} test_tag::{1} nozzle-event-counter>0", # nozzle-event-counter should not be searchable
"index={0} test_tag::{1} subscription-id::splunk-ci", # subscription-id should not be searchable
"index={0} test_tag::{1} firehose-subscription-id::splunk-ci", # subscription-id should not be searchable
"index={0} test_tag::{1} uuid::*" # uuid should not be searchable
])
def test_enable_event_tracing_is_false(self, test_env, query_input, splunk_logger):