Merge branch 'main' into main
elamaran11 authored Feb 7, 2024
2 parents 5bdcaaf + bf4b538 commit 55006a7
Showing 15 changed files with 393 additions and 24 deletions.
Binary file added docs/patterns/images/ADOT_container_logs.png
Binary file added docs/patterns/images/logs-fargate-fluentbit.png
@@ -0,0 +1,84 @@
# Single Cluster Open Source Observability - Container Logs Collection

## Objective

Following the [announcement](https://aws.amazon.com/about-aws/whats-new/2023/11/logs-support-aws-distro-opentelemetry/) of logs support in AWS Distro for OpenTelemetry (ADOT), this pattern demonstrates how to use the _New EKS Cluster Open Source Observability Accelerator_ to forward container logs to Amazon CloudWatch using the ADOT container logs collector.

## Prerequisites

Ensure that you have installed the following tools on your machine.

1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [kubectl](https://kubernetes.io/docs/tasks/tools/)
3. [cdk](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)
4. [npm](https://docs.npmjs.com/cli/v8/commands/npm-install)
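As a quick sanity check before deploying (a minimal sketch, not part of the original pattern), you can verify that all four CLIs are resolvable on your `PATH`:

```python
import shutil

# Tools required by the pattern; each name is the executable expected on PATH.
REQUIRED_TOOLS = ["aws", "kubectl", "cdk", "npm"]

def check_tools(tools):
    """Return a dict mapping each tool name to its resolved path, or None if missing."""
    return {tool: shutil.which(tool) for tool in tools}

if __name__ == "__main__":
    for tool, path in check_tools(REQUIRED_TOOLS).items():
        print(f"{tool}: {path or 'NOT FOUND'}")
```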

## Deploying

Please follow the _Deploying_ instructions of the [New EKS Cluster Open Source Observability Accelerator](./single-new-eks-opensource-observability.md) pattern, except for step 7, where you need to replace "context" in `~/.cdk.json` with the following:

```typescript
"context": {
"fluxRepository": {
"name": "grafana-dashboards",
"namespace": "grafana-operator",
"repository": {
"repoUrl": "https://github.com/aws-observability/aws-observability-accelerator",
"name": "grafana-dashboards",
"targetRevision": "main",
"path": "./artifacts/grafana-operator-manifests/eks/infrastructure"
},
"values": {
"GRAFANA_CLUSTER_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/cluster.json",
"GRAFANA_KUBELET_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/kubelet.json",
"GRAFANA_NSWRKLDS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/namespace-workloads.json",
"GRAFANA_NODEEXP_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodeexporter-nodes.json",
"GRAFANA_NODES_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodes.json",
"GRAFANA_WORKLOADS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/workloads.json"
},
"kustomizations": [
{
"kustomizationPath": "./artifacts/grafana-operator-manifests/eks/infrastructure"
}
]
},
"adotcontainerlogs.pattern.enabled": true
}
```

!!! warning
    This scenario might need a larger worker node for the pod.


Once you have completed the rest of the _Deploying_ steps, you can move on to deploying the Nginx workload.

## Viewing Logs in CloudWatch Log Groups and Logs Insights

Navigate to CloudWatch, then go to "Log groups".

Search for the log group named "/aws/eks/single-new-eks-mixed-observability-accelerator" and open it.

You will see log streams created using the node names.

![ADOT_container_logs_group](../images/ADOT_container_logs_group.png)

Open a log stream to view the logs forwarded by the container logs collector to CloudWatch.

![ADOT_container_logs](../images/ADOT_container_logs.png)

Navigate to CloudWatch, then go to "Logs Insights".

In the dropdown, select the log group named "/aws/eks/single-new-eks-mixed-observability-accelerator" and run a query.
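To get started, a minimal illustrative query (not part of the pattern itself) that returns the 20 most recent log events is:

```
fields @timestamp, @message
| sort @timestamp desc
| limit 20
```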

![ADOT_container_logs_insights](../images/ADOT_container_logs_insights.png)

Then you can view the results of your query:

![ADOT_container_logs_insights](../images/ADOT_container_logs_insights_results.png)

## Teardown

You can teardown the whole CDK stack with the following command:

```bash
make pattern single-new-eks-opensource-observability destroy
```
@@ -194,8 +194,10 @@ You should now see a new dashboard named `Java/JMX`, under `Observability Accele

## Viewing Logs

By default, we deploy a FluentBit daemon set in the cluster to collect worker logs for all namespaces. Logs are collected and exported to Amazon CloudWatch Logs, which enables you to centralize the logs from all of your systems, applications,
and AWS services that you use, in a single, highly scalable service.
Amazon EKS on Fargate offers a built-in log router based on Fluent Bit. This means that you don't explicitly run a Fluent Bit container as a sidecar; Amazon runs it for you. All you have to do is configure the log router, which happens through a dedicated [`ConfigMap`](../../../lib/common/resources/fluent-bit/fluent-bit-fargate-config.ytpl). Logs are collected and exported to Amazon CloudWatch Logs, which enables you to centralize the logs from all of your systems, applications, and AWS services in a single, highly scalable service. By default, the logs are exported to the us-east-1 Region, but you can modify the `ConfigMap` for your Region of choice. At least one supported `OUTPUT` plugin has to be provided in the `ConfigMap` to enable logging, and you can change the destination from the default Amazon CloudWatch Logs to Amazon OpenSearch Service or Amazon Kinesis Data Firehose. Read more about [EKS Fargate logging](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html).

![fargate-fluentbit](../images/logs-fargate-fluentbit.png)

## Teardown

42 changes: 42 additions & 0 deletions lib/common/resources/fluent-bit/fluent-bit-fargate-config.ytpl
@@ -0,0 +1,42 @@
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  flb_log_cw: "{{enableFlbProcessLogs}}" # Set to true to ship Fluent Bit process logs to CloudWatch.
  filters.conf: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
    [FILTER]
        Name parser
        Match *
        Key_name log
        Parser crio
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match kube.*
        region {{awsRegion}}
        log_group_name {{logGroupName}}
        log_stream_prefix {{log_stream_prefix}}
        auto_create_group true
  parsers.conf: |
    [PARSER]
        Name crio
        Format Regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
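The `crio` parser regex in the ConfigMap above can be sanity-checked outside the cluster. The following Python sketch applies the same pattern to a CRI-formatted line (the sample log line is made up for illustration):

```python
import re

# Same pattern as the `crio` [PARSER] stanza above, in Python named-group syntax.
CRIO_PATTERN = re.compile(
    r"^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>P|F) (?P<log>.*)$"
)

# Hypothetical CRI log line: timestamp, stream, partial/full flag, message.
sample = "2024-02-07T12:34:56.789012345-05:00 stdout F hello from a container"

match = CRIO_PATTERN.match(sample)
assert match is not None
print(match.group("stream"))  # stdout
print(match.group("logtag"))  # F
print(match.group("log"))     # hello from a container
```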
110 changes: 109 additions & 1 deletion lib/common/resources/otel-collector-config.yml
@@ -9,7 +9,7 @@ metadata:
  namespace: "{{namespace}}"
spec:
  mode: "{{deploymentMode}}"
  image: public.ecr.aws/aws-observability/aws-otel-collector:v0.33.1
  image: public.ecr.aws/aws-observability/aws-otel-collector:v0.37.0
  resources:
    limits:
      cpu: "1"
@@ -18,6 +18,22 @@ spec:
      cpu: "1"
      memory: "2Gi"
  serviceAccount: adot-collector
  podSecurityContext:
    runAsGroup: 0
    runAsUser: 0
  volumeMounts:
    - name: varlogpods
      mountPath: /var/log/pods
      readOnly: true
  volumes:
    - name: varlogpods
      hostPath:
        path: /var/log/pods
  env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  config: |
    receivers:
      prometheus:
@@ -1740,13 +1756,101 @@ spec:
              source_labels:
                - __meta_kubernetes_pod_phase
    {{ stop enableIstioMonJob }}
    {{ start enableAdotContainerLogsReceiver }}
      filelog:
        include: [ /var/log/pods/*/*/*.log ]
        include_file_name: false
        include_file_path: true
        start_at: end
        operators:
          # Find out which format is used by kubernetes
          - type: router
            id: get-format
            routes:
              - output: parser-docker
                expr: 'body matches "^\\{"'
              - output: parser-crio
                expr: 'body matches "^[^ Z]+ "'
              - output: parser-containerd
                expr: 'body matches "^[^ Z]+Z"'
          # Parse CRI-O format
          - type: regex_parser
            id: parser-crio
            regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
            output: extract_metadata_from_filepath
            timestamp:
              parse_from: attributes.time
              layout_type: gotime
              layout: '2006-01-02T15:04:05.999999999Z07:00'
          # Parse CRI-Containerd format
          - type: regex_parser
            id: parser-containerd
            regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
            output: extract_metadata_from_filepath
            timestamp:
              parse_from: attributes.time
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          # Parse Docker format
          - type: json_parser
            id: parser-docker
            output: extract_metadata_from_filepath
            timestamp:
              parse_from: attributes.time
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          - type: move
            from: attributes.log
            to: body
          # Extract metadata from file path
          - type: regex_parser
            id: extract_metadata_from_filepath
            regex: '^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]{36})\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log$'
            parse_from: attributes["log.file.path"]
            cache:
              size: 128 # default maximum amount of Pods per Node is 110
          # Rename attributes
          - type: move
            from: attributes.stream
            to: attributes["log.iostream"]
          - type: move
            from: attributes.container_name
            to: resource["k8s.container.name"]
          - type: move
            from: attributes.namespace
            to: resource["k8s.namespace.name"]
          - type: move
            from: attributes.pod_name
            to: resource["k8s.pod.name"]
          - type: move
            from: attributes.restart_count
            to: resource["k8s.container.restart_count"]
          - type: move
            from: attributes.uid
            to: resource["k8s.pod.uid"]
    {{ stop enableAdotContainerLogsReceiver }}
    processors:
      k8sattributes:
      batch:
    exporters:
      prometheusremotewrite:
        endpoint: "{{remoteWriteEndpoint}}"
        auth:
          authenticator: sigv4auth
      logging:
        loglevel: info
    {{ start enableAdotContainerLogsExporter }}
      awscloudwatchlogs:
        log_group_name: "{{logGroupName}}"
        log_stream_name: "{{logStreamName}}"
        region: "{{awsRegion}}"
        log_retention: {{logRetentionDays}}
        raw_log: false
    {{ stop enableAdotContainerLogsExporter }}
    extensions:
      sigv4auth:
        region: "{{awsRegion}}"
@@ -1762,6 +1866,10 @@ spec:
      metrics:
        receivers: [prometheus]
        exporters: [logging, prometheusremotewrite]
      logs:
        receivers: [filelog]
        processors: [batch, k8sattributes]
        exporters: [awscloudwatchlogs]
    {{ start enableAdotMetricsCollectionTelemetry }}
      telemetry:
        metrics:
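The `extract_metadata_from_filepath` operator in the filelog receiver above derives Kubernetes resource attributes from the kubelet's pod-log directory layout (`/var/log/pods/<namespace>_<pod>_<uid>/<container>/<restart>.log`). A minimal Python sketch (the path is made up for illustration) shows what the regex extracts:

```python
import re

# Same pattern as the extract_metadata_from_filepath operator above.
FILEPATH_PATTERN = re.compile(
    r"^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_"
    r"(?P<uid>[a-f0-9\-]{36})\/(?P<container_name>[^\._]+)\/"
    r"(?P<restart_count>\d+)\.log$"
)

# Hypothetical path following the kubelet layout described above.
path = "/var/log/pods/default_nginx-demo_0b1c2d3e-4f5a-6b7c-8d9e-0f1a2b3c4d5e/nginx/0.log"

m = FILEPATH_PATTERN.match(path)
assert m is not None
print(m.group("namespace"), m.group("pod_name"),
      m.group("container_name"), m.group("restart_count"))
```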
@@ -39,7 +39,7 @@ export default class SingleNewEksAWSNativeFargateobservabilityConstruct {
    // Define fargate cluster provider and pass the profile options
    const fargateClusterProvider : blueprints.FargateClusterProvider = new blueprints.FargateClusterProvider({
        fargateProfiles,
        version: eks.KubernetesVersion.of("1.27")
        version: eks.KubernetesVersion.of("1.28")
    });

    const certManagerAddOnProps : blueprints.CertManagerAddOnProps = {
@@ -50,7 +50,7 @@ export default class SingleNewEksAWSNativeFargateobservabilityConstruct {
    };

    const coreDnsAddOnProps : blueprints.CoreDnsAddOnProps = {
        version: "v1.10.1-eksbuild.1",
        version: "v1.10.1-eksbuild.6",
        configurationValues: {
            computeType: "Fargate"
        }
@@ -64,7 +64,6 @@ export default class SingleNewEksAWSNativeFargateobservabilityConstruct {
        .withCertManagerProps(certManagerAddOnProps)
        .withCoreDnsProps(coreDnsAddOnProps)
        .enableFargatePatternAddOns()
        .enableControlPlaneLogging()
        .clusterProvider(fargateClusterProvider)
        .addOns(...addOns)
        .build(scope, stackId);
@@ -0,0 +1,79 @@
import 'source-map-support/register';
import * as blueprints from '@aws-quickstart/eks-blueprints';
import { KubectlProvider, ManifestDeployment } from "@aws-quickstart/eks-blueprints/dist/addons/helm-addon/kubectl-provider";
import { loadYaml, readYamlDocument } from '@aws-quickstart/eks-blueprints/dist/utils';

/**
 * Configuration options for the Fluent Bit ConfigMap.
 */
export interface FluentBitConfigMapProps {

    /**
     * Region to send CloudWatch logs to.
     */
    awsRegion: string;

    /**
     * Log group name in CloudWatch.
     */
    logGroupName: string;

    /**
     * Prefix for the log streams.
     */
    logStreamPrefix: string;

    /**
     * Enable logs from the Fluent Bit process itself.
     */
    enableFlbProcessLogs?: boolean;
}

/**
 * Default props for the add-on.
 */
const defaultProps: FluentBitConfigMapProps = {
    awsRegion: "us-east-1",
    logGroupName: "fargate-observability",
    logStreamPrefix: "from-fluent-bit-",
    enableFlbProcessLogs: false
};

/**
 * Creates the 'aws-observability' namespace and a configurable ConfigMap
 * to enable the Fargate built-in log router based on Fluent Bit.
 * https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
 */
export class FluentBitConfigMap implements blueprints.ClusterAddOn {
    id?: string | undefined;
    readonly props: FluentBitConfigMapProps;

    constructor(props?: FluentBitConfigMapProps) {
        this.props = { ...defaultProps, ...props };
    }

    deploy(clusterInfo: blueprints.ClusterInfo): void {
        const doc = readYamlDocument(__dirname + '/../common/resources/fluent-bit/fluent-bit-fargate-config.ytpl');
        const manifest = doc.split("---").map(e => loadYaml(e));

        const values: blueprints.Values = {
            awsRegion: this.props.awsRegion,
            logGroupName: this.props.logGroupName,
            log_stream_prefix: this.props.logStreamPrefix,
            enableFlbProcessLogs: this.props.enableFlbProcessLogs,
        };

        const manifestDeployment: ManifestDeployment = {
            name: 'aws-logging',
            namespace: 'aws-observability',
            manifest,
            values
        };

        const kubectlProvider = new KubectlProvider(clusterInfo);
        kubectlProvider.addManifest(manifestDeployment);
    }
}