---
title: Scaling PCF Log Search
owner: PCF Metrics
---
This topic describes when and how to scale Pivotal Cloud Foundry (PCF) Log Search.
<strong>Prerequisite</strong>: The procedures in this topic assume that you are logged into Kibana. For more information, see [Log in to Kibana](./using.html#login).
## <a id="need"></a>Determine if You Need to Scale
Follow the steps below to view a histogram of your recent logs and determine whether all of them are being indexed.
1. Enter `*` in the Kibana search bar to search for all logs.
1. Examine the histogram for a gap in recent logs, as shown in the image below. A gap means that Log Search cannot keep up with indexing the incoming data. To address this issue, you can [scale up your cluster](#scale-up) to handle the load, [reduce resource usage](#resource-usage), or both. For a scripted way to check how far indexing has fallen behind, see the sketch after the image.
![Falling behind on log indexing](images/falling-behind-on-log-indexing.png)
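
As an alternative to eyeballing the histogram, you can query Elasticsearch directly and compare the timestamp of the most recently indexed log entry with the current time. The following is a minimal sketch, not part of the product: it assumes Elasticsearch is reachable at `localhost:9200` and that logs are stored in `logstash-*` indices, so adjust the endpoint and index pattern to match your deployment.

```python
# check_indexing_lag.py -- minimal sketch; the endpoint and index pattern below are
# assumptions, adjust them to match your Log Search deployment.
from datetime import datetime, timezone

import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX_PATTERN = "logstash-*"       # assumed Log Search index pattern

# Fetch the single most recent document, sorted by @timestamp descending.
query = {
    "size": 1,
    "sort": [{"@timestamp": {"order": "desc"}}],
    "_source": ["@timestamp"],
}
resp = requests.get(f"{ES_URL}/{INDEX_PATTERN}/_search", json=query)
resp.raise_for_status()

hits = resp.json()["hits"]["hits"]
if not hits:
    print("No documents found: nothing has been indexed yet.")
else:
    latest = datetime.fromisoformat(hits[0]["_source"]["@timestamp"].replace("Z", "+00:00"))
    lag = datetime.now(timezone.utc) - latest
    print(f"Most recent indexed log entry: {latest.isoformat()}")
    print(f"Indexing lag: {lag.total_seconds():.0f} seconds")
```

A steadily growing lag corresponds to the gap you see in the Kibana histogram.
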
## <a id="scale-up"></a>Scale Up Your Cluster
Follow these steps to scale up your PCF Log Search cluster to handle more incoming data.
1. On the **Log Search** tile in Ops Manager, click the **Status** tab.
![Log Search Status tab](images/status-tab-logsearch.png)
1. Examine the **CPU** and **PERS DISK** usage for the **Elasticsearch Data nodes** job. If the load is high, especially if **CPU** > **50%** or **PERS DISK** > **70%**, follow the steps below:
1. In Kibana, enter a search for `*`.
1. Identify the log volume per second by changing the interval to **Second** and examining the height of the bars. To estimate this rate outside of Kibana, see the sketch after the table below.
![volume](images/volume.png)
1. Navigate to the PCF Log Search tile and click **Resource Config**.
1. Edit the values for **Elasticsearch Data nodes** and **Log parser** in the **Resource Config** section of the PCF Log Search tile. Use the table below as a guide. As a general rule, use more Log parser nodes than Elasticsearch Data nodes.
<table border='1' class="nice">
<thead>
<tr>
<th>Log volume</th>
<th>Elasticsearch Data nodes</th>
<th>Log parser nodes</th>
</tr>
</thead>
<tr>
<td>200 / second</td>
<td>2 x medium.mem (~8GB RAM) with 50GB persistent disks</td>
<td>2 x small (1 CPU)</td>
</tr>
<tr>
<td>2,000 / second</td>
<td>~10 x xlarge.mem (~32GB RAM) with 200GB persistent disks</td>
<td>~15 x medium (2 CPU)</td>
</tr>
<tr>
<td>10,000 / second</td>
<td>~30 x xlarge.mem (~32GB RAM) with 500GB persistent disks</td>
<td>~40 x medium (2 CPU)</td>
</tr>
</table>
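
If you prefer to measure the ingest rate outside of Kibana, you can count the documents indexed over the last minute directly in Elasticsearch. This is a minimal sketch, reusing the assumed `localhost:9200` endpoint and `logstash-*` index pattern from the earlier example:

```python
# estimate_ingest_rate.py -- minimal sketch; the endpoint and index pattern are assumptions.
import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX_PATTERN = "logstash-*"       # assumed Log Search index pattern

# Count documents whose @timestamp falls within the last minute.
query = {"query": {"range": {"@timestamp": {"gte": "now-1m", "lt": "now"}}}}
resp = requests.get(f"{ES_URL}/{INDEX_PATTERN}/_count", json=query)
resp.raise_for_status()

count = resp.json()["count"]
print(f"Indexed in the last minute: {count} log entries (~{count / 60:.0f} per second)")
```

Note that if indexing is already falling behind, this measures the rate at which logs are indexed rather than the rate at which they arrive, so treat the result as a lower bound when choosing a row in the table above.
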
## <a id="resource-usage"></a>Reduce Resource Usage
PCF produces a large volume of logs. As a result, Log Search can consume significant resources. Use the following strategies to reduce the resources consumed by Log Search.
### Reduce the Log Retention Period
Shorter retention periods dramatically reduce the disk space that Log Search requires. To see how much space your current indices use, see the sketch after the steps below. Follow these steps to reduce the log retention period:
1. Navigate to the **Settings** section of the PCF Log Search tile.
1. Enter a lower value for **Log retention period**.
1. Click **Save** and **Apply Changes**.
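
If your deployment stores logs in daily `logstash-*` indices (an assumption in this sketch, along with the `localhost:9200` endpoint), you can list the size of each index to estimate how much disk space a shorter retention period would free:

```python
# index_sizes.py -- minimal sketch; the endpoint and index pattern are assumptions.
import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint

# The _cat/indices API returns one row per index; format=json makes it easy to parse
# and bytes=b reports store.size as a plain number of bytes.
resp = requests.get(
    f"{ES_URL}/_cat/indices/logstash-*",
    params={"format": "json", "bytes": "b", "h": "index,docs.count,store.size"},
)
resp.raise_for_status()

for row in sorted(resp.json(), key=lambda r: r["index"]):
    size_gb = int(row["store.size"]) / 1024 ** 3
    print(f"{row['index']}: {row['docs.count']} docs, {size_gb:.2f} GB")
```
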
### Index Less Data
If you [attached Log Search to the Elastic Runtime firehose](./installing.html#firehose), application logs can make up a large portion of the total logs indexed. Follow these steps to index less data:
1. In Kibana, navigate to the **Visualize** tab.
1. Select **Pie Chart**, and then **From a new search**.
1. Search for `tags:LogMessage`.
1. In the **buckets** section, select **Split Chart**:
1. For **Aggregation**, choose **Terms**.
1. For **Field**, choose **cf.app_id**.
1. (Optional) Increase or decrease the **Size** value to see results for more or fewer apps.
<img src="images/buckets.png" width="300px">
1. Click the play button to update the visualization.
1. Examine the pie chart to identify apps that produce a disproportionately high number of log entries. For a scripted alternative to the pie chart, see the sketch after these steps.
1. Contact the developers of those apps and ask them to reduce the logging verbosity.
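
If you would rather identify the noisiest apps without building a Kibana visualization, a terms aggregation over `cf.app_id` produces the same breakdown as the pie chart. This is a minimal sketch, reusing the assumed endpoint and index pattern from the earlier examples; the `tags:LogMessage` query and the `cf.app_id` field mirror the Kibana steps above, although depending on your index mapping you may need a keyword or `.raw` variant of the field:

```python
# top_logging_apps.py -- minimal sketch; the endpoint and index pattern are assumptions,
# the query string and field name mirror the Kibana steps in this topic.
import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX_PATTERN = "logstash-*"       # assumed Log Search index pattern

# Count LogMessage documents per application, most verbose apps first.
query = {
    "size": 0,
    "query": {"query_string": {"query": "tags:LogMessage"}},
    "aggs": {"apps": {"terms": {"field": "cf.app_id", "size": 10}}},
}
resp = requests.get(f"{ES_URL}/{INDEX_PATTERN}/_search", json=query)
resp.raise_for_status()

for bucket in resp.json()["aggregations"]["apps"]["buckets"]:
    print(f"{bucket['key']}: {bucket['doc_count']} log entries")
```
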