A script for collecting a diagnostic snapshot (support bundle) from each node in a Cassandra based cluster.
# Users: Running the Collector against your Cluster

Download the latest `ds-collector.GENERIC-*.tar.gz` release from the [releases page](https://github.com/datastax/diagnostic-collection/releases).
This `ds-collector*.tar.gz` tarball is then extracted onto a bastion or jumpbox that has access to the nodes in the cluster.

Instructions for running the Collector are found in [`ds-collector/README.md`](https://github.com/datastax/diagnostic-collection/blob/master/ds-collector/README.md).
If you hit any issues, please also read [`ds-collector/TROUBLESHOOTING.md`](https://github.com/datastax/diagnostic-collection/blob/master/ds-collector/TROUBLESHOOTING.md).

These instructions are also bundled into the built collector tarball.
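To make the download-and-extract step concrete, here is a minimal sketch. The release filename pattern and the `collector` directory name follow this repository's own docs; the mock tarball created at the top is purely hypothetical, so the snippet is self-contained and runnable without a real release.

```shell
# Stand-in for a downloaded release (real releases come from the GitHub releases page)
mkdir -p collector && echo "issueId=TEST-123" > collector/collector.conf
tar -czf ds-collector.GENERIC-0.0.0.tar.gz collector
rm -rf collector

# The actual steps on the bastion/jumpbox:
tar -xf ds-collector.GENERIC-*.tar.gz
cd collector
cat collector.conf   # review/edit the configuration before running anything
```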
# Developers: Building from source code

The code for the collector script is in the _ds-collector/_ directory. This top-level directory contains the `Makefile` for developers wishing to build the ds-collector bundle for themselves.

The _ds-collector/_ code gets built into a `ds-collector*.tar.gz` tarball.
## Pre-configuring the Collector Configuration
When building the collector, it can be instructed to pre-configure the `collector.conf` by setting the following variables:

```bash
export is_k8s=true
```

If no variables are set, then the collector will be pre-configured to assume Apache Cassandra running on hosts which can be accessed via SSH.
## Building the Collector

Build the collector using the following make command syntax. You will need make and Docker.
```bash
make
```

This will generate a _.tar.gz_ tarball with the `issueId` set in the packaged configuration file. The archive will be named in the format `ds-collector.$ISSUE.tar.gz`.
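As a sketch of that naming convention: the `TEST-123` issue id below is a made-up placeholder, and the exact Makefile variable wiring is an assumption — this README only states the `ds-collector.$ISSUE.tar.gz` format.

```shell
# Hypothetical issue id; the build names the archive ds-collector.$ISSUE.tar.gz
ISSUE="TEST-123"
archive="ds-collector.${ISSUE}.tar.gz"
echo "${archive}"
```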
37
52
38
53
39
-
# Building the Collector with automatic s3 upload ability
54
+
##Building the Collector with automatic s3 upload ability
40
55
41
56
If the collector is built with the following variables defined, all collected diagnostic snapshots will be encrypted and uploaded to a specific AWS S3 bucket. Encryption will use a one-off built encryption key that is created locally.
In addition to the _.tar.gz_ tarball, an encryption key is now generated. The encryption key must be placed in the same directory as the extracted collector tarball for it to execute. If the tarball is being sent to someone else, it is recommended to send the encryption key via a different (and preferably secured) medium.
## Storing Encryption keys within the AWS Secrets Manager

The collector build process also supports storing and retrieving keys from the AWS Secrets Manager. To use this feature, two additional environment variables must be provided before the script is run.
When the collector is built, it will also upload the generated encryption key to the AWS Secrets Manager.

Please be careful with the encryption keys. They should only be stored in a secure vault (such as the AWS Secrets Manager), and temporarily on the jumpbox or bastion where and while the collector script is being executed. The encryption key ensures the diagnostic snapshots are secured when transferred over the network and stored in the AWS S3 bucket.
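The handling advice above can be sketched as basic file hygiene on the bastion. The `*_secret.key` filename pattern matches the collector docs; the specific name and key contents here are hypothetical stand-ins.

```shell
# Hypothetical key file standing in for a generated encryption key
echo "dummy-key-material" > TEST-123_secret.key
chmod 600 TEST-123_secret.key   # readable only by the current user

# ... run the collector while the key is present ...

# Remove the local copy once collection and upload are finished
rm TEST-123_secret.key
```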
---

The Diagnostic Collector bundle is used to collect diagnostic snapshots (support bundles) over all nodes in an Apache Cassandra, or Cassandra based product, cluster.

It can be run on a Linux or Mac server (jumpbox or bastion) that has ssh/docker/k8s access to the nodes in the cluster.

Download the latest `ds-collector.GENERIC-*.tar.gz` release from the [releases page](https://github.com/datastax/diagnostic-collection/releases).

This `ds-collector*.tar.gz` tarball is then extracted on the jumpbox or bastion.
# Quick Start
Just do it, the following instructions work for most people.

    tar -xvf ds-collector.*.tar.gz
    cd collector

    # go through the configuration file, set all parameters as suited