# Local Development

This docker-compose file will set up the following environment:

1. 1 M3DB node (which also acts as an ETCD seed)
2. 1 M3Coordinator node
3. 1 Grafana node (with a pre-configured Prometheus source)
4. 1 Prometheus node that scrapes the M3DB/M3Coordinator nodes and writes the metrics to M3Coordinator

The environment variables that let you configure this setup are:

- `USE_MULTI_DB_NODES=true`: uses 3 database nodes instead of 1 for the cluster.
- `USE_JAEGER=true`: starts Jaeger so you can look at traces emitted by the M3 services.
- `USE_PROMETHEUS_HA=true`: sends data to M3 from two Prometheus instances, replicating an HA Prometheus deployment writing to M3.
- `USE_AGGREGATOR=true`: uses a dedicated aggregator to aggregate metrics.
- `USE_AGGREGATOR_HA=true`: uses two dedicated aggregators for HA metric aggregation.
- `USE_MULTIPROCESS_COORDINATOR=true`: uses the multi-process coordinator with the default number of processes configured.

## Usage

Use the `start_m3.sh` and `stop_m3.sh` scripts. This requires a successful run of `make m3dbnode` from the project root first. Example invocations are sketched in the Examples section at the end of this README.

## Grafana

Use Grafana by navigating to `http://localhost:3000` and using `admin` for both the username and password. The M3DB dashboard should already be populated and working.

## Jaeger

To start Jaeger, set the environment variable `USE_JAEGER` to `true` when you run `start_m3.sh`:

```
USE_JAEGER=true ./start_m3.sh
```

To modify the sampling rate and other tracing settings, edit the following in your `m3dbnode.yml` file under `db`:

```yaml
tracing:
  backend: jaeger
  jaeger:
    reporter:
      localAgentHostPort: jaeger:6831
    sampler:
      type: const
      param: 1
```

Use Jaeger by navigating to `http://localhost:16686`.

## Prometheus

Use Prometheus by navigating to `http://localhost:9090`.

## Increasing Load

Load can easily be increased by modifying the `prometheus.yml` file to reduce the scrape interval to `1s` (see the Examples section below for a sketch).

## Containers Hanging / Unresponsive

Running the entire stack can be resource intensive. If the containers are unresponsive, try increasing the number of cores and the amount of memory that the Docker daemon is allowed to use.
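Before resizing the daemon, it can help to see which container is consuming the resources. This is plain Docker tooling rather than anything specific to this stack:

```
# Live CPU and memory usage per container
docker stats
```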
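## Examples

The options described above can be combined on a single `start_m3.sh` run. A minimal sketch, assuming the environment variables compose independently (their descriptions suggest they do, but the combinations below are illustrative rather than tested):

```
# Default single-node stack (requires `make m3dbnode` to have been run first)
./start_m3.sh

# Three DB nodes plus a dedicated aggregator
USE_MULTI_DB_NODES=true USE_AGGREGATOR=true ./start_m3.sh

# Tear the stack down
./stop_m3.sh
```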
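For the Increasing Load section, reducing the scrape interval looks roughly like the snippet below. The `global` block is standard Prometheus configuration; whether the interval lives under `global` or under an individual `scrape_configs` entry depends on the `prometheus.yml` shipped in this directory:

```yaml
global:
  scrape_interval: 1s
  evaluation_interval: 1s
```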