github.com/grafana/pyroscope@v1.18.0/examples/language-sdk-instrumentation/rust/README.md

## Continuous Profiling for Rust applications

### Profiling a Rust Rideshare App with Pyroscope

> [!NOTE]
> For documentation on Pyroscope's Rust integration, refer to the [Rust push mode](https://grafana.com/docs/pyroscope/latest/configure-client/language-sdks/rust/) documentation.

## Background

This example shows a simplified, basic use case of Pyroscope: a "ride share" company whose application has three endpoints, found in `main.rs`:

- `/bike`: calls the `order_bike(search_radius)` function to order a bike
- `/car`: calls the `order_car(search_radius)` function to order a car
- `/scooter`: calls the `order_scooter(search_radius)` function to order a scooter

The example also simulates running three distinct servers in three different regions (via [docker-compose.yml](https://github.com/grafana/pyroscope/blob/main/examples/language-sdk-instrumentation/rust/rideshare/docker-compose.yml)):

- us-east
- eu-north
- ap-south

Pyroscope lets you tag your data in a way that is meaningful to you. In this case, there are two natural divisions, so the data is tagged to represent them:

- `region`: statically tags the region of the server running the code
- `vehicle`: dynamically tags the endpoint (similar to how one might tag a controller)

## Tagging static region

Tagging something static, like the `region`, can be done using the `PyroscopeAgentBuilder#tags` method in the initialization code in the `main` function:

```rust
let agent = PyroscopeAgent::builder(server_address, app_name.to_owned())
    .backend(pprof_backend(PprofConfig::new().sample_rate(100)))
    .tags(vec![("region", &region)])
    .build()?;
```

## Tagging dynamically within functions

Tagging something more dynamic can be done using the `PyroscopeAgent#tag_wrapper` method.
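`tag_wrapper` is only available on a *running* agent: `build()?` returns an idle agent, and profiling begins when it is started. A minimal sketch of the lifecycle, assuming the `pyroscope` and `pyroscope_pprofrs` crates and a Pyroscope server at the given address (the address and tag values here are illustrative):

```rust
use pyroscope::PyroscopeAgent;
use pyroscope_pprofrs::{pprof_backend, PprofConfig};

fn main() -> Result<(), pyroscope::error::PyroscopeError> {
    let server_address = std::env::var("PYROSCOPE_SERVER_ADDRESS")
        .unwrap_or_else(|_| "http://localhost:4040".to_owned());

    // Build an idle agent with a static `region` tag.
    let agent = PyroscopeAgent::builder(server_address, "rust-ride-sharing-app".to_owned())
        .backend(pprof_backend(PprofConfig::new().sample_rate(100)))
        .tags(vec![("region", "us-east")])
        .build()?;

    // start() consumes the idle agent and returns a running one;
    // tag_wrapper() is only available in this running state.
    let agent_running = agent.start()?;

    // ... serve requests here, using tag_wrapper() as shown below ...

    // stop() flushes outstanding profile data and returns an idle agent,
    // which can then be shut down cleanly.
    let agent_ready = agent_running.stop()?;
    agent_ready.shutdown();
    Ok(())
}
```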
For example, you'd use code like this for the `vehicle` tag:

```rust
let (add_tag, remove_tag) = agent_running.tag_wrapper();
let add = Arc::new(add_tag);
let remove = Arc::new(remove_tag);
let car = warp::path("car").map(move || {
    add("vehicle".to_string(), "car".to_string());
    order_car(3);
    remove("vehicle".to_string(), "car".to_string());
    "Car ordered"
});
```

This block does the following:

1. Adds the label `vehicle=car`
2. Executes the `order_car` function
3. Removes the label `vehicle=car`

## Resulting flame graph / performance results from the example

### Running the example

To run the example, use the following commands:

```
# Pull latest pyroscope and grafana images:
docker pull grafana/pyroscope:latest
docker pull grafana/grafana:latest

# Run the example project:
docker-compose up --build

# Reset the database (if needed):
# docker-compose down
```

This example runs all the code mentioned above and also sends some mock load to the three servers and their respective endpoints. If you select `rust-ride-sharing-app` from the dropdown, you should see a flame graph like the one below. Wait 20-30 seconds for the flame graph to update, then click the refresh button to see three functions at the bottom of the flame graph taking CPU resources _proportional_ to the _size_ of their respective `search_radius` parameters.

[//]: # (http://localhost:3000/a/grafana-pyroscope-app/profiles-explorer?searchText=&panelType=time-series&layout=grid&hideNoData=off&explorationType=flame-graph&var-serviceName=rust-ride-sharing-app&var-profileMetricId=process_cpu:cpu:nanoseconds:cpu:nanoseconds&var-dataSource=local-pyroscope&var-groupBy=all&var-filters=&maxNodes=16384&from=now-5m&to=now&var-filtersBaseline=&var-filtersComparison=)

## Where's the performance bottleneck?
To analyze a profile output by your application, take note of the _largest node_, which is where your application is spending the most resources. In this case, it happens to be the `order_car` function.

The Pyroscope package lets you investigate further as to _why_ the `order_car()` function is problematic. Tagging both `region` and `vehicle` allows us to test two good hypotheses:

- Something is wrong with the `/car` endpoint code
- Something is wrong with one of our regions

To analyze this, select one or more tags on the "Labels" page.

## Narrowing in on the Issue Using Tags

Since you know there is an issue with the `order_car` function, select that tag. After inspecting multiple `region` tags, the timeline shows that there is an issue with the `eu-north` region, where it alternates between high-CPU and low-CPU periods.

Note that the `mutex_lock()` function is consuming almost 70% of CPU resources during this time period.

## Visualizing the Diff Between Two Flame Graphs

While the difference _in this case_ is stark enough to see in the comparison view, sometimes the diff between two flame graphs is better visualized by overlaying them on each other. Without changing any parameters, you can select the diff view tab and see the difference represented in a color-coded diff flame graph.

### More use cases

We have been beta testing this feature with several different companies; some of the ways we've seen companies tag their performance data include:

- Tagging Kubernetes attributes
- Tagging controllers
- Tagging regions
- Tagging jobs from a queue
- Tagging commits
- Tagging staging / production environments
- Tagging different parts of their testing suites
- Etc...
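Several of these cases can reuse the static-tag pattern from this example directly. A hedged sketch, assuming the same builder API as above (the tag keys and values here are illustrative, not part of this example's code):

```rust
let agent = PyroscopeAgent::builder(server_address, app_name.to_owned())
    .backend(pprof_backend(PprofConfig::new().sample_rate(100)))
    .tags(vec![
        ("region", "eu-north"),                // deployment region
        ("environment", "staging"),            // staging vs. production
        ("version", env!("CARGO_PKG_VERSION")), // or a git SHA injected at build time
    ])
    .build()?;
```

Static tags like these let you slice the same profile data by region, environment, or release when comparing flame graphs later.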
### Future Roadmap

We would love for you to try out this example and see what ways you can adapt it to your Rust application. Continuous profiling has become an increasingly popular tool for monitoring and debugging performance issues (arguably the fourth pillar of observability).

We'd love to continue improving our Rust integrations, so we would love to hear what features _you would like to see_.