## Continuous Profiling for Golang applications
### Profiling a Golang Rideshare App with Pyroscope
![golang_example_architecture_new_00](https://user-images.githubusercontent.com/23323466/173370161-f8ba5c0a-cacf-4b3b-8d84-dd993019c486.gif)

Note: For documentation on Pyroscope's golang integration, visit our website for [golang push mode](https://pyroscope.io/docs/golang/) or [golang pull mode](https://pyroscope.io/docs/golang-pull-mode/).
## Background
In this example we show a simplified, basic use case of Pyroscope. We simulate a "ride share" company which has three endpoints found in `main.go`:
- `/bike`    : calls the `OrderBike(search_radius)` function to order a bike
- `/car`     : calls the `OrderCar(search_radius)` function to order a car
- `/scooter` : calls the `OrderScooter(search_radius)` function to order a scooter

We also simulate running 3 distinct servers in 3 different regions (via [docker-compose.yml](https://github.com/pyroscope-io/pyroscope/blob/main/examples/golang-push/rideshare/docker-compose.yml)):
- us-east
- eu-north
- ap-south

One of the most useful capabilities of Pyroscope is the ability to tag your data in a way that is meaningful to you. In this case, we have two natural divisions, and so we "tag" our data to represent those:
- `region`: statically tags the region of the server running the code
- `vehicle`: dynamically tags the endpoint (similar to how one might tag a controller)


## Tagging static region
Tagging something static, like the `region`, can be done in the initialization code in the `main()` function:
```
	pyroscope.Start(pyroscope.Config{
		ApplicationName: "ride-sharing-app",
		ServerAddress:   serverAddress,
		Logger:          pyroscope.StandardLogger,
		Tags:            map[string]string{"region": os.Getenv("REGION")},
	})
```
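
For context, a fuller `main()` might look like the sketch below. This is illustrative only: the environment variable name, the port, and the placeholder handlers are assumptions, and `pyroscope.Start` also returns an error that is worth checking.
```
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/pyroscope-io/client/pyroscope"
)

func main() {
	// Hypothetical env var for illustration; the real example wires the
	// server address and REGION up via docker-compose.
	serverAddress := os.Getenv("PYROSCOPE_SERVER_ADDRESS")

	// Start the profiler once at startup. Checking the returned error
	// surfaces misconfiguration (e.g. a bad server address) early.
	if _, err := pyroscope.Start(pyroscope.Config{
		ApplicationName: "ride-sharing-app",
		ServerAddress:   serverAddress,
		Logger:          pyroscope.StandardLogger,
		Tags:            map[string]string{"region": os.Getenv("REGION")},
	}); err != nil {
		log.Fatalf("error starting pyroscope: %v", err)
	}

	// Register the three endpoints; these handlers are placeholders standing
	// in for the real OrderBike/OrderCar/OrderScooter logic.
	http.HandleFunc("/bike", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ordered bike"))
	})
	http.HandleFunc("/car", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ordered car"))
	})
	http.HandleFunc("/scooter", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ordered scooter"))
	})
	log.Fatal(http.ListenAndServe(":5000", nil))
}
```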

## Tagging dynamically within functions
Tagging something more dynamic, like the `vehicle` tag, can be done inside our utility `FindNearestVehicle()` function using `pyroscope.TagWrapper`:
```
func FindNearestVehicle(search_radius int64, vehicle string) {
	pyroscope.TagWrapper(context.Background(), pyroscope.Labels("vehicle", vehicle), func(ctx context.Context) {
		// Mock "doing work" to find a vehicle
		var i int64 = 0
		start_time := time.Now().Unix()
		for (time.Now().Unix() - start_time) < search_radius {
			i++
		}
	})
}
```

What this block does is:
1. Adds the label `pyroscope.Labels("vehicle", vehicle)`
2. Executes the work inside `FindNearestVehicle()`
3. Removes the label `pyroscope.Labels("vehicle", vehicle)` (behind the scenes) once the block completes
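
One variant worth noting (the signature change here is our own sketch, not the example's code): if the caller already has labels attached to its context, for instance from an outer `TagWrapper` in an HTTP handler, passing that context in instead of `context.Background()` lets the `vehicle` label stack on top of them.
```
// Sketch: same logic as FindNearestVehicle above, but accepting the caller's
// context so any pprof labels already attached to it are kept alongside the
// "vehicle" label added here.
func FindNearestVehicleCtx(ctx context.Context, searchRadius int64, vehicle string) {
	pyroscope.TagWrapper(ctx, pyroscope.Labels("vehicle", vehicle), func(ctx context.Context) {
		// Mock "doing work" to find a vehicle
		var i int64 = 0
		start := time.Now().Unix()
		for (time.Now().Unix() - start) < searchRadius {
			i++
		}
	})
}
```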

## Resulting flamegraph / performance results from the example
### Running the example
To run the example, run the following commands:
```
# Pull latest pyroscope image:
docker pull pyroscope/pyroscope:latest

# Run the example project:
docker-compose up --build

# Reset the database (if needed):
# docker-compose down
```

This example runs all the code mentioned above and also sends mock load to the 3 servers and their respective 3 endpoints. Open the Pyroscope UI and select our application, `ride-sharing-app.cpu`, from the dropdown; you should see a flamegraph like the one below. After giving it 20-30 seconds to update and clicking the refresh button, we see our 3 functions at the bottom of the flamegraph taking CPU resources _proportional to the size_ of their respective `search_radius` parameters.
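
To make the "proportional to `search_radius`" point concrete: the order functions are essentially thin wrappers around `FindNearestVehicle()`, each searching with a different radius, so the width of each node in the flamegraph scales with that radius. The sketch below captures the idea; the exact radius values are assumptions, not the example's numbers.
```
// Each order function delegates to FindNearestVehicle with its own radius,
// so a bigger radius means more time in the busy loop and a wider node.
func OrderBike(search_radius int64)    { FindNearestVehicle(search_radius, "bike") }
func OrderCar(search_radius int64)     { FindNearestVehicle(search_radius, "car") }
func OrderScooter(search_radius int64) { FindNearestVehicle(search_radius, "scooter") }

// e.g. if the handlers called OrderBike(1), OrderScooter(2), OrderCar(3),
// OrderCar would end up as the widest of the three nodes.
```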

## Where's the performance bottleneck?

![golang_first_slide](https://user-images.githubusercontent.com/23323466/149688998-ca94dc82-f1e5-46fd-9a73-233c1e56d8e5.jpg)

The first step when analyzing a profile output from your application is to take note of the _largest node_, which is where your application is spending the most resources. In this case, it happens to be the `OrderCar` function.

The benefit of using the Pyroscope package is that we can now investigate further as to _why_ the `OrderCar()` function is problematic. Tagging both `region` and `vehicle` allows us to test two good hypotheses:
- Something is wrong with the `/car` endpoint code
- Something is wrong with one of our regions

To analyze this we can select one or more tags from the "Select Tag" dropdown:

![image](https://user-images.githubusercontent.com/23323466/135525308-b81e87b0-6ffb-4ef0-a6bf-3338483d0fc4.png)

## Narrowing in on the Issue Using Tags
Knowing there is an issue with the `OrderCar()` function, we naturally select that tag first. Then, after inspecting multiple `region` tags, the timeline makes it clear that the `eu-north` region is the culprit: it alternates between periods of high and low CPU usage.

We can also see that the `mutexLock()` function is consuming almost 70% of CPU resources during this time period.

![golang_second_slide-01](https://user-images.githubusercontent.com/23323466/149689013-2c0afeeb-53e2-4780-b52a-26b140627d9c.jpg)

## Comparing two time periods
Using Pyroscope's "comparison view" we can select two different time ranges from the timeline and compare the resulting flamegraphs. The pink selection on the left timeline produces the left flamegraph, and the blue selection on the right produces the right flamegraph.

When we select a period of low-CPU utilization and a period of high-CPU utilization, we can see clearly different behavior in the `mutexLock()` function: it takes **33% of CPU** during low-CPU times and **71% of CPU** during high-CPU times.

![golang_third_slide-01](https://user-images.githubusercontent.com/23323466/149689026-8b4ab3b1-6380-455c-990f-7ff35811f26b.jpg)

## Visualizing Diff Between Two Flamegraphs
While the difference _in this case_ is stark enough to see in the comparison view, sometimes the diff between two flamegraphs is easier to see when they are overlaid on each other. Without changing any parameters, we can simply select the diff view tab and see the difference represented in a color-coded diff flamegraph.

![golang_fourth_slide-01](https://user-images.githubusercontent.com/23323466/149689038-50d12031-2879-470f-a3be-a4c71d8c3b7a.jpg)

### More use cases
We have been beta testing this feature with several different companies; here are some of the ways we've seen them tag their performance data:
- Tagging Kubernetes attributes
- Tagging controllers
- Tagging regions
- Tagging jobs from a queue
- Tagging commits
- Tagging staging / production environments
- Tagging different parts of their testing suites
- Etc...

### Future Roadmap
We would love for you to try out this example and see how you can adapt it to your golang application. While this example focused on CPU profiling, Golang also supports memory profiling. Continuous profiling has become an increasingly popular tool for monitoring and debugging performance issues (arguably the fourth pillar of observability).
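
For example, memory profiles can be collected alongside CPU by opting in to additional profile types when starting the agent. The snippet below is a sketch: the `ProfileTypes` field and these constants come from the pyroscope Go client, and the exact set available should be checked against the client version you are using.
```
	pyroscope.Start(pyroscope.Config{
		ApplicationName: "ride-sharing-app",
		ServerAddress:   serverAddress,
		Logger:          pyroscope.StandardLogger,
		Tags:            map[string]string{"region": os.Getenv("REGION")},
		// Explicitly list CPU plus the heap (alloc/inuse) profile types.
		ProfileTypes: []pyroscope.ProfileType{
			pyroscope.ProfileCPU,
			pyroscope.ProfileAllocObjects,
			pyroscope.ProfileAllocSpace,
			pyroscope.ProfileInuseObjects,
			pyroscope.ProfileInuseSpace,
		},
	})
```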

We'd love to keep improving our golang integrations, so let us know what features _you would like to see_.