     1  # CPP API
     2  
     3  ## Requirement
     4  
     5  - protobuf == 3.12.4
     6  - grpc == 1.29.1
     7  
     8  ## Start Server docker
     9  
    10  - [start tfserver](../README.md)
    11  - network setting
    12    - this client runs in a separate development docker image, so please find the server IP with `docker network inspect bridge` (a connectivity-check sketch follows this list)
    13  
    14      ```bash
    15      $ docker ps # use this to find the server name under the NAMES field.
    16      $ docker network inspect bridge
    17      [
    18          {
    19            ...
    20                  "82811806166f9250d0b1734479db6c368a8b90193811231e5125fdab1dfee6a0": {
    21                      "Name": "focused_borg",   # this is the name of server (need to check)
    22                      "EndpointID": "1af2f89e7617a837f28fe573aeaf5b57d650216167180b00c70a4be11cfb1510",
    23                      "MacAddress": "02:42:ac:11:00:03",
    24                      "IPv4Address": "172.17.0.3/16", # if name is right then this is your server IP
    25                      "IPv6Address": ""
    26            ...
    27          }
    28      ]
    29      ```
    30  
    31  - enter cpp directory
    32  
    33    ```bash
    34    $ cd cpp/
    35    ```
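
- (optional) check connectivity to the server

  Before wiring up a full client, it can help to confirm that the development container can reach the serving container at the IP found above. The snippet below is only a sketch and is not part of this repo; it assumes the gRPC C++ toolchain from the `grpc-cpp` image described in the next section, and that TensorFlow Serving listens on gRPC port 8500.

  ```cpp
  // Sketch: verify the serving container found via
  // `docker network inspect bridge` is reachable over gRPC.
  #include <chrono>
  #include <iostream>

  #include <grpcpp/grpcpp.h>

  int main() {
    // 8500 is the gRPC port exposed by the tensorflow/serving image.
    auto channel = grpc::CreateChannel("172.17.0.3:8500",
                                       grpc::InsecureChannelCredentials());
    bool ok = channel->WaitForConnected(std::chrono::system_clock::now() +
                                        std::chrono::seconds(5));
    std::cout << (ok ? "server reachable" : "cannot reach server") << std::endl;
    return ok ? 0 : 1;
  }
  ```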
    36  
    37  ## Build your own C++ TFclient (optional)
    38  
    39  - environment preparation (details in the [dockerfile](./grpc-cpp.dockerfile))
    40  
    41    - [grpc](https://github.com/grpc/grpc/tree/master/src/cpp)
    42    - [protobuf](https://github.com/protocolbuffers/protobuf/tree/master/src)
    43  
    44  - build docker
    45  
    46    ```bash
    47    $ docker build -t grpc-cpp -f grpc-cpp-static.dockerfile .
    48    ```
    49  
    50  - start and enter `grpc-cpp` shell
    51  
    52    ```bash
    53    $ docker run --rm -ti -v `pwd`:/cpp  grpc-cpp
    54    root@5b9f27acaefe:/# git clone https://github.com/tensorflow/tensorflow
    55    root@5b9f27acaefe:/# git clone https://github.com/tensorflow/serving
    56    root@5b9f27acaefe:/# cd /cpp
    57    root@5b9f27acaefe:/cpp# mkdir gen
    58    root@5b9f27acaefe:/cpp# bash build-cpp-api.sh
    59    root@5b9f27acaefe:/cpp# mv gen ./src
    60    root@5b9f27acaefe:/cpp# cd /cpp/src/predict-service
    62    root@5b9f27acaefe:/cpp/src/predict-service# make
    63    root@5b9f27acaefe:/cpp/src/predict-service# ./bin/main
    64    # calling prediction service on 172.17.0.3:8500
    65    # call predict ok
    66    # outputs size is 1
    67    #
    68    # output_1:
    69    # 0.999035
    70    # 0.999735
    71    # 0.999927
    72    ```
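
- what the built client roughly does

  The block below is only a sketch of the kind of code `predict-service` contains; it is not copied from this repo's `main.cc`, and the include paths assume the headers generated by `build-cpp-api.sh` keep the usual `tensorflow/...` and `tensorflow_serving/apis/...` layout. The model name `Toy`, the `serving_default` signature, and the `input_1` values mirror the example output shown elsewhere in this README.

  ```cpp
  #include <iostream>
  #include <string>

  #include <grpcpp/grpcpp.h>
  #include "tensorflow/core/framework/tensor.pb.h"
  #include "tensorflow_serving/apis/predict.pb.h"
  #include "tensorflow_serving/apis/prediction_service.grpc.pb.h"

  int main() {
    // Server address discovered with `docker network inspect bridge`.
    const std::string server = "172.17.0.3:8500";
    auto channel =
        grpc::CreateChannel(server, grpc::InsecureChannelCredentials());
    auto stub = tensorflow::serving::PredictionService::NewStub(channel);

    tensorflow::serving::PredictRequest request;
    request.mutable_model_spec()->set_name("Toy");
    request.mutable_model_spec()->set_signature_name("serving_default");

    // Build the "input_1" tensor: shape [3, 2] with the sample values.
    tensorflow::TensorProto& input = (*request.mutable_inputs())["input_1"];
    input.set_dtype(tensorflow::DT_FLOAT);
    input.mutable_tensor_shape()->add_dim()->set_size(3);
    input.mutable_tensor_shape()->add_dim()->set_size(2);
    for (float v : {1.f, 2.f, 1.f, 3.f, 1.f, 4.f}) input.add_float_val(v);

    tensorflow::serving::PredictResponse response;
    grpc::ClientContext context;
    std::cout << "calling prediction service on " << server << std::endl;
    grpc::Status status = stub->Predict(&context, request, &response);
    if (!status.ok()) {
      std::cerr << "call predict failed: " << status.error_message() << std::endl;
      return 1;
    }
    std::cout << "call predict ok" << std::endl;
    std::cout << "outputs size is " << response.outputs_size() << std::endl;

    // Print every float of the "output_1" tensor.
    for (float v : response.outputs().at("output_1").float_val()) {
      std::cout << v << std::endl;
    }
    return 0;
  }
  ```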
    73  
    74  ## Run client examples
    75  
    76  - run the C++ client for a simple example
    77    - enter the docker terminal
    78  
    79    ```bash
    80    $ docker run --rm -ti -v `pwd`:/cpp grpc-cpp # or: docker exec -ti <docker name> /bin/bash
    81    root@5b9f27acaefe:/# cp -R /cpps/make-static-lib /cpp && cd /cpp/src
    82    root@5b9f27acaefe:/cpp/src#
    83    ```
    84  
    85    **the following commands assume you are in the `src` directory**
    86    - build static library
    87  
    88      ```bash
    89      # run under static-lib directory
    90      $ make
    91      ```
    92  
    93    - build with static library
    94  
    95      ```bash
    96      # run under build-with-a-file directory
    97      # copy main.cc to `build-with-a-file`
    98      $ make
    99      $ ./bin/main
   100      ```
   101  
   102    - request data from server
   103  
   104      ```bash
   105      # run under predict-service directory
   106      $ make
   107      $ ./bin/main
   108      # calling prediction service on 172.17.0.3:8500
   109      # call predict ok
   110      # outputs size is 1
   111      #
   112      # output_1:
   113      # 0.999035
   114      # 0.999735
   115      # 0.999927
   116      # Done.
   117      ```
   118  
   119    - request a different model name
   120  
   121      ```bash
   122      # run under predict-service directory
   123      $ make
   124      $ ./bin/main --model_name Toy
   125      # calling prediction service on 172.17.0.3:8500
   126      # call predict ok
   127      # outputs size is 1
   128      #
   129      # output_1:
   130      # 0.999035
   131      # 0.999735
   132      # 0.999927
   133      # Done.
   134      $ ./bin/main --model_name Toy_double
   135      # calling prediction service on 172.17.0.3:8500
   136      # call predict ok
   137      # outputs size is 1
   138  
   139      # output_1:
   140      # 6.80302
   141      # 8.26209
   142      # 9.72117
   143      # Done.
   144      ```
   145  
   146    - request a different version through the version number (see the `ModelSpec` sketch at the end of this section)
   147  
   148      ```bash
   149      # run under predict-service directory
   150      $ make
   151      $ ./bin/main --model_name Toy --model_version 1
   152      # calling prediction service on 172.17.0.3:8500
   153      # call predict ok
   154      # outputs size is 1
   155  
   156      # output_1:
   157      # 10.8054
   158      # 14.0101
   159      # 17.2148
   160      # Done.
   161      $ ./bin/main --model_name Toy --model_version 2
   162      # calling prediction service on 172.17.0.3:8500
   163      # call predict ok
   164      # outputs size is 1
   165  
   166      # output_1:
   167      # 0.999035
   168      # 0.999735
   169      # 0.999927
   170      # Done.
   171      ```
   172  
   173    - request a different version through the version label (see the `ModelSpec` sketch at the end of this section)
   174  
   175      ```bash
   176      # run under predict-service directory
   177      $ make
   178      $ ./bin/main --model_name Toy --model_version_label stable
   179      # calling prediction service on 172.17.0.3:8500
   180      # call predict ok
   181      # outputs size is 1
   182  
   183      # output_1:
   184      # 10.8054
   185      # 14.0101
   186      # 17.2148
   187      # Done.
   188      $ ./bin/main --model_name Toy --model_version_label canary
   189      # calling prediction service on 172.17.0.3:8500
   190      # call predict ok
   191      # outputs size is 1
   192  
   193      # output_1:
   194      # 0.999035
   195      # 0.999735
   196      # 0.999927
   197      # Done.
   198      ```
   199  
   200    - request a multi-task model <!--  TODO: -->
   201  
   202      ```bash
   203      $ cd ...
   204      $ make
   205      $ ./bin/main
   206      ```
   207  
   208    - request model status (see the `ModelService` sketch at the end of this section)
   209  
   210      ```bash
   211      # run under model-status directory
   212      $ make
   213      $ ./bin/main --model_name Toy
   214      # calling model service on 172.17.0.3:8500
   215      # model_spec {
   216      #   name: "Toy"
   217      #   signature_name: "serving_default"
   218      # }
   219      #
   220      # call predict ok
   221      # metadata size is 0
   222      # metadata DebugString is
   223      # model_version_status {
   224      #   version: 3
   225      #   state: END
   226      #   status {
   227      #   }
   228      # }
   229      # model_version_status {
   230      #   version: 2
   231      #   state: AVAILABLE
   232      #   status {
   233      #   }
   234      # }
   235      # model_version_status {
   236      #   version: 1
   237      #   state: AVAILABLE
   238      #   status {
   239      #   }
   240      # }
   241      ```
   242  
   243    - request model metadata
   244  
   245      ```bash
   246      # run under model-metadata directory
   247      $ make
   248      $ ./bin/main --model_name Toy
   249      # calling prediction service on 172.17.0.3:8500
   250      # call predict ok
   251      # metadata size is 1
   252      # metadata DebugString is
   253      # model_spec {
   254      #   name: "Toy"
   255      #   version {
   256      #     value: 2
   257      #   }
   258      # }
   259      # metadata {
   260      #   key: "signature_def"
   261      #   value {
   262      #     [type.googleapis.com/tensorflow.serving.SignatureDefMap] {
   263      #       signature_def {
   264      #         key: "__saved_model_init_op"
   265      #         value {
   266      #           outputs {
   267      #             key: "__saved_model_init_op"
   268      #             value {
   269      #               name: "NoOp"
   270      #               tensor_shape {
   271      #                 unknown_rank: true
   272      #               }
   273      #             }
   274      #           }
   275      #         }
   276      #       }
   277      #       signature_def {
   278      #         key: "serving_default"
   279      #         value {
   280      #           inputs {
   281      #             key: "input_1"
   282      #             value {
   283      #               name: "serving_default_input_1:0"
   284      #               dtype: DT_FLOAT
   285      #               tensor_shape {
   286      #                 dim {
   287      #                   size: -1
   288      #                 }
   289      #                 dim {
   290      #                   size: 2
   291      #                 }
   292      #               }
   293      #             }
   294      #           }
   295      #           outputs {
   296      #             key: "output_1"
   297      #             value {
   298      #               name: "StatefulPartitionedCall:0"
   299      #               dtype: DT_FLOAT
   300      #               tensor_shape {
   301      #                 dim {
   302      #                   size: -1
   303      #                 }
   304      #                 dim {
   305      #                   size: 1
   306      #                 }
   307      #               }
   308      #             }
   309      #           }
   310      #           method_name: "tensorflow/serving/predict"
   311      #         }
   312      #       }
   313      #     }
   314      #   }
   315      # }
   316      #
   317      ```
   318  
   319    - reload model through gRPC API
   320  
   321      ```bash
   322      # run under model-reload directory
   323      $ make
   324      $ ./bin/main --model_name Toy
   325      # calling model service on 172.17.0.3:8500
   326      # call model service ok
   327      # model Toy reloaded successfully.
   328      ```
   329  
   330    - request the predict log
   331  
   332      ```bash
   333      # run under predict-log directory
   334      $ make
   335      $ ./bin/main --model_name Toy # --model_version 1 --model_version_label stable
   336      # calling prediction service on 172.17.0.3:8500
   337      # call predict ok
   338      # outputs size is 1
   339  
   340      # output_1:
   341      # 0.999035
   342      # 0.999735
   343      # 0.999927
   344      # ********************Predict Log*********************
   345      # request {
   346      #   model_spec {
   347      #     name: "Toy"
   348      #     signature_name: "serving_default"
   349      #   }
   350      #   inputs {
   351      #     key: "input_1"
   352      #     value {
   353      #       dtype: DT_FLOAT
   354      #       tensor_shape {
   355      #         dim {
   356      #           size: 3
   357      #         }
   358      #         dim {
   359      #           size: 2
   360      #         }
   361      #       }
   362      #       float_val: 1
   363      #       float_val: 2
   364      #       float_val: 1
   365      #       float_val: 3
   366      #       float_val: 1
   367      #       float_val: 4
   368      #     }
   369      #   }
   370      # }
   371      # response {
   372      #   outputs {
   373      #     key: "output_1"
   374      #     value {
   375      #       dtype: DT_FLOAT
   376      #       tensor_shape {
   377      #         dim {
   378      #           size: 3
   379      #         }
   380      #         dim {
   381      #           size: 1
   382      #         }
   383      #       }
   384      #       float_val: 0.999035
   385      #       float_val: 0.999734938
   386      #       float_val: 0.999927282
   387      #     }
   388      #   }
   389      #   model_spec {
   390      #     name: "Toy"
   391      #     version {
   392      #       value: 2
   393      #     }
   394      #     signature_name: "serving_default"
   395      #   }
   396      # }
   397      # ****************************************************
   398      # Done.
   399      ```
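
- how the `--model_version` / `--model_version_label` flags map onto the request

  A hedged sketch of how such flags could be applied to the `ModelSpec` of a `PredictRequest`; flag parsing is omitted and the helper name below is made up for illustration, so the real clients in this repo may wire it differently.

  ```cpp
  #include <string>

  #include "tensorflow_serving/apis/predict.pb.h"

  // version <= 0 and an empty label mean "use the latest available version".
  // ModelSpec.version and ModelSpec.version_label live in a oneof, so only
  // one of them can be set at a time.
  void SetModelSpec(tensorflow::serving::PredictRequest& request,
                    const std::string& model_name,       // e.g. "Toy", "Toy_double"
                    long long version,                   // e.g. 1 or 2
                    const std::string& version_label) {  // e.g. "stable", "canary"
    auto* spec = request.mutable_model_spec();
    spec->set_name(model_name);
    spec->set_signature_name("serving_default");
    if (version > 0) {
      // ModelSpec.version is a google.protobuf.Int64Value wrapper.
      spec->mutable_version()->set_value(version);
    } else if (!version_label.empty()) {
      spec->set_version_label(version_label);
    }
  }
  ```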
   400  
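- how the model-status client talks to `ModelService`

  Model status (like the config reload above) goes through the `ModelService` stub rather than the `PredictionService`. The sketch below is an assumption-laden outline, not this repo's `model-status/main.cc`; it reuses the generated headers and the `Toy` model name from the example output.

  ```cpp
  #include <iostream>
  #include <memory>
  #include <string>

  #include <grpcpp/grpcpp.h>
  #include "tensorflow_serving/apis/get_model_status.pb.h"
  #include "tensorflow_serving/apis/model_service.grpc.pb.h"

  int main() {
    const std::string server = "172.17.0.3:8500";
    auto stub = tensorflow::serving::ModelService::NewStub(
        grpc::CreateChannel(server, grpc::InsecureChannelCredentials()));

    tensorflow::serving::GetModelStatusRequest request;
    request.mutable_model_spec()->set_name("Toy");

    tensorflow::serving::GetModelStatusResponse response;
    grpc::ClientContext context;
    grpc::Status status = stub->GetModelStatus(&context, request, &response);
    if (!status.ok()) {
      std::cerr << "call model service failed: " << status.error_message()
                << std::endl;
      return 1;
    }
    // Prints one model_version_status entry per version, with its state
    // (AVAILABLE, END, ...), matching the output shown above.
    std::cout << response.DebugString() << std::endl;
    return 0;
  }
  ```
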
   401  ## Static Lib
   402  
   403  - [abseil-cpp#250 Wrong order](https://github.com/abseil/abseil-cpp/issues/250#issuecomment-455831883)
   404  *Note: `absl_string` depends on `absl_int128`, so `-labsl_string` must come before `-labsl_int128` in the `Makefile`; with static linking the linker resolves symbols left to right, so each library has to appear before the libraries it depends on.*
   405  
   406  - generate the static lib
   407  
   408    ```bash
   409    # run under static-lib directory
   410    $ make
   411    $ ls
   412    # libtfserving.a Makefile
   413    ```
   414  
   415  - test with the generated static library
   416  
   417    ```bash
   418    # enter build-with-a-file directory
   419    $ make
   420    $ ./bin/main -s 172.17.0.3:8500
   421    # calling prediction service on 172.17.0.3:8500
   422    # call predict ok
   423    # outputs size is 1
   424    #
   425    # output_1:
   426    # 0.999035
   427    # 0.999735
   428    # 0.999927
   429    # Done.
   430    ```