
# C++ API

## Start the server docker

- [start tfserver](../README.md)
- network setting
  - here we use this docker image as a development image, so please find the server IP with `docker network inspect bridge`

    ```bash
    $ docker ps # use this to find the server name under the NAMES field.
    $ docker network inspect bridge
    [
        {
          ...
                "82811806166f9250d0b1734479db6c368a8b90193811231e5125fdab1dfee6a0": {
                    "Name": "focused_borg",   # this is the name of the server (check it)
                    "EndpointID": "1af2f89e7617a837f28fe573aeaf5b57d650216167180b00c70a4be11cfb1510",
                    "MacAddress": "02:42:ac:11:00:03",
                    "IPv4Address": "172.17.0.3/16", # if the name matches, this is your server IP
                    "IPv6Address": ""
          ...
        }
    ]
    ```

- enter the cpp directory

  ```bash
  $ cd cpp/
  ```

## Build your own C++ TFclient (optional)

- environment preparation (details in the [dockerfile](./grpc-cpp.dockerfile))

  - [grpc](https://github.com/grpc/grpc/tree/master/src/cpp)
  - [protobuf](https://github.com/protocolbuffers/protobuf/tree/master/src)

- build the docker image

  ```bash
  $ docker build -t grpc-cpp -f grpc-cpp.dockerfile .
  ```

- start and enter the `grpc-cpp` shell, generate the C++ stubs, and build the client (a sketch of what the generated client API looks like follows this block)

  ```bash
  $ docker run --rm -ti -v `pwd`:/cpp grpc-cpp
  root@5b9f27acaefe:/# git clone https://github.com/tensorflow/tensorflow
  root@5b9f27acaefe:/# git clone https://github.com/tensorflow/serving
  root@5b9f27acaefe:/# cd /cpp
  root@5b9f27acaefe:/cpp# mkdir gen
  root@5b9f27acaefe:/cpp# bash build-cpp-api.sh
  root@5b9f27acaefe:/cpp# mv gen ./src
  root@5b9f27acaefe:/cpp# cd /cpp/src/predict-service
  root@5b9f27acaefe:/cpp/src/predict-service# make
  root@5b9f27acaefe:/cpp/src/predict-service# ./bin/main
  # calling prediction service on 172.17.0.3:8500
  # call predict ok
  # outputs size is 1
  #
  # output_1:
  # 0.999035
  # 0.999735
  # 0.999927
  ```

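  To see what the generated stubs give you, here is a minimal sketch of a Predict call against the server above. It is not the repository's exact `main.cc`: the include path is an assumption that depends on where you moved the `gen` output, and the address comes from the `docker network inspect` step.

  ```cpp
  // minimal_predict.cc -- a hedged sketch, not the repo's exact main.cc.
  #include <iostream>

  #include <grpcpp/grpcpp.h>

  // Generated into gen/ by build-cpp-api.sh; the include prefix is an
  // assumption -- adjust it to wherever you moved the generated sources.
  #include "tensorflow_serving/apis/prediction_service.grpc.pb.h"

  int main() {
    // Server address found via `docker network inspect bridge`.
    auto channel = grpc::CreateChannel("172.17.0.3:8500",
                                       grpc::InsecureChannelCredentials());
    auto stub = tensorflow::serving::PredictionService::NewStub(channel);

    tensorflow::serving::PredictRequest request;
    request.mutable_model_spec()->set_name("Toy");
    request.mutable_model_spec()->set_signature_name("serving_default");

    // The [3, 2] float input that appears in the predict log at the
    // end of this README.
    tensorflow::TensorProto input;
    input.set_dtype(tensorflow::DT_FLOAT);
    input.mutable_tensor_shape()->add_dim()->set_size(3);
    input.mutable_tensor_shape()->add_dim()->set_size(2);
    const float vals[] = {1, 2, 1, 3, 1, 4};
    for (float v : vals) input.add_float_val(v);
    (*request.mutable_inputs())["input_1"] = input;

    tensorflow::serving::PredictResponse response;
    grpc::ClientContext context;
    grpc::Status status = stub->Predict(&context, request, &response);
    if (!status.ok()) {
      std::cerr << "call predict failed: " << status.error_message() << "\n";
      return 1;
    }
    std::cout << "call predict ok\n";
    return 0;
  }
  ```
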
## Run client examples

- run the C++ client for a simple example
  - enter the docker terminal

  ```bash
  $ docker run --rm -ti -v `pwd`:/cpps grpc-cpp # or you can docker exec -ti <docker name> /bin/bash
  root@5b9f27acaefe:/# cp -R /cpps/cmake /cpp && cd /cpp/src
  root@5b9f27acaefe:/cpp/src#
  ```

  **assume you are in the src directory**
  - request data from server (the response-printing loop is sketched below)

    ```bash
    # run under the predict-service directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1
    #
    # output_1:
    # 0.999035
    # 0.999735
    # 0.999927
    # Done.
    ```

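    The loop that prints `output_1` above boils down to something like the following (a hedged sketch; it assumes the `response` object from the Predict sketch earlier):

    ```cpp
    // Hedged sketch: iterate the name -> TensorProto map in PredictResponse.
    for (const auto& kv : response.outputs()) {
      std::cout << kv.first << ":" << std::endl;     // e.g. "output_1"
      const tensorflow::TensorProto& t = kv.second;
      for (int i = 0; i < t.float_val_size(); ++i) {
        std::cout << t.float_val(i) << std::endl;    // flat float values
      }
    }
    ```
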
  - request a different model name (the relevant `ModelSpec` field is sketched below)

    ```bash
    # run under the predict-service directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main --model_name Toy
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1
    #
    # output_1:
    # 0.999035
    # 0.999735
    # 0.999927
    # Done.
    $ ./bin/main --model_name Toy_double
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1

    # output_1:
    # 6.80302
    # 8.26209
    # 9.72117
    # Done.
    ```

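    The `--model_name` flag only changes the `ModelSpec` on the request; a hedged sketch, with flag parsing elided:

    ```cpp
    // Hedged sketch: swap the model name on the request's ModelSpec.
    request.mutable_model_spec()->set_name("Toy_double");  // --model_name Toy_double
    request.mutable_model_spec()->set_signature_name("serving_default");
    ```
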
  - request a different version through the version number (sketched below)

    ```bash
    # run under the predict-service directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main --model_name Toy --model_version 1
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1

    # output_1:
    # 10.8054
    # 14.0101
    # 17.2148
    # Done.
    $ ./bin/main --model_name Toy --model_version 2
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1

    # output_1:
    # 0.999035
    # 0.999735
    # 0.999927
    # Done.
    ```

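    A numeric version goes into `ModelSpec.version`, which is a `google.protobuf.Int64Value` in the TF Serving protos, so it is set through the nested `value` field (hedged sketch):

    ```cpp
    // Hedged sketch: pin a specific version number on the ModelSpec.
    request.mutable_model_spec()->mutable_version()->set_value(1);  // --model_version 1
    ```
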
  - request a different version through the version label (sketched below)

    ```bash
    # run under the predict-service directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main --model_name Toy --model_version_label stable
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1

    # output_1:
    # 10.8054
    # 14.0101
    # 17.2148
    # Done.
    $ ./bin/main --model_name Toy --model_version_label canary
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1

    # output_1:
    # 0.999035
    # 0.999735
    # 0.999927
    # Done.
    ```

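    A version label is a plain string on `ModelSpec`; in the proto, `version` and `version_label` sit in a oneof, so setting one clears the other (hedged sketch):

    ```cpp
    // Hedged sketch: select a version by its label instead of its number.
    request.mutable_model_spec()->set_version_label("stable");  // --model_version_label stable
    ```
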
  - request a multi-task model <!--  TODO: -->

    ```bash
    $ cd ...
    $ make
    $ ./bin/main
    ```

  - request model status (the `ModelService` call is sketched below)

    ```bash
    # run under the model-status directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main --model_name Toy
    # calling model service on 172.17.0.3:8500
    # model_spec {
    #   name: "Toy"
    #   signature_name: "serving_default"
    # }
    #
    # call predict ok
    # metadata size is 0
    # metadata DebugString is
    # model_version_status {
    #   version: 3
    #   state: END
    #   status {
    #   }
    # }
    # model_version_status {
    #   version: 2
    #   state: AVAILABLE
    #   status {
    #   }
    # }
    # model_version_status {
    #   version: 1
    #   state: AVAILABLE
    #   status {
    #   }
    # }
    ```

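    Model status is served by `ModelService` rather than `PredictionService`. A minimal hedged sketch of the call, under the same address assumption as before:

    ```cpp
    // model_status_sketch.cc -- hedged sketch of the model-status call.
    #include <iostream>

    #include <grpcpp/grpcpp.h>

    #include "tensorflow_serving/apis/model_service.grpc.pb.h"

    int main() {
      auto channel = grpc::CreateChannel("172.17.0.3:8500",
                                         grpc::InsecureChannelCredentials());
      // Model status lives on ModelService, not PredictionService.
      auto stub = tensorflow::serving::ModelService::NewStub(channel);

      tensorflow::serving::GetModelStatusRequest request;
      request.mutable_model_spec()->set_name("Toy");  // --model_name Toy

      tensorflow::serving::GetModelStatusResponse response;
      grpc::ClientContext context;
      grpc::Status status = stub->GetModelStatus(&context, request, &response);
      if (!status.ok()) {
        std::cerr << status.error_message() << "\n";
        return 1;
      }
      // One ModelVersionStatus per version, as in the output above.
      std::cout << response.DebugString();
      return 0;
    }
    ```
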
  - request model metadata (the call is sketched below)

    ```bash
    # run under the model-metadata directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main --model_name Toy
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # metadata size is 1
    # metadata DebugString is
    # model_spec {
    #   name: "Toy"
    #   version {
    #     value: 2
    #   }
    # }
    # metadata {
    #   key: "signature_def"
    #   value {
    #     [type.googleapis.com/tensorflow.serving.SignatureDefMap] {
    #       signature_def {
    #         key: "__saved_model_init_op"
    #         value {
    #           outputs {
    #             key: "__saved_model_init_op"
    #             value {
    #               name: "NoOp"
    #               tensor_shape {
    #                 unknown_rank: true
    #               }
    #             }
    #           }
    #         }
    #       }
    #       signature_def {
    #         key: "serving_default"
    #         value {
    #           inputs {
    #             key: "input_1"
    #             value {
    #               name: "serving_default_input_1:0"
    #               dtype: DT_FLOAT
    #               tensor_shape {
    #                 dim {
    #                   size: -1
    #                 }
    #                 dim {
    #                   size: 2
    #                 }
    #               }
    #             }
    #           }
    #           outputs {
    #             key: "output_1"
    #             value {
    #               name: "StatefulPartitionedCall:0"
    #               dtype: DT_FLOAT
    #               tensor_shape {
    #                 dim {
    #                   size: -1
    #                 }
    #                 dim {
    #                   size: 1
    #                 }
    #               }
    #             }
    #           }
    #           method_name: "tensorflow/serving/predict"
    #         }
    #       }
    #     }
    #   }
    # }
    #
    ```

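    Metadata, by contrast, is served by `PredictionService`. A hedged sketch, reusing the stub from the Predict sketch earlier; `signature_def` is the metadata field TF Serving returns:

    ```cpp
    // Hedged sketch of the GetModelMetadata call used by model-metadata.
    tensorflow::serving::GetModelMetadataRequest request;
    request.mutable_model_spec()->set_name("Toy");  // --model_name Toy
    request.add_metadata_field("signature_def");

    tensorflow::serving::GetModelMetadataResponse response;
    grpc::ClientContext context;
    if (stub->GetModelMetadata(&context, request, &response).ok()) {
      std::cout << response.DebugString();  // prints the SignatureDefMap above
    }
    ```
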
  - reload model through the gRPC API (sketched below)

    ```bash
    # run under the model-reload directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main --model_name Toy
    # calling model service on 172.17.0.3:8500
    # call model service ok
    # model Toy reloaded successfully.
    ```

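    The reload goes through `ModelService::HandleReloadConfigRequest`, which takes a full `ModelServerConfig`. A hedged sketch, reusing the `ModelService` stub from the model-status sketch; the base path and platform are assumptions, and note that the server reloads to exactly the config you send, so a real client should list every model it wants to keep serving:

    ```cpp
    // Hedged sketch of the HandleReloadConfigRequest call used by model-reload.
    tensorflow::serving::ReloadConfigRequest request;
    auto* cfg = request.mutable_config()->mutable_model_config_list()->add_config();
    cfg->set_name("Toy");
    cfg->set_base_path("/models/Toy");      // assumed path inside the server container
    cfg->set_model_platform("tensorflow");

    tensorflow::serving::ReloadConfigResponse response;
    grpc::ClientContext context;
    if (stub->HandleReloadConfigRequest(&context, request, &response).ok()) {
      std::cout << "model Toy reloaded successfully." << std::endl;
    }
    ```
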
  - request the predict log (how the log is printed is sketched below)

    ```bash
    # run under the predict-log directory
    $ mkdir build
    $ cd build && cmake ..
    $ make
    $ ./bin/main --model_name Toy # --model_version 1 --model_version_label stable
    # calling prediction service on 172.17.0.3:8500
    # call predict ok
    # outputs size is 1

    # output_1:
    # 0.999035
    # 0.999735
    # 0.999927
    # ********************Predict Log*********************
    # request {
    #   model_spec {
    #     name: "Toy"
    #     signature_name: "serving_default"
    #   }
    #   inputs {
    #     key: "input_1"
    #     value {
    #       dtype: DT_FLOAT
    #       tensor_shape {
    #         dim {
    #           size: 3
    #         }
    #         dim {
    #           size: 2
    #         }
    #       }
    #       float_val: 1
    #       float_val: 2
    #       float_val: 1
    #       float_val: 3
    #       float_val: 1
    #       float_val: 4
    #     }
    #   }
    # }
    # response {
    #   outputs {
    #     key: "output_1"
    #     value {
    #       dtype: DT_FLOAT
    #       tensor_shape {
    #         dim {
    #           size: 3
    #         }
    #         dim {
    #           size: 1
    #         }
    #       }
    #       float_val: 0.999035
    #       float_val: 0.999734938
    #       float_val: 0.999927282
    #     }
    #   }
    #   model_spec {
    #     name: "Toy"
    #     version {
    #       value: 2
    #     }
    #     signature_name: "serving_default"
    #   }
    # }
    # ****************************************************
    # Done.
    ```
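
    The `Predict Log` banner above appears to be the request/response pair printed with protobuf's `DebugString()`, much like the `tensorflow.serving.PredictLog` message from `prediction_log.proto` (hedged sketch; it assumes the `request` and `response` objects from the Predict sketch earlier):

    ```cpp
    // Hedged sketch: pack the request/response into a PredictLog-style
    // message and print it with protobuf's DebugString().
    // Needs "tensorflow_serving/apis/prediction_log.pb.h".
    tensorflow::serving::PredictLog log;
    *log.mutable_request() = request;    // the PredictRequest that was sent
    *log.mutable_response() = response;  // the PredictResponse received
    std::cout << "********************Predict Log*********************\n"
              << log.DebugString()
              << "****************************************************\n";
    ```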