
# Vent - SQL mapping layer

Vent reads specification files called 'projections', parses their contents, and maps EVM LOG event fields to corresponding SQL columns in order to create or alter database structures.
It listens for a stream of block events from Burrow's GRPC service, then parses, unpacks, and decodes the event data and builds rows to be upserted into the matching event tables. Rows are
upserted atomically in a single database transaction per block.

There are two modes of operation: view mode and log mode. In view mode a primary key is used to locate the row in a table, which is updated if it exists or inserted
if it does not. If the event contains a field matching the optional `DeleteMarkerField` then the row will instead be deleted. As such, in view mode Vent can map a
stream of EVM LOG events to a CRUD-style table - a view over entities as defined by the chosen primary key. Alternatively, if no primary keys are specified for a projection,
Vent operates in log mode, where all matched events are inserted - so log mode behaves as an append-only log. Note there is no explicit setting for the mode - it depends on
the presence or absence of a `"Primary": true` entry in one of the `FieldMappings` of a projection (see below for an example).

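For contrast, a hypothetical log-mode projection has the same `EventClass` shape but no `"Primary": true` entry (and no `DeleteMarkerField`), so every matched event is appended as a new row. The table, filter, and column names here are illustrative only:

```json
[
  {
    "TableName": "EventLog",
    "Filter": "Log1Text = 'I am LOG1'",
    "FieldMappings": [
      {"Field": "key", "ColumnName": "key", "Type": "bytes32"},
      {"Field": "description", "ColumnName": "description", "Type": "bytes32", "BytesToString": true}
    ]
  }
]
```
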
Vent writes each block of updates atomically and is guaranteed to be crash tolerant: if the Vent process is killed it will resume at the last written height. Burrow stores all
previous events in its state, so even if you delete the Vent database it can be regenerated deterministically; this determinism is a core feature of Vent.

There is a [presentation on Vent here](https://competent-yalow-f210f7.netlify.app).

## Projections
A projection is the name given to a configuration file that Vent uses to interpret EVM events as updates to, or deletions from, SQL tables. Projections provide an object-relational mapping
between Solidity events and SQL tables.

Given a projection like the following:

```json
[
  {
    "TableName": "EventTest",
    "Filter": "Log1Text = 'I am LOG1'",
    "DeleteMarkerField": "__DELETE__",
    "FieldMappings": [
      {
        "Field": "key",
        "ColumnName": "testname",
        "Type": "bytes32",
        "Primary": true
      },
      {
        "Field": "description",
        "ColumnName": "testdescription",
        "Type": "bytes32",
        "BytesToString": true
      }
    ]
  }
]
```

And a Solidity contract like:

```solidity
pragma solidity ^0.4.25;

contract EventEmitter {
    event UpdateEvent(
        // The first indexed field will appear as the LOG1 topic - we can use it like a namespace
        bytes32 indexed IAmLog1,
        // Our primary key in the projection above
        bytes32 key,
        // Some 'mutable' text - we can update this by emitting an UpdateEvent with the same key but a new description
        bytes32 description
    );

    event DeletionEvent(
        bytes32 indexed IAmLog1,
        bytes32 key,
        // This marker field can be of any type - it is matched purely on name - if an event contains a field with
        // the specified marker field it is interpreted as an instruction to delete the row corresponding to key
        bool __DELETE__
    );

    function update() external {
        // Updates or inserts the 'key0001' row
        emit UpdateEvent("I am LOG1", "key0001", "some description");
    }

    function update2() external {
        // Updates or inserts the 'key0001' row
        emit UpdateEvent("I am LOG1", "key0001", "a different description");
    }

    function remove() external {
        // Removes the 'key0001' row
        emit DeletionEvent("I am LOG1", "key0001", true);
    }
}
```

We can maintain a view-mode table that feels like that of an ordinary CRUD app, even though it is backed by a stream of events coming from our Solidity contracts.

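The effect of calling `update()`, `update2()`, then `remove()` can be sketched against a local table. This is a minimal simulation of view-mode semantics using Python's `sqlite3`, not Vent's actual code (Vent generates dialect-specific SQL via its adapters):

```python
# Sketch only: simulate view-mode upsert/delete semantics for the
# EventTest projection above using an in-memory SQLite table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EventTest (testname TEXT PRIMARY KEY, testdescription TEXT)")

def project(event):
    """Apply one decoded event: delete if the marker field is present, else upsert."""
    if "__DELETE__" in event:  # DeleteMarkerField matched on name
        conn.execute("DELETE FROM EventTest WHERE testname = ?", (event["key"],))
    else:  # upsert on the primary key
        conn.execute(
            "INSERT INTO EventTest (testname, testdescription) VALUES (?, ?) "
            "ON CONFLICT(testname) DO UPDATE SET testdescription = excluded.testdescription",
            (event["key"], event["description"]),
        )

# update(), update2(), remove() from the contract above, in order:
project({"key": "key0001", "description": "some description"})
project({"key": "key0001", "description": "a different description"})
rows = conn.execute("SELECT testdescription FROM EventTest").fetchall()
print(rows)  # [('a different description',)] - one row, updated in place
project({"key": "key0001", "__DELETE__": True})
print(conn.execute("SELECT count(*) FROM EventTest").fetchone()[0])  # 0
```
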
Burrow can also emit a JSONSchema for the projection file format with `burrow vent schema`. You can use this to validate your projections with any
[JSONSchema](https://json-schema.org/) tooling.

### Projection specification
A projection file is defined as a JSON array of `EventClass` objects. Each `EventClass` specifies a class of events that should be consumed (selected via a filter)
in order to generate a SQL table. An `EventClass` holds `FieldMappings` that specify how to map the fields of a matched EVM event to a destination column
(identified by `ColumnName`) of the destination table (identified by `TableName`).

#### EventClass
| Field | Type | Required? | Description |
|-------|------|-----------|-------------|
| `TableName` | String | Required | The case-sensitive name of the destination SQL table for the `EventClass` |
| `Filter` | String | Required | A filter to be applied to EVM LOG events using the [available tags](../../protobuf/rpcevents.proto), written according to the event [query.peg](../../event/query/query.peg) grammar |
| `FieldMappings` | array of `FieldMapping` | Required | Mappings between EVM event fields and columns; see the table below |
| `DeleteMarkerField` | String | Optional | Field name of an event field that, when present in a matched event, indicates the event should result in the deletion of a row (matched on the primary keys of that row) rather than the default upsert action |

#### FieldMapping
| Field | Type | Required? | Description |
|-------|------|-----------|-------------|
| `Field` | String | Required | EVM field name to match exactly when creating a SQL upsert/delete |
| `Type` | String | Required | EVM type of the field (which also dictates the SQL type that will be used for the table definition) |
| `ColumnName` | String | Required | The destination SQL column for the mapped value |
| `Primary` | Boolean | Optional | Whether this SQL column should be part of the primary key |
| `BytesToString` | Boolean | Optional | When the type is `bytes<N>` (for some N), indicates that the value should be interpreted as (converted to) a string |
| `Notify` | array of String | Optional | A list of notification channels on which a payload should be sent containing the value of this column when it is updated or deleted. The payload on a particular channel will be the JSON object containing all column/value pairs for which the notification channel is a member of this `Notify` array (see [triggers](#triggers) below) |

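As a hand-rolled sketch of what validating against these tables involves, the following checks just the Required columns from the two tables above (the JSONSchema emitted by `burrow vent schema` remains the authoritative source):

```python
# Sketch only: check a projection for the required EventClass and
# FieldMapping keys documented in the tables above.
import json

REQUIRED_EVENT_CLASS = {"TableName", "Filter", "FieldMappings"}
REQUIRED_FIELD_MAPPING = {"Field", "Type", "ColumnName"}

def check_projection(doc: str) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for i, event_class in enumerate(json.loads(doc)):
        for key in sorted(REQUIRED_EVENT_CLASS - event_class.keys()):
            problems.append(f"EventClass {i}: missing required field {key}")
        for j, mapping in enumerate(event_class.get("FieldMappings", [])):
            for key in sorted(REQUIRED_FIELD_MAPPING - mapping.keys()):
                problems.append(f"EventClass {i}, FieldMapping {j}: missing {key}")
    return problems

# A deliberately incomplete projection: no Filter, and a mapping without ColumnName.
spec = '[{"TableName": "EventTest", "FieldMappings": [{"Field": "key", "Type": "bytes32"}]}]'
for problem in check_projection(spec):
    print(problem)
# EventClass 0: missing required field Filter
# EventClass 0, FieldMapping 0: missing ColumnName
```
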
Vent builds dictionary, log, and event database tables for the defined tables and columns, and maps input types to appropriate SQL types.

Database structures are created or altered on the fly based on the specifications (only adding new columns is supported).

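The column-addition behaviour can be sketched like this, with Python's `sqlite3` as a stand-in database; Vent's real adapters generate the dialect-specific DDL:

```python
# Sketch only: add spec columns that are missing from the live table.
# Columns are only ever added, never dropped or retyped.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EventTest (testname TEXT PRIMARY KEY)")

def sync_columns(table, wanted):
    """Issue ALTER TABLE ... ADD COLUMN for each spec column not yet present."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    for name, sql_type in wanted.items():
        if name not in existing:
            conn.execute(f"ALTER TABLE {table} ADD COLUMN {name} {sql_type}")

# A new FieldMapping appears in the projection: one ALTER TABLE, no table rebuild.
sync_columns("EventTest", {"testname": "TEXT", "testdescription": "TEXT"})
cols = [row[1] for row in conn.execute("PRAGMA table_info(EventTest)")]
print(cols)  # ['testname', 'testdescription']
```
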
ABI files can be generated from bin files like so:

```bash
cat *.bin | jq '.Abi[] | select(.type == "event")' > events.abi
```

## Adapters:

Adapters are database implementations: Vent can store data in different RDBMSs.

In `sqldb/adapters` there is a list of supported adapters (there is also a README.md file in that folder that helps to understand how to implement a new one).

### <a name="triggers"></a>Notification Triggers
Notification triggers are configured with the `Notify` array of a `FieldMapping`. In a supported database (currently only postgres) they allow you to specify a set of
channels on which to notify when a column changes. By including a channel in the `Notify` array the column is added to the set of columns for which that channel should receive
a notification payload. For example, if we have the following spec:

```json
[
  {
    "TableName": "UserAccounts",
    "Filter": "Log1Text = 'USERACCOUNTS'",
    "FieldMappings": [
      {"Field": "userAddress", "ColumnName": "address", "Type": "address", "Notify": ["user", "address"]},
      {"Field": "userName", "ColumnName": "username", "Type": "string", "Notify": ["user"]}
    ]
  }
]
```

Then Vent will record the mappings `user -> username, address` and `address -> address`, where the left-hand side is the notification channel and the right-hand side is the columns
included in the payload on that channel.

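This bookkeeping amounts to inverting the `Notify` arrays; a sketch (the FieldMappings are abbreviated to the relevant keys):

```python
# Sketch only: derive the channel -> columns mapping from Notify arrays,
# as in the UserAccounts example above.
field_mappings = [
    {"ColumnName": "address", "Notify": ["user", "address"]},
    {"ColumnName": "username", "Notify": ["user"]},
]

channels = {}
for mapping in field_mappings:
    for channel in mapping.get("Notify", []):
        channels.setdefault(channel, []).append(mapping["ColumnName"])

print(channels)  # {'user': ['address', 'username'], 'address': ['address']}
```
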
For each of these mappings a notification trigger function is defined and attached as a trigger on the table, to run after an insert, update, or delete. This function calls
`pg_notify` (in the case of postgres, the only database for which we support notifications - this is non-standard and we may use a different mechanism in other databases where present).
These notifications can be consumed by any client connected to the postgres database with `LISTEN <channel>;`; see the [Postgres NOTIFY documentation](https://www.postgresql.org/docs/11/sql-notify.html).

## Setup PostgreSQL Database with Docker:

```bash
# Create postgres container (only once):
docker run --name postgres-local -e POSTGRES_USER=user -e POSTGRES_PASSWORD=pass -e POSTGRES_DB=vent -p 5432:5432 -d postgres:10.4-alpine

# Start postgres container:
docker start postgres-local

# Stop postgres container:
docker stop postgres-local

# Delete postgres container:
docker container rm postgres-local
```

## Run Unit Tests:

```bash
# From the main repo folder:
make test_integration_vent
```

## Run Vent Command:

```bash
# Install burrow (from the root of the repo):
make install

# Print command help:
burrow vent --help

# Run the vent command with the postgres adapter and spec & abi file paths, also storing block & tx data:
burrow vent start --db-adapter="postgres" --db-url="postgres://user:pass@localhost:5432/vent?sslmode=disable" --db-schema="vent" --grpc-addr="localhost:10997" --http-addr="0.0.0.0:8080" --log-level="debug" --spec="<sqlsol specification file path>" --abi="<abi file path>" --db-block=true

# Run the vent command with the sqlite adapter and spec & abi directory paths, without storing block & tx data:
burrow vent start --db-adapter="sqlite" --db-url="./vent.sqlite" --grpc-addr="localhost:10997" --http-addr="0.0.0.0:8080" --log-level="debug" --spec="<sqlsol specification directory path>" --abi="<abi files directory path>"
```

Configuration Flags:

+ `db-adapter`: (string) Database adapter; 'postgres' and 'sqlite' are fully supported
+ `db-url`: (string) PostgreSQL database URL or SQLite db file path
+ `db-schema`: (string) PostgreSQL database schema, or empty for SQLite
+ `http-addr`: (string) Address to bind the HTTP server
+ `grpc-addr`: (string) Address of the Hyperledger Burrow gRPC server to listen to
+ `log-level`: (string) Logging level (error, warn, info, debug)
+ `spec-file`: (string) SQLSol specification json file (full path)
+ `spec-dir`: (string) Path of a folder in which to look for SQLSol json specification files
+ `abi-file`: (string) Event ABI specification file (full path)
+ `abi-dir`: (string) Path of a folder in which to look for event ABI specification files
+ `db-block`: (boolean) Create block & transaction tables and persist related data (true/false)

NOTES:

One of `spec-file` or `spec-dir` must be provided.
If `spec-dir` is given, Vent will search for all `.json` spec files in the given directory.

Likewise, one of `abi-file` or `abi-dir` must be provided.
If `abi-dir` is given, Vent will search for all `.abi` spec files in the given directory.

If `db-block` is set to true (block explorer mode), Block and Transaction tables are created in addition to the log and event tables, to store raw block & tx info.

You can check that Vent is connected and ready by sending a request to `http://<http-addr>/health`, which will return a `200 OK` response if everything is fine.