
# Vent Component

Vent reads sqlsol specification and ABI files, parses their contents, and maps column types to the corresponding SQL types in order to create or alter database structures. It listens for a stream of block events from Burrow's GRPC service, then parses, unpacks, and decodes the event data and builds rows to be upserted into the matching event tables. Rows are upserted atomically in a single database transaction per block.

Block height and context info are stored in log tables in order to resume processing pending blocks or to rewind to a previous state.

## SQLSol specification
SQLSol is the name given to the configuration files (an object-relational mapping between Solidity events and SQL tables) that Vent uses to interpret EVM events as updates to, or deletions from, SQL tables.

Given a sqlsol specification like the following:

```json
[
  {
    "TableName" : "EventTest",
    "Filter" : "Log1Text = 'LOGEVENT1'",
    "DeleteMarkerField": "__DELETE__",
    "FieldMappings"  : [
      {"Field": "key", "ColumnName" : "testname", "Type": "bytes32", "Primary" : true},
      {"Field": "description", "ColumnName" : "testdescription", "Type": "bytes32", "Primary" : false, "BytesToString": true}
    ]
  },
  {
    "TableName" : "UserAccounts",
    "Filter" : "Log1Text = 'USERACCOUNTS'",
    "FieldMappings"  : [
      {"Field": "userAddress", "ColumnName" : "address", "Type": "address", "Primary" : true},
      {"Field": "userName", "ColumnName" : "username", "Type": "string", "Primary" : false}
    ]
  }
]
```

Burrow can emit a JSONSchema for the sqlsol file format with `burrow vent schema`. You can use this to validate your sqlsol files with any [JSONSchema](https://json-schema.org/) tooling.

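As a rough illustration (not a substitute for real JSONSchema validation), a minimal structural check of a sqlsol file can be sketched by hand; the key names below come from the field tables in this document, while the helper itself is hypothetical:

```python
import json

# Hand-rolled structural check for a sqlsol spec -- a sketch only; the
# JSONSchema emitted by `burrow vent schema` is the authoritative format.
REQUIRED_CLASS_KEYS = {"TableName", "Filter", "FieldMappings"}
REQUIRED_MAPPING_KEYS = {"Field", "ColumnName", "Type"}

def check_sqlsol(text):
    spec = json.loads(text)
    if not isinstance(spec, list):
        raise ValueError("a sqlsol spec must be a JSON array of EventClass objects")
    for event_class in spec:
        missing = REQUIRED_CLASS_KEYS - event_class.keys()
        if missing:
            raise ValueError(f"EventClass missing required keys: {missing}")
        for mapping in event_class["FieldMappings"]:
            missing = REQUIRED_MAPPING_KEYS - mapping.keys()
            if missing:
                raise ValueError(f"FieldMapping missing required keys: {missing}")
    return spec

spec = check_sqlsol("""
[
  {
    "TableName": "UserAccounts",
    "Filter": "Log1Text = 'USERACCOUNTS'",
    "FieldMappings": [
      {"Field": "userAddress", "ColumnName": "address", "Type": "address", "Primary": true}
    ]
  }
]
""")
print(spec[0]["TableName"])  # UserAccounts
```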
### Specification format
A sqlsol file is defined as a JSON array of `EventClass` objects. Each `EventClass` specifies a class of events that should be consumed (selected via a filter) in order to populate a SQL table. An `EventClass` holds `FieldMappings` that specify how to map the fields of a matched EVM event to a destination column (identified by `ColumnName`) of the destination table (identified by `TableName`).

#### EventClass
| Field | Type | Required? | Description |
|-------|------|-----------|-------------|
| `TableName` | String | Required | The case-sensitive name of the destination SQL table for the `EventClass` |
| `Filter` | String | Required | A filter to be applied to EVM log events using the [available tags](../protobuf/rpcevents.proto), written according to the event [query.peg](../event/query/query.peg) grammar |
| `FieldMappings` | array of `FieldMapping` | Required | Mappings between EVM event fields and columns; see the table below |
| `DeleteMarkerField` | String | Optional | Field name of an event field that, when present in a matched event, indicates the event should result in the deletion of a row (matched on the primary keys of that row) rather than the default upsert action |

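To illustrate how a `Filter` selects events by their tags, here is a toy matcher for the simplest kind of filter, a conjunction of exact tag equalities. The real grammar in `query.peg` supports more operators; this sketch is not Vent's implementation:

```python
import re

# Toy matcher for filters of the form "Tag = 'value' AND Tag2 = 'value2'".
# Only an illustration of tag-based event selection; see query.peg for
# the full grammar Vent actually uses.
def matches(filter_expr, tags):
    for clause in filter_expr.split(" AND "):
        m = re.fullmatch(r"\s*(\w+)\s*=\s*'([^']*)'\s*", clause)
        if m is None:
            raise ValueError(f"unsupported clause: {clause!r}")
        key, value = m.groups()
        if tags.get(key) != value:
            return False
    return True

print(matches("Log1Text = 'USERACCOUNTS'", {"Log1Text": "USERACCOUNTS"}))  # True
print(matches("Log1Text = 'USERACCOUNTS'", {"Log1Text": "LOGEVENT1"}))     # False
```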
#### FieldMapping
| Field | Type | Required? | Description |
|-------|------|-----------|-------------|
| `Field` | String | Required | EVM field name to match exactly when creating a SQL upsert/delete |
| `Type` | String | Required | EVM type of the field (which also dictates the SQL type that will be used in the table definition) |
| `ColumnName` | String | Required | The destination SQL column for the mapped value |
| `Primary` | Boolean | Optional | Whether this SQL column should be part of the primary key |
| `BytesToString` | Boolean | Optional | When the type is `bytes<N>` (for some N), indicates that the value should be interpreted as (converted to) a string |
| `Notify` | array of String | Optional | A list of notification channels on which a payload should be sent containing the value of this column when it is updated or deleted. The payload on a particular channel will be the JSON object containing all column/value pairs whose `Notify` array includes that channel (see [triggers](#triggers) below) |

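Putting the two together, an `EventClass` determines a table definition. The sketch below shows the idea; the EVM-type to SQL-type mapping here is an invented simplification, not the real mapping used by Vent's adapters:

```python
# Sketch of deriving a table definition from an EventClass.
# SQL_TYPES is an illustrative assumption; Vent's adapters define the
# actual EVM-type -> SQL-type mapping per database.
SQL_TYPES = {
    "address": "VARCHAR(40)",
    "string": "TEXT",
    "bytes32": "VARCHAR(100)",
}

def create_table_sql(event_class):
    cols, pkey = [], []
    for m in event_class["FieldMappings"]:
        cols.append(f'{m["ColumnName"]} {SQL_TYPES[m["Type"]]}')
        if m.get("Primary"):
            pkey.append(m["ColumnName"])
    pkey_sql = f', PRIMARY KEY ({", ".join(pkey)})' if pkey else ""
    return f'CREATE TABLE {event_class["TableName"]} ({", ".join(cols)}{pkey_sql});'

ddl = create_table_sql({
    "TableName": "UserAccounts",
    "Filter": "Log1Text = 'USERACCOUNTS'",
    "FieldMappings": [
        {"Field": "userAddress", "ColumnName": "address", "Type": "address", "Primary": True},
        {"Field": "userName", "ColumnName": "username", "Type": "string"},
    ],
})
print(ddl)
```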
Vent builds dictionary, log, and event database tables for the defined tables and columns, mapping the input types to the appropriate SQL types.

Database structures are created or altered on the fly based on the specifications (only adding new columns is supported).

ABI files can be generated from `.bin` files like so:

```bash
cat *.bin | jq '.Abi[] | select(.type == "event")' > events.abi
```

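The same extraction, shown in Python against a small inline stand-in for a compiled contract's JSON (the `Abi` key and the entry shape mirror the jq filter above; the event name is invented for the example):

```python
import json

# Select only the "event" entries from a compiled contract's JSON,
# equivalent to the jq filter `.Abi[] | select(.type == "event")`.
bin_contents = json.loads("""
{"Abi": [
  {"type": "constructor", "inputs": []},
  {"type": "event", "name": "UpdateTestEvent", "inputs": [{"name": "key", "type": "bytes32"}]}
]}
""")

events = [entry for entry in bin_contents["Abi"] if entry["type"] == "event"]
print([e["name"] for e in events])  # ['UpdateTestEvent']
```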
## Adapters:

Adapters are database implementations; Vent can store data in different RDBMS backends.

`sqldb/adapters` contains the list of supported adapters (there is also a README.md file in that folder that explains how to implement a new one).

### <a name="triggers"></a>Notification Triggers
Notification triggers are configured with the `Notify` array of a `FieldMapping`. In a supported database (currently only postgres) they allow you to specify a set of channels to notify when a column changes. Including a channel in `Notify` adds the column to the set of columns for which that channel receives a notification payload. For example, given the following spec:

```json
[
  {
    "TableName" : "UserAccounts",
    "Filter" : "Log1Text = 'USERACCOUNTS'",
    "FieldMappings"  : [
      {"Field": "userAddress", "ColumnName" : "address", "Type": "address", "Notify": ["user", "address"]},
      {"Field": "userName", "ColumnName" : "username", "Type": "string", "Notify":  ["user"]}
    ]
  }
]
```

Then Vent will record the mappings `user -> username, address` and `address -> address`, where the left-hand side is the notification channel and the right-hand side the columns included in the payload on that channel.

For each of these mappings a notification trigger function is defined and attached as a trigger on the table, to run after an insert, update, or delete. This function calls `pg_notify` (in the case of postgres, the only database for which we support notifications; this is non-standard and we may use a different mechanism in other databases if present). These notifications can be consumed by any client connected to the postgres database with `LISTEN <channel>;`, see the [Postgres NOTIFY documentation](https://www.postgresql.org/docs/11/sql-notify.html).

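The channel-to-columns mapping described above can be derived from the spec's `Notify` arrays like this (a sketch of the inversion, not Vent's internal representation):

```python
from collections import defaultdict

# FieldMappings from the UserAccounts example above, reduced to the
# parts relevant for notifications.
field_mappings = [
    {"ColumnName": "address", "Notify": ["user", "address"]},
    {"ColumnName": "username", "Notify": ["user"]},
]

# Invert column -> channels into channel -> columns.
channels = defaultdict(list)
for m in field_mappings:
    for channel in m.get("Notify", []):
        channels[channel].append(m["ColumnName"])

print(dict(channels))  # {'user': ['address', 'username'], 'address': ['address']}
```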
## Setup PostgreSQL Database with Docker:

```bash
# Create postgres container (only once):
docker run --name postgres-local -e POSTGRES_USER=user -e POSTGRES_PASSWORD=pass -e POSTGRES_DB=vent -p 5432:5432 -d postgres:10.4-alpine

# Start postgres container:
docker start postgres-local

# Stop postgres container:
docker stop postgres-local

# Delete postgres container:
docker container rm postgres-local
```

## Run Integration Tests:

```bash
# From the main repo folder:
make test_integration_vent
```

## Run Vent Command:

```bash
# Install vent command:
go install ./vent

# Print command help:
vent --help

# Run vent with the postgres adapter and spec & abi file paths, also storing block & tx data:
vent --db-adapter="postgres" --db-url="postgres://user:pass@localhost:5432/vent?sslmode=disable" --db-schema="vent" --grpc-addr="localhost:10997" --http-addr="0.0.0.0:8080" --log-level="debug" --spec-file="<sqlsol specification file path>" --abi-file="<abi file path>" --db-block=true

# Run vent with the sqlite adapter and spec & abi directory paths, without storing block & tx data:
vent --db-adapter="sqlite" --db-url="./vent.sqlite" --grpc-addr="localhost:10997" --http-addr="0.0.0.0:8080" --log-level="debug" --spec-dir="<sqlsol specification directory path>" --abi-dir="<abi files directory path>"
```

Configuration Flags:

+ `db-adapter`: (string) Database adapter; 'postgres' and 'sqlite' are fully supported
+ `db-url`: (string) PostgreSQL database URL or SQLite db file path
+ `db-schema`: (string) PostgreSQL database schema, or empty for SQLite
+ `http-addr`: (string) Address to bind the HTTP server to
+ `grpc-addr`: (string) Address of the Hyperledger Burrow gRPC server to listen to
+ `log-level`: (string) Logging level (error, warn, info, debug)
+ `spec-file`: (string) SQLSol specification JSON file (full path)
+ `spec-dir`: (string) Path of a folder in which to look for SQLSol JSON specification files
+ `abi-file`: (string) Event ABI specification file (full path)
+ `abi-dir`: (string) Path of a folder in which to look for event ABI specification files
+ `db-block`: (boolean) Create block & transaction tables and persist related data (true/false)

NOTES:

One of `spec-file` or `spec-dir` must be provided.
If `spec-dir` is given, Vent will search the given directory for all `.json` spec files.

Likewise, one of `abi-file` or `abi-dir` must be provided.
If `abi-dir` is given, Vent will search the given directory for all `.abi` files.

If `db-block` is set to true (block explorer mode), Block and Transaction tables are created in addition to the log and event tables, to store raw block & tx info.

To check that Vent is connected and ready, send a request to `http://<http-addr>/health`, which will return a `200 OK` response if everything is fine.