# Proposal: Binlog replication (syncer) unit plugin support

- Author(s): [wangxiang](https://github.com/WangXiangUSTC)
- Last updated: 2020-04-13

## Abstract

This proposal explains why we need to support plugins in the binlog replication (syncer) unit, and how to implement them.

## Background

Some users have customization requirements like the ones below, but they are not suitable to implement directly in DM. We can support plugins in DM, so that users can implement their own requirements as plugins.

### Replicating DDL that is incompatible with TiDB

DM is a tool used to replicate data from MySQL to TiDB. TiDB is compatible with MySQL in most cases, but some DDL statements are not supported in TiDB yet. For example, TiDB can't reduce a column's length. Suppose you execute these SQL statements in MySQL:

```SQL
CREATE DATABASE test;
CREATE TABLE test.t1(id int primary key, name varchar(100));
ALTER TABLE test.t1 MODIFY COLUMN name varchar(50);
```

DM will then replicate these statements to TiDB and get the error `Error 1105: unsupported modify column length 50 is less than origin 100`.

DM and TiDB can't handle such statements now; users need to execute a compatible DDL in TiDB manually and then skip the original DDL with DM's [binlog-event-filter](https://pingcap.com/docs/tidb-data-migration/stable/feature-overview/#binlog-event-filter). This is inconvenient for users and cannot be automated.

In fact, the incompatible DDL `ALTER TABLE test.t1 MODIFY COLUMN name varchar(50)` can be translated to:

```SQL
ALTER TABLE test.t1 ADD COLUMN name_tmp varchar(50) AFTER id;
REPLACE INTO test.t1(id, name_tmp) SELECT id, name AS name_tmp FROM test.t1;
ALTER TABLE test.t1 DROP COLUMN name;
ALTER TABLE test.t1 CHANGE COLUMN name_tmp name varchar(50);
```

Perhaps we can execute these statements automatically when we meet the `unsupported modify column` error.
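The four-statement rewrite above could be generated mechanically from the parts of the failed DDL. A minimal sketch of such a generator is below; the function name `rewriteModifyColumn` and its parameters are illustrative assumptions, not part of DM:

```go
package main

import "fmt"

// rewriteModifyColumn translates an unsupported column-shrinking
// `ALTER TABLE ... MODIFY COLUMN` into a sequence of statements TiDB
// can execute: add a temporary column, copy the data over, drop the
// old column, then rename the temporary column back.
// All parameter names are illustrative; pk is the column used to key
// the REPLACE and to position the temporary column.
func rewriteModifyColumn(schema, table, pk, column, newType string) []string {
	qualified := fmt.Sprintf("%s.%s", schema, table)
	tmp := column + "_tmp"
	return []string{
		fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s AFTER %s;", qualified, tmp, newType, pk),
		fmt.Sprintf("REPLACE INTO %s(%s, %s) SELECT %s, %s AS %s FROM %s;",
			qualified, pk, tmp, pk, column, tmp, qualified),
		fmt.Sprintf("ALTER TABLE %s DROP COLUMN %s;", qualified, column),
		fmt.Sprintf("ALTER TABLE %s CHANGE COLUMN %s %s %s;", qualified, tmp, column, newType),
	}
}

func main() {
	// Reproduces the rewrite for the example in this section.
	for _, s := range rewriteModifyColumn("test", "t1", "id", "name", "varchar(50)") {
		fmt.Println(s)
	}
}
```

Note that this sketch assumes the existing data already fits in the smaller type; a real implementation would also have to decide what to do when the copy step itself fails.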
### Double write

DM only supports replicating data to TiDB, but some users want to send the binlog to other platforms while replicating to TiDB.

For example, a user may want to send DDL binlog events to Kafka after the DDL is replicated to TiDB, and then read the binlog from Kafka to notify the business of schema changes.

## Implementation

### Interface

To handle DDL that is not supported in TiDB, or to support double write, we need to design at least three interfaces in the plugin.

#### Init

We can do some initialization work in the `Init` interface, for example, creating a connection to the downstream platform (like Kafka) for double write.

#### HandleDDLJobResult

This interface is used to handle a DDL job's result in the binlog replication (syncer) unit:

- When the DDL job fails, judge the error type by the error code or error message, then do something to resolve it.
- When the DDL job succeeds, send it to other platforms.

#### HandleDMLJobResult

This interface is used to handle a DML job's result in the binlog replication (syncer) unit:

- When the DML job succeeds, send it to other platforms.

### Hook

`Init` can be executed when [initializing](https://github.com/pingcap/dm/blob/9023c789964fde0f5134e0c49435db557e21fdf7/syncer/syncer.go#L257) the binlog replication unit.

`HandleDDLJobResult` handles the result of [handleQueryEvent](https://github.com/pingcap/dm/blob/9023c789964fde0f5134e0c49435db557e21fdf7/syncer/syncer.go#L1279) and then does something with it.

`HandleDMLJobResult` handles the result of [handleRowsEvent](https://github.com/pingcap/dm/blob/9023c789964fde0f5134e0c49435db557e21fdf7/syncer/syncer.go#L1274) and then does something with it.

## How to use

1. The user implements the interfaces designed above in a plugin.
2. Build the Go file in plugin mode and generate a `.so` file.
3. Set the plugin in the task's config file.
4. The binlog replication unit loads the plugin.