## Abstract

Study existing articles about consistency and isolation.

## Articles

In the order of importance:

[CRIT] [A Critique of ANSI SQL Isolation Levels, Jun 1995, Microsoft Research](https://arxiv.org/ftp/cs/papers/0701/0701157.pdf)
- Scientific paper by Microsoft, Sybase, UMass

[COSMOS] [Microsoft, Consistency levels in Azure Cosmos DB, 2022](https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels)
- microsoft.com
- Very cool set of consistency levels

[YB] [D. Yadav, M. Butler, Rigorous Design of Fault-Tolerant Transactions for Replicated Database Systems using Event B, School of Electronics and Computer Science, University of Southampton](https://eprints.soton.ac.uk/262096/1/reft.pdf)
- > (!!!) The One Copy Serializability [7] is the highest correctness criterion for replica control protocols. It is achieved by coupling consistency criteria of one copy equivalence and providing serializable execution of transactions. In order to achieve this correctness criterion, it is required that interleaved execution of transactions on replicas be equivalent to serial execution of those transactions on one copy of a database.

[ABACAS] [D. Abadi, Correctness Anomalies Under Serializable Isolation, blogspot.com, June 2019](https://dbmsmusings.blogspot.com/2019/06/correctness-anomalies-under.html)
- Daniel Abadi is the Darnell-Kanal Professor of Computer Science at the University of Maryland, College Park.
- (!!!) A third class of systems are "strong partition serializable" systems that guarantee strict serializability only on a per-partition basis. Data is divided into a number of disjoint partitions. Within each partition, transactions that access data within that partition are guaranteed to be strictly serializable.

[ABAISO] [D. Abadi, Introduction to Transaction Isolation Levels, blogspot.com, May 2019](http://dbmsmusings.blogspot.com/2019/05/introduction-to-transaction-isolation.html)

[BD] [Ben Darnell, How to Talk about Consistency and Isolation in Distributed DBs, cockroachlabs.com, Feb 11, 2022](https://www.cockroachlabs.com/blog/db-consistency-isolation-terminology/)
- cockroachlabs.com

[SEADIC] [Difference of Isolation and Consistency](https://seanhu93.medium.com/difference-of-isolation-and-consistency-cc9ddbfb88e0)
- seanhu93.medium.com
- Refers to dbmsmusings.blogspot.com

[SEARISO] [Revisit Database Isolation](https://seanhu93.medium.com/revisit-database-isolation-863b3ca06f5f)
- seanhu93.medium.com
- (!!!) Nice pics
- Lost Update, Dirty Writes, Dirty Reads, Non-Repeatable Reads, Phantom Reads, Write Skew
- "I found the original blog from Prof. Daniel Abadi when I was googling some data consistency problems in distributed systems. It was astonishing to find such a great blog"

[SIT] [Ivan Prisyazhnyy, Transaction isolation anomalies, github.io, Jul 2019](https://sitano.github.io/theory/databases/2019/07/30/tx-isolation-anomalies/)
- sitano.github.io
- Strong math style

[HABR] [What relaxing the transaction isolation level in databases can lead to](https://habr.com/ru/company/otus/blog/501294)
- habr.com
- (!!!) In Russian

[FAU] [Demystifying Database Systems, Part 4: Isolation levels vs. Consistency levels](https://fauna.com/blog/demystifying-database-systems-part-4-isolation-levels-vs-consistency-levels)
- fauna.com/blog
- ???


## [ABAISO] D. Abadi, Introduction to Transaction Isolation Levels, blogspot.com, May 2019

http://dbmsmusings.blogspot.com/2019/05/introduction-to-transaction-isolation.html

> **Database isolation** refers to the ability of a database to allow a transaction to execute as if there are no other concurrently running transactions (even though in reality there can be a large number of concurrently running transactions). The overarching goal is to prevent reads and writes of temporary, aborted, or otherwise incorrect data written by concurrent transactions.

> The key point for our purposes is that we are defining **“perfect isolation”** as the ability of a system to run transactions in parallel, but in a way that is equivalent to as if they were running one after the other. In the SQL standard, this perfect isolation level is called **serializability**.

### Anomalies in Concurrent Systems

- lost-update anomaly
- dirty-write anomaly
- dirty-read anomaly
- non-repeatable read anomaly
- phantom read anomaly
- write skew anomaly

### Definitions in The ISO SQL Standard

> There are many, many problems with how the SQL standard defines these isolation levels. Most of these problems were already pointed out in 1995, but inexplicably, revision after revision of the SQL standard has been released since that point without fixing these problems.

> A second (related) problem is that using anomalies to define isolation levels only gives the end user a guarantee of what specific types of concurrency bugs are impossible. It does not give a precise definition of the potential database states that are viewable by any particular transaction.

> A third problem is that the standard does not define, nor provide correctness constraints on, one of the most popular reduced isolation levels used in practice: snapshot isolation.

> A fourth problem is that the SQL standard seemingly gives two different definitions of the SERIALIZABLE isolation level. First, it defines SERIALIZABLE correctly: the final result must be equivalent to a result that could have occurred if there were no concurrency. But then it presents the above table, which seems to imply that as long as an isolation level does not allow dirty reads, non-repeatable reads, or phantom reads, it may be called SERIALIZABLE.

## [ABACAS] D. Abadi, Correctness Anomalies Under Serializable Isolation, blogspot.com, June 2019

- https://dbmsmusings.blogspot.com/2019/06/correctness-anomalies-under.html
- https://fauna.com/blog/demystifying-database-systems-correctness-anomalies-under-serializable-isolation

> (!!!) As long as particular transaction code is correct in the sense that if nothing else is running at the same time, the transaction will take the current database state from one correct state to another correct state (where “correct” is defined as not violating any semantics of an application), then serializable isolation will guarantee that the presence of concurrently running transactions will not cause any kind of race conditions that could allow the database to get to an incorrect state.

> In the good old days of having a “database server” which is running on a single physical machine, serializable isolation was indeed sufficient, and database vendors never attempted to sell database software with stronger correctness guarantees than SERIALIZABLE. However, **as distributed and replicated database systems have started** to proliferate in the last few decades, anomalies and bugs have started to appear in applications even when running over a database system that guarantees serializable isolation. As a consequence, database system vendors started to release systems with **stronger correctness guarantees than serializable isolation**, which promise a lack of vulnerability to these newer anomalies. In this post, we will discuss several well known **bugs and anomalies in serializable distributed database systems**, and modern correctness guarantees that ensure avoidance of these anomalies.

### What does “serializable” mean in a distributed/replicated system?

Rony Attar, Phil Bernstein, and Nathan Goodman expanded the concept of serializability in 1984 to define correctness in the context of replicated systems. The basic idea is that all the replicas of a data item behave like a single logical data item. When we say that a concurrent execution of transactions is “equivalent to processing them in a particular serial order”, this implies that whenever a data item is read, the value returned will be the most recent write to that data item by a previous transaction in the (equivalent) serial order --- no matter which copy was written by that write. In this context “most recent write” means the write by the closest (previous) transaction in that serial order. In our example above, either the withdrawal in Europe or the withdrawal in the US will be ordered first in the equivalent serial order. Whichever transaction is second --- when it reads the balance --- it must read the value written by the first transaction. Attar et al. named this guarantee “one copy serializability” or “1SR”, because the isolation guarantee is equivalent to serializability in an unreplicated system with “one copy” of every data item.

NB: See also [YB]

The next few sections describe some forms of time-travel anomalies that occur in distributed and/or replicated systems, and the types of application bugs that they may cause.
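The one-copy guarantee can be sketched in a few lines: all physical copies behave as a single logical data item, so whichever withdrawal is second in the equivalent serial order must observe the first one's write no matter which copy it reads. A toy Python illustration (the two-replica model, class, and function names are assumptions for illustration, not from the article):

```python
# Toy sketch of one-copy serializability (1SR): all physical copies of a
# data item behave as one logical item. Illustrative only.

class ReplicatedItem:
    """One logical data item stored as several physical copies."""
    def __init__(self, value, n_replicas=2):
        self.copies = [value] * n_replicas

    def write(self, value):
        # 1SR requires a write to be logically applied to every copy,
        # so a later read sees it no matter which copy is consulted.
        for i in range(len(self.copies)):
            self.copies[i] = value

    def read(self, replica):
        return self.copies[replica]

def withdraw(item, replica, amount):
    # Each transaction reads from "its" replica and writes the new balance.
    current = item.read(replica)
    item.write(current - amount)
    return current

balance = ReplicatedItem(100)
# Equivalent serial order: the US withdrawal first, then the European one.
seen_by_us = withdraw(balance, replica=0, amount=30)  # reads 100
seen_by_eu = withdraw(balance, replica=1, amount=30)  # must read 70, the
                                                      # "most recent write"
assert seen_by_eu == 70
assert balance.read(0) == balance.read(1) == 40
```

If the second withdrawal had been allowed to read a copy still holding 100, the replicas would diverge and the execution would no longer be equivalent to any serial order over one copy.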
### The immortal write

Anomaly:
- History: w1[x=Daniel]...c1...w2[x=Danny]...c2...w3[x=Danger]...c3
- Equivalent serial order: w1[x=Daniel]...w3[x=Danger]...w2[x=Danny]
- w3 goes back in time (time-travel, anachronism)

Notes:
- Can be caused by async replication AND the unsynchronized-clock problem
- The system can also decide to do this for other reasons, since it does not violate the serializability guarantee
- Side note: when the “Danny” transaction and/or the other name-change transactions also perform a read of the database as part of the same transaction as the write to the name, the ability to time-travel without violating serializability becomes much more difficult. But for “blind write” transactions such as these examples, time-travel is easy to accomplish.

### The stale read

Anomaly:
- History: w1[x=50]...c1...w2[x=0]...c2...r3...c3
- Equivalent serial order: w1[x=50]...r3[x=50]...w2[x=0]
- r3 goes back in time (time-travel)

Reasons:
1. Async replication (distributed)
2. Unsynchronized-clock problem (distributed)
3. Projection update delay (single node)
4. The system can decide to do this for other reasons

### The causal reverse

Anomaly (exchange x and y):
- History: [x=1000000, y=0]...r1[x, y]...c1...w2[x=0]...c2...w3[y=1000000]...c3
- Equivalent serial order: w3[y=1000000]...r1[x=1000000, y=1000000]...w2[x=0]

"Real-life" scenario:
- User has 1000000 on accountx and 0 on accounty
- User gets 1000000 cash from accountx
- User puts 1000000 cash into accounty
- This enables a read (in CockroachDB’s case, a read that has to be sent to the system before the two write transactions) to potentially see the write of the later transaction, but not the earlier one

One example of a distributed database system that allows the causal reverse is CockroachDB (aka CRDB):
- CockroachDB partitions a database such that each partition commits writes and synchronously replicates data separately from other partitions
- Each write receives a timestamp based on the local clock on one of the servers within that partition
- In general, it is impossible to perfectly synchronize clocks across a large number of machines, so CockroachDB allows a maximum clock skew within which clocks across a deployment can differ
- It is possible in CockroachDB for a transaction to commit, and a later transaction to come along (writing data to a different partition) that was caused by the earlier one (it started after the earlier one finished), and still receive an earlier timestamp than the earlier transaction
- This enables a read (in CockroachDB’s case, a read that has to be sent to the system before the two write transactions) to potentially see the write of the later transaction, but not the earlier one

### Avoiding time travel anomalies

> In distributed and replicated database systems, this additional guarantee of “no time travel” on top of the other serializability guarantees is non-trivial, but has nonetheless been accomplished by several systems such as FaunaDB/Calvin, FoundationDB, and Spanner. This high level of correctness is called **strict serializability**.

### Classification of serializable systems

**Strong session serializable** systems guarantee strict serializability of transactions within the same session, but otherwise only one-copy serializability
- Implementation example: "sticky sessions", all requests routed to the same node

**Strong write serializable** systems guarantee strict serializability for all transactions that insert or update data, but only one-copy serializability for read-only transactions
- Implementation example: read-only-replica systems where all update transactions go to the master replica, which processes them with strict serializability

**Strong partition serializable** systems guarantee strict serializability only on a per-partition basis
- Data is divided into a number of disjoint partitions
- Within each partition, transactions that access data within that partition are guaranteed to be strictly serializable
- (!!!) But otherwise, the system only guarantees one-copy serializability

|System Guarantee|Dirty read|Non-repeatable read|Phantom Read|Write Skew|Immortal write|Stale read|Causal reverse|
|--- |--- |--- |--- |--- |--- |--- |--- |
|READ UNCOMMITTED|Possible|Possible|Possible|Possible|Possible|Possible|Possible|
|READ COMMITTED|-|Possible|Possible|Possible|Possible|Possible|Possible|
|REPEATABLE READ|-|-|Possible|Possible|Possible|Possible|Possible|
|SNAPSHOT ISOLATION|-|-|-|Possible|Possible|Possible|Possible|
|SERIALIZABLE / ONE COPY SERIALIZABLE / STRONG SESSION SERIALIZABLE|-|-|-|-|Possible|Possible|Possible|
|STRONG WRITE SERIALIZABLE|-|-|-|-|-|**Possible**|-|
|STRONG PARTITION SERIALIZABLE|-|-|-|-|-|-|**Possible**|
|STRICT SERIALIZABLE|-|-|-|-|-|-|-|


## [CRIT] A Critique of ANSI SQL Isolation Levels

- https://arxiv.org/ftp/cs/papers/0701/0701157.pdf

### 2. Isolation Definitions

The ANSI SQL isolation designers sought a definition that would admit many different implementations, not just locking. They defined isolation with the following three phenomena:

- **P1 (Dirty Read)**: Transaction T1 modifies a data item. Another transaction T2 then reads that data item before T1 performs a COMMIT or ROLLBACK. If T1 then performs a ROLLBACK, T2 has read a data item that was never committed and so never really existed.
- **P2 (Non-repeatable or Fuzzy Read)**: Transaction T1 reads a data item. Another transaction T2 then modifies or deletes that data item and commits. If T1 then attempts to reread the data item, it receives a modified value or discovers that the data item has been deleted.
- **P3 (Phantom)**: Transaction T1 reads a set of data items satisfying some `<search condition>`. Transaction T2 then creates data items that satisfy T1’s `<search condition>` and commits. If T1 then repeats its read with the same `<search condition>`, it gets a set of data items different from the first read.
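These English-language definitions can be checked mechanically against a concrete execution. A toy Python sketch for P1 (the `w1[x]`/`r2[x]`/`c1` token format follows the paper's shorthand for histories; the checker itself is an illustration, not the paper's formalism):

```python
# Toy checker for the dirty-read phenomenon P1: a history is a string of
# operations like "w1[x]", "r2[x]", "c1", "a1". Illustrative sketch only.
import re

def has_p1(history):
    """P1 (broad interpretation): some T2 reads x after T1 wrote x and
    before T1 committed or aborted."""
    pending_writes = set()   # (txn, item) written by a still-live txn
    ended = set()            # transactions that committed or aborted
    for op in history.split():
        m = re.fullmatch(r"([rw])(\d+)\[(\w+)\]", op)
        if m:
            kind, txn, item = m.groups()
            if kind == "w":
                pending_writes.add((txn, item))
            else:  # a read: dirty if another live txn wrote this item
                for (t, i) in pending_writes:
                    if i == item and t != txn and t not in ended:
                        return True
        else:
            m = re.fullmatch(r"[ca](\d+)", op)
            if m:
                ended.add(m.group(1))
    return False

# T2 reads x while T1's write is still uncommitted -> P1 occurs:
assert has_p1("w1[x] r2[x] c1 c2")
# T2 reads x only after T1 committed -> no dirty read:
assert not has_p1("w1[x] c1 r2[x] c2")
```

The same pattern-over-histories idea extends to P2 and P3 by tracking reads of items and of predicates, respectively.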
> Histories consisting of reads, writes, commits, and aborts can be written in a shorthand notation: “w1[x]” means a write by transaction 1 on data item x (which is how a data item is “modified”), and “r2[x]” represents a read of x by transaction 2. Transaction 1 reading and writing a set of records satisfying predicate P is denoted by r1[P] and w1[P] respectively. Transaction 1’s commit and abort (ROLLBACK) are written “c1” and “a1”, respectively.

Phenomenon P1 might be restated as disallowing the following scenario:

(2.1) w1[x]...r2[x]...(a1 and c2 in either order)

Some people reading P1 interpret it to mean:

(2.2) w1[x]...r2[x]...((c1 or a1) and (c2 or a2) in any order)

Interpretation (2.2) specifies a **phenomenon** that might lead to an **anomaly**, while (2.1) specifies an actual anomaly. Denote them as P1 and A1 respectively. Thus:

- P1: w1[x]...r2[x]...((c1 or a1) and (c2 or a2) in any order)
- A1: w1[x]...r2[x]...(a1 and c2 in any order)

Similarly, the English-language phenomena P2 and P3 have strict and broad interpretations, denoted P2 and P3 for broad, and A2 and A3 for strict:

- P2: r1[x]...w2[x]...((c1 or a1) and (c2 or a2) in any order)
- A2: r1[x]...w2[x]...c2...r1[x]...c1
- P3: r1[P]...w2[y in P]...((c1 or a1) and (c2 or a2) in any order)
- A3: r1[P]...w2[y in P]...c2...r1[P]...c1

> The fundamental serialization theorem is that well-formed two-phase locking guarantees serializability — each history arising under two-phase locking is equivalent to some serial history.

### 3. Analyzing ANSI SQL Isolation Levels

- **P0 (Dirty Write)**: w1[x]...w2[x]...((c1 or a1) and (c2 or a2) in any order)

> ANSI SQL isolation should be modified to require P0 for all isolation levels

We now argue that a broad interpretation of the three ANSI phenomena is required.

By Table 1, histories under READ COMMITTED isolation forbid anomaly A1, REPEATABLE READ isolation forbids anomalies A1 and A2, and SERIALIZABLE isolation forbids anomalies A1, A2, and A3. Consider history H1, involving a $40 transfer between bank balance rows x and y:

`H1: r1[x=50]w1[x=10]r2[x=10]r2[y=50]c2 r1[y=50]w1[y=90]c1`
- T2 gets a value (x=10) which never existed in a committed state
- H1 is non-serializable
- None of A1, A2, A3 happen in H1

Consider instead taking the broad interpretation of A1, the phenomenon P1:

`P1: w1[x]...r2[x]...((c1 or a1) and (c2 or a2) in any order)`

H1 indeed violates P1. Thus, we should take the interpretation P1 for what was intended by ANSI, rather than A1.

- **P0 (Dirty Write)**: w1[x]...w2[x]...(c1 or a1)
- **P1 (Dirty Read)**: w1[x]...r2[x]...(c1 or a1)
  - Prevented by READ COMMITTED
- **P2 (Fuzzy or Non-Repeatable Read)**: r1[x]...w2[x]...(c1 or a1)
  - Prevented by REPEATABLE READ
- **P3 (Phantom)**: r1[P]...w2[y in P]...(c1 or a1)
  - Prevented by SERIALIZABLE

### 4. Other Isolation Types

#### 4.1 Cursor Stability

Cursor Stability is designed to prevent the lost update phenomenon.

- **P4 (Lost Update)**: r1[x]...w2[x]...w1[x]...c1
  - Prevented by REPEATABLE READ
- **P4C (Cursor Lost Update)**: rc1[x]...w2[x]...w1[x]...c1
  - Short lock
  - Prevented by CURSOR STABILITY

READ COMMITTED << Cursor Stability << REPEATABLE READ
- `<<` means "weaker than"

#### 4.2 Snapshot Isolation

> A transaction running in Snapshot Isolation is never blocked attempting a read as long as the snapshot data from its Start-Timestamp can be maintained. The transaction's writes (updates, inserts, and deletes) will also be reflected in this snapshot, to be read again if the transaction accesses (i.e., reads or updates) the data a second time. Updates by other transactions active after the transaction Start-Timestamp are invisible to the transaction.

Constraint violation is a generic and important type of concurrency anomaly.

- **A5A (Read Skew)**: r1[x]...w2[x]...w2[y]...c2...r1[y]...(c1 or a1)
  - Inconsistent pair x, y read by T1
- **A5B (Write Skew)**: r1[x]...r2[y]...w1[y]...w2[x]...(c1 and c2 occur)
  - Inconsistent pair x, y written by T1 and T2

### 5. Summary and Conclusions

> ANSI’s choice of the term Repeatable Read is doubly unfortunate: (1) repeatable reads do not give repeatable results, and (2) the industry had already used the term to mean exactly that: repeatable reads mean serializable in several products. We recommend that another term be found for this.


## `hdm-stuttgart.de`: Isolation and Consistency in Databases

- https://blog.mi.hdm-stuttgart.de/index.php/2020/03/06/isolation-and-consistency-in-databases/
- In fact this is a brief paraphrase of the https://dbmsmusings.blogspot.com articles (see the article's References section)

### Difference between isolation levels and consistency levels

(!!!) Consistency levels were originally designed for single-operation actions (a read or a write, not a transaction, which might combine both atomically)

### Isolation Level

- **Isolation** is the separation of individual transactions within the database so that they can be processed in parallel. This prevents the temporary values of one transaction from being used in another transaction
- **Perfect isolation** means that we can process transactions in parallel, but the result is the same as the result of a series of transactions that have happened in sequence
- In the SQL standard this behavior is known as **Serializability**

### Anomalies in Concurrent Systems

Various anomalies can occur in databases where transactions are processed in parallel and do not have serializability as the isolation level.

#### lost-update anomaly
- Problem (T2 overwrites T1 data)
  - T1 and T2 read V
  - T1 increments V and commits
  - T2 increments V and commits
  - Result: 1 increment of V is lost
- Solution: ReadLock

```javascript
T1               T2               V
start                             0
                 start            0
read v = V       read v = V       0
write V = v + 1                   1
commit                            1
                 write V = v + 1  1
                 commit           1 (expected 2)
```

#### dirty-write anomaly
- Problem (T1's rollback overwrites dirty T2 data)
  - T1 writes V
  - T2 writes V
  - T1 aborts, restoring the value it saw before its write
  - T2 commits
  - Result: T2 loses its write
- Solution: WriteLock

```javascript
T1           T2           V
start                     0
             start        0
write V = 1               1
             write V = 2  2
abort                     0
             commit       0 (expected 2)
```

#### dirty-read anomaly
- Problem (read of a value which will be aborted)
  - T1 writes V
  - T2 reads V
  - T1 aborts
  - Result: T2 has a dirty V
- Solution
  - WriteLock, but it is expensive in performance terms
  - MVCC, cheap in performance terms but tricky

#### non-repeatable read anomaly

- Problem (a second read of the same value gives a different result)
  - T1 reads V.v1
  - T2 writes V.v2 and commits
  - T1 reads V.v2
  - Result: T1 reads two different versions of V
- Solution: ReadLock, MVCC

#### phantom read anomaly

- Problem (a second read of the same view gives new items)
  - T1 calculates max(V: [V1, V2, V3])
  - T2 writes V4
  - T1 calculates average(V: [V1, V2, V3, V4])
  - Result: max(V) can be less than average(V)
- Solution: no easy solution, MVCC is expensive

#### write skew anomaly
- Problem (transactions change values which are used in their preconditions)
  - V1 and V2 are both zero
  - Precondition: at least one must be zero
  - T1 and T2 start
  - T1 checks the precondition - ok
  - T2 checks the precondition - ok
  - T1 changes V1 to one
  - T2 changes V2 to one
  - Result: both V1 and V2 are non-zero, which is a forbidden state
- Solution: ReadLock

#### read skew anomaly
- From [habr.com](https://habr.com/ru/company/otus/blog/501294/)
- Problem (T1 reads inconsistent versions of values V1.v1, V2.v2)
  - Initially V1.v1, V2.v1
  - T1 reads V1.v1
  - T2 changes V2 to v2
  - T1 reads V2.v2
- Solution: MVCC

### Time-travel anomalies under serializability in distributed databases

#### immortal write

- Problem (two writes to the same value are processed by different database servers; in the global order one transaction has travelled back in time)
  - User updates username Hans -> Peeter
  - User updates username Hans -> Peter
  - User sees Peeter
- Caused by unsynchronized clocks

#### stale read

- Problem (data comes from a database server that is not yet synchronized with the other database servers)
  - User has $2000 in his account
  - User transfers $1000
  - User still sees $2000 in his account
- (!!!) Stale reads do not violate serializability. The system is simply time-travelling the read transaction to a point in time in the equivalent serial order of transactions before the new writes to this data item occur.
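The stale-read scenario above is easy to reproduce with a toy model of asynchronous replication: the primary applies the write immediately, while the replica only sees it once the replication log has been applied. A minimal Python sketch (the class and method names are illustrative assumptions):

```python
# Minimal sketch of a stale read caused by asynchronous replication:
# the primary applies a write immediately, the replica applies it later.

class AsyncReplicatedDB:
    def __init__(self, balance):
        self.primary = {"balance": balance}
        self.replica = {"balance": balance}
        self.replication_log = []   # writes not yet shipped to the replica

    def write(self, key, value):
        self.primary[key] = value
        self.replication_log.append((key, value))  # shipped asynchronously

    def read_from_replica(self, key):
        return self.replica[key]

    def replicate(self):
        # background process applying the log to the replica
        for key, value in self.replication_log:
            self.replica[key] = value
        self.replication_log.clear()

db = AsyncReplicatedDB(balance=2000)
db.write("balance", 1000)              # user transfers $1000
stale = db.read_from_replica("balance")
assert stale == 2000                   # user still sees $2000: a stale read
db.replicate()
assert db.read_from_replica("balance") == 1000
```

The read is "time-travelled" to a point in the equivalent serial order before the transfer, which is exactly why the stale read does not violate serializability.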
#### causal reverse

- Problem (two writes of two values are processed by different database servers; in the global order one transaction has travelled back in time)
  - User has 1000 on account1 and 0 on account2
  - User gets 1000 cash from account1
  - User puts 1000 cash into account2
  - This enables a read (in CockroachDB’s case, a read that has to be sent to the system before the two write transactions) to potentially see the write of the later transaction, but not the earlier one
- https://dbmsmusings.blogspot.com/2019/06/correctness-anomalies-under.html

## See also

https://rsdn.org/forum/dictionary/1087023.all

The strongest correctness criterion for a replicated system is 1-copy-serializability: despite the existence of multiple copies, an object appears as one logical copy (1-copy-equivalence) and the execution of concurrent transactions is coordinated so that it is equivalent to a serial execution over the logical copy (serializability).
## Formatting experiments

| T1     | T2     | V |
| ------ | ------ | - |
| start  | start  | 0 |
| V = 1  |        | 1 |
|        | V = 2  | 2 |
| abort  |        | 0 |
|        | commit | 0 |

```mermaid
flowchart TD
    subgraph T1
        T1_start[start]
        T1_write_V[write V = 1]
        T1_abort[abort]
    end
    subgraph T2
        T2_start[start]
        T2_write_V[write V = 2]
        T2_commit[commit]
    end

    T1_start-->T2_start
    T2_start-->T1_write_V
    T1_write_V-->T2_write_V
    T2_write_V-->T1_abort
    T1_abort-->T2_commit
```