Replay Resistant State
----------------------

The current Tao interface does not offer support for protecting state against
replay attacks or building any storage facilities that do so. That is, Tao
itself doesn't have any storage API, and none of the other existing APIs would
be useful for building one.

Simple example:

Suppose we want to build a CA that hands out X509 certificates for https. The
CA needs to keep track of the serial numbers it has handed out so far. It can
store the next serial number in a file, and it can sign or encrypt that file
using Tao-protected keys, but there is still no guarantee that the file is
fresh.

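To make the freshness gap concrete, here is a Go sketch. The seal/unseal
placeholders stand in for a Tao-backed sealing API; none of the names here are
from the actual interface:

  package main

  // Sketch: sealing protects secrecy and integrity of the serial-number
  // state, but not freshness. seal and unseal are placeholders assumed
  // to encrypt+MAC and verify+decrypt with Tao-protected keys.

  import (
      "fmt"
      "os"
      "strconv"
  )

  func seal(data []byte) []byte   { return data } // placeholder
  func unseal(blob []byte) []byte { return blob } // placeholder

  func saveNextSerial(n uint64) error {
      return os.WriteFile("serial.sealed",
          seal([]byte(strconv.FormatUint(n, 10))), 0600)
  }

  func loadNextSerial() (uint64, error) {
      blob, err := os.ReadFile("serial.sealed")
      if err != nil {
          return 0, err
      }
      return strconv.ParseUint(string(unseal(blob)), 10, 64)
  }

  func main() {
      saveNextSerial(42)
      // An attacker who kept an older copy of serial.sealed can restore
      // it here; unseal still succeeds, and the CA would happily re-issue
      // serial numbers from the stale state.
      n, _ := loadNextSerial()
      fmt.Println("next serial:", n)
  }
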
Prior Work
----------

TPM offers monotonic counters that can possibly help with replay resistance.
But these are limited: only a few exist (four or so), and only one can be
"active" during any boot cycle.

Some prior work looks at using Merkle hash trees and the TPM counters to
bootstrap many more counters. This works without needing any trusted OS or
trusted software beyond the TPM itself---essentially, the TPM is used as a
trusted log, with the TPM counter providing replay resistance for the log.

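As a rough illustration of that approach, the Go sketch below multiplexes many
virtual counters over a single trusted root hash. Everything here is
illustrative: the counter table lives in ordinary untrusted storage, only the
root would need to sit in TPM NV storage, and a flat hash stands in for a real
Merkle tree (which would make updates and proofs O(log n)) to keep the sketch
short:

  package main

  import (
      "crypto/sha256"
      "encoding/binary"
      "fmt"
      "sort"
  )

  // VirtualCounters multiplexes many counters over one trusted root hash.
  // In a real system the root would live in TPM NV storage (or be covered
  // by the single TPM monotonic counter); here it is just a field.
  type VirtualCounters struct {
      counters map[string]uint64 // kept in ordinary, untrusted storage
      root     [32]byte          // the only value needing replay resistance
  }

  // computeRoot hashes all (name, value) pairs in a fixed order.
  func computeRoot(counters map[string]uint64) [32]byte {
      names := make([]string, 0, len(counters))
      for n := range counters {
          names = append(names, n)
      }
      sort.Strings(names)
      h := sha256.New()
      for _, n := range names {
          var buf [8]byte
          binary.BigEndian.PutUint64(buf[:], counters[n])
          h.Write([]byte(n))
          h.Write(buf[:])
      }
      var root [32]byte
      copy(root[:], h.Sum(nil))
      return root
  }

  // Increment bumps one virtual counter and refreshes the trusted root.
  func (v *VirtualCounters) Increment(name string) uint64 {
      v.counters[name]++
      v.root = computeRoot(v.counters) // must be atomic w.r.t. crashes
      return v.counters[name]
  }

  // Verify checks the untrusted table against the trusted root, detecting
  // any rollback of the table.
  func (v *VirtualCounters) Verify() bool {
      return computeRoot(v.counters) == v.root
  }

  func main() {
      v := &VirtualCounters{counters: map[string]uint64{"serialnum": 0}}
      v.root = computeRoot(v.counters)
      v.Increment("serialnum")
      fmt.Println("table matches root:", v.Verify())
  }
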
Other related work? To do...

Replay resistance in Tao
------------------------

Option 0: Do nothing. Assume there is a trusted replay-resistant mechanism
elsewhere.

Option 1: Implement hash-tree work outside Tao. The TPM implements the counter
and small NV storage at the base. Some storage service on top of that,
independent of Tao, implements the hash-tree approach. Applications at any
layer of the Tao stack would talk to that same storage service. The Tao API is
left unchanged.

Option 2: Implement hash-tree work inside Tao at a single level. The TPM
implements the counter and small NV storage at the base. The first Tao host
above that implements the hash-tree approach and exposes that interface to each
hosted program. Subsequent stacked Tao hosts would re-expose the same
interface. It's not clear what each level would provide beyond the first
level, though. Perhaps each higher level would do authorization checks specific
to that level while just passing the operation down when the auth check
succeeds. Or perhaps it would just forward all calls downward and let
authorization happen at the first level above the TPM. The Tao API would include
interfaces for creating, managing, and manipulating counters.

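One plausible shape for such a counter interface, sketched in Go (the names
are illustrative, not part of the existing Tao API):

  package tao

  // CounterAPI is a hypothetical extension to the Tao interface for
  // Option 2. Counter names are scoped to the calling hosted program, so
  // distinct programs cannot see or bump each other's counters.
  type CounterAPI interface {
      // CreateCounter creates a named counter starting at zero.
      CreateCounter(name string) error

      // ReadCounter returns the current value of a counter.
      ReadCounter(name string) (uint64, error)

      // IncrementCounter atomically bumps a counter, returning the new value.
      IncrementCounter(name string) (uint64, error)

      // DeleteCounter releases a counter and its backing resources.
      DeleteCounter(name string) error
  }
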
Option 3: Provide support at every Tao level for hash-tree or other approaches.
Every Tao host, including the TPM, implements a set of counters and some NV
storage. Presumably, every hosted program would get one dedicated counter and a
small amount of NV storage. If a hosted program needs more than what the host
Tao provides, then the hosted program can use a hash-tree approach or any other
similar approach internally. In particular, a hosted Tao would presumably need
to use hash-trees or something similar to multiplex the counter provided by
its own host Tao. The Tao API would include a few simple calls for hosted
programs to access the limited counters provided by the host Tao.

Option 4: Provide support for hash-trees at every level (except TPM). The TPM
provides one counter and some NV storage. Every other Tao level provides a
higher-level API for creating, managing, and manipulating counters and/or NV
storage. A hosted application might use these counters directly, or just use a
small number of them combined with something like the hash-tree approach.
Similarly, a hosted Tao could either pass calls from its own hosted programs
down to the underlying Tao (a la option 2), or the hosted Tao could locally
implement a hash-tree approach using just one or two counters from the
underlying Tao (a la option 3).

Option 5: Implement replay-resistant storage without counters. Don't use
counters at all; rely instead on policies to control access to storage. Each
Tao provides:
  void Put(policy, name, data) // creates new slot containing data
  data = Get(policy, name) // get data from previously created slot
  void Set(policy, name, data) // overwrite slot with new data
  void Delete(policy, name) // delete slot
Note that policy is used in two ways: it defines a namespace to avoid
unintentional collisions for the name parameter; and it governs access to the
data. Each Tao level might have resources for storing only a few pieces of data,
or for storing only small data. Hosted programs can avoid large data by storing
the actual data elsewhere (e.g. in encrypted but replay-susceptible storage) and
storing only hashes in the Tao. Hosted programs can avoid using too many slots
by merging multiple data items into a single slot.

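A Go sketch of what this interface might look like (the Policy type and the
error returns are assumptions, not part of the current Tao API):

  package tao

  // Policy names the principals allowed to read and write a slot. Its
  // concrete form (e.g. an authorization-logic formula) is left open here.
  type Policy string

  // ReplayResistantStore is a hypothetical Option 5 interface. The policy
  // argument both scopes the name (avoiding collisions between programs)
  // and gates access to the slot's contents.
  type ReplayResistantStore interface {
      // Put creates a new slot containing data; it fails if the slot exists.
      Put(policy Policy, name string, data []byte) error

      // Get returns the data from a previously created slot.
      Get(policy Policy, name string) ([]byte, error)

      // Set overwrites an existing slot with new data.
      Set(policy Policy, name string, data []byte) error

      // Delete removes a slot.
      Delete(policy Policy, name string) error
  }
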
Comments: Counters vs Non-volatile storage
------------------------------------------

Monotonic counters are a nice primitive, but they are not necessary if
replay-resistant non-volatile storage with fine-grained authorization is
available. Since Tao can authenticate and authorize at the level of individual
hosted programs (or smaller), a hosted program can be sure that no other entity
rolled back changes to its data, since no other entity has access to the data.
Or, at least, that such changes would be detected.

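For instance, a monotonic counter is easy to simulate on top of such storage.
A minimal sketch, reusing the hypothetical ReplayResistantStore above:

  package tao

  import "encoding/binary"

  // IncrementVia simulates a monotonic counter on top of the hypothetical
  // ReplayResistantStore: read the old value, write old+1. Because only
  // this program's policy can touch the slot, no other entity can roll
  // the value back.
  func IncrementVia(s ReplayResistantStore, p Policy, name string) (uint64, error) {
      data, err := s.Get(p, name)
      if err != nil {
          return 0, err
      }
      n := binary.BigEndian.Uint64(data) + 1
      var buf [8]byte
      binary.BigEndian.PutUint64(buf[:], n)
      if err := s.Set(p, name, buf[:]); err != nil {
          return 0, err
      }
      return n, nil
  }
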
Some levels of Tao could just pass all Put/Get/Set/Delete calls down to lower
layers. The TPM implements particularly restricted storage, so perhaps the first
Tao layer (i.e. LinuxTao) should implement a version with more available space.

Comments: Rate limiting and buffering
-------------------------------------

The TPM, at the lowest layer, simply cannot perform efficient updates to
non-volatile storage (or counters). A TPM NV update might take a second or more,
and we may be rate-limited to around one update per five seconds. Simplistic
buffering and write-aggregation ruin replay resistance.

During write operations (Put, Set, and Delete), Tao should buffer the write and
pass back a token T. The hosted process can subsequently invoke Commit(T) which
will either force or wait for a buffer flush. When Commit(T) returns, the caller
knows that the corresponding write, and all previous writes, have been
committed. If the system crashes with dirty buffers, upon startup everything
rolls back to the state at the last successful commit. (Perhaps we could attempt
to roll forward some, e.g. if we have signed logs showing changes made since the
last commit.) The system will then have to make some conservative estimate of
how much the data may have changed since the last commit.

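A minimal Go sketch of the token scheme, assuming an underlying flush function
that pushes buffered writes to the slow NV medium (all names are illustrative):

  package tao

  import "sync"

  // Token identifies a buffered write; Commit(t) returns once that write,
  // and every write buffered before it, has reached stable storage.
  type Token uint64

  // WriteBuffer aggregates writes so the slow NV medium (e.g. the TPM) is
  // updated at a sustainable rate.
  type WriteBuffer struct {
      mu        sync.Mutex
      next      Token        // token for the most recently buffered write
      committed Token        // highest token known to be on stable storage
      flush     func() error // pushes all buffered writes down; slow
  }

  // Buffer records a write and returns its token without waiting for NV.
  func (b *WriteBuffer) Buffer() Token {
      b.mu.Lock()
      defer b.mu.Unlock()
      b.next++
      return b.next
  }

  // Commit forces (or waits for) a flush covering token t.
  func (b *WriteBuffer) Commit(t Token) error {
      b.mu.Lock()
      defer b.mu.Unlock()
      if t <= b.committed {
          return nil // already durable
      }
      if err := b.flush(); err != nil {
          return err
      }
      b.committed = b.next // the flush covered everything buffered so far
      return nil
  }
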
Consider, for example, an app that issues certificates each marked with a unique
serial number. The serial numbers need not be sequential, but they need to be
unique and never reused. The app can store the last-issued serial number in the
host Tao, and limit buffering to (say) N=5 writes like so:
  During installation:
    t = Put(self, "serialnum", 0)
    WriteFile("state.txt", "0")
    Commit(t)
  During each startup:
    x = Get(self, "serialnum")
    y = ReadFile("state.txt")
    if (x != y) {
      // y could be a little ahead or behind x due to buffering of file writes
      // and/or NV writes. Here, we can just take the NV value and ignore the
      // file value. Other apps might need to keep (or generate) multiple y
      // file values to find one that matches x.
      y = x
    }
    // x is reasonably fresh, and there have been no more than N writes
    // since then, so start issuing from x+N
    last_issued_serial = x + N
  During normal operation:
    last_commit = last_issued_serial
    i = 0
    loop {
      ...
      cert.serial = ++last_issued_serial
      WriteFile("state.txt", cert.serial)
      t[i] = Set(self, "serialnum", cert.serial) // slot exists, so Set not Put
      i++
      if (last_issued_serial >= last_commit + N) {
        // commit the write from N iterations back, so at most N writes
        // are ever outstanding
        Commit(t[i-N])
      }
      ...
    }