cuelang.org/go@v0.10.1/internal/core/adt/disjunct2.go

     1  // Copyright 2024 CUE Authors
     2  //
     3  // Licensed under the Apache License, Version 2.0 (the "License");
     4  // you may not use this file except in compliance with the License.
     5  // You may obtain a copy of the License at
     6  //
     7  //     http://www.apache.org/licenses/LICENSE-2.0
     8  //
     9  // Unless required by applicable law or agreed to in writing, software
    10  // distributed under the License is distributed on an "AS IS" BASIS,
    11  // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    12  // See the License for the specific language governing permissions and
    13  // limitations under the License.
    14  
    15  package adt
    16  
    17  // # Overview
    18  //
    19  // This file contains the disjunction algorithm of the CUE evaluator. It works
    20  // in unison with the code in overlay.go.
    21  //
    22  // In principle, evaluating disjunctions is a matter of unifying each disjunct
    23  // with the non-disjunct values, eliminating those that fail, and seeing what
    24  // is left. With multiple disjunctions this is a cross product of the disjuncts.
    25  // The key is how to do this efficiently.
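        //
        // As a hypothetical illustration (not taken from this package's tests),
        // unifying two disjunctions amounts to unifying every pair of disjuncts
        // and discarding the pairs that fail:
        //
        //  a: (1 | 2) & (int | string)   // => 1 | 2  (the string branches fail)
        //  b: (1 | 2) & (2 | 3)          // => 2      (only 2 & 2 succeeds)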
    26  //
    27  // # Classification of disjunction performance
    28  //
    29  // The key to an efficient disjunction algorithm is to minimize the impact of
    30  // taking the cross product of disjunctions. This is especially pertinent when
    31  // disjunction expressions are unified with themselves, as can be the case in
    32  // recursive definitions, since this can lead to exponential time complexity.
    33  //
    34  // We identify the following categories of importance for performance
    35  // optimization:
    36  //
    37  //  - Eliminate duplicates
    38  //      - For completed disjunctions
    39  //      - For partially computed disjuncts
    40  //  - Fail early / minimize work before failure
    41  //      - Filter disjuncts before unification (TODO)
    42  //          - Based on discriminator field
    43  //          - Based on a non-destructive unification of the disjunct and
    44  //            the current value computed so far
    45  //      - During the regular destructive unification
    46  //          - Traverse arcs where failure may occur
    47  //          - Copy on write (TODO)
    48  //
    49  // We discuss these aspects in more detail below.
    50  //
    51  // # Eliminating completed duplicates
    52  //
    53  // Eliminating completed duplicates can be achieved by comparing them for
    54  // equality. A disjunct can only be considered completed once all disjunctions
    55  // have been selected and evaluated, or at any time if processing fails.
    56  //
    57  // The following values should be recursively considered for equality:
    58  //
    59  //  - the value of the node,
    60  //  - the value of its arcs,
    61  //  - the key and value of the pattern constraints, and
    62  //  - the expression of the allowed fields.
    63  //
    64  // In some of these cases it may not be possible to detect if two nodes are
    65  // equal. For instance, two pattern constraints whose regular expressions differ
    66  // but define the same language should be considered equal. In practice,
    67  // however, such equivalence is hard to detect.
    68  //
    69  // In the end this is mostly a matter of performance. As we noted, the biggest
    70  // concern is to avoid a combinatorial explosion when disjunctions are unified
    71  // with themselves. The hope is that we can at least catch these cases, either
    72  // because they will evaluate to the same values, or because we can identify
    73  // that the underlying expressions are the same, or both.
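        //
        // For example (hypothetical CUE, not from this package's tests), both
        // disjuncts below complete to the same value, so only one result needs to
        // be kept:
        //
        //  x: (int | >=0) & 1   // => 1 | 1, which deduplicates to 1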
    74  //
    75  // # Eliminating partially-computed duplicates
    76  //
    77  // We start with some observations and issues regarding partially evaluated
    78  // nodes.
    79  //
    80  // ## Issue: Closedness
    81  //
    82  // Two identical CUE values with identical fields, values, and pattern
    83  // constraints may still need to be considered different, as they may exhibit
    84  // different closedness behavior. Consider, for instance, this example:
    85  //
    86  //  #def: {
    87  //      {} | {c: string} // D1
    88  //      {} | {a: string} // D2
    89  //  }
    90  //  x: #def
    91  //  x: c: "foo"
    92  //
    93  // Now, consider the case of the cross product that unifies the two empty
    94  // structs for `x`. Note that `x` already has a field `c`. After unifying the
    95  // first disjunction with `x`, both intermediate disjuncts will have the value
    96  // `{c: "foo"}`:
    97  //
    98  //         {c: "foo"} & ({} | {c: string})
    99  //       =>
   100  //         {c: "foo"} | {c: "foo"}
   101  //
   102  // One would think that one of these disjuncts can be eliminated. Nonetheless,
   103  // there is a difference. The second disjunct, which resulted from unifying
   104  // `{c: "foo"}` with `{c: string}`, will remain valid. The first disjunct,
   105  // however, will fail after it is unified and completed with the `{}` of the
   106  // second disjunction (D2): only at this point is it known that `x` was unified
   107  // with an empty closed struct, and that field `c` needs to be rejected.
   108  //
   109  // One possible solution would be to fully compute the cross product of `#def`
   110  // and use this expanded disjunction for unification, as this would mean that
   111  // full knowledge of closedness information is available.
   112  //
   113  // Although this is possible in some cases and can be a useful performance
   114  // optimization, it is not always possible to use the fully evaluated disjuncts
   115  // in such a precomputed cross product. For instance, if a disjunction relies on
   116  // a comprehension or a default value, it is not possible to fully evaluate the
   117  // disjunction, as merging it with another value may change the inputs for such
   118  // expressions later on. This means that we can only rely on partial evaluation
   119  // in some cases.
   120  //
   121  // ## Issue: Outstanding tasks in partial results
   122  //
   123  // Some tasks may not be completed until all conjuncts are known. For cross
   124  // products of disjunctions this may mean that such tasks cannot be completed
   125  // until all cross products are done. For instance, it is typically not possible
   126  // to evaluate a task that relies on taking a default value that may change as
   127  // more disjuncts are added. A similar argument holds for comprehensions on
   128  // values that may still be changed as more disjunctions come in.
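        //
        // As a hypothetical example, an expression that takes a default cannot be
        // completed while more conjuncts or disjuncts may still arrive:
        //
        //  a: *1 | 2
        //  b: a + 1   // 2 if the default stands, but 3 if `a: 2` is added later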
   129  //
   130  // ## Evaluating equality of partially evaluated nodes
   131  //
   132  // Because unevaluated expressions may depend on results that have not yet been
   133  // computed, we cannot reliably compare the results of a Vertex to determine
   134  // equality. We need a different strategy.
   135  //
   136  // The strategy we take is based on the observation that at the start of a cross
   137  // product, the base conjunct is the same for all disjuncts. We can factor these
   138  // inputs out and focus on the differences between the disjuncts. In other
   139  // words, we can focus solely on the differences that manifest at the insertion
   140  // points (or "disjunction holes") of the disjuncts.
   141  //
   142  // In short, two disjuncts are equal if:
   143  //
   144  //  1. the disjunction holes that were already processed are equal, and
   145  //  2. they either have no outstanding tasks, or their outstanding tasks are equal.
   146  //
   147  // Coincidentally, analyzing the differences as discussed in this section is
   148  // very similar in nature to precomputing a disjunct and using that. The main
   149  // difference is that we potentially have more information to prematurely
   150  // evaluate expressions and thus to prematurely filter values. For instance, the
   151  // mixed-in value may have fixed a value that was previously not fixed. This
   152  // means that any expression referencing this value may be evaluated early and
   153  // can cause a disjunct to fail and be eliminated earlier.
   154  //
   155  // A disadvantage of this approach, however, is that it is not fully precise: it
   156  // may not filter some disjuncts that are logically identical. There are
   157  // strategies to further optimize this. For instance, if all remaining holes do
   158  // not contribute to closedness, which can be determined by walking up the
   159  // closedness parent chain, we may be able to safely filter disjuncts with equal
   160  // results.
   161  //
   162  // # Invariants
   163  //
   164  // We use the following assumptions in the implementation below:
   165  //
   166  //  - No more conjuncts are added to a disjunct after its processing begins.
   167  //    If a disjunction results in a value that causes more fields to be added
   168  //    later, this may not influence the result of the disjunction, i.e., those
   169  //    changes must be idempotent.
   170  //  - TODO: consider if any other assumptions are made.
   171  //
   172  // # Algorithm
   173  //
   174  // The evaluator accumulates all disjuncts of a Vertex in the nodeContext along
   175  // with the closeContext at which each was defined. A single task is scheduled
   176  // to process them all at once upon the first encounter of a disjunction.
   177  //
   178  // The algorithm is as follows:
   179  //  - Initialize the current Vertex n with the result evaluated so far as a
   180  //    list of "previous disjuncts".
   181  //  - Iterate over each disjunction:
   182  //    - For each previous disjunct x:
   183  //      - For each disjunct y in the current disjunction:
   184  //        - Unify x and y.
   185  //        - Discard the result if it is an error; otherwise store it in the
   186  //          list of current disjuncts if it differs from all others in that list.
   187  //  - Set n to the result of the disjunction.
   188  //
   189  // This algorithm is recursive: if a disjunction is encountered in a disjunct,
   190  // it is processed as part of the evaluation of that disjunct.
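        //
        // As a hypothetical illustration of these iterations, where the error and
        // the duplicate are discarded along the way:
        //
        //  x: (1 | 2) & (int | 2)
        //
        //  // pass 1: previous disjuncts {1, 2}
        //  // pass 2: {1&int, 1&2, 2&int, 2&2} => {1, _|_, 2, 2} => {1, 2}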
   191  //
   192  
   193  // A disjunct is the expanded form of the disjuncts of either a Disjunction or
   194  // a DisjunctionExpr.
   195  //
   196  // TODO(perf): encode ADT structures in the correct form so that we do not have to
   197  // compute these each time.
   198  type disjunct struct {
   199  	expr Expr
   200  	err  *Bottom
   201  
   202  	isDefault bool
   203  	mode      defaultMode
   204  }
   205  
   206  // disjunctHole associates a closeContext copy representing a disjunct hole with
   207  // the underlying closeContext from which it was originally branched.
   208  // We could include this information in the closeContext itself, but since this
   209  // is relatively rare, we keep it separate to avoid bloating the closeContext.
   210  type disjunctHole struct {
   211  	cc         *closeContext
   212  	underlying *closeContext
   213  }
   214  
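        // scheduleDisjunction records disjunction d in n for later processing and
        // registers the closeContext hole in which its disjuncts will be inserted.
        // A single handleDisjunctions task is scheduled upon encountering the first
        // disjunction.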
   215  func (n *nodeContext) scheduleDisjunction(d envDisjunct) {
   216  	if len(n.disjunctions) == 0 {
   217  		// This processes all disjunctions in a single pass.
   218  		n.scheduleTask(handleDisjunctions, nil, nil, CloseInfo{})
   219  	}
   220  
   221  	// ccHole is the closeContext in which the individual disjuncts are
   222  	// scheduled.
   223  	ccHole := d.cloneID.cc
   224  
   225  	// This counter can be decremented after a disjunct has been scheduled
   226  	// in the clone. Note that it will not be closed in the original, as the
   227  	// result will be either an error, a single disjunct, in which case
   228  	// mergeVertex will override the original value, or multiple disjuncts,
   229  	// in which case the original is set to the disjunct itself.
   230  	ccHole.incDisjunct(n.ctx, DISJUNCT)
   231  
   232  	n.disjunctions = append(n.disjunctions, d)
   233  
   234  	n.disjunctCCs = append(n.disjunctCCs, disjunctHole{
   235  		cc:         ccHole, // this value is cloned in doDisjunct.
   236  		underlying: ccHole,
   237  	})
   238  }
   239  
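        // initArcs recursively initializes the evaluation state (nodeContext) of all
        // arcs of v.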
   240  func initArcs(ctx *OpContext, v *Vertex) {
   241  	for _, a := range v.Arcs {
   242  		a.getState(ctx)
   243  		initArcs(ctx, a)
   244  	}
   245  }
   246  
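        // processDisjunctions processes all disjunctions scheduled for n in a single
        // pass, computing their cross product one disjunction at a time and
        // deduplicating the intermediate results. If no disjunct survives, the
        // collected errors are returned. A single surviving disjunct becomes the
        // node's value directly; multiple survivors are accumulated in n.disjuncts,
        // to be combined later by finalizeDisjunctions.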
   247  func (n *nodeContext) processDisjunctions() *Bottom {
   248  	defer func() {
   249  		// TODO:
   250  		// Clear the buffers.
   251  		// TODO: we may want to retain a history of which disjunctions were
   252  		// processed. In that case we can set a disjunction position to the end
   253  		// of the list and schedule new tasks if this position equals the
   254  		// disjunction list length.
   255  	}()
   256  
   257  	a := n.disjunctions
   258  	n.disjunctions = n.disjunctions[:0]
   259  
   260  	initArcs(n.ctx, n.node)
   261  
   262  	// TODO(perf): single pass for quick filter on all disjunctions.
   263  	// n.node.unify(n.ctx, allKnown, attemptOnly)
   264  
   265  	// Initially we compute the cross product of a disjunction with the
   266  	// nodeContext as it is processed so far.
   267  	cross := []*nodeContext{n}
   268  	results := []*nodeContext{} // TODO: use n.disjuncts as buffer.
   269  
   270  	// Slow path for processing all disjunctions. Do not use `range` in case
   271  	// evaluation adds more disjunctions.
   272  	for i := 0; i < len(a); i++ {
   273  		d := &a[i]
   274  
   275  		mode := attemptOnly
   276  		if i == len(a)-1 {
   277  			mode = finalize
   278  		}
   279  		results = n.crossProduct(results, cross, d, mode)
   280  
   281  		// TODO: do we unwind only at the end or also intermittently?
   282  		switch len(results) {
   283  		case 0:
   284  			// TODO: now we have disjunct counters, do we plug holes at all?
   285  
   286  			// We add a "top" value to disable closedness checking for this
   287  			// disjunction to avoid a spurious "field not allowed" error.
   288  			// We return the errors below, which will, in turn, be reported as
   289  			// the error.
   290  			// TODO: probably no longer needed:
   291  			for i++; i < len(a); i++ {
   292  				c := MakeConjunct(d.env, top, a[i].cloneID)
   293  				n.scheduleConjunct(c, d.cloneID)
   294  			}
   295  
   296  			// Empty intermediate result. Further processing will not result in
   297  			// any new result, so we can terminate here.
   298  			// TODO(errors): investigate remaining disjunctions for errors.
   299  			return n.collectErrors(d)
   300  
   301  		case 1:
   302  			// TODO: consider injecting the disjuncts into the main nodeContext
   303  			// here. This would allow other values that this disjunction
   304  			// depends on to be evaluated. However, we should investigate
   305  			// whether this might lead to a situation where the order of
   306  			// evaluating disjunctions matters. So to be safe, we do not allow
   307  			// this for now.
   308  		}
   309  
   310  		// switch up buffers.
   311  		cross, results = results, cross[:0]
   312  	}
   313  
   314  	switch len(cross) {
   315  	case 0:
   316  		panic("unreachable: empty disjunction already handled above")
   317  
   318  	case 1:
   319  		d := cross[0].node
   320  		n.node.BaseValue = d
   321  		n.defaultMode = cross[0].defaultMode
   322  
   323  	default:
   324  		// append, rather than assign, to allow reusing the memory of
   325  		// a pre-existing slice.
   326  		n.disjuncts = append(n.disjuncts, cross...)
   327  	}
   328  
   329  	return nil
   330  }
   331  
   332  // crossProduct computes the cross product of the disjuncts of a disjunction
   333  // with an existing set of results.
   334  func (n *nodeContext) crossProduct(dst, cross []*nodeContext, dn *envDisjunct, mode runMode) []*nodeContext {
   335  	defer n.unmarkDepth(n.markDepth())
   336  	defer n.unmarkOptional(n.markOptional())
   337  
   338  	for _, p := range cross {
   339  		// TODO: use a partial unify instead
   340  		// p.completeNodeConjuncts()
   341  		initArcs(n.ctx, p.node)
   342  
   343  		for j, d := range dn.disjuncts {
   344  			c := MakeConjunct(dn.env, d.expr, dn.cloneID)
   345  			r, err := p.doDisjunct(c, d.mode, mode)
   346  
   347  			if err != nil {
   348  				// TODO: store more error context
   349  				dn.disjuncts[j].err = err
   350  				continue
   351  			}
   352  
   353  			// Unroll nested disjunctions.
   354  			switch len(r.disjuncts) {
   355  			case 0:
   356  				// r did not have a nested disjunction.
   357  				dst = appendDisjunct(n.ctx, dst, r)
   358  
   359  			case 1:
   360  				panic("unexpected number of disjuncts")
   361  
   362  			default:
   363  				for _, x := range r.disjuncts {
   364  					dst = appendDisjunct(n.ctx, dst, x)
   365  				}
   366  			}
   367  		}
   368  	}
   369  	return dst
   370  }
   371  
   372  // collectErrors collects errors from a failed disjunction.
   373  func (n *nodeContext) collectErrors(dn *envDisjunct) (errs *Bottom) {
   374  	for _, d := range dn.disjuncts {
   375  		if d.err != nil {
   376  			errs = CombineErrors(dn.src.Source(), errs, d.err)
   377  		}
   378  	}
   379  	return errs
   380  }
   381  
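        // doDisjunct unifies a single disjunct, given as conjunct c, with the current
        // state of n in an overlay copy of the node. It returns the resulting
        // nodeContext, or an error if the disjunct fails.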
   382  func (n *nodeContext) doDisjunct(c Conjunct, m defaultMode, mode runMode) (*nodeContext, *Bottom) {
   383  	if c.CloseInfo.cc == nil {
   384  		panic("nil closeContext during init")
   385  	}
   386  	n.ctx.stats.Disjuncts++
   387  
   388  	oc := newOverlayContext(n.ctx)
   389  
   390  	var ccHole *closeContext
   391  
   392  	// TODO(perf): reuse a buffer, for instance by keeping a buffer handy in oc
   393  	// and then swapping it with disjunctCCs in the new nodeContext.
   394  	holes := make([]disjunctHole, 0, len(n.disjunctCCs))
   395  
   396  	// Clone the closeContexts of all open disjunctions and dependencies.
   397  	for _, d := range n.disjunctCCs {
   398  		// TODO: remove filled holes.
   399  
   400  		// Note that the root is already cloned as part of cloneVertex and that
   401  		// a closeContext corresponding to a disjunction always has a parent.
   402  		// We therefore do not need to check whether x.parent is nil.
   403  		o := oc.allocCC(d.cc)
   404  		if c.CloseInfo.cc == d.underlying {
   405  			ccHole = o
   406  		}
   407  		holes = append(holes, disjunctHole{o, d.underlying})
   408  	}
   409  
   410  	if ccHole == nil {
   411  		panic("expected non-nil overlay closeContext")
   412  	}
   413  
   414  	n.scheduler.blocking = n.scheduler.blocking[:0]
   415  
   416  	d := oc.cloneRoot(n)
   417  
   418  	d.defaultMode = combineDefault(m, n.defaultMode)
   419  
   420  	v := d.node
   421  
   422  	saved := n.node.BaseValue
   423  	n.node.BaseValue = v
   424  	defer func() { n.node.BaseValue = saved }()
   425  
   426  	// Clear relevant scheduler states.
   427  	// TODO: do something more principled: just ensure that a node that does
   428  	// not yet have all of its holes filled is not finalized. This may require
   429  	// a special mode, or evaluating more aggressively if finalize is not given.
   430  	v.status = unprocessed
   431  
   432  	d.overlays = n
   433  	d.disjunctCCs = append(d.disjunctCCs, holes...)
   434  	d.disjunct = c
   435  	c.CloseInfo.cc = ccHole
   436  	d.scheduleConjunct(c, c.CloseInfo)
   437  	ccHole.decDisjunct(n.ctx, DISJUNCT)
   438  
   439  	oc.unlinkOverlay()
   440  
   441  	v.unify(n.ctx, allKnown, mode)
   442  
   443  	if err := d.getError(); err != nil && !isCyclePlaceholder(err) {
   444  		d.free()
   445  		return nil, err
   446  	}
   447  
   448  	return d, nil
   449  }
   450  
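        // finalizeDisjunctions converts the disjuncts accumulated in n.disjuncts into
        // a Disjunction value on the node, moving disjuncts marked as default to the
        // front.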
   451  func (n *nodeContext) finalizeDisjunctions() {
   452  	if len(n.disjuncts) == 0 {
   453  		return
   454  	}
   455  
   456  	// TODO: we clear the Conjuncts to be compatible with the old evaluator.
   457  	// This is especially relevant for the API. Ideally, though, we should
   458  	// update Conjuncts to reflect the actual conjunct that went into the
   459  	// disjuncts.
   460  	for _, x := range n.disjuncts {
   461  		x.node.Conjuncts = nil
   462  	}
   463  
   464  	a := make([]Value, len(n.disjuncts))
   465  	p := 0
   466  	hasDefaults := false
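        	// Partition the disjuncts so that defaults come first: each default is
        	// swapped into position p, and whatever was at position p moves to
        	// position i; p ends up as the number of defaults.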
   467  	for i, x := range n.disjuncts {
   468  		switch x.defaultMode {
   469  		case isDefault:
   470  			a[i] = a[p]
   471  			a[p] = x.node
   472  			p++
   473  			hasDefaults = true
   474  
   475  		case notDefault:
   476  			hasDefaults = true
   477  			fallthrough
   478  		case maybeDefault:
   479  			a[i] = x.node
   480  		}
   481  	}
   482  
   483  	d := &Disjunction{
   484  		Values:      a,
   485  		NumDefaults: p,
   486  		HasDefaults: hasDefaults,
   487  	}
   488  
   489  	v := n.node
   490  	v.BaseValue = d
   491  
   492  	// The conjuncts will have too much information. Better to have no
   493  	// information than incorrect information.
   494  	v.Arcs = nil
   495  	v.ChildErrors = nil
   496  }
   497  
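        // getError returns the error associated with this node, if any: the node's
        // own error, its child errors, errors accumulated in the nodeContext, or
        // errors recorded in the OpContext, in that order. A cycle placeholder on
        // the node itself is not reported as an error.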
   498  func (n *nodeContext) getError() *Bottom {
   499  	if b := n.node.Bottom(); b != nil && !isCyclePlaceholder(b) {
   500  		return b
   501  	}
   502  	if n.node.ChildErrors != nil {
   503  		return n.node.ChildErrors
   504  	}
   505  	if errs := n.errs; errs != nil {
   506  		return errs
   507  	}
   508  	if n.ctx.errs != nil {
   509  		return n.ctx.errs
   510  	}
   511  	return nil
   512  }
   513  
   514  // appendDisjunct appends a disjunct x to a, if it is not a duplicate.
   515  func appendDisjunct(ctx *OpContext, a []*nodeContext, x *nodeContext) []*nodeContext {
   516  	if x == nil {
   517  		return a
   518  	}
   519  
   520  	nv := x.node.DerefValue()
   521  	nx := nv.BaseValue
   522  	if nx == nil || isCyclePlaceholder(nx) {
   523  		nx = x.getValidators(finalized)
   524  	}
   525  
   526  	// check uniqueness
   527  	// TODO: if a node is not finalized, we could check that the parent
   528  	// (overlayed) closeContexts are identical.
   529  outer:
   530  	for _, xn := range a {
   531  		xv := xn.node.DerefValue()
   532  		if xv.status != finalized || nv.status != finalized {
   533  			// Partial node
   534  
   535  			// TODO: we could consider supporting an option here to disable
   536  			// the filter. This way, if there is a bug, users could disable
   537  			// it, trading correctness for performance.
   538  			// If enabled, we would simply "continue" here.
   539  
   540  			for i, h := range xn.disjunctCCs { // TODO(perf): only iterate over completed
   541  				x, y := findIntersections(h.cc, x.disjunctCCs[i].cc)
   542  				if !equalPartialNode(xn.ctx, x, y) {
   543  					continue outer
   544  				}
   545  			}
   546  			if len(xn.tasks) != len(x.tasks) {
   547  				continue
   548  			}
   549  			for i, t := range xn.tasks {
   550  				s := x.tasks[i]
   551  				if s.x != t.x || s.id.cc != t.id.cc {
   552  					continue outer
   553  				}
   554  			}
   555  			vx, okx := nx.(Value)
   556  			ny := xv.BaseValue
   557  			if ny == nil || isCyclePlaceholder(ny) {
   558  				ny = x.getValidators(finalized)
   559  			}
   560  			vy, oky := ny.(Value)
   561  			if okx && oky && !Equal(ctx, vx, vy, CheckStructural) {
   562  				continue outer
   563  
   564  			}
   565  		} else {
   566  			// Complete nodes.
   567  			if !Equal(ctx, xn.node.DerefValue(), x.node.DerefValue(), CheckStructural) {
   568  				continue outer
   569  			}
   570  		}
   571  
   572  		// free vertex
   573  		if x.defaultMode == isDefault {
   574  			xn.defaultMode = isDefault
   575  		}
   576  		x.free()
   577  		return a
   578  	}
   579  
   580  	return append(a, x)
   581  }
   582  
   583  // isPartialNode reports whether a node must be evaluated as a partial node.
   584  func isPartialNode(d *nodeContext) bool {
   585  	if d.node.status == finalized {
   586  		return true
   587  	}
   588  	// TODO: further optimizations
   589  	return false
   590  }
   591  
   592  // findIntersections reports the closeContexts, relative to the two given
   593  // disjunction holes, that should be used in comparing the arc set.
   594  // x and y MUST both be originating from the same disjunct hole. This ensures
   595  // that the depth of the parent chain is the same and that they have the
   596  // same underlying closeContext.
   597  //
   598  // Currently, we just take the parent. We should investigate if that is always
   599  // sufficient.
   600  //
   601  // Tradeoffs: if we do not go up enough, the two nodes may not be equal and we
   602  // miss the opportunity to filter. On the other hand, if we go up too far, we
   603  // end up comparing more arcs than potentially necessary.
   604  //
   605  // TODO: Add a unit test when this function is fully implemented.
   606  func findIntersections(x, y *closeContext) (cx, cy *closeContext) {
   607  	cx = x.parent
   608  	cy = y.parent
   609  
   610  	// TODO: why could this happen? Investigate. Note that it is okay to just
   611  	// return x and y. In the worst case we will just miss some possible
   612  	// deduplication.
   613  	if cx == nil || cy == nil {
   614  		return x, y
   615  	}
   616  
   617  	return cx, cy
   618  }
   619  
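        // equalPartialNode reports whether the partially evaluated nodes associated
        // with closeContexts x and y can be considered equal for the purpose of
        // filtering disjuncts. It compares their values, pattern constraints,
        // allowed-field expressions, and ARC arcs recursively.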
   620  func equalPartialNode(ctx *OpContext, x, y *closeContext) bool {
   621  	nx := x.src.getState(ctx)
   622  	ny := y.src.getState(ctx)
   623  
   624  	if nx == nil && ny == nil {
   625  		// Both nodes were finalized. We can compare them directly.
   626  		return Equal(ctx, x.src, y.src, CheckStructural)
   627  	}
   628  
   629  	// TODO: process the nodes with allKnown, attemptOnly.
   630  
   631  	if nx == nil || ny == nil {
   632  		return false
   633  	}
   634  
   635  	if !isEqualNodeValue(nx, ny) {
   636  		return false
   637  	}
   638  
   639  	if len(x.Patterns) != len(y.Patterns) {
   640  		return false
   641  	}
   642  	// Assume patterns are in the same order.
   643  	for i, p := range x.Patterns {
   644  		if !Equal(ctx, p, y.Patterns[i], 0) {
   645  			return false
   646  		}
   647  	}
   648  
   649  	if !Equal(ctx, x.Expr, y.Expr, 0) {
   650  		return false
   651  	}
   652  
   653  	if len(x.arcs) != len(y.arcs) {
   654  		return false
   655  	}
   656  
   657  	// TODO(perf): use merge sort
   658  outer:
   659  	for _, a := range x.arcs {
   660  		if a.kind != ARC {
   661  			continue outer
   662  		}
   663  		for _, b := range y.arcs {
   664  			if b.kind != ARC {
   665  				continue
   666  			}
   667  			if a.key.src.Label != b.key.src.Label {
   668  				continue
   669  			}
   670  			if !equalPartialNode(ctx, a.cc, b.cc) {
   671  				return false
   672  			}
   673  			continue outer
   674  		}
   675  		return false
   676  	}
   677  	return true
   678  }
   679  
   680  // isEqualNodeValue reports whether the two nodes are of the same type and have
   681  // the same value.
   682  //
   683  // TODO: this could be done much more cleanly if we are more diligent in early
   684  // evaluation.
   685  func isEqualNodeValue(x, y *nodeContext) bool {
   686  	xk := x.kind
   687  	yk := y.kind
   688  
   689  	// If a node is mid-evaluation, the kind might not yet be accurate if the
   690  	// type is a struct, as whether a struct is a struct kind or an embedded
   691  	// type is determined later. This is just a limitation of the current
   692  	// implementation; we should update the kind more directly so that this code
   693  	// is not necessary.
   694  	// TODO: verify that this is still necessary and if so fix it so that this
   695  	// can be removed.
   696  	if x.aStruct != nil {
   697  		xk &= StructKind
   698  	}
   699  	if y.aStruct != nil {
   700  		yk &= StructKind
   701  	}
   702  
   703  	if xk != yk {
   704  		return false
   705  	}
   706  	if x.hasTop != y.hasTop {
   707  		return false
   708  	}
   709  	if !isEqualValue(x.ctx, x.scalar, y.scalar) {
   710  		return false
   711  	}
   712  
   713  	// Do some quick checks first.
   714  	if len(x.checks) != len(y.checks) {
   715  		return false
   716  	}
   717  	if len(x.tasks) != len(y.tasks) {
   718  		return false
   719  	}
   720  
   721  	if !isEqualValue(x.ctx, x.lowerBound, y.lowerBound) {
   722  		return false
   723  	}
   724  	if !isEqualValue(x.ctx, x.upperBound, y.upperBound) {
   725  		return false
   726  	}
   727  
   728  	// Assume that checks are added in the same order.
   729  	for i, c := range x.checks {
   730  		d := y.checks[i]
   731  		if !Equal(x.ctx, c, d, CheckStructural) {
   732  			return false
   733  		}
   734  	}
   735  
   736  	for i, t := range x.tasks {
   737  		s := y.tasks[i]
   738  		if s.x != t.x {
   739  			return false
   740  		}
   741  		if s.id.cc != t.id.cc {
   742  			// FIXME: we should compare this too. For this to work we need to
   743  			// have access to the underlying closeContext, which we do not
   744  			// have at the moment.
   745  			// return false
   746  		}
   747  	}
   748  
   749  	return true
   750  }
   751  
   752  type ComparableValue interface {
   753  	comparable
   754  	Value
   755  }
   756  
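        // isEqualValue reports whether x and y are equal: it returns true if they are
        // identical (including both being zero), false if exactly one of them is zero,
        // and otherwise the result of a structural comparison with Equal.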
   757  func isEqualValue[P ComparableValue](ctx *OpContext, x, y P) bool {
   758  	var zero P
   759  
   760  	if x == y {
   761  		return true
   762  	}
   763  	if x == zero || y == zero {
   764  		return false
   765  	}
   766  
   767  	return Equal(ctx, x, y, CheckStructural)
   768  }