// github.com/balzaczyy/golucene@v0.0.0-20151210033525-d0be9ee89713/core/index/model/field.go

package model

import (
	"github.com/balzaczyy/golucene/core/analysis"
	"io"
)

// index/IndexableField.java

// TODO: how to handle versioning here...?

// TODO: we need to break out separate StoredField...

/** Represents a single field for indexing. IndexWriter
 * consumes Iterable<IndexableField> as a document.
 *
 * @lucene.experimental */
type IndexableField interface {
	/** Field name */
	Name() string

	/** {@link model.IndexableFieldType} describing the properties
	 * of this field. */
	FieldType() IndexableFieldType

	/**
	 * Returns the field's index-time boost.
	 * <p>
	 * Only fields can have an index-time boost; if you want to simulate
	 * a "document boost", you must pre-multiply it across all the
	 * relevant fields yourself.
	 * <p>
	 * The boost is used to compute the norm factor for the field. By
	 * default, in the {@link Similarity#computeNorm(FieldInvertState)} method,
	 * the boost value is multiplied by the length normalization factor and then
	 * rounded by {@link DefaultSimilarity#encodeNormValue(float)} before it is
	 * stored in the index. One should attempt to ensure that this product does
	 * not overflow the range of that encoding.
	 * <p>
	 * It is illegal to return a boost other than 1.0f for a field that is not
	 * indexed ({@link model.IndexableFieldType#indexed()} is false) or that
	 * omits norms ({@link model.IndexableFieldType#omitNorms()} returns true).
	 *
	 * @see Similarity#computeNorm(FieldInvertState)
	 * @see DefaultSimilarity#encodeNormValue(float)
	 */
	Boost() float32

	/** Non-nil if this field has a binary value */
	BinaryValue() []byte

	/** Non-empty if this field has a string value */
	StringValue() string

	/** Non-nil if this field has a Reader value */
	ReaderValue() io.RuneReader

	/** Non-nil if this field has a numeric value */
	NumericValue() interface{}

	/** Creates the TokenStream used for indexing this field. If appropriate,
	 * implementations should use the given Analyzer to create the TokenStream. */
	TokenStream(analysis.Analyzer, analysis.TokenStream) (analysis.TokenStream, error)
}
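To make the contract concrete, here is a minimal sketch of a string-valued field satisfying an interface like the one above. It is illustrative only: the local `Field` interface is a trimmed-down stand-in (the `FieldType` and `TokenStream` methods are omitted so the example compiles without the golucene `analysis` package), and the `stringField` type is hypothetical, not part of the repository.

```go
package main

import (
	"fmt"
	"io"
)

// Field is a trimmed-down, local stand-in for IndexableField, kept
// self-contained for illustration (FieldType/TokenStream omitted).
type Field interface {
	Name() string
	Boost() float32
	BinaryValue() []byte
	StringValue() string
	ReaderValue() io.RuneReader
	NumericValue() interface{}
}

// stringField is a minimal string-valued field. Per the contract,
// exactly one of the value accessors reports a value; the others
// return their zero value (nil / empty).
type stringField struct {
	name  string
	value string
}

func (f *stringField) Name() string               { return f.name }
func (f *stringField) Boost() float32             { return 1.0 } // must stay 1.0 when norms are omitted
func (f *stringField) BinaryValue() []byte        { return nil }
func (f *stringField) StringValue() string        { return f.value }
func (f *stringField) ReaderValue() io.RuneReader { return nil }
func (f *stringField) NumericValue() interface{}  { return nil }

func main() {
	var f Field = &stringField{name: "title", value: "Lucene in Go"}
	fmt.Println(f.Name(), "=", f.StringValue(), "boost:", f.Boost())
	// prints: title = Lucene in Go boost: 1
}
```

A consumer (the indexing chain, in the real library) would inspect the accessors in turn and index whichever value is present, which is why the "non-nil if this field has a ... value" wording on each accessor matters.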