<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="documentai_v1beta2.html">Cloud Document AI API</a> . <a href="documentai_v1beta2.projects.html">projects</a> . <a href="documentai_v1beta2.projects.locations.html">locations</a> . <a href="documentai_v1beta2.projects.locations.documents.html">documents</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#batchProcess">batchProcess(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">LRO endpoint to batch process many documents. The output is written to Cloud Storage as JSON in the [Document] format.</p>
<p class="toc_element">
  <code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
  <code><a href="#process">process(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Processes a single document.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="batchProcess">batchProcess(parent, body=None, x__xgafv=None)</code>
  <pre>LRO endpoint to batch process many documents. The output is written to Cloud Storage as JSON in the [Document] format.

Args:
  parent: string, Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no location is specified, a region will be chosen automatically. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request to batch process documents as an asynchronous operation. The output is written to Cloud Storage as JSON in the [Document] format.
  &quot;requests&quot;: [ # Required. Individual requests for each document.
    { # Request to process one document.
      &quot;automlParams&quot;: { # Parameters to control AutoML model prediction behavior. # Controls AutoML model prediction behavior. AutoMlParams cannot be used together with other Params.
        &quot;model&quot;: &quot;A String&quot;, # Resource name of the AutoML model. Format: `projects/{project-id}/locations/{location-id}/models/{model-id}`.
      },
      &quot;documentType&quot;: &quot;A String&quot;, # Specifies a known document type for deeper structure detection. Valid values are currently &quot;general&quot; and &quot;invoice&quot;. If not provided, &quot;general&quot; is used as the default. If any other value is given, the request is rejected.
      &quot;entityExtractionParams&quot;: { # Parameters to control entity extraction behavior. # Controls entity extraction behavior. If not specified, the system will decide reasonable defaults.
        &quot;enabled&quot;: True or False, # Whether to enable entity extraction.
        &quot;modelVersion&quot;: &quot;A String&quot;, # Model version of the entity extraction. Default is &quot;builtin/stable&quot;. Specify &quot;builtin/latest&quot; for the latest model.
      },
      &quot;formExtractionParams&quot;: { # Parameters to control form extraction behavior. # Controls form extraction behavior. If not specified, the system will decide reasonable defaults.
        &quot;enabled&quot;: True or False, # Whether to enable form extraction.
        &quot;keyValuePairHints&quot;: [ # Reserved for future use.
          { # Reserved for future use.
            &quot;key&quot;: &quot;A String&quot;, # The key text for the hint.
            &quot;valueTypes&quot;: [ # Type of the value. This is case-insensitive, and could be one of: ADDRESS, LOCATION, ORGANIZATION, PERSON, PHONE_NUMBER, ID, NUMBER, EMAIL, PRICE, TERMS, DATE, NAME. Types not in this list will be ignored.
              &quot;A String&quot;,
            ],
          },
        ],
        &quot;modelVersion&quot;: &quot;A String&quot;, # Model version of the form extraction system. Default is &quot;builtin/stable&quot;. Specify &quot;builtin/latest&quot; for the latest model. For custom form models, specify: &quot;custom/{model_name}&quot;. Model name format is &quot;bucket_name/path/to/modeldir&quot; corresponding to &quot;gs://bucket_name/path/to/modeldir&quot; where annotated examples are stored.
      },
      &quot;inputConfig&quot;: { # The desired input location and metadata. # Required. Information about the input file.
        &quot;contents&quot;: &quot;A String&quot;, # Content in bytes, represented as a stream of bytes. Note: As with all `bytes` fields, proto buffer messages use a pure binary representation, whereas JSON representations use base64. This field only works for synchronous ProcessDocument method.
        &quot;gcsSource&quot;: { # The Google Cloud Storage location where the input file will be read from. # The Google Cloud Storage location to read the input from. This must be a single file.
          &quot;uri&quot;: &quot;A String&quot;,
        },
        &quot;mimeType&quot;: &quot;A String&quot;, # Required. MIME type of the input. Currently supported MIME types are application/pdf, image/tiff, and image/gif. In addition, the application/json type is supported for requests with the ProcessDocumentRequest.automl_params field set. The JSON file needs to be in Document format.
      },
      &quot;ocrParams&quot;: { # Parameters to control Optical Character Recognition (OCR) behavior. # Controls OCR behavior. If not specified, the system will decide reasonable defaults.
        &quot;languageHints&quot;: [ # List of languages to use for OCR. In most cases, an empty value yields the best results since it enables automatic language detection. For languages based on the Latin alphabet, setting `language_hints` is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong). Document processing returns an error if one or more of the specified languages is not one of the supported languages.
          &quot;A String&quot;,
        ],
      },
      &quot;outputConfig&quot;: { # The desired output location and metadata. # The desired output location. This field is only needed in BatchProcessDocumentsRequest.
        &quot;gcsDestination&quot;: { # The Google Cloud Storage location where the output file will be written to. # The Google Cloud Storage location to write the output to.
          &quot;uri&quot;: &quot;A String&quot;,
        },
        &quot;pagesPerShard&quot;: 42, # The max number of pages to include in each output Document shard JSON on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, one PDF file with 100 pages produces 100 parsed pages. If `pages_per_shard` = 20, then 5 Document shard JSON files, each containing 20 parsed pages, are written under the prefix OutputConfig.gcs_destination.uri with the suffix pages-x-to-y.json, where x and y are 1-indexed page numbers. Example GCS outputs for 157 pages with pages_per_shard = 50: pages-001-to-050.json, pages-051-to-100.json, pages-101-to-150.json, pages-151-to-157.json.
      },
      &quot;parent&quot;: &quot;A String&quot;, # Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no location is specified, a region will be chosen automatically. This field is only populated when used in ProcessDocument method.
      &quot;tableExtractionParams&quot;: { # Parameters to control table extraction behavior. # Controls table extraction behavior. If not specified, the system will decide reasonable defaults.
        &quot;enabled&quot;: True or False, # Whether to enable table extraction.
        &quot;headerHints&quot;: [ # Optional. Reserved for future use.
          &quot;A String&quot;,
        ],
        &quot;modelVersion&quot;: &quot;A String&quot;, # Model version of the table extraction system. Default is &quot;builtin/stable&quot;. Specify &quot;builtin/latest&quot; for the latest model.
        &quot;tableBoundHints&quot;: [ # Optional. Table bounding box hints that can be provided for complex cases in which the algorithm cannot locate the table(s).
          { # A hint for a table bounding box on the page for table parsing.
            &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # Bounding box hint for a table on this page. The coordinates must be normalized to [0,1] and the bounding box must be an axis-aligned rectangle.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;pageNumber&quot;: 42, # Optional. Page number for multi-paged inputs this hint applies to. If not provided, this hint will apply to all pages by default. This value is 1-based.
          },
        ],
      },
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a network API call.
  &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
    &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
    &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
      {
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
    ],
    &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  },
  &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
    &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
  },
  &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  &quot;response&quot;: { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
    &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
  },
}</pre>
</div>
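<div class="method">
  <pre>A minimal sketch of building the request body documented above and computing the resulting shard file names. The bucket, project, and location names are placeholders, and the commented-out call assumes a service object built with valid credentials via googleapiclient.discovery.build.

```python
def make_batch_request(input_uri, output_uri, mime_type="application/pdf",
                       pages_per_shard=20):
    """Build a BatchProcessDocumentsRequest body with one input document."""
    return {
        "requests": [
            {
                "inputConfig": {
                    "gcsSource": {"uri": input_uri},  # must be a single file
                    "mimeType": mime_type,
                },
                "outputConfig": {
                    "gcsDestination": {"uri": output_uri},
                    "pagesPerShard": pages_per_shard,  # valid range [1, 100]
                },
            }
        ]
    }


def shard_names(total_pages, pages_per_shard):
    """Reproduce the pages-x-to-y.json shard naming from the pagesPerShard docs."""
    names = []
    for start in range(1, total_pages + 1, pages_per_shard):
        end = min(start + pages_per_shard - 1, total_pages)
        names.append(f"pages-{start:03d}-to-{end:03d}.json")
    return names


body = make_batch_request("gs://my-bucket/in/invoice.pdf",
                          "gs://my-bucket/out/")

# With a built client, the call itself would look like:
# operation = (service.projects().locations().documents()
#              .batchProcess(parent="projects/my-project/locations/us",
#                            body=body)
#              .execute())
# `operation` is the long-running Operation object shown under Returns.

# 157 pages at 50 pages per shard yields four shards, matching the example
# in the pagesPerShard description.
print(shard_names(157, 50))
```

The defaults mirror the documented ones (pagesPerShard defaults to 20); only `requests[].inputConfig` is required, so `outputConfig` could be dropped for a synchronous process call.
</pre>
</div>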

<div class="method">
    <code class="details" id="close">close()</code>
  <pre>Close httplib2 connections.</pre>
</div>

<div class="method">
    <code class="details" id="process">process(parent, body=None, x__xgafv=None)</code>
  <pre>Processes a single document.

Args:
  parent: string, Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no location is specified, a region will be chosen automatically. This field is only populated when used in ProcessDocument method. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request to process one document.
  &quot;automlParams&quot;: { # Parameters to control AutoML model prediction behavior. # Controls AutoML model prediction behavior. AutoMlParams cannot be used together with other Params.
    &quot;model&quot;: &quot;A String&quot;, # Resource name of the AutoML model. Format: `projects/{project-id}/locations/{location-id}/models/{model-id}`.
  },
  &quot;documentType&quot;: &quot;A String&quot;, # Specifies a known document type for deeper structure detection. Valid values are currently &quot;general&quot; and &quot;invoice&quot;. If not provided, &quot;general&quot; is used as the default. If any other value is given, the request is rejected.
  &quot;entityExtractionParams&quot;: { # Parameters to control entity extraction behavior. # Controls entity extraction behavior. If not specified, the system will decide reasonable defaults.
    &quot;enabled&quot;: True or False, # Whether to enable entity extraction.
    &quot;modelVersion&quot;: &quot;A String&quot;, # Model version of the entity extraction. Default is &quot;builtin/stable&quot;. Specify &quot;builtin/latest&quot; for the latest model.
  },
  &quot;formExtractionParams&quot;: { # Parameters to control form extraction behavior. # Controls form extraction behavior. If not specified, the system will decide reasonable defaults.
    &quot;enabled&quot;: True or False, # Whether to enable form extraction.
    &quot;keyValuePairHints&quot;: [ # Reserved for future use.
      { # Reserved for future use.
        &quot;key&quot;: &quot;A String&quot;, # The key text for the hint.
        &quot;valueTypes&quot;: [ # Type of the value. This is case-insensitive, and could be one of: ADDRESS, LOCATION, ORGANIZATION, PERSON, PHONE_NUMBER, ID, NUMBER, EMAIL, PRICE, TERMS, DATE, NAME. Types not in this list will be ignored.
          &quot;A String&quot;,
        ],
      },
    ],
    &quot;modelVersion&quot;: &quot;A String&quot;, # Model version of the form extraction system. Default is &quot;builtin/stable&quot;. Specify &quot;builtin/latest&quot; for the latest model. For custom form models, specify: &quot;custom/{model_name}&quot;. Model name format is &quot;bucket_name/path/to/modeldir&quot; corresponding to &quot;gs://bucket_name/path/to/modeldir&quot; where annotated examples are stored.
  },
  &quot;inputConfig&quot;: { # The desired input location and metadata. # Required. Information about the input file.
    &quot;contents&quot;: &quot;A String&quot;, # Content in bytes, represented as a stream of bytes. Note: As with all `bytes` fields, proto buffer messages use a pure binary representation, whereas JSON representations use base64. This field only works for synchronous ProcessDocument method.
    &quot;gcsSource&quot;: { # The Google Cloud Storage location where the input file will be read from. # The Google Cloud Storage location to read the input from. This must be a single file.
      &quot;uri&quot;: &quot;A String&quot;,
    },
    &quot;mimeType&quot;: &quot;A String&quot;, # Required. MIME type of the input. Currently supported MIME types are application/pdf, image/tiff, and image/gif. In addition, the application/json type is supported for requests with the ProcessDocumentRequest.automl_params field set. The JSON file needs to be in Document format.
  },
  &quot;ocrParams&quot;: { # Parameters to control Optical Character Recognition (OCR) behavior. # Controls OCR behavior. If not specified, the system will decide reasonable defaults.
    &quot;languageHints&quot;: [ # List of languages to use for OCR. In most cases, an empty value yields the best results since it enables automatic language detection. For languages based on the Latin alphabet, setting `language_hints` is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong). Document processing returns an error if one or more of the specified languages is not one of the supported languages.
      &quot;A String&quot;,
    ],
  },
  &quot;outputConfig&quot;: { # The desired output location and metadata. # The desired output location. This field is only needed in BatchProcessDocumentsRequest.
    &quot;gcsDestination&quot;: { # The Google Cloud Storage location where the output file will be written to. # The Google Cloud Storage location to write the output to.
      &quot;uri&quot;: &quot;A String&quot;,
    },
    &quot;pagesPerShard&quot;: 42, # The max number of pages to include in each output Document shard JSON on Google Cloud Storage. The valid range is [1, 100]. If not specified, the default value is 20. For example, one PDF file with 100 pages produces 100 parsed pages. If `pages_per_shard` = 20, then 5 Document shard JSON files, each containing 20 parsed pages, are written under the prefix OutputConfig.gcs_destination.uri with the suffix pages-x-to-y.json, where x and y are 1-indexed page numbers. Example GCS outputs for 157 pages with pages_per_shard = 50: pages-001-to-050.json, pages-051-to-100.json, pages-101-to-150.json, pages-151-to-157.json.
  },
  &quot;parent&quot;: &quot;A String&quot;, # Target project and location to make a call. Format: `projects/{project-id}/locations/{location-id}`. If no location is specified, a region will be chosen automatically. This field is only populated when used in ProcessDocument method.
  &quot;tableExtractionParams&quot;: { # Parameters to control table extraction behavior. # Controls table extraction behavior. If not specified, the system will decide reasonable defaults.
    &quot;enabled&quot;: True or False, # Whether to enable table extraction.
    &quot;headerHints&quot;: [ # Optional. Reserved for future use.
      &quot;A String&quot;,
    ],
    &quot;modelVersion&quot;: &quot;A String&quot;, # Model version of the table extraction system. Default is &quot;builtin/stable&quot;. Specify &quot;builtin/latest&quot; for the latest model.
    &quot;tableBoundHints&quot;: [ # Optional. Table bounding box hints that can be provided for complex cases in which the algorithm cannot locate the table(s).
      { # A hint for a table bounding box on the page for table parsing.
        &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # Bounding box hint for a table on this page. The coordinates must be normalized to [0,1] and the bounding box must be an axis-aligned rectangle.
          &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
            { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
              &quot;x&quot;: 3.14, # X coordinate.
              &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
            },
          ],
          &quot;vertices&quot;: [ # The bounding polygon vertices.
            { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
              &quot;x&quot;: 42, # X coordinate.
              &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
            },
          ],
        },
        &quot;pageNumber&quot;: 42, # Optional. Page number for multi-paged inputs this hint applies to. If not provided, this hint will apply to all pages by default. This value is 1-based.
      },
    ],
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Document represents the canonical document resource in Document AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document AI to iterate and optimize for quality.
  &quot;chunkedDocument&quot;: { # Represents the chunks that the document is divided into. # Document chunked based on chunking config.
    &quot;chunks&quot;: [ # List of chunks.
      { # Represents a chunk.
        &quot;chunkId&quot;: &quot;A String&quot;, # ID of the chunk.
        &quot;content&quot;: &quot;A String&quot;, # Text content of the chunk.
        &quot;pageFooters&quot;: [ # Page footers associated with the chunk.
          { # Represents the page footer associated with the chunk.
            &quot;pageSpan&quot;: { # Represents where the chunk starts and ends in the document. # Page span of the footer.
              &quot;pageEnd&quot;: 42, # Page where chunk ends in the document.
              &quot;pageStart&quot;: 42, # Page where chunk starts in the document.
            },
            &quot;text&quot;: &quot;A String&quot;, # Footer in text format.
          },
        ],
        &quot;pageHeaders&quot;: [ # Page headers associated with the chunk.
          { # Represents the page header associated with the chunk.
            &quot;pageSpan&quot;: { # Represents where the chunk starts and ends in the document. # Page span of the header.
              &quot;pageEnd&quot;: 42, # Page where chunk ends in the document.
              &quot;pageStart&quot;: 42, # Page where chunk starts in the document.
            },
            &quot;text&quot;: &quot;A String&quot;, # Header in text format.
          },
        ],
        &quot;pageSpan&quot;: { # Represents where the chunk starts and ends in the document. # Page span of the chunk.
          &quot;pageEnd&quot;: 42, # Page where chunk ends in the document.
          &quot;pageStart&quot;: 42, # Page where chunk starts in the document.
        },
        &quot;sourceBlockIds&quot;: [ # Unused.
          &quot;A String&quot;,
        ],
      },
    ],
  },
  &quot;content&quot;: &quot;A String&quot;, # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
  &quot;documentLayout&quot;: { # Represents the parsed layout of a document as a collection of blocks that the document is divided into. # Parsed layout of the document.
    &quot;blocks&quot;: [ # List of blocks in the document.
      { # Represents a block. A block could be one of the various types (text, table, list) supported.
        &quot;blockId&quot;: &quot;A String&quot;, # ID of the block.
        &quot;listBlock&quot;: { # Represents a list type block. # Block consisting of list content/structure.
          &quot;listEntries&quot;: [ # List entries that constitute a list block.
            { # Represents an entry in the list.
              &quot;blocks&quot;: [ # A list entry is a list of blocks. Repeated blocks support further hierarchies and nested blocks.
                # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
              ],
            },
          ],
          &quot;type&quot;: &quot;A String&quot;, # Type of the list_entries (if exist). Available options are `ordered` and `unordered`.
        },
        &quot;pageSpan&quot;: { # Represents where the block starts and ends in the document. # Page span of the block.
          &quot;pageEnd&quot;: 42, # Page where block ends in the document.
          &quot;pageStart&quot;: 42, # Page where block starts in the document.
        },
        &quot;tableBlock&quot;: { # Represents a table type block. # Block consisting of table content/structure.
          &quot;bodyRows&quot;: [ # Body rows containing main table content.
            { # Represents a row in a table.
              &quot;cells&quot;: [ # A table row is a list of table cells.
                { # Represents a cell in a table row.
                  &quot;blocks&quot;: [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.
                    # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
                  ],
                  &quot;colSpan&quot;: 42, # How many columns this cell spans.
                  &quot;rowSpan&quot;: 42, # How many rows this cell spans.
                },
              ],
            },
          ],
          &quot;caption&quot;: &quot;A String&quot;, # Table caption/title.
          &quot;headerRows&quot;: [ # Header rows at the top of the table.
            { # Represents a row in a table.
              &quot;cells&quot;: [ # A table row is a list of table cells.
                { # Represents a cell in a table row.
                  &quot;blocks&quot;: [ # A table cell is a list of blocks. Repeated blocks support further hierarchies and nested blocks.
                    # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
                  ],
                  &quot;colSpan&quot;: 42, # How many columns this cell spans.
                  &quot;rowSpan&quot;: 42, # How many rows this cell spans.
                },
              ],
            },
          ],
        },
        &quot;textBlock&quot;: { # Represents a text type block. # Block consisting of text content.
          &quot;blocks&quot;: [ # A text block could further have child blocks. Repeated blocks support further hierarchies and nested blocks.
            # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentDocumentLayoutDocumentLayoutBlock
          ],
          &quot;text&quot;: &quot;A String&quot;, # Text content stored in the block.
          &quot;type&quot;: &quot;A String&quot;, # Type of the text in the block. Available options are: `paragraph`, `subtitle`, `heading-1`, `heading-2`, `heading-3`, `heading-4`, `heading-5`, `header`, `footer`.
        },
      },
    ],
  },
  &quot;entities&quot;: [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.
    { # An entity that could be a phrase in the text or a property that belongs to the document. It is a known entity type, such as a person, an organization, or location.
      &quot;confidence&quot;: 3.14, # Optional. Confidence of detected Schema entity. Range `[0, 1]`.
      &quot;id&quot;: &quot;A String&quot;, # Optional. Canonical id. This will be a unique value in the entity list for this document.
      &quot;mentionId&quot;: &quot;A String&quot;, # Optional. Deprecated. Use `id` field instead.
      &quot;mentionText&quot;: &quot;A String&quot;, # Optional. Text value of the entity e.g. `1600 Amphitheatre Pkwy`.
      &quot;normalizedValue&quot;: { # Parsed and normalized entity value. # Optional. Normalized entity value. Absent if the extracted value could not be converted or the type (e.g. address) is not supported for certain parsers. This field is also only populated for certain supported document types.
        &quot;addressValue&quot;: { # Represents a postal address, e.g. for postal delivery or payments addresses. Given a postal address, a postal service can deliver items to a premise, P.O. Box or similar. It is not intended to model geographical locations (roads, towns, mountains). In typical usage an address would be created via user input or from importing existing data, depending on the type of process. Advice on address input / editing: - Use an internationalization-ready address widget such as https://github.com/google/libaddressinput) - Users should not be presented with UI elements for input or editing of fields outside countries where that field is used. For more guidance on how to use this schema, please see: https://support.google.com/business/answer/6397478 # Postal address. See also: https://github.com/googleapis/googleapis/blob/master/google/type/postal_address.proto
          &quot;addressLines&quot;: [ # Unstructured address lines describing the lower levels of an address. Because values in address_lines do not have type information and may sometimes contain multiple values in a single field (e.g. &quot;Austin, TX&quot;), it is important that the line order is clear. The order of address lines should be &quot;envelope order&quot; for the country/region of the address. In places where this can vary (e.g. Japan), address_language is used to make it explicit (e.g. &quot;ja&quot; for large-to-small ordering and &quot;ja-Latn&quot; or &quot;en&quot; for small-to-large). This way, the most specific line of an address can be selected based on the language. The minimum permitted structural representation of an address consists of a region_code with all remaining information placed in the address_lines. It would be possible to format such an address very approximately without geocoding, but no semantic reasoning could be made about any of the address components until it was at least partially resolved. Creating an address only containing a region_code and address_lines, and then geocoding is the recommended way to handle completely unstructured addresses (as opposed to guessing which parts of the address should be localities or administrative areas).
            &quot;A String&quot;,
          ],
          &quot;administrativeArea&quot;: &quot;A String&quot;, # Optional. Highest administrative subdivision which is used for postal addresses of a country or region. For example, this can be a state, a province, an oblast, or a prefecture. Specifically, for Spain this is the province and not the autonomous community (e.g. &quot;Barcelona&quot; and not &quot;Catalonia&quot;). Many countries don&#x27;t use an administrative area in postal addresses. E.g. in Switzerland this should be left unpopulated.
          &quot;languageCode&quot;: &quot;A String&quot;, # Optional. BCP-47 language code of the contents of this address (if known). This is often the UI language of the input form or is expected to match one of the languages used in the address&#x27; country/region, or their transliterated equivalents. This can affect formatting in certain countries, but is not critical to the correctness of the data and will never affect any validation or other non-formatting related operations. If this value is not known, it should be omitted (rather than specifying a possibly incorrect default). Examples: &quot;zh-Hant&quot;, &quot;ja&quot;, &quot;ja-Latn&quot;, &quot;en&quot;.
          &quot;locality&quot;: &quot;A String&quot;, # Optional. Generally refers to the city/town portion of the address. Examples: US city, IT comune, UK post town. In regions of the world where localities are not well defined or do not fit into this structure well, leave locality empty and use address_lines.
          &quot;organization&quot;: &quot;A String&quot;, # Optional. The name of the organization at the address.
          &quot;postalCode&quot;: &quot;A String&quot;, # Optional. Postal code of the address. Not all countries use or require postal codes to be present, but where they are used, they may trigger additional validation with other parts of the address (e.g. state/zip validation in the U.S.A.).
          &quot;recipients&quot;: [ # Optional. The recipient at the address. This field may, under certain circumstances, contain multiline information. For example, it might contain &quot;care of&quot; information.
            &quot;A String&quot;,
          ],
          &quot;regionCode&quot;: &quot;A String&quot;, # Required. CLDR region code of the country/region of the address. This is never inferred and it is up to the user to ensure the value is correct. See https://cldr.unicode.org/ and https://www.unicode.org/cldr/charts/30/supplemental/territory_information.html for details. Example: &quot;CH&quot; for Switzerland.
          &quot;revision&quot;: 42, # The schema revision of the `PostalAddress`. This must be set to 0, which is the latest revision. All new revisions **must** be backward compatible with old revisions.
          &quot;sortingCode&quot;: &quot;A String&quot;, # Optional. Additional, country-specific, sorting code. This is not used in most regions. Where it is used, the value is either a string like &quot;CEDEX&quot;, optionally followed by a number (e.g. &quot;CEDEX 7&quot;), or just a number alone, representing the &quot;sector code&quot; (Jamaica), &quot;delivery area indicator&quot; (Malawi) or &quot;post office indicator&quot; (e.g. Côte d&#x27;Ivoire).
          &quot;sublocality&quot;: &quot;A String&quot;, # Optional. Sublocality of the address. For example, this can be neighborhoods, boroughs, districts.
        },
        &quot;booleanValue&quot;: True or False, # Boolean value. Can be used for entities with binary values, or for checkboxes.
        &quot;dateValue&quot;: { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Date value. Includes year, month, day. See also: https://github.com/googleapis/googleapis/blob/master/google/type/date.proto
          &quot;day&quot;: 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn&#x27;t significant.
          &quot;month&quot;: 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
          &quot;year&quot;: 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
        },
        &quot;datetimeValue&quot;: { # Represents civil time (or occasionally physical time). This type can represent a civil time in one of a few possible ways: * When utc_offset is set and time_zone is unset: a civil time on a calendar day with a particular offset from UTC. * When time_zone is set and utc_offset is unset: a civil time on a calendar day in a particular time zone. * When neither time_zone nor utc_offset is set: a civil time on a calendar day in local time. The date is relative to the Proleptic Gregorian Calendar. If year, month, or day are 0, the DateTime is considered not to have a specific year, month, or day respectively. This type may also be used to represent a physical time if all the date and time fields are set and either case of the `time_offset` oneof is set. Consider using `Timestamp` message for physical time instead. If your use case also would like to store the user&#x27;s timezone, that can be done in another field. This type is more flexible than some applications may want. Make sure to document and validate your application&#x27;s limitations. # DateTime value. Includes date, time, and timezone. See also: https://github.com/googleapis/googleapis/blob/master/google/type/datetime.proto
          &quot;day&quot;: 42, # Optional. Day of month. Must be from 1 to 31 and valid for the year and month, or 0 if specifying a datetime without a day.
          &quot;hours&quot;: 42, # Optional. Hours of day in 24-hour format. Should be from 0 to 23, defaults to 0 (midnight). An API may choose to allow the value &quot;24:00:00&quot; for scenarios like business closing time.
          &quot;minutes&quot;: 42, # Optional. Minutes of hour of day. Must be from 0 to 59, defaults to 0.
          &quot;month&quot;: 42, # Optional. Month of year. Must be from 1 to 12, or 0 if specifying a datetime without a month.
          &quot;nanos&quot;: 42, # Optional. Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999, defaults to 0.
          &quot;seconds&quot;: 42, # Optional. Seconds of the minute. Must normally be from 0 to 59, defaults to 0. An API may allow the value 60 if it allows leap seconds.
          &quot;timeZone&quot;: { # Represents a time zone from the [IANA Time Zone Database](https://www.iana.org/time-zones). # Time zone.
            &quot;id&quot;: &quot;A String&quot;, # IANA Time Zone Database time zone, e.g. &quot;America/New_York&quot;.
            &quot;version&quot;: &quot;A String&quot;, # Optional. IANA Time Zone Database version number, e.g. &quot;2019a&quot;.
          },
          &quot;utcOffset&quot;: &quot;A String&quot;, # UTC offset. Must be whole seconds, between -18 hours and +18 hours. For example, a UTC offset of -4:00 would be represented as { seconds: -14400 }.
          &quot;year&quot;: 42, # Optional. Year of date. Must be from 1 to 9999, or 0 if specifying a datetime without a year.
        },
        &quot;floatValue&quot;: 3.14, # Float value.
        &quot;integerValue&quot;: 42, # Integer value.
        &quot;moneyValue&quot;: { # Represents an amount of money with its currency type. # Money value. See also: https://github.com/googleapis/googleapis/blob/master/google/type/money.proto
          &quot;currencyCode&quot;: &quot;A String&quot;, # The three-letter currency code defined in ISO 4217.
          &quot;nanos&quot;: 42, # Number of nano (10^-9) units of the amount. The value must be between -999,999,999 and +999,999,999 inclusive. If `units` is positive, `nanos` must be positive or zero. If `units` is zero, `nanos` can be positive, zero, or negative. If `units` is negative, `nanos` must be negative or zero. For example $-1.75 is represented as `units`=-1 and `nanos`=-750,000,000.
          &quot;units&quot;: &quot;A String&quot;, # The whole units of the amount. For example if `currencyCode` is `&quot;USD&quot;`, then 1 unit is one US dollar.
        },
        &quot;text&quot;: &quot;A String&quot;, # Optional. A field to store a normalized string. For some entity types, one of the respective `structured_value` fields may also be populated. Note that not all types of `structured_value` are normalized; for example, some processors may not generate `float` or `integer` normalized text by default. Below are sample formats mapped to structured values. - Money/Currency type (`money_value`) is in the ISO 4217 text format. - Date type (`date_value`) is in the ISO 8601 text format. - Datetime type (`datetime_value`) is in the ISO 8601 text format.
      },
      &quot;pageAnchor&quot;: { # Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons, and optionally reference specific layout element types. # Optional. Represents the provenance of this entity with respect to the location on the page where it was found.
        &quot;pageRefs&quot;: [ # One or more references to visual page elements
          { # Represents a weak reference to a page element within a document.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Optional. Identifies the bounding polygon of a layout element on the page. If `layout_type` is set, the bounding polygon must be exactly the same as the layout element it refers to.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Optional. Confidence of detected page element, if applicable. Range `[0, 1]`.
            &quot;layoutId&quot;: &quot;A String&quot;, # Optional. Deprecated. Use PageRef.bounding_poly instead.
            &quot;layoutType&quot;: &quot;A String&quot;, # Optional. The type of the layout element that is being referenced if any.
            &quot;page&quot;: &quot;A String&quot;, # Required. Index into the Document.pages list, used to locate the related page element. This field is skipped in the JSON output when its value is the default `0`. See https://developers.google.com/protocol-buffers/docs/proto3#json.
          },
        ],
      },
      &quot;properties&quot;: [ # Optional. Entities can be nested to form a hierarchical data structure representing the content in the document.
        # Object with schema name: GoogleCloudDocumentaiV1beta2DocumentEntity
      ],
      &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # Optional. The history of this annotation.
        &quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
        &quot;parents&quot;: [ # References to the original elements that are replaced.
          { # The parent element the current element is based on. Used for referencing/aligning, removal and replacement operations.
            &quot;id&quot;: 42, # The id of the parent provenance.
            &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g. list of entities, properties within entities, etc.) in the parent revision.
            &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
          },
        ],
        &quot;revision&quot;: 42, # The index of the revision that produced this element.
        &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
      },
      &quot;redacted&quot;: True or False, # Optional. Whether the entity will be redacted for de-identification purposes.
      &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Optional. Provenance of the entity. Text anchor indexing into the Document.text.
        &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
        &quot;textSegments&quot;: [ # The text segments from the Document.text.
          { # A text segment in the Document.text. The indices may be out of bounds, indicating that the text extends into another shard of a large sharded document. See ShardInfo.text_offset.
            &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
            &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
          },
        ],
      },
      &quot;type&quot;: &quot;A String&quot;, # Required. Entity type from a schema e.g. `Address`.
    },
  ],
  &quot;entityRelations&quot;: [ # Placeholder. Relationship among Document.entities.
    { # Relationship between Entities.
      &quot;objectId&quot;: &quot;A String&quot;, # Object entity id.
      &quot;relation&quot;: &quot;A String&quot;, # Relationship description.
      &quot;subjectId&quot;: &quot;A String&quot;, # Subject entity id.
    },
  ],
  &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Any error that occurred while processing this document.
    &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
    &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
      {
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
    ],
    &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  },
  &quot;labels&quot;: [ # Labels for this document.
    { # Label attaches schema information and/or other metadata to segments within a Document. Multiple Labels on a single field can denote either different labels, different instances of the same label created at different times, or some combination of both.
      &quot;automlModel&quot;: &quot;A String&quot;, # The label is generated by an AutoML model. This field stores the full resource name of the AutoML model. Format: `projects/{project-id}/locations/{location-id}/models/{model-id}`
      &quot;confidence&quot;: 3.14, # Confidence score between 0 and 1 for label assignment.
      &quot;name&quot;: &quot;A String&quot;, # Name of the label. When the label is generated by an AutoML Text Classification model, this field represents the name of the category.
    },
  ],
  &quot;mimeType&quot;: &quot;A String&quot;, # An IANA published [media type (MIME type)](https://www.iana.org/assignments/media-types/media-types.xhtml).
  &quot;pages&quot;: [ # Visual page layout for the Document.
    { # A page in a Document.
      &quot;blocks&quot;: [ # A list of visually detected text blocks on the page. A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.
        { # A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.
          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Block.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for. For example, the confidence can be for a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, indicating that the text extends into another shard of a large sharded document. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
            &quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
            &quot;parents&quot;: [ # References to the original elements that are replaced.
              { # The parent element the current element is based on. Used for referencing/aligning, removal and replacement operations.
                &quot;id&quot;: 42, # The id of the parent provenance.
                &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g. list of entities, properties within entities, etc.) in the parent revision.
                &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
              },
            ],
            &quot;revision&quot;: 42, # The index of the revision that produced this element.
            &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
          },
        },
      ],
      &quot;detectedBarcodes&quot;: [ # A list of detected barcodes.
        { # A detected barcode.
          &quot;barcode&quot;: { # Encodes the detailed information of a barcode. # Detailed barcode information of the DetectedBarcode.
            &quot;format&quot;: &quot;A String&quot;, # Format of a barcode. The supported formats are: - `CODE_128`: Code 128 type. - `CODE_39`: Code 39 type. - `CODE_93`: Code 93 type. - `CODABAR`: Codabar type. - `DATA_MATRIX`: 2D Data Matrix type. - `ITF`: ITF type. - `EAN_13`: EAN-13 type. - `EAN_8`: EAN-8 type. - `QR_CODE`: 2D QR code type. - `UPC_A`: UPC-A type. - `UPC_E`: UPC-E type. - `PDF417`: PDF417 type. - `AZTEC`: 2D Aztec code type. - `DATABAR`: GS1 DataBar code type.
            &quot;rawValue&quot;: &quot;A String&quot;, # Raw value encoded in the barcode. For example: `&#x27;MEBKM:TITLE:Google;URL:https://www.google.com;;&#x27;`.
            &quot;valueFormat&quot;: &quot;A String&quot;, # Value format describes the format of the value that a barcode encodes. The supported formats are: - `CONTACT_INFO`: Contact information. - `EMAIL`: Email address. - `ISBN`: ISBN identifier. - `PHONE`: Phone number. - `PRODUCT`: Product. - `SMS`: SMS message. - `TEXT`: Text string. - `URL`: URL address. - `WIFI`: Wifi information. - `GEO`: Geo-localization. - `CALENDAR_EVENT`: Calendar event. - `DRIVER_LICENSE`: Driver&#x27;s license.
          },
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for DetectedBarcode.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for. For example, the confidence can be for a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, indicating that the text extends into another shard of a large sharded document. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
        },
      ],
      &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
        { # Detected language for a structural component.
          &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
          &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
        },
      ],
      &quot;dimension&quot;: { # Dimension for the page. # Physical dimension of the page.
        &quot;height&quot;: 3.14, # Page height.
        &quot;unit&quot;: &quot;A String&quot;, # Dimension unit.
        &quot;width&quot;: 3.14, # Page width.
      },
      &quot;formFields&quot;: [ # A list of visually detected form fields on the page.
        { # A form field detected on the page.
          &quot;correctedKeyText&quot;: &quot;A String&quot;, # Created for Labeling UI to export key text. If corrections were made to the text identified by the `field_name.text_anchor`, this field will contain the correction.
          &quot;correctedValueText&quot;: &quot;A String&quot;, # Created for Labeling UI to export value text. If corrections were made to the text identified by the `field_value.text_anchor`, this field will contain the correction.
          &quot;fieldName&quot;: { # Visual element describing a layout unit on a page. # Layout for the FormField name. e.g. `Address`, `Email`, `Grand total`, `Phone number`, etc.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for. For example, the confidence can be for a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, indicating that the text extends into another shard of a large sharded document. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;fieldValue&quot;: { # Visual element describing a layout unit on a page. # Layout for the FormField value.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for. For example, the confidence can be for a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, indicating that the text extends into another shard of a large sharded document. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;nameDetectedLanguages&quot;: [ # A list of detected languages for name together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
            &quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
            &quot;parents&quot;: [ # References to the original elements that are replaced.
              { # The parent element the current element is based on. Used for referencing/aligning, removal and replacement operations.
                &quot;id&quot;: 42, # The id of the parent provenance.
                &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g. list of entities, properties within entities, etc.) in the parent revision.
                &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
              },
            ],
            &quot;revision&quot;: 42, # The index of the revision that produced this element.
            &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
          },
          &quot;valueDetectedLanguages&quot;: [ # A list of detected languages for value together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;valueType&quot;: &quot;A String&quot;, # If the value is non-textual, this field represents the type. Current valid values are: - blank (this indicates the `field_value` is normal text) - `unfilled_checkbox` - `filled_checkbox`
        },
      ],
      &quot;image&quot;: { # Rendered image contents for this page. # Rendered image for this page. This image is preprocessed to remove any skew, rotation, and distortions such that the annotation bounding boxes can be upright and axis-aligned.
        &quot;content&quot;: &quot;A String&quot;, # Raw byte content of the image.
        &quot;height&quot;: 42, # Height of the image in pixels.
        &quot;mimeType&quot;: &quot;A String&quot;, # Encoding [media type (MIME type)](https://www.iana.org/assignments/media-types/media-types.xhtml) for the image.
        &quot;width&quot;: 42, # Width of the image in pixels.
      },
      &quot;imageQualityScores&quot;: { # Image quality scores for the page image. # Image quality scores.
        &quot;detectedDefects&quot;: [ # A list of detected defects.
          { # An image quality defect.
            &quot;confidence&quot;: 3.14, # Confidence of detected defect. Range `[0, 1]` where `1` indicates strong confidence that the defect exists.
            &quot;type&quot;: &quot;A String&quot;, # Name of the defect type. Supported values are: - `quality/defect_blurry` - `quality/defect_noisy` - `quality/defect_dark` - `quality/defect_faint` - `quality/defect_text_too_small` - `quality/defect_document_cutoff` - `quality/defect_text_cutoff` - `quality/defect_glare`
          },
        ],
        &quot;qualityScore&quot;: 3.14, # The overall quality score. Range `[0, 1]` where `1` is perfect quality.
      },
      &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for the page.
        &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
          &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
            { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
              &quot;x&quot;: 3.14, # X coordinate.
              &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
            },
          ],
          &quot;vertices&quot;: [ # The bounding polygon vertices.
            { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
              &quot;x&quot;: 42, # X coordinate.
              &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
            },
          ],
        },
        &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
        &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
        &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
          &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
          &quot;textSegments&quot;: [ # The text segments from the Document.text.
            { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
              &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
              &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
            },
          ],
        },
      },
      &quot;lines&quot;: [ # A list of visually detected text lines on the page. A collection of tokens that a human would perceive as a line.
        { # A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.
          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Line.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
            &quot;id&quot;: 42, # The ID of this operation. Must be unique within the scope of the revision.
            &quot;parents&quot;: [ # References to the original elements that are replaced.
              { # The parent element the current element is based on. Used for referencing/aligning, removal, and replacement operations.
                &quot;id&quot;: 42, # The ID of the parent provenance.
                &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g., the list of entities or properties within entities) in the parent revision.
                &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
              },
            ],
            &quot;revision&quot;: 42, # The index of the revision that produced this element.
            &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
          },
        },
      ],
      &quot;pageNumber&quot;: 42, # 1-based index for current Page in a parent Document. Useful when a page is taken out of a Document for individual processing.
      &quot;paragraphs&quot;: [ # A list of visually detected text paragraphs on the page. A collection of lines that a human would perceive as a paragraph.
        { # A collection of lines that a human would perceive as a paragraph.
          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Paragraph.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
            &quot;id&quot;: 42, # The ID of this operation. Must be unique within the scope of the revision.
            &quot;parents&quot;: [ # References to the original elements that are replaced.
              { # The parent element the current element is based on. Used for referencing/aligning, removal, and replacement operations.
                &quot;id&quot;: 42, # The ID of the parent provenance.
                &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g., the list of entities or properties within entities) in the parent revision.
                &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
              },
            ],
            &quot;revision&quot;: 42, # The index of the revision that produced this element.
            &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
          },
        },
      ],
      &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this page.
        &quot;id&quot;: 42, # The ID of this operation. Must be unique within the scope of the revision.
        &quot;parents&quot;: [ # References to the original elements that are replaced.
          { # The parent element the current element is based on. Used for referencing/aligning, removal, and replacement operations.
            &quot;id&quot;: 42, # The ID of the parent provenance.
            &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g., the list of entities or properties within entities) in the parent revision.
            &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
          },
        ],
        &quot;revision&quot;: 42, # The index of the revision that produced this element.
        &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
      },
      &quot;symbols&quot;: [ # A list of visually detected symbols on the page.
        { # A detected symbol.
          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Symbol.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
        },
      ],
      &quot;tables&quot;: [ # A list of visually detected tables on the page.
        { # A table representation similar to HTML table structure.
          &quot;bodyRows&quot;: [ # Body rows of the table.
            { # A row of table cells.
              &quot;cells&quot;: [ # Cells that make up this row.
                { # A cell representation inside the table.
                  &quot;colSpan&quot;: 42, # How many columns this cell spans.
                  &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
                    { # Detected language for a structural component.
                      &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
                      &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
                    },
                  ],
                  &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for TableCell.
                    &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
                      &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                        { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                          &quot;x&quot;: 3.14, # X coordinate.
                          &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                        },
                      ],
                      &quot;vertices&quot;: [ # The bounding polygon vertices.
                        { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                          &quot;x&quot;: 42, # X coordinate.
                          &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                        },
                      ],
                    },
                    &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
                    &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
                    &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
                      &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
                      &quot;textSegments&quot;: [ # The text segments from the Document.text.
                        { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                          &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
                          &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                        },
                      ],
                    },
                  },
                  &quot;rowSpan&quot;: 42, # How many rows this cell spans.
                },
              ],
            },
          ],
          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;headerRows&quot;: [ # Header rows of the table.
            { # A row of table cells.
              &quot;cells&quot;: [ # Cells that make up this row.
                { # A cell representation inside the table.
                  &quot;colSpan&quot;: 42, # How many columns this cell spans.
                  &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
                    { # Detected language for a structural component.
                      &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
                      &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
                    },
                  ],
                  &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for TableCell.
                    &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
                      &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                        { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                          &quot;x&quot;: 3.14, # X coordinate.
                          &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                        },
                      ],
                      &quot;vertices&quot;: [ # The bounding polygon vertices.
                        { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                          &quot;x&quot;: 42, # X coordinate.
                          &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                        },
                      ],
                    },
                    &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
                    &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
                    &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
                      &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
                      &quot;textSegments&quot;: [ # The text segments from the Document.text.
                        { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                          &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
                          &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                        },
                      ],
                    },
                  },
                  &quot;rowSpan&quot;: 42, # How many rows this cell spans.
                },
              ],
            },
          ],
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Table.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this table.
            &quot;id&quot;: 42, # The ID of this operation. Must be unique within the scope of the revision.
            &quot;parents&quot;: [ # References to the original elements that are replaced.
              { # The parent element the current element is based on. Used for referencing/aligning, removal, and replacement operations.
                &quot;id&quot;: 42, # The ID of the parent provenance.
                &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g., the list of entities or properties within entities) in the parent revision.
                &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
              },
            ],
            &quot;revision&quot;: 42, # The index of the revision that produced this element.
            &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
          },
        },
      ],
      &quot;tokens&quot;: [ # A list of visually detected tokens on the page.
        { # A detected token.
          &quot;detectedBreak&quot;: { # Detected break at the end of a Token. # Detected break at the end of a Token.
            &quot;type&quot;: &quot;A String&quot;, # Detected break type.
          },
          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Token.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of this Layout within the context of the object it describes. For example, the confidence can apply to a single token, a table, or a visual element, depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half-open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
            &quot;id&quot;: 42, # The ID of this operation. Must be unique within the scope of the revision.
            &quot;parents&quot;: [ # References to the original elements that are replaced.
              { # The parent element the current element is based on. Used for referencing/aligning, removal, and replacement operations.
                &quot;id&quot;: 42, # The ID of the parent provenance.
                &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (e.g., the list of entities or properties within entities) in the parent revision.
                &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
              },
            ],
            &quot;revision&quot;: 42, # The index of the revision that produced this element.
            &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
          },
          &quot;styleInfo&quot;: { # Font and other text style attributes. # Text style attributes.
            &quot;backgroundColor&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages rather than for compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t have information about the absolute color space that should be used to interpret the RGB value—for example, sRGB, Adobe RGB, DCI-P3, and BT.2020. By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most `1e-5`. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0f; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0f; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... 
static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Color of the background.
              &quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
              &quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
              &quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
              &quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
            },
            &quot;bold&quot;: True or False, # Whether the text is bold (equivalent to a font_weight of at least `700`).
            &quot;fontSize&quot;: 42, # Font size in points (`1` point is `¹⁄₇₂` inches).
            &quot;fontType&quot;: &quot;A String&quot;, # Name or style of the font.
            &quot;fontWeight&quot;: 42, # TrueType weight on a scale `100` (thin) to `1000` (ultra-heavy). Normal is `400`, bold is `700`.
            &quot;handwritten&quot;: True or False, # Whether the text is handwritten.
            &quot;italic&quot;: True or False, # Whether the text is italic.
            &quot;letterSpacing&quot;: 3.14, # Letter spacing in points.
            &quot;pixelFontSize&quot;: 3.14, # Font size in pixels, equal to _unrounded font_size_ * _resolution_ ÷ `72.0`.
            &quot;smallcaps&quot;: True or False, # Whether the text is in small caps. This feature is not supported yet.
            &quot;strikeout&quot;: True or False, # Whether the text is strikethrough. This feature is not supported yet.
            &quot;subscript&quot;: True or False, # Whether the text is a subscript. This feature is not supported yet.
            &quot;superscript&quot;: True or False, # Whether the text is a superscript. This feature is not supported yet.
            &quot;textColor&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t have information about the absolute color space that should be used to interpret the RGB value—for example, sRGB, Adobe RGB, DCI-P3, and BT.2020. By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most `1e-5`. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... 
static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Color of the text.
              &quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
              &quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
              &quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
              &quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
            },
            &quot;underlined&quot;: True or False, # Whether the text is underlined.
          },
        },
      ],
      &quot;transforms&quot;: [ # Transformation matrices that were applied to the original document image to produce Page.image.
        { # Representation of a transformation matrix, intended to be compatible with the OpenCV matrix format for image manipulation.
          &quot;cols&quot;: 42, # Number of columns in the matrix.
          &quot;data&quot;: &quot;A String&quot;, # The matrix data.
          &quot;rows&quot;: 42, # Number of rows in the matrix.
          &quot;type&quot;: 42, # This encodes information about what data type the matrix uses. For example, 0 (CV_8U) is an unsigned 8-bit image. For the full list of OpenCV primitive data types, please refer to https://docs.opencv.org/4.3.0/d1/d1b/group__core__hal__interface.html
        },
      ],
      &quot;visualElements&quot;: [ # A list of detected non-text visual elements on the page, e.g. checkbox, signature, etc.
        { # A detected non-text visual element on the page, e.g. checkbox, signature, etc.
          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
            { # Detected language for a structural component.
              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range `[0, 1]`.
              &quot;languageCode&quot;: &quot;A String&quot;, # The [BCP-47 language code](https://www.unicode.org/reports/tr35/#Unicode_locale_identifier), such as `en-US` or `sr-Latn`.
            },
          ],
          &quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for VisualElement.
            &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                { # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
                  &quot;x&quot;: 3.14, # X coordinate.
                  &quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
                },
              ],
              &quot;vertices&quot;: [ # The bounding polygon vertices.
                { # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
                  &quot;x&quot;: 42, # X coordinate.
                  &quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
                },
              ],
            },
            &quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for, e.g. the confidence can be for a single token, a table, a visual element, etc., depending on context. Range `[0, 1]`.
            &quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
            &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
              &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
              &quot;textSegments&quot;: [ # The text segments from the Document.text.
                { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
                  &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
                  &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
                },
              ],
            },
          },
          &quot;type&quot;: &quot;A String&quot;, # Type of the VisualElement.
        },
      ],
    },
  ],
  &quot;revisions&quot;: [ # Placeholder. Revision history of this document.
    { # Contains past or forward revisions of this document.
      &quot;agent&quot;: &quot;A String&quot;, # If the change was made by a person, specify the name or id of that person.
      &quot;createTime&quot;: &quot;A String&quot;, # The time that the revision was created, internally generated by doc proto storage.
      &quot;humanReview&quot;: { # Human Review information of the document. # Human Review information of this revision.
        &quot;state&quot;: &quot;A String&quot;, # Human review state, e.g. `requested`, `succeeded`, `rejected`.
        &quot;stateMessage&quot;: &quot;A String&quot;, # A message providing more details about the current state of processing. For example, the rejection reason when the state is `rejected`.
      },
      &quot;id&quot;: &quot;A String&quot;, # Id of the revision, internally generated by doc proto storage. Unique within the context of the document.
      &quot;parent&quot;: [ # The revisions that this revision is based on. This can include one or more parents (when documents are merged). Each entry is an index into the `revisions` field.
        42,
      ],
      &quot;parentIds&quot;: [ # The revisions that this revision is based on. Must include all the ids that are relevant to this revision, e.g. the `provenance.parent.revision` fields index into this field.
        &quot;A String&quot;,
      ],
      &quot;processor&quot;: &quot;A String&quot;, # If the annotation was made by a processor, identify the processor by its resource name.
    },
  ],
  &quot;shardInfo&quot;: { # For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is. # Information about the sharding if this document is sharded as part of a larger document. If the document is not sharded, this message is not specified.
    &quot;shardCount&quot;: &quot;A String&quot;, # Total number of shards.
    &quot;shardIndex&quot;: &quot;A String&quot;, # The 0-based index of this shard.
    &quot;textOffset&quot;: &quot;A String&quot;, # The index of the first character in Document.text in the overall document global text.
  },
  &quot;text&quot;: &quot;A String&quot;, # Optional. UTF-8 encoded text in reading order from the document.
  &quot;textChanges&quot;: [ # Placeholder. A list of text corrections made to Document.text. This is usually used for annotating corrections to OCR mistakes. Text changes for a given revision may not overlap with each other.
    { # This message is used for text changes, a.k.a. OCR corrections.
      &quot;changedText&quot;: &quot;A String&quot;, # The text that replaces the text identified in the `text_anchor`.
      &quot;provenance&quot;: [ # The history of this annotation.
        { # Structure to identify provenance relationships between annotations in different revisions.
          &quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
          &quot;parents&quot;: [ # References to the original elements that are replaced.
            { # The parent element the current element is based on. Used for referencing/aligning, removal and replacement operations.
              &quot;id&quot;: 42, # The id of the parent provenance.
              &quot;index&quot;: 42, # The index of the parent item in the corresponding item list (eg. list of entities, properties within entities, etc.) in the parent revision.
              &quot;revision&quot;: 42, # The index into the current revision&#x27;s parent_ids list.
            },
          ],
          &quot;revision&quot;: 42, # The index of the revision that produced this element.
          &quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
        },
      ],
      &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Provenance of the correction. Text anchor indexing into the Document.text. There can only be a single `TextAnchor.text_segments` element. If the start and end index of the text segment are the same, the text change is inserted before that index.
        &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
        &quot;textSegments&quot;: [ # The text segments from the Document.text.
          { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
            &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
            &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
          },
        ],
      },
    },
  ],
  &quot;textStyles&quot;: [ # Styles for the Document.text.
    { # Annotation for common text style attributes. This adheres to CSS conventions as much as possible.
      &quot;backgroundColor&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t have information about the absolute color space that should be used to interpret the RGB value—for example, sRGB, Adobe RGB, DCI-P3, and BT.2020. By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most `1e-5`. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... 
static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Text background color.
        &quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
        &quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
        &quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
        &quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
      },
      &quot;color&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t have information about the absolute color space that should be used to interpret the RGB value—for example, sRGB, Adobe RGB, DCI-P3, and BT.2020. By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most `1e-5`. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { resultBuilder.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... 
static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Text color.
        &quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
        &quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
        &quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
        &quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
      },
      &quot;fontFamily&quot;: &quot;A String&quot;, # Font family, such as `Arial` or `Times New Roman`. https://www.w3schools.com/cssref/pr_font_font-family.asp
      &quot;fontSize&quot;: { # Font size with unit. # Font size.
        &quot;size&quot;: 3.14, # Font size for the text.
        &quot;unit&quot;: &quot;A String&quot;, # Unit for the font size. Follows CSS naming (such as `in`, `px`, and `pt`).
      },
      &quot;fontWeight&quot;: &quot;A String&quot;, # [Font weight](https://www.w3schools.com/cssref/pr_font_weight.asp). Possible values are `normal`, `bold`, `bolder`, and `lighter`.
      &quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
        &quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments. It is always populated for formFields.
        &quot;textSegments&quot;: [ # The text segments from the Document.text.
          { # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
            &quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
            &quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
          },
        ],
      },
      &quot;textDecoration&quot;: &quot;A String&quot;, # [Text decoration](https://www.w3schools.com/cssref/pr_text_text-decoration.asp). Follows CSS standard.
      &quot;textStyle&quot;: &quot;A String&quot;, # [Text style](https://www.w3schools.com/cssref/pr_font_font-style.asp). Possible values are `normal`, `italic`, and `oblique`.
    },
  ],
  &quot;uri&quot;: &quot;A String&quot;, # Optional. Currently supports Google Cloud Storage URI of the form `gs://bucket_name/object_name`. Object versioning is not supported. For more information, refer to [Google Cloud Storage Request URIs](https://cloud.google.com/storage/docs/reference-uris).
}</pre>
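Many fields in the response above (layouts, visual elements, text changes, styles) locate their text through a `textAnchor` whose `textSegments` index into `Document.text`. The sketch below (not part of the client library; the `document` and `anchor` dicts are hypothetical inputs) shows how to resolve such an anchor from the parsed JSON response. Note that `startIndex` and `endIndex` arrive as strings in the JSON encoding and that `endIndex` is half-open.

```python
def anchored_text(document, text_anchor):
    """Concatenate the spans of Document.text that a TextAnchor points at.

    Indices are int64 values encoded as strings in JSON, so cast to int.
    endIndex is a half-open index, which matches Python slicing directly.
    """
    text = document.get("text", "")
    pieces = []
    for segment in text_anchor.get("textSegments", []):
        start = int(segment.get("startIndex", 0))  # startIndex may be omitted when 0
        end = int(segment["endIndex"])
        pieces.append(text[start:end])
    return "".join(pieces)

# Hypothetical parsed response fragments, following the schema above.
document = {"text": "Invoice #123\nTotal: $40.00\n"}
anchor = {"textSegments": [{"startIndex": "0", "endIndex": "12"}]}
print(anchored_text(document, anchor))  # Invoice #123
```

For sharded documents, indices may fall outside this shard's `text`; subtract `shardInfo.textOffset` before slicing when working shard by shard.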
</div>
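The `Color` descriptions above embed Java, Obj-C, and JavaScript conversion examples. For symmetry with this Python client, here is a minimal Python sketch of the same conversion, assuming the color arrives as a parsed JSON dict in which the `alpha` wrapper serializes to a plain number. Missing channels default to `0.0`, and a missing `alpha` key means fully opaque, mirroring the JavaScript example.

```python
def proto_to_css_color(rgb_color):
    """Convert a google.type.Color JSON dict to a CSS color string."""
    # Channel fractions are in [0, 1]; scale to 0-255 with floor, as in
    # the embedded JavaScript example's Math.floor.
    red = int(rgb_color.get("red", 0.0) * 255)
    green = int(rgb_color.get("green", 0.0) * 255)
    blue = int(rgb_color.get("blue", 0.0) * 255)
    if "alpha" not in rgb_color:
        # No alpha wrapper present: render as a solid hex color.
        return "#{:02x}{:02x}{:02x}".format(red, green, blue)
    return "rgba({},{},{},{})".format(red, green, blue, rgb_color["alpha"])

print(proto_to_css_color({"red": 1.0, "green": 0.5}))  # #ff7f00
print(proto_to_css_color({"red": 1.0, "alpha": 0.5}))  # rgba(255,0,0,0.5)
```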

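The `textChanges` field describes OCR corrections against `Document.text`: each change carries a single text segment and replacement text, with equal start and end indices meaning an insertion before that index. A sketch of applying them (the `doc` dict is a hypothetical parsed document; this helper is not part of the client library):

```python
def apply_text_changes(doc):
    """Return Document.text with all textChanges applied.

    Changes for a revision may not overlap, so sorting by start index and
    applying from the end backwards keeps earlier indices valid.
    """
    text = doc.get("text", "")
    changes = []
    for change in doc.get("textChanges", []):
        # Only a single TextAnchor.text_segments element is allowed.
        seg = change["textAnchor"]["textSegments"][0]
        start = int(seg.get("startIndex", 0))
        end = int(seg["endIndex"])  # start == end means insert before index
        changes.append((start, end, change.get("changedText", "")))
    for start, end, replacement in sorted(changes, reverse=True):
        text = text[:start] + replacement + text[end:]
    return text

# Hypothetical OCR output with two single-character corrections.
doc = {
    "text": "T0tal: 4O.00",
    "textChanges": [
        {"textAnchor": {"textSegments": [{"startIndex": "1", "endIndex": "2"}]},
         "changedText": "o"},
        {"textAnchor": {"textSegments": [{"startIndex": "8", "endIndex": "9"}]},
         "changedText": "0"},
    ],
}
print(apply_text_changes(doc))  # Total: 40.00
```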
</body></html>