<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method  {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="discoveryengine_v1alpha.html">Discovery Engine API</a> . <a href="discoveryengine_v1alpha.projects.html">projects</a> . <a href="discoveryengine_v1alpha.projects.locations.html">locations</a> . <a href="discoveryengine_v1alpha.projects.locations.notebooks.html">notebooks</a> . <a href="discoveryengine_v1alpha.projects.locations.notebooks.sources.html">sources</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#batchCreate">batchCreate(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a list of Sources.</p>
<p class="toc_element">
  <code><a href="#batchDelete">batchDelete(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes multiple sources.</p>
<p class="toc_element">
  <code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a Source.</p>
<p class="toc_element">
  <code><a href="#uploadFile">uploadFile(parent, sourceId, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Uploads a file for Notebook LM to use. Creates a Source.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="batchCreate">batchCreate(parent, body=None, x__xgafv=None)</code>
  <pre>Creates a list of Sources.

Args:
  parent: string, Required. The parent resource where the sources will be created. Format: projects/{project}/locations/{location}/notebooks/{notebook} (required)
  body: object, The request body.
    The object takes the form of:

{ # Request for SourceService.BatchCreateSources method.
  &quot;userContents&quot;: [ # Required. The UserContents to be uploaded.
    { # The &quot;Content&quot; messages refer to data the user wants to upload.
      &quot;agentspaceContent&quot;: { # Agentspace content uploaded as source. # Agentspace content uploaded as source.
        &quot;documentName&quot;: &quot;A String&quot;, # Optional. The full resource name of the Agentspace document. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}/branches/{branch}/documents/{document_id}`.
        &quot;engineName&quot;: &quot;A String&quot;, # Optional. Engine to verify the permission of the document. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}`.
        &quot;ideaforgeIdeaName&quot;: &quot;A String&quot;, # Optional. Resource name of the idea forge instance. Format: `projects/{project}/locations/{location}/collections/{collection}/engines/{engine}/sessions/{session}/ideaForgeInstances/{instance}`
      },
      &quot;googleDriveContent&quot;: { # The content from Google Drive. # The content from Google Drive.
        &quot;documentId&quot;: &quot;A String&quot;, # The document id of the selected document.
        &quot;mimeType&quot;: &quot;A String&quot;, # The mime type of the selected document. This can be used to differentiate type of content selected in the drive picker. Use application/vnd.google-apps.document for Google Docs or application/vnd.google-apps.presentation for Google Slides.
        &quot;sourceName&quot;: &quot;A String&quot;, # The name to be displayed for the source.
      },
      &quot;textContent&quot;: { # The text content uploaded as source. # The text content uploaded as source.
        &quot;content&quot;: &quot;A String&quot;, # The text content of the source.
        &quot;sourceName&quot;: &quot;A String&quot;, # The display name of the text source.
      },
      &quot;videoContent&quot;: { # Video content uploaded as source. # The video content uploaded as source.
        &quot;youtubeUrl&quot;: &quot;A String&quot;, # The youtube url of the video content.
      },
      &quot;webContent&quot;: { # The web content uploaded as source. # The web content uploaded as source.
        &quot;sourceName&quot;: &quot;A String&quot;, # The name to be displayed for the source.
        &quot;url&quot;: &quot;A String&quot;, # If URL is supplied, will fetch the webpage in the backend.
      },
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response for SourceService.BatchCreateSources method.
  &quot;sources&quot;: [ # The Sources.
    { # Source represents a single source of content.
      &quot;metadata&quot;: { # Represents the metadata of a source and some additional information. # Output only. Metadata about the source.
        &quot;agentspaceMetadata&quot;: { # Metadata about an agentspace source. # Metadata for an agentspace source.
          &quot;documentName&quot;: &quot;A String&quot;, # Output only. The full document name in Agentspace.
          &quot;documentTitle&quot;: &quot;A String&quot;, # Output only. The title of the document.
        },
        &quot;googleDocsMetadata&quot;: { # Metadata about a google doc source. # Metadata for a google doc source.
          &quot;documentId&quot;: &quot;A String&quot;, # Output only. The document id of the google doc.
          &quot;revisionId&quot;: &quot;A String&quot;, # Output only. Revision id for the doc.
        },
        &quot;sourceAddedTimestamp&quot;: &quot;A String&quot;, # The timestamp the source was added.
        &quot;tokenCount&quot;: 42, # The number of tokens in the source.
        &quot;wordCount&quot;: 42, # The word count of the source.
        &quot;youtubeMetadata&quot;: { # Metadata about a youtube video source. # Metadata for a youtube video source.
          &quot;channelName&quot;: &quot;A String&quot;, # Output only. The channel name of the youtube video.
          &quot;videoId&quot;: &quot;A String&quot;, # Output only. The id of the youtube video.
        },
      },
      &quot;name&quot;: &quot;A String&quot;, # Identifier. The full resource name of the source. Format: `projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source_id}`. This field must be a UTF-8 encoded string with a length limit of 1024 characters.
      &quot;settings&quot;: { # Allows extension of Source Settings in the BatchCreateSources (Formerly AddSource request). # Output only. Status of the source, and any failure reasons.
        &quot;failureReason&quot;: { # Failure reason containing details about why a source failed to ingest. # Failure reason containing details about why a source failed to ingest.
          &quot;audioTranscriptionError&quot;: { # An audio file transcription specific error. # An audio file transcription specific error.
            &quot;languageDetectionFailed&quot;: { # Could not detect language of the file (it may not be speech). # Could not detect language of the file (it may not be speech).
            },
            &quot;noAudioDetected&quot;: { # No audio was detected in the input file. # No audio was detected in the input file (it may have been a video).
            },
          },
          &quot;domainBlocked&quot;: { # Error to indicate that the source was removed because the domain was blocked. # Error if the user tries to add a source from a blocked domain.
          },
          &quot;googleDriveError&quot;: { # A google drive specific error. # A google drive specific error.
            &quot;downloadPrevented&quot;: { # The user was prevented from downloading the file. # The user was prevented from downloading the file.
            },
          },
          &quot;ingestionError&quot;: { # Indicates an error occurred while ingesting the source. # Indicates an error occurred while ingesting the source.
          },
          &quot;paywallError&quot;: { # Indicates that the source is paywalled and cannot be ingested. # Indicates that the source is paywalled and cannot be ingested.
          },
          &quot;sourceEmpty&quot;: { # Indicates that the source is empty. # Indicates that the source is empty.
          },
          &quot;sourceLimitExceeded&quot;: { # Indicates that the user does not have space for this source. # Error if the user tries to update beyond their limits.
          },
          &quot;sourceTooLong&quot;: { # Indicates source word count exceeded the user&#x27;s limit. # Indicates source word count exceeded the user&#x27;s limit.
            &quot;wordCount&quot;: 42, # The number of words in the source.
            &quot;wordLimit&quot;: 42, # The word count limit for the current user at the time of the upload.
          },
          &quot;sourceUnreachable&quot;: { # Indicates that the source is unreachable. This is primarily used for sources that are added via URL. # Indicates that the source is unreachable.
            &quot;errorDetails&quot;: &quot;A String&quot;, # Describes why the source is unreachable.
          },
          &quot;unknown&quot;: { # Indicates an unknown error occurred. # Indicates an unknown error occurred.
          },
          &quot;uploadError&quot;: { # Indicates an error occurred while uploading the source. # Indicates an error occurred while uploading the source.
          },
          &quot;youtubeError&quot;: { # A youtube specific error. # A youtube specific error.
            &quot;videoDeleted&quot;: { # Error to indicate that the source was removed because the video was deleted. # Error to indicate that the source was removed because the video was deleted.
            },
          },
        },
        &quot;status&quot;: &quot;A String&quot;, # Status of the source.
      },
      &quot;sourceId&quot;: { # SourceId is the last segment of the source&#x27;s resource name. # Optional. Output only. Source id, which is the last segment of the source&#x27;s resource name.
        &quot;id&quot;: &quot;A String&quot;, # The id of the source.
      },
      &quot;title&quot;: &quot;A String&quot;, # Optional. Title of the source.
    },
  ],
}</pre>
</div>
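A minimal sketch of calling <code>batchCreate</code> from Python, assuming an authenticated service object built with <code>google-api-python-client</code>; the project, notebook, URL, and source names below are placeholders. The body mirrors the request structure documented above, with one web source and one inline text source:

```python
# Parent notebook resource (placeholder identifiers).
parent = "projects/my-project/locations/global/notebooks/my-notebook"

# Request body for SourceService.BatchCreateSources: a list of
# UserContents, each carrying exactly one content variant.
body = {
    "userContents": [
        {
            "webContent": {
                "url": "https://example.com/article",
                "sourceName": "Example article",
            }
        },
        {
            "textContent": {
                "content": "Notes pasted directly into the notebook.",
                "sourceName": "Pasted notes",
            }
        },
    ]
}

# With a built service object, the call would look like this
# (requires credentials, so it is shown but not executed here):
# from googleapiclient.discovery import build
# service = build("discoveryengine", "v1alpha")
# response = (
#     service.projects()
#     .locations()
#     .notebooks()
#     .sources()
#     .batchCreate(parent=parent, body=body)
#     .execute()
# )
# response["sources"] then lists the created Source objects.
```

Each element of <code>userContents</code> should set only one of the content fields (<code>webContent</code>, <code>textContent</code>, <code>googleDriveContent</code>, <code>videoContent</code>, or <code>agentspaceContent</code>).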

<div class="method">
    <code class="details" id="batchDelete">batchDelete(parent, body=None, x__xgafv=None)</code>
  <pre>Deletes multiple sources.

Args:
  parent: string, Required. The parent resource where the sources will be deleted. Format: projects/{project}/locations/{location}/notebooks/{notebook} (required)
  body: object, The request body.
    The object takes the form of:

{ # Request for SourceService.BatchDeleteSourcesRequest method.
  &quot;names&quot;: [ # Required. Names of sources to be deleted. Format: projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source}
    &quot;A String&quot;,
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}</pre>
</div>
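The corresponding <code>batchDelete</code> request body is just a list of full source resource names; a sketch with placeholder identifiers:

```python
# Parent notebook resource (placeholder identifiers).
parent = "projects/my-project/locations/global/notebooks/my-notebook"

# Request body for SourceService.BatchDeleteSources: full resource
# names of the sources to delete, each under the parent notebook.
body = {
    "names": [
        f"{parent}/sources/source-1",
        f"{parent}/sources/source-2",
    ]
}

# The call itself (shown as a comment; it needs credentials):
# service.projects().locations().notebooks().sources() \
#     .batchDelete(parent=parent, body=body).execute()
# On success the response is the empty message documented above.
```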

<div class="method">
    <code class="details" id="close">close()</code>
  <pre>Close httplib2 connections.</pre>
</div>

<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets a Source.

Args:
  name: string, Required. The resource name of the source. Format: projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source} (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Source represents a single source of content.
  &quot;metadata&quot;: { # Represents the metadata of a source and some additional information. # Output only. Metadata about the source.
    &quot;agentspaceMetadata&quot;: { # Metadata about an agentspace source. # Metadata for an agentspace source.
      &quot;documentName&quot;: &quot;A String&quot;, # Output only. The full document name in Agentspace.
      &quot;documentTitle&quot;: &quot;A String&quot;, # Output only. The title of the document.
    },
    &quot;googleDocsMetadata&quot;: { # Metadata about a google doc source. # Metadata for a google doc source.
      &quot;documentId&quot;: &quot;A String&quot;, # Output only. The document id of the google doc.
      &quot;revisionId&quot;: &quot;A String&quot;, # Output only. Revision id for the doc.
    },
    &quot;sourceAddedTimestamp&quot;: &quot;A String&quot;, # The timestamp the source was added.
    &quot;tokenCount&quot;: 42, # The number of tokens in the source.
    &quot;wordCount&quot;: 42, # The word count of the source.
    &quot;youtubeMetadata&quot;: { # Metadata about a youtube video source. # Metadata for a youtube video source.
      &quot;channelName&quot;: &quot;A String&quot;, # Output only. The channel name of the youtube video.
      &quot;videoId&quot;: &quot;A String&quot;, # Output only. The id of the youtube video.
    },
  },
  &quot;name&quot;: &quot;A String&quot;, # Identifier. The full resource name of the source. Format: `projects/{project}/locations/{location}/notebooks/{notebook}/sources/{source_id}`. This field must be a UTF-8 encoded string with a length limit of 1024 characters.
  &quot;settings&quot;: { # Allows extension of Source Settings in the BatchCreateSources (Formerly AddSource request). # Output only. Status of the source, and any failure reasons.
    &quot;failureReason&quot;: { # Failure reason containing details about why a source failed to ingest. # Failure reason containing details about why a source failed to ingest.
      &quot;audioTranscriptionError&quot;: { # An audio file transcription specific error. # An audio file transcription specific error.
        &quot;languageDetectionFailed&quot;: { # Could not detect language of the file (it may not be speech). # Could not detect language of the file (it may not be speech).
        },
        &quot;noAudioDetected&quot;: { # No audio was detected in the input file. # No audio was detected in the input file (it may have been a video).
        },
      },
      &quot;domainBlocked&quot;: { # Error to indicate that the source was removed because the domain was blocked. # Error if the user tries to add a source from a blocked domain.
      },
      &quot;googleDriveError&quot;: { # A google drive specific error. # A google drive specific error.
        &quot;downloadPrevented&quot;: { # The user was prevented from downloading the file. # The user was prevented from downloading the file.
        },
      },
      &quot;ingestionError&quot;: { # Indicates an error occurred while ingesting the source. # Indicates an error occurred while ingesting the source.
      },
      &quot;paywallError&quot;: { # Indicates that the source is paywalled and cannot be ingested. # Indicates that the source is paywalled and cannot be ingested.
      },
      &quot;sourceEmpty&quot;: { # Indicates that the source is empty. # Indicates that the source is empty.
      },
      &quot;sourceLimitExceeded&quot;: { # Indicates that the user does not have space for this source. # Error if the user tries to update beyond their limits.
      },
      &quot;sourceTooLong&quot;: { # Indicates source word count exceeded the user&#x27;s limit. # Indicates source word count exceeded the user&#x27;s limit.
        &quot;wordCount&quot;: 42, # The number of words in the source.
        &quot;wordLimit&quot;: 42, # The word count limit for the current user at the time of the upload.
      },
      &quot;sourceUnreachable&quot;: { # Indicates that the source is unreachable. This is primarily used for sources that are added via URL. # Indicates that the source is unreachable.
        &quot;errorDetails&quot;: &quot;A String&quot;, # Describes why the source is unreachable.
      },
      &quot;unknown&quot;: { # Indicates an unknown error occurred. # Indicates an unknown error occurred.
      },
      &quot;uploadError&quot;: { # Indicates an error occurred while uploading the source. # Indicates an error occurred while uploading the source.
      },
      &quot;youtubeError&quot;: { # A youtube specific error. # A youtube specific error.
        &quot;videoDeleted&quot;: { # Error to indicate that the source was removed because the video was deleted. # Error to indicate that the source was removed because the video was deleted.
        },
      },
    },
    &quot;status&quot;: &quot;A String&quot;, # Status of the source.
  },
  &quot;sourceId&quot;: { # SourceId is the last segment of the source&#x27;s resource name. # Optional. Output only. Source id, which is the last segment of the source&#x27;s resource name.
    &quot;id&quot;: &quot;A String&quot;, # The id of the source.
  },
  &quot;title&quot;: &quot;A String&quot;, # Optional. Title of the source.
}</pre>
</div>
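Since <code>get</code> takes the full resource name rather than separate path parameters, a small helper that assembles it from its components can be convenient; all identifiers below are placeholders:

```python
def source_name(project: str, location: str, notebook: str, source: str) -> str:
    """Build the full resource name expected by sources().get()."""
    return (
        f"projects/{project}/locations/{location}"
        f"/notebooks/{notebook}/sources/{source}"
    )

name = source_name("my-project", "global", "my-notebook", "src-123")

# The call itself (commented out; requires credentials):
# source = service.projects().locations().notebooks().sources() \
#     .get(name=name).execute()
# Useful fields on the returned Source include source["title"],
# source["settings"]["status"], and source["metadata"]["wordCount"].
```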

<div class="method">
    <code class="details" id="uploadFile">uploadFile(parent, sourceId, body=None, x__xgafv=None)</code>
  <pre>Uploads a file for Notebook LM to use. Creates a Source.

Args:
  parent: string, Required. The parent resource where the sources will be created. Format: projects/{project}/locations/{location}/notebooks/{notebook} (required)
  sourceId: string, The source id of the associated file. If not set, a source id will be generated and a new tentative source will be created. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request for the SourceService.UploadSourceFile method.
  &quot;blob&quot;: { # A reference to data stored on the filesystem, on GFS or in blobstore. # Information about the file being uploaded.
    &quot;algorithm&quot;: &quot;A String&quot;, # Deprecated, use one of explicit hash type fields instead. Algorithm used for calculating the hash. As of 2011/01/21, &quot;MD5&quot; is the only possible value for this field. New values may be added at any time.
    &quot;bigstoreObjectRef&quot;: &quot;A String&quot;, # Use object_id instead.
    &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
    &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
      &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
      &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
      &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
      &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
      &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
    },
    &quot;compositeMedia&quot;: [ # A composite media composed of one or more media objects, set if reference_type is COMPOSITE_MEDIA. The media length field must be set to the sum of the lengths of all composite media objects. Note: All composite media must have length specified.
      { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites.
        &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
          &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
          &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
        &quot;crc32cHash&quot;: 42, # crc32.c hash for the payload.
        &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
        &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
        &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
        &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
          &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
        },
        &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
        &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
        &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
      },
    ],
    &quot;contentType&quot;: &quot;A String&quot;, # MIME type of the data
    &quot;contentTypeInfo&quot;: { # Detailed Content-Type information from Scotty. The Content-Type of the media will typically be filled in by the header or Scotty&#x27;s best_guess, but this extended information provides the backend with more information so that it can make a better decision if needed. This is only used on media upload requests from Scotty. # Extended content type information provided for Scotty uploads.
      &quot;bestGuess&quot;: &quot;A String&quot;, # Scotty&#x27;s best guess of what the content type of the file is.
      &quot;fromBytes&quot;: &quot;A String&quot;, # The content type of the file derived by looking at specific bytes (i.e. &quot;magic bytes&quot;) of the actual file.
      &quot;fromFileName&quot;: &quot;A String&quot;, # The content type of the file derived from the file extension of the original file name used by the client.
      &quot;fromHeader&quot;: &quot;A String&quot;, # The content type of the file as specified in the request headers, multipart headers, or RUPIO start request.
      &quot;fromUrlPath&quot;: &quot;A String&quot;, # The content type of the file derived from the file extension of the URL path. The URL path is assumed to represent a file name (which is typically only true for agents that are providing a REST API).
    },
    &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
    &quot;crc32cHash&quot;: 42, # For Scotty Uploads: Scotty-provided hashes for uploads For Scotty Downloads: (WARNING: DO NOT USE WITHOUT PERMISSION FROM THE SCOTTY TEAM.) A Hash provided by the agent to be used to verify the data being downloaded. Currently only supported for inline payloads. Further, only crc32c_hash is currently supported.
    &quot;diffChecksumsResponse&quot;: { # Backend response for a Diff get checksums response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_CHECKSUMS_RESPONSE.
      &quot;checksumsLocation&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # Exactly one of these fields must be populated. If checksums_location is filled, the server will return the corresponding contents to the user. If object_location is filled, the server will calculate the checksums based on the content there and return that to the user. For details on the format of the checksums, see http://go/scotty-diff-protocol.
        &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
          &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
          &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
        &quot;crc32cHash&quot;: 42, # crc32.c hash for the payload.
        &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
        &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
        &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
        &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
          &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
        },
        &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
        &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
        &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
      },
      &quot;chunkSizeBytes&quot;: &quot;A String&quot;, # The chunk size of checksums. Must be a multiple of 256KB.
      &quot;objectLocation&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # If set, calculate the checksums based on the contents and return them to the caller.
        &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
          &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
          &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
        &quot;crc32cHash&quot;: 42, # CRC32C hash for the payload.
        &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
        &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
        &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
        &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
          &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
        },
        &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
        &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
        &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
      },
      &quot;objectSizeBytes&quot;: &quot;A String&quot;, # The total size of the server object.
      &quot;objectVersion&quot;: &quot;A String&quot;, # The object version of the object the checksums are being returned for.
    },
    &quot;diffDownloadResponse&quot;: { # Backend response for a Diff download response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_DOWNLOAD_RESPONSE.
      &quot;objectLocation&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The original object location.
        &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
          &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
          &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
        &quot;crc32cHash&quot;: 42, # CRC32C hash for the payload.
        &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
        &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
        &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
        &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
          &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
        },
        &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
        &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
        &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
      },
    },
    &quot;diffUploadRequest&quot;: { # A Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_REQUEST.
      &quot;checksumsInfo&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the checksums for the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received. For details on the format of the checksums, see http://go/scotty-diff-protocol.
        &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
          &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
          &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
        &quot;crc32cHash&quot;: 42, # CRC32C hash for the payload.
        &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
        &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
        &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
        &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
          &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
        },
        &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
        &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
        &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
      },
      &quot;objectInfo&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received.
        &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
          &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
          &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
        &quot;crc32cHash&quot;: 42, # CRC32C hash for the payload.
        &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
        &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
        &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
        &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
          &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
        },
        &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
        &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
        &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
      },
      &quot;objectVersion&quot;: &quot;A String&quot;, # The object version of the object that is the base version the incoming diff script will be applied to. This field will always be filled in.
    },
    &quot;diffUploadResponse&quot;: { # Backend response for a Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_RESPONSE.
      &quot;objectVersion&quot;: &quot;A String&quot;, # The object version of the object at the server. Must be included in the end notification response. The version in the end notification response must correspond to the new version of the object that is now stored at the server, after the upload.
      &quot;originalObject&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the original file for a diff upload request. Must be filled in if responding to an upload start notification.
        &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
        &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
          &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
          &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
          &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
          &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
          &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
        },
        &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
        &quot;crc32cHash&quot;: 42, # CRC32C hash for the payload.
        &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
        &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
        &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
        &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
          &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
          &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
          &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
        },
        &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
        &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
        &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
      },
    },
    &quot;diffVersionResponse&quot;: { # Backend response for a Diff get version response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_VERSION_RESPONSE.
      &quot;objectSizeBytes&quot;: &quot;A String&quot;, # The total size of the server object.
      &quot;objectVersion&quot;: &quot;A String&quot;, # The version of the object stored at the server.
    },
    &quot;downloadParameters&quot;: { # Parameters specific to media downloads. # Parameters for a media download.
      &quot;allowGzipCompression&quot;: True or False, # A boolean to be returned in the response to Scotty. Allows or disallows gzip encoding of the payload content when the server considers it advantageous (it does not guarantee compression), which lets Scotty gzip the response to the client.
      &quot;ignoreRange&quot;: True or False, # Determines whether Apiary should skip the inclusion of any Content-Range header on its response to Scotty.
    },
    &quot;filename&quot;: &quot;A String&quot;, # Original file name
    &quot;hash&quot;: &quot;A String&quot;, # Deprecated, use one of explicit hash type fields instead. These two hash related fields will only be populated on Scotty based media uploads and will contain the content of the hash group in the NotificationRequest: http://cs/#google3/blobstore2/api/scotty/service/proto/upload_listener.proto&amp;q=class:Hash Hex encoded hash value of the uploaded media.
    &quot;hashVerified&quot;: True or False, # For Scotty uploads only. If a user sends a hash code and the backend has requested that Scotty verify the upload against the client hash, Scotty will perform the check on behalf of the backend and will reject it if the hashes don&#x27;t match. This is set to true if Scotty performed this verification.
    &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
    &quot;isPotentialRetry&quot;: True or False, # |is_potential_retry| is set to false only when Scotty is certain that it has not sent the request before. When a client resumes an upload, this field must be set to true in agent calls, because Scotty cannot be certain that it has never sent the request before, due to potential failure in the session state persistence.
    &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
    &quot;md5Hash&quot;: &quot;A String&quot;, # Scotty-provided MD5 hash for an upload.
    &quot;mediaId&quot;: &quot;A String&quot;, # Media id to forward to the operation GetMedia. Can be set if reference_type is GET_MEDIA.
    &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
      &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
      &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
      &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
    },
    &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
    &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
    &quot;sha1Hash&quot;: &quot;A String&quot;, # Scotty-provided SHA-1 hash for an upload.
    &quot;sha256Hash&quot;: &quot;A String&quot;, # Scotty-provided SHA-256 hash for an upload.
    &quot;timestamp&quot;: &quot;A String&quot;, # Time at which the media data was last updated, in milliseconds since the UNIX epoch
    &quot;token&quot;: &quot;A String&quot;, # A unique fingerprint/version id for the media data
  },
  &quot;mediaRequestInfo&quot;: { # Extra information added to operations that support Scotty media requests. # Media upload request metadata.
    &quot;currentBytes&quot;: &quot;A String&quot;, # The number of current bytes uploaded or downloaded.
    &quot;customData&quot;: &quot;A String&quot;, # Data to be copied to backend requests. Custom data is returned to Scotty in the agent_state field, which Scotty will then provide in subsequent upload notifications.
    &quot;diffObjectVersion&quot;: &quot;A String&quot;, # Set if the http request info is diff encoded. The value of this field is the version number of the base revision. This is corresponding to Apiary&#x27;s mediaDiffObjectVersion (//depot/google3/java/com/google/api/server/media/variable/DiffObjectVersionVariable.java). See go/esf-scotty-diff-upload for more information.
    &quot;finalStatus&quot;: 42, # The existence of the final_status field indicates that this is the last call to the agent for this request_id. http://google3/uploader/agent/scotty_agent.proto?l=737&amp;rcl=347601929
    &quot;notificationType&quot;: &quot;A String&quot;, # The type of notification received from Scotty.
    &quot;physicalHeaders&quot;: &quot;A String&quot;, # The physical headers provided by RequestReceivedParameters in Scotty request. type is uploader_service.KeyValuePairs.
    &quot;requestId&quot;: &quot;A String&quot;, # The Scotty request ID.
    &quot;requestReceivedParamsServingInfo&quot;: &quot;A String&quot;, # The partition of the Scotty server handling this request. type is uploader_service.RequestReceivedParamsServingInfo LINT.IfChange(request_received_params_serving_info_annotations) LINT.ThenChange()
    &quot;totalBytes&quot;: &quot;A String&quot;, # The total size of the file.
    &quot;totalBytesIsEstimated&quot;: True or False, # Whether the total bytes field contains an estimated value.
  },
  &quot;sourceId&quot;: &quot;A String&quot;, # The source id of the associated file. If not set, a source id will be generated and a new tentative source will be created.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
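
  A minimal sketch of assembling this request body with the google-api-python-client.
  This is an illustration only: the top-level media field is assumed to be named
  `blob` (its name is not visible in this schema excerpt), the project, notebook,
  and source IDs are placeholders, and the API call itself is left commented out
  because it requires credentials and network access.

```python
# Hypothetical sketch: build an uploadFile request body for a small inline file.
# Field names contentType/filename/inline come from the schema above; the
# top-level "blob" key and all IDs are assumptions, not confirmed by the docs.
import base64

parent = "projects/my-project/locations/global/notebooks/my-notebook"
source_id = "my-source-id"  # placeholder; omit to let the server generate one

body = {
    "blob": {
        "contentType": "application/pdf",
        "filename": "report.pdf",
        # Inline media data; for this sketch a tiny fake PDF payload is encoded.
        "inline": base64.b64encode(b"%PDF-1.4 sample").decode("ascii"),
    },
}

# from googleapiclient.discovery import build
# service = build("discoveryengine", "v1alpha")
# response = (
#     service.projects()
#     .locations()
#     .notebooks()
#     .sources()
#     .uploadFile(parent=parent, sourceId=source_id, body=body)
#     .execute()
# )
```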

Returns:
  An object of the form:

    { # Response for the SourceService.UploadSourceFile method.
  &quot;mediaResponseInfo&quot;: { # This message is for backends to pass their scotty media specific fields to ESF. Backend will include this in their response message to ESF. Example: ExportFile is an rpc defined for upload using scotty from ESF. rpc ExportFile(ExportFileRequest) returns (ExportFileResponse) Message ExportFileResponse will include apiserving.MediaResponseInfo to tell ESF about data like dynamic_dropzone it needs to pass to Scotty. message ExportFileResponse { optional gdata.Media blob = 1; optional apiserving.MediaResponseInfo media_response_info = 2 } # Media upload response metadata.
    &quot;customData&quot;: &quot;A String&quot;, # Data to copy from backend response to the next backend requests. Custom data is returned to Scotty in the agent_state field, which Scotty will then provide in subsequent upload notifications.
    &quot;dataStorageTransform&quot;: &quot;A String&quot;, # Specifies any transformation to be applied to data before persisting it or retrieving from storage. E.g., encryption options for blobstore2. This should be of the form uploader_service.DataStorageTransform.
    &quot;destinationBlobMintIndex&quot;: 42, # For the first notification of a |diff_encoded| HttpRequestInfo, this is the index of the blob mint that Scotty should use when writing the resulting blob. This field is optional. It&#x27;s not required ever, even if `original_object_blob_mint_index` is set. In situations like that, we will use the destination blob&#x27;s mint for the destination blob and regular blob ACL checks for the original object. Note: This field is only for use by Drive API for diff uploads.
    &quot;dynamicDropTarget&quot;: &quot;A String&quot;, # Specifies the Scotty Drop Target to use for uploads. If present in a media response, Scotty does not upload to a standard drop zone. Instead, Scotty saves the upload directly to the location specified in this drop target. Unlike drop zones, the drop target is the final storage location for an upload. So, the agent does not need to clone the blob at the end of the upload. The agent is responsible for garbage collecting any orphaned blobs that may occur due to aborted uploads. For more information, see the drop target design doc here: http://goto/ScottyDropTarget This field will be preferred to dynamicDropzone. If provided, the identified field in the response must be of the type uploader.agent.DropTarget.
    &quot;dynamicDropzone&quot;: &quot;A String&quot;, # Specifies the Scotty dropzone to use for uploads.
    &quot;mediaForDiff&quot;: { # A reference to data stored on the filesystem, on GFS or in blobstore. # Diff Updates must respond to a START notification with this Media proto to tell Scotty to decode the diff encoded payload and apply the diff against this field. If the request was diff encoded, but this field is not set, Scotty will treat the encoding as identity. This is corresponding to Apiary&#x27;s DiffUploadResponse.original_object (//depot/google3/gdata/rosy/proto/data.proto?l=413). See go/esf-scotty-diff-upload for more information.
      &quot;algorithm&quot;: &quot;A String&quot;, # Deprecated, use one of explicit hash type fields instead. Algorithm used for calculating the hash. As of 2011/01/21, &quot;MD5&quot; is the only possible value for this field. New values may be added at any time.
      &quot;bigstoreObjectRef&quot;: &quot;A String&quot;, # Use object_id instead.
      &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
      &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
        &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
        &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
        &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
        &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
        &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
      },
      &quot;compositeMedia&quot;: [ # A composite media composed of one or more media objects, set if reference_type is COMPOSITE_MEDIA. The media length field must be set to the sum of the lengths of all composite media objects. Note: All composite media must have length specified.
        { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites.
          &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as v1 BlobRef.
          &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
            &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
            &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
          &quot;crc32cHash&quot;: 42, # CRC32C hash for the payload.
          &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
          &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
          &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
          &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
            &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
          },
          &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
          &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
          &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
        },
      ],
      &quot;contentType&quot;: &quot;A String&quot;, # MIME type of the data
      &quot;contentTypeInfo&quot;: { # Detailed Content-Type information from Scotty. The Content-Type of the media will typically be filled in by the header or Scotty&#x27;s best_guess, but this extended information provides the backend with more information so that it can make a better decision if needed. This is only used on media upload requests from Scotty. # Extended content type information provided for Scotty uploads.
        &quot;bestGuess&quot;: &quot;A String&quot;, # Scotty&#x27;s best guess of what the content type of the file is.
        &quot;fromBytes&quot;: &quot;A String&quot;, # The content type of the file derived by looking at specific bytes (i.e. &quot;magic bytes&quot;) of the actual file.
        &quot;fromFileName&quot;: &quot;A String&quot;, # The content type of the file derived from the file extension of the original file name used by the client.
        &quot;fromHeader&quot;: &quot;A String&quot;, # The content type of the file as specified in the request headers, multipart headers, or RUPIO start request.
        &quot;fromUrlPath&quot;: &quot;A String&quot;, # The content type of the file derived from the file extension of the URL path. The URL path is assumed to represent a file name (which is typically only true for agents that are providing a REST API).
      },
      &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
      &quot;crc32cHash&quot;: 42, # For Scotty Uploads: Scotty-provided hashes for uploads For Scotty Downloads: (WARNING: DO NOT USE WITHOUT PERMISSION FROM THE SCOTTY TEAM.) A Hash provided by the agent to be used to verify the data being downloaded. Currently only supported for inline payloads. Further, only crc32c_hash is currently supported.
      &quot;diffChecksumsResponse&quot;: { # Backend response for a Diff get checksums response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_CHECKSUMS_RESPONSE.
        &quot;checksumsLocation&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # Exactly one of these fields must be populated. If checksums_location is filled, the server will return the corresponding contents to the user. If object_location is filled, the server will calculate the checksums based on the content there and return that to the user. For details on the format of the checksums, see http://go/scotty-diff-protocol.
          &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF. This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as a v1 BlobRef.
          &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
            &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
            &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
          &quot;crc32cHash&quot;: 42, # crc32c hash for the payload.
          &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
          &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
          &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
          &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
            &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
          },
          &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
          &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
          &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
        },
        &quot;chunkSizeBytes&quot;: &quot;A String&quot;, # The chunk size of checksums. Must be a multiple of 256KB.
        &quot;objectLocation&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # If set, calculate the checksums based on the contents and return them to the caller.
          &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF. This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as a v1 BlobRef.
          &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
            &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
            &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
          &quot;crc32cHash&quot;: 42, # crc32c hash for the payload.
          &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
          &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
          &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
          &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
            &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
          },
          &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
          &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
          &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
        },
        &quot;objectSizeBytes&quot;: &quot;A String&quot;, # The total size of the server object.
        &quot;objectVersion&quot;: &quot;A String&quot;, # The object version of the object the checksums are being returned for.
      },
      &quot;diffDownloadResponse&quot;: { # Backend response for a Diff download response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_DOWNLOAD_RESPONSE.
        &quot;objectLocation&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The original object location.
          &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF. This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as a v1 BlobRef.
          &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
            &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
            &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
          &quot;crc32cHash&quot;: 42, # crc32c hash for the payload.
          &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
          &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
          &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
          &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
            &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
          },
          &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
          &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
          &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
        },
      },
      &quot;diffUploadRequest&quot;: { # A Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_REQUEST.
        &quot;checksumsInfo&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the checksums for the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received. For details on the format of the checksums, see http://go/scotty-diff-protocol.
          &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF. This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as a v1 BlobRef.
          &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
            &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
            &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
          &quot;crc32cHash&quot;: 42, # crc32c hash for the payload.
          &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
          &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
          &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
          &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
            &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
          },
          &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
          &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
          &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
        },
        &quot;objectInfo&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the new object. Agents must clone the object located here, as the upload server will delete the contents once a response is received.
          &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF. This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as a v1 BlobRef.
          &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
            &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
            &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
          &quot;crc32cHash&quot;: 42, # crc32c hash for the payload.
          &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
          &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
          &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
          &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
            &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
          },
          &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
          &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
          &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
        },
        &quot;objectVersion&quot;: &quot;A String&quot;, # The object version of the object that is the base version the incoming diff script will be applied to. This field will always be filled in.
      },
      &quot;diffUploadResponse&quot;: { # Backend response for a Diff upload request. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_UPLOAD_RESPONSE.
        &quot;objectVersion&quot;: &quot;A String&quot;, # The object version of the object at the server. Must be included in the end notification response. The version in the end notification response must correspond to the new version of the object that is now stored at the server, after the upload.
        &quot;originalObject&quot;: { # A sequence of media data references representing composite data. Introduced to support Bigstore composite objects. For details, visit http://go/bigstore-composites. # The location of the original file for a diff upload request. Must be filled in if responding to an upload start notification.
          &quot;blobRef&quot;: &quot;A String&quot;, # Blobstore v1 reference, set if reference_type is BLOBSTORE_REF. This should be the byte representation of a blobstore.BlobRef. Since Blobstore is deprecating v1, use blobstore2_info instead. For now, any v2 blob will also be represented in this field as a v1 BlobRef.
          &quot;blobstore2Info&quot;: { # Information to read/write to blobstore2. # Blobstore v2 info, set if reference_type is BLOBSTORE_REF and it refers to a v2 blob.
            &quot;blobGeneration&quot;: &quot;A String&quot;, # The blob generation id.
            &quot;blobId&quot;: &quot;A String&quot;, # The blob id, e.g., /blobstore/prod/playground/scotty
            &quot;downloadReadHandle&quot;: &quot;A String&quot;, # Read handle passed from Bigstore -&gt; Scotty for a GCS download. This is a signed, serialized blobstore2.ReadHandle proto which must never be set outside of Bigstore, and is not applicable to non-GCS media downloads.
            &quot;readToken&quot;: &quot;A String&quot;, # The blob read token. Needed to read blobs that have not been replicated. Might not be available until the final call.
            &quot;uploadMetadataContainer&quot;: &quot;A String&quot;, # Metadata passed from Blobstore -&gt; Scotty for a new GCS upload. This is a signed, serialized blobstore2.BlobMetadataContainer proto which must never be consumed outside of Bigstore, and is not applicable to non-GCS media uploads.
          },
          &quot;cosmoBinaryReference&quot;: &quot;A String&quot;, # A binary data reference for a media download. Serves as a technology-agnostic binary reference in some Google infrastructure. This value is a serialized storage_cosmo.BinaryReference proto. Storing it as bytes is a hack to get around the fact that the cosmo proto (as well as others it includes) doesn&#x27;t support JavaScript. This prevents us from including the actual type of this field.
          &quot;crc32cHash&quot;: 42, # crc32c hash for the payload.
          &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
          &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
          &quot;md5Hash&quot;: &quot;A String&quot;, # MD5 hash for the payload.
          &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
            &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
            &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
            &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
          },
          &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
          &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
          &quot;sha1Hash&quot;: &quot;A String&quot;, # SHA-1 hash for the payload.
        },
      },
      &quot;diffVersionResponse&quot;: { # Backend response for a Diff get version response. For details on the Scotty Diff protocol, visit http://go/scotty-diff-protocol. # Set if reference_type is DIFF_VERSION_RESPONSE.
        &quot;objectSizeBytes&quot;: &quot;A String&quot;, # The total size of the server object.
        &quot;objectVersion&quot;: &quot;A String&quot;, # The version of the object stored at the server.
      },
      &quot;downloadParameters&quot;: { # Parameters specific to media downloads. # Parameters for a media download.
        &quot;allowGzipCompression&quot;: True or False, # A boolean to be returned in the response to Scotty. Allows or disallows gzip encoding of the payload content when the server thinks it&#x27;s advantageous (hence, it does not guarantee compression), which lets Scotty gzip the response to the client.
        &quot;ignoreRange&quot;: True or False, # Determines whether Apiary should skip the inclusion of any Content-Range header on its response to Scotty.
      },
      &quot;filename&quot;: &quot;A String&quot;, # Original file name
      &quot;hash&quot;: &quot;A String&quot;, # Deprecated; use one of the explicit hash type fields instead. These two hash-related fields will only be populated on Scotty-based media uploads and will contain the content of the hash group in the NotificationRequest: http://cs/#google3/blobstore2/api/scotty/service/proto/upload_listener.proto&amp;q=class:Hash Hex-encoded hash value of the uploaded media.
      &quot;hashVerified&quot;: True or False, # For Scotty uploads only. If a user sends a hash code and the backend has requested that Scotty verify the upload against the client hash, Scotty will perform the check on behalf of the backend and will reject it if the hashes don&#x27;t match. This is set to true if Scotty performed this verification.
      &quot;inline&quot;: &quot;A String&quot;, # Media data, set if reference_type is INLINE
      &quot;isPotentialRetry&quot;: True or False, # |is_potential_retry| is set false only when Scotty is certain that it has not sent the request before. When a client resumes an upload, this field must be set true in agent calls, because Scotty cannot be certain that it has never sent the request before due to potential failure in the session state persistence.
      &quot;length&quot;: &quot;A String&quot;, # Size of the data, in bytes
      &quot;md5Hash&quot;: &quot;A String&quot;, # Scotty-provided MD5 hash for an upload.
      &quot;mediaId&quot;: &quot;A String&quot;, # Media id to forward to the operation GetMedia. Can be set if reference_type is GET_MEDIA.
      &quot;objectId&quot;: { # This is a copy of the tech.blob.ObjectId proto, which could not be used directly here due to transitive closure issues with JavaScript support; see http://b/8801763. # Reference to a TI Blob, set if reference_type is BIGSTORE_REF.
        &quot;bucketName&quot;: &quot;A String&quot;, # The name of the bucket to which this object belongs.
        &quot;generation&quot;: &quot;A String&quot;, # Generation of the object. Generations are monotonically increasing across writes, allowing them to be compared to determine which generation is newer. If this is omitted in a request, then you are requesting the live object. See http://go/bigstore-versions
        &quot;objectName&quot;: &quot;A String&quot;, # The name of the object.
      },
      &quot;path&quot;: &quot;A String&quot;, # Path to the data, set if reference_type is PATH
      &quot;referenceType&quot;: &quot;A String&quot;, # Describes what the field reference contains.
      &quot;sha1Hash&quot;: &quot;A String&quot;, # Scotty-provided SHA1 hash for an upload.
      &quot;sha256Hash&quot;: &quot;A String&quot;, # Scotty-provided SHA256 hash for an upload.
      &quot;timestamp&quot;: &quot;A String&quot;, # Time at which the media data was last updated, in milliseconds since UNIX epoch
      &quot;token&quot;: &quot;A String&quot;, # A unique fingerprint/version id for the media data
    },
    &quot;originalObjectBlobMintIndex&quot;: 42, # For the first notification of a |diff_encoded| HttpRequestInfo, this is the index of the blob mint that Scotty should use when reading the original blob. This field is optional. It&#x27;s not required ever, even if `destination_blob_mint_index` is set. In situations like that, we will use the destination blob&#x27;s mint for the destination blob and regular blob ACL checks for the original object. Note: This field is only for use by Drive API for diff uploads.
    &quot;requestClass&quot;: &quot;A String&quot;, # Request class to use for all Blobstore operations for this request.
    &quot;scottyAgentUserId&quot;: &quot;A String&quot;, # Requester ID passed along to be recorded in the Scotty logs
    &quot;scottyCustomerLog&quot;: &quot;A String&quot;, # Customer-specific data to be recorded in the Scotty logs. Type is logs_proto_scotty.CustomerLog.
    &quot;trafficClassField&quot;: &quot;A String&quot;, # Specifies the TrafficClass that Scotty should use for any RPCs to fetch the response bytes. Will override the traffic class GTOS of the incoming http request. This is a temporary field to facilitate whitelisting and experimentation by the bigstore agent only. For instance, this does not apply to RTMP reads. WARNING: DO NOT USE WITHOUT PERMISSION FROM THE SCOTTY TEAM.
    &quot;verifyHashFromHeader&quot;: True or False, # Tells Scotty to verify hashes on the agent&#x27;s behalf by parsing out the X-Goog-Hash header.
  },
  &quot;sourceId&quot;: { # SourceId is the last segment of the source&#x27;s resource name. # The source id of the uploaded source.
    &quot;id&quot;: &quot;A String&quot;, # The id of the source.
  },
}</pre>
</div>
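<p>For orientation, here is a minimal sketch of reading the uploaded source's id out of a response dictionary shaped like the schema above. The response literal is a hypothetical sample, not real API output; in practice the dictionary would come back from the <code>uploadFile</code> call.</p>

```python
# Hypothetical sample response, shaped like the documented uploadFile schema.
# The id value is invented for illustration only.
response = {
    "sourceId": {"id": "source-123"},
}

# The id of the uploaded source is nested under "sourceId".
source_id = response.get("sourceId", {}).get("id")
print(source_id)
```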

</body></html>