BigQuery API v2 (revision 87)



com.google.api.services.bigquery.model
Class JobConfigurationLoad

java.lang.Object
  extended by java.util.AbstractMap<String,Object>
      extended by com.google.api.client.util.GenericData
          extended by com.google.api.client.json.GenericJson
              extended by com.google.api.services.bigquery.model.JobConfigurationLoad
All Implemented Interfaces:
Cloneable, Map<String,Object>

public final class JobConfigurationLoad
extends GenericJson

Model definition for JobConfigurationLoad.

This is the Java data model class that specifies how to parse/serialize into the JSON that is transmitted over HTTP when working with the BigQuery API. For a detailed explanation see: http://code.google.com/p/google-http-java-client/wiki/JSON

Author:
Google, Inc.
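As a minimal usage sketch (the project, dataset, table, and Cloud Storage URI below are hypothetical placeholders, not values from this page), a CSV load configuration might be assembled like this. Note that the setters return the model object itself, so calls can be chained:

```java
import com.google.api.services.bigquery.model.JobConfigurationLoad;
import com.google.api.services.bigquery.model.TableReference;
import java.util.Collections;

public class LoadConfigExample {
  public static void main(String[] args) {
    // Hypothetical identifiers; substitute your own project, dataset,
    // table, and Cloud Storage bucket.
    JobConfigurationLoad load = new JobConfigurationLoad()
        .setDestinationTable(new TableReference()
            .setProjectId("my-project")
            .setDatasetId("my_dataset")
            .setTableId("my_table"))
        .setSourceUris(Collections.singletonList("gs://my-bucket/data.csv"))
        .setSourceFormat("CSV")
        .setSkipLeadingRows(1); // skip a single header row
  }
}
```

The resulting object is typically attached to a JobConfiguration via its setLoad method and submitted with Jobs.insert; those types belong to the same generated client but are outside the scope of this page.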

Nested Class Summary
 
Nested classes/interfaces inherited from class com.google.api.client.util.GenericData
GenericData.Flags
 
Nested classes/interfaces inherited from class java.util.AbstractMap
AbstractMap.SimpleEntry<K,V>, AbstractMap.SimpleImmutableEntry<K,V>
 
Nested classes/interfaces inherited from interface java.util.Map
Map.Entry<K,V>
 
Constructor Summary
JobConfigurationLoad()
           
 
Method Summary
 JobConfigurationLoad clone()
           
 Boolean getAllowQuotedNewlines()
          Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
 String getCreateDisposition()
          [Optional] Specifies whether the job is allowed to create new tables.
 TableReference getDestinationTable()
          [Required] The destination table to load the data into.
 String getEncoding()
          [Optional] The character encoding of the data.
 String getFieldDelimiter()
          [Optional] The separator for fields in a CSV file.
 Integer getMaxBadRecords()
          [Optional] The maximum number of bad records that BigQuery can ignore when running the job.
 String getQuote()
          [Optional] The value that is used to quote data sections in a CSV file.
 TableSchema getSchema()
          [Optional] The schema for the destination table.
 String getSchemaInline()
          [Deprecated] The inline schema.
 String getSchemaInlineFormat()
          [Deprecated] The format of the schemaInline property.
 Integer getSkipLeadingRows()
          [Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
 String getSourceFormat()
          [Optional] The format of the data files.
 List<String> getSourceUris()
          [Required] The fully-qualified URIs that point to your data on Google Cloud Storage.
 String getWriteDisposition()
          [Optional] Specifies the action that occurs if the destination table already exists.
 JobConfigurationLoad set(String fieldName, Object value)
           
 JobConfigurationLoad setAllowQuotedNewlines(Boolean allowQuotedNewlines)
          Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
 JobConfigurationLoad setCreateDisposition(String createDisposition)
          [Optional] Specifies whether the job is allowed to create new tables.
 JobConfigurationLoad setDestinationTable(TableReference destinationTable)
          [Required] The destination table to load the data into.
 JobConfigurationLoad setEncoding(String encoding)
          [Optional] The character encoding of the data.
 JobConfigurationLoad setFieldDelimiter(String fieldDelimiter)
          [Optional] The separator for fields in a CSV file.
 JobConfigurationLoad setMaxBadRecords(Integer maxBadRecords)
          [Optional] The maximum number of bad records that BigQuery can ignore when running the job.
 JobConfigurationLoad setQuote(String quote)
          [Optional] The value that is used to quote data sections in a CSV file.
 JobConfigurationLoad setSchema(TableSchema schema)
          [Optional] The schema for the destination table.
 JobConfigurationLoad setSchemaInline(String schemaInline)
          [Deprecated] The inline schema.
 JobConfigurationLoad setSchemaInlineFormat(String schemaInlineFormat)
          [Deprecated] The format of the schemaInline property.
 JobConfigurationLoad setSkipLeadingRows(Integer skipLeadingRows)
          [Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
 JobConfigurationLoad setSourceFormat(String sourceFormat)
          [Optional] The format of the data files.
 JobConfigurationLoad setSourceUris(List<String> sourceUris)
          [Required] The fully-qualified URIs that point to your data on Google Cloud Storage.
 JobConfigurationLoad setWriteDisposition(String writeDisposition)
          [Optional] Specifies the action that occurs if the destination table already exists.
 
Methods inherited from class com.google.api.client.json.GenericJson
getFactory, setFactory, toPrettyString, toString
 
Methods inherited from class com.google.api.client.util.GenericData
entrySet, get, getClassInfo, getUnknownKeys, put, putAll, remove, setUnknownKeys
 
Methods inherited from class java.util.AbstractMap
clear, containsKey, containsValue, equals, hashCode, isEmpty, keySet, size, values
 
Methods inherited from class java.lang.Object
finalize, getClass, notify, notifyAll, wait, wait, wait
 

Constructor Detail

JobConfigurationLoad

public JobConfigurationLoad()
Method Detail

getAllowQuotedNewlines

public Boolean getAllowQuotedNewlines()
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

Returns:
value or null for none

setAllowQuotedNewlines

public JobConfigurationLoad setAllowQuotedNewlines(Boolean allowQuotedNewlines)
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

Parameters:
allowQuotedNewlines - allowQuotedNewlines or null for none

getCreateDisposition

public String getCreateDisposition()
[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.

Returns:
value or null for none

setCreateDisposition

public JobConfigurationLoad setCreateDisposition(String createDisposition)
[Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.

Parameters:
createDisposition - createDisposition or null for none

getDestinationTable

public TableReference getDestinationTable()
[Required] The destination table to load the data into.

Returns:
value or null for none

setDestinationTable

public JobConfigurationLoad setDestinationTable(TableReference destinationTable)
[Required] The destination table to load the data into.

Parameters:
destinationTable - destinationTable or null for none

getEncoding

public String getEncoding()
[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

Returns:
value or null for none

setEncoding

public JobConfigurationLoad setEncoding(String encoding)
[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

Parameters:
encoding - encoding or null for none

getFieldDelimiter

public String getFieldDelimiter()
[Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').

Returns:
value or null for none

setFieldDelimiter

public JobConfigurationLoad setFieldDelimiter(String fieldDelimiter)
[Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').

Parameters:
fieldDelimiter - fieldDelimiter or null for none

getMaxBadRecords

public Integer getMaxBadRecords()
[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an 'invalid' error is returned in the job result and the job fails. The default value is 0, which requires that all records are valid.

Returns:
value or null for none

setMaxBadRecords

public JobConfigurationLoad setMaxBadRecords(Integer maxBadRecords)
[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an 'invalid' error is returned in the job result and the job fails. The default value is 0, which requires that all records are valid.

Parameters:
maxBadRecords - maxBadRecords or null for none

getQuote

public String getQuote()
[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

Returns:
value or null for none

setQuote

public JobConfigurationLoad setQuote(String quote)
[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

Parameters:
quote - quote or null for none
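Putting the CSV parsing options above together, the following is a sketch (option values are illustrative, not requirements) of configuring a tab-delimited file whose quoted sections may contain newlines. As documented above, quoted newlines require allowQuotedNewlines to be set alongside quote:

```java
import com.google.api.services.bigquery.model.JobConfigurationLoad;

public class CsvOptionsExample {
  public static void main(String[] args) {
    JobConfigurationLoad load = new JobConfigurationLoad()
        .setSourceFormat("CSV")
        .setFieldDelimiter("\t")      // "\t" is accepted as a tab separator
        .setQuote("\"")               // the default quote character, shown explicitly
        .setAllowQuotedNewlines(true) // needed when quoted sections contain newlines
        .setEncoding("UTF-8");

    // For data with no quoted sections, an empty string disables quoting:
    // load.setQuote("");
  }
}
```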

getSchema

public TableSchema getSchema()
[Optional] The schema for the destination table. The schema can be omitted if the destination table already exists or if the schema can be inferred from the loaded data.

Returns:
value or null for none

setSchema

public JobConfigurationLoad setSchema(TableSchema schema)
[Optional] The schema for the destination table. The schema can be omitted if the destination table already exists or if the schema can be inferred from the loaded data.

Parameters:
schema - schema or null for none

getSchemaInline

public String getSchemaInline()
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT".

Returns:
value or null for none

setSchemaInline

public JobConfigurationLoad setSchemaInline(String schemaInline)
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT".

Parameters:
schemaInline - schemaInline or null for none

getSchemaInlineFormat

public String getSchemaInlineFormat()
[Deprecated] The format of the schemaInline property.

Returns:
value or null for none

setSchemaInlineFormat

public JobConfigurationLoad setSchemaInlineFormat(String schemaInlineFormat)
[Deprecated] The format of the schemaInline property.

Parameters:
schemaInlineFormat - schemaInlineFormat or null for none

getSkipLeadingRows

public Integer getSkipLeadingRows()
[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.

Returns:
value or null for none

setSkipLeadingRows

public JobConfigurationLoad setSkipLeadingRows(Integer skipLeadingRows)
[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.

Parameters:
skipLeadingRows - skipLeadingRows or null for none

getSourceFormat

public String getSourceFormat()
[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". The default value is CSV.

Returns:
value or null for none

setSourceFormat

public JobConfigurationLoad setSourceFormat(String sourceFormat)
[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". The default value is CSV.

Parameters:
sourceFormat - sourceFormat or null for none
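For non-CSV input, sourceFormat is combined with an explicit schema rather than the CSV parsing options. A hedged sketch for newline-delimited JSON follows; the field names and types are hypothetical, and TableFieldSchema comes from the same generated model package:

```java
import com.google.api.services.bigquery.model.JobConfigurationLoad;
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Arrays;

public class JsonLoadExample {
  public static void main(String[] args) {
    // Hypothetical fields; CSV-specific options such as quote and
    // fieldDelimiter do not apply to JSON input.
    TableSchema schema = new TableSchema().setFields(Arrays.asList(
        new TableFieldSchema().setName("name").setType("STRING"),
        new TableFieldSchema().setName("age").setType("INTEGER")));

    JobConfigurationLoad load = new JobConfigurationLoad()
        .setSourceFormat("NEWLINE_DELIMITED_JSON")
        .setSchema(schema);
  }
}
```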

getSourceUris

public List<String> getSourceUris()
[Required] The fully-qualified URIs that point to your data on Google Cloud Storage.

Returns:
value or null for none

setSourceUris

public JobConfigurationLoad setSourceUris(List<String> sourceUris)
[Required] The fully-qualified URIs that point to your data on Google Cloud Storage.

Parameters:
sourceUris - sourceUris or null for none

getWriteDisposition

public String getWriteDisposition()
[Optional] Specifies the action that occurs if the destination table already exists. Each action is atomic and only occurs if BigQuery is able to fully load the data and the load job completes without error. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists, a 'duplicate' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

Returns:
value or null for none

setWriteDisposition

public JobConfigurationLoad setWriteDisposition(String writeDisposition)
[Optional] Specifies the action that occurs if the destination table already exists. Each action is atomic and only occurs if BigQuery is able to fully load the data and the load job completes without error. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists, a 'duplicate' error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

Parameters:
writeDisposition - writeDisposition or null for none
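The two disposition properties are commonly set together; for example, a recurring load that creates the table on its first run and appends on later runs could be sketched as below. The dispositions are plain string constants in this generated model, there is no enum type, so the documented values must be passed verbatim:

```java
import com.google.api.services.bigquery.model.JobConfigurationLoad;

public class DispositionExample {
  public static void main(String[] args) {
    // Create the table if it does not exist, and append to it if it does.
    // Per the descriptions above, creation, truncation, and append actions
    // occur as one atomic update upon job completion.
    JobConfigurationLoad load = new JobConfigurationLoad()
        .setCreateDisposition("CREATE_IF_NEEDED")
        .setWriteDisposition("WRITE_APPEND");
  }
}
```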

set

public JobConfigurationLoad set(String fieldName,
                                Object value)
Overrides:
set in class GenericJson

clone

public JobConfigurationLoad clone()
Overrides:
clone in class GenericJson