BQ

gcp4zio.bq.BQ$
See the BQ companion trait
object BQ

Attributes

Companion: trait BQ
Supertypes: class Object, trait Matchable, class Any
Self type: BQ.type

Members list

Value members

Concrete methods

def executeQuery(query: String): RIO[BQ, Job]

Execute a SQL query on BigQuery. This API does not return any data, so it can be used to run any DML/DDL query. A usage sketch follows the parameter list.

Attributes

query

SQL query (e.g. INSERT, CREATE) to execute
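
A minimal usage sketch; the dataset and table names are hypothetical, and layer wiring with BQ.live is shown further down this page:

  import gcp4zio.bq._
  import zio._

  // Run a DDL statement and discard the returned Job metadata.
  val createTable: RIO[BQ, Unit] =
    BQ.executeQuery(
      "CREATE TABLE IF NOT EXISTS my_dataset.events (id INT64, name STRING)"
    ).unit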

def exportTable(sourceDataset: String, sourceTable: String, sourceProject: Option[String], targetPath: String, targetFormat: FileType, targetFileName: Option[String], targetCompressionType: String): RIO[BQ, Unit]

Export data from BigQuery to GCS. A usage sketch follows the parameter list.

Attributes

sourceDataset

Source Dataset name

sourceProject

Source Google Project ID

sourceTable

Source Table name

targetCompressionType

Compression for destination files

targetFileName

Filename to use when a single output file should be created in the target location

targetFormat

File format for target GCS location

targetPath

Target GCS path
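
A sketch of a single-file Parquet export; the project, bucket, and table names are hypothetical, and FileType.PARQUET assumes the library's FileType ADT has a case of that name:

  import gcp4zio.bq._
  import zio._

  // Export a table to GCS as one gzip-compressed Parquet file.
  val export: RIO[BQ, Unit] =
    BQ.exportTable(
      sourceDataset         = "my_dataset",
      sourceTable           = "events",
      sourceProject         = Some("my-gcp-project"),
      targetPath            = "gs://my-bucket/exports/events/",
      targetFormat          = FileType.PARQUET,      // assumed case name
      targetFileName        = Some("events.parquet"),
      targetCompressionType = "gzip"                 // assumed value format
    )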

def fetchResults[T](query: String)(fn: FieldValueList => T): RIO[BQ, Iterable[T]]

Run any SQL (SELECT) query on BigQuery and fetch the resulting rows. A usage sketch follows the parameter list.

Attributes

T

Scala type of the output rows

fn

Function to convert a FieldValueList into the Scala type T

query

SQL query (SELECT) to execute
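
A sketch that maps each FieldValueList row to a case class; the table and column names are hypothetical:

  import com.google.cloud.bigquery.FieldValueList
  import gcp4zio.bq._
  import zio._

  final case class Country(code: String, name: String)

  // Convert each row of the result set into a Country.
  val countries: RIO[BQ, Iterable[Country]] =
    BQ.fetchResults("SELECT code, name FROM my_dataset.countries") { fvl =>
      Country(fvl.get("code").getStringValue, fvl.get("name").getStringValue)
    }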

def live(credentials: Option[String]): TaskLayer[BQ]
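
Creates the live BQ layer that satisfies the BQ requirement of the methods on this page. A wiring sketch, assuming credentials is an optional path to a service-account key file and None falls back to application-default credentials (neither is documented on this page):

  import gcp4zio.bq._
  import zio._

  // Provide the live layer; None presumably falls back to
  // application-default credentials (assumption).
  val program: Task[Unit] =
    BQ.executeQuery("SELECT 1").unit.provide(BQ.live(None))
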
def loadPartitionedTable(sourcePathsPartitions: Seq[(String, String)], sourceFormat: FileType, targetProject: Option[String], targetDataset: String, targetTable: String, writeDisposition: WriteDisposition, createDisposition: CreateDisposition, schema: Option[Schema], parallelism: Int): RIO[BQ, Map[String, Long]]

Load data into BigQuery from GCS, loading multiple partitions in parallel. A usage sketch follows the parameter list.

Attributes

createDisposition

Create Disposition for table

parallelism

Maximum number of fibers used to load partitions into BigQuery in parallel

schema

Schema for the source files (useful for CSV and JSON)

sourceFormat

File format of source data in GCS

sourcePathsPartitions

List of (source GCS path, partition) pairs from which data is loaded into BigQuery in parallel

targetDataset

Target Dataset name

targetProject

Target Google Project ID

targetTable

Target Table name

writeDisposition

Write Disposition for table
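
A sketch that loads two hypothetical date partitions in parallel; the partition-id format and the FileType.PARQUET case name are assumptions, and the dispositions are assumed to be the Java client's JobInfo enums:

  import com.google.cloud.bigquery.JobInfo.{CreateDisposition, WriteDisposition}
  import gcp4zio.bq._
  import zio._

  // (GCS path, partition) pairs; the partition-id format is assumed.
  val partitions: Seq[(String, String)] = Seq(
    "gs://my-bucket/events/dt=2023-01-01/" -> "20230101",
    "gs://my-bucket/events/dt=2023-01-02/" -> "20230102"
  )

  val load: RIO[BQ, Map[String, Long]] =
    BQ.loadPartitionedTable(
      sourcePathsPartitions = partitions,
      sourceFormat          = FileType.PARQUET,    // assumed case name
      targetProject         = None,
      targetDataset         = "my_dataset",
      targetTable           = "events",
      writeDisposition      = WriteDisposition.WRITE_TRUNCATE,
      createDisposition     = CreateDisposition.CREATE_IF_NEEDED,
      schema                = None,                // not needed for Parquet
      parallelism           = 2
    )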

def loadTable(sourcePath: String, sourceFormat: FileType, targetProject: Option[String], targetDataset: String, targetTable: String, writeDisposition: WriteDisposition, createDisposition: CreateDisposition, schema: Option[Schema]): RIO[BQ, Map[String, Long]]

Load data into BigQuery from GCS. A usage sketch follows the parameter list.

Attributes

createDisposition

Create Disposition for table

schema

Schema for the source files (useful for CSV and JSON)

sourceFormat

File format of source data in GCS

sourcePath

Source GCS path from which data is loaded into BigQuery

targetDataset

Target Dataset name

targetProject

Target Google Project ID

targetTable

Target Table name

writeDisposition

Write Disposition for table
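
A sketch of a single-path load; the names are hypothetical, and FileType.PARQUET assumes that case exists on the library's FileType ADT:

  import com.google.cloud.bigquery.JobInfo.{CreateDisposition, WriteDisposition}
  import gcp4zio.bq._
  import zio._

  // Append everything under one GCS prefix into the target table.
  val load: RIO[BQ, Map[String, Long]] =
    BQ.loadTable(
      sourcePath        = "gs://my-bucket/events/",
      sourceFormat      = FileType.PARQUET,    // assumed case name
      targetProject     = None,
      targetDataset     = "my_dataset",
      targetTable       = "events",
      writeDisposition  = WriteDisposition.WRITE_APPEND,
      createDisposition = CreateDisposition.CREATE_IF_NEEDED,
      schema            = None
    )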