Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.bigquery/v2.Routine
Creates a new routine in the dataset. Auto-naming is currently not supported for this resource.
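For example, here is a minimal TypeScript sketch that defines a SQL scalar function; the project, dataset, and routine IDs are placeholder assumptions, and the enum members are referenced the same way as in the reference example further below.

import * as google_native from "@pulumi/google-native";

// A minimal SQL UDF, addOne(x INT64) -> INT64. All IDs below are placeholders.
const addOne = new google_native.bigquery.v2.Routine("add-one", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "add_one",
    },
    routineType: google_native.bigquery.v2.RoutineRoutineType.ScalarFunction,
    language: google_native.bigquery.v2.RoutineLanguage.Sql,
    arguments: [{
        name: "x",
        dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
    }],
    returnType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
    definitionBody: "x + 1", // the expression inside the AS clause, per definition_body semantics
});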
Create Routine Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Routine(name: string, args: RoutineArgs, opts?: CustomResourceOptions);
@overload
def Routine(resource_name: str,
            args: RoutineArgs,
            opts: Optional[ResourceOptions] = None)
@overload
def Routine(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            routine_reference: Optional[RoutineReferenceArgs] = None,
            routine_type: Optional[RoutineRoutineType] = None,
            dataset_id: Optional[str] = None,
            definition_body: Optional[str] = None,
            description: Optional[str] = None,
            determinism_level: Optional[RoutineDeterminismLevel] = None,
            imported_libraries: Optional[Sequence[str]] = None,
            language: Optional[RoutineLanguage] = None,
            project: Optional[str] = None,
            remote_function_options: Optional[RemoteFunctionOptionsArgs] = None,
            return_table_type: Optional[StandardSqlTableTypeArgs] = None,
            return_type: Optional[StandardSqlDataTypeArgs] = None,
            arguments: Optional[Sequence[ArgumentArgs]] = None,
            data_governance_type: Optional[RoutineDataGovernanceType] = None,
            security_mode: Optional[RoutineSecurityMode] = None,
            spark_options: Optional[SparkOptionsArgs] = None,
            strict_mode: Optional[bool] = None)
func NewRoutine(ctx *Context, name string, args RoutineArgs, opts ...ResourceOption) (*Routine, error)
public Routine(string name, RoutineArgs args, CustomResourceOptions? opts = null)
public Routine(String name, RoutineArgs args)
public Routine(String name, RoutineArgs args, CustomResourceOptions options)
type: google-native:bigquery/v2:Routine
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var routineResource = new GoogleNative.BigQuery.V2.Routine("routineResource", new()
{
    RoutineReference = new GoogleNative.BigQuery.V2.Inputs.RoutineReferenceArgs
    {
        DatasetId = "string",
        Project = "string",
        RoutineId = "string",
    },
    RoutineType = GoogleNative.BigQuery.V2.RoutineRoutineType.RoutineTypeUnspecified,
    DatasetId = "string",
    DefinitionBody = "string",
    Description = "string",
    DeterminismLevel = GoogleNative.BigQuery.V2.RoutineDeterminismLevel.DeterminismLevelUnspecified,
    ImportedLibraries = new[]
    {
        "string",
    },
    Language = GoogleNative.BigQuery.V2.RoutineLanguage.LanguageUnspecified,
    Project = "string",
    RemoteFunctionOptions = new GoogleNative.BigQuery.V2.Inputs.RemoteFunctionOptionsArgs
    {
        Connection = "string",
        Endpoint = "string",
        MaxBatchingRows = "string",
        UserDefinedContext = 
        {
            { "string", "string" },
        },
    },
    ReturnTableType = new GoogleNative.BigQuery.V2.Inputs.StandardSqlTableTypeArgs
    {
        Columns = new[]
        {
            new GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldArgs
            {
                Name = "string",
                Type = new GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeArgs
                {
                    TypeKind = GoogleNative.BigQuery.V2.StandardSqlDataTypeTypeKind.TypeKindUnspecified,
                    ArrayElementType = standardSqlDataType,
                    RangeElementType = standardSqlDataType,
                    StructType = new GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeArgs
                    {
                        Fields = new[]
                        {
                            standardSqlField,
                        },
                    },
                },
            },
        },
    },
    ReturnType = standardSqlDataType,
    Arguments = new[]
    {
        new GoogleNative.BigQuery.V2.Inputs.ArgumentArgs
        {
            ArgumentKind = GoogleNative.BigQuery.V2.ArgumentArgumentKind.ArgumentKindUnspecified,
            DataType = standardSqlDataType,
            IsAggregate = false,
            Mode = GoogleNative.BigQuery.V2.ArgumentMode.ModeUnspecified,
            Name = "string",
        },
    },
    DataGovernanceType = GoogleNative.BigQuery.V2.RoutineDataGovernanceType.DataGovernanceTypeUnspecified,
    SecurityMode = GoogleNative.BigQuery.V2.RoutineSecurityMode.SecurityModeUnspecified,
    SparkOptions = new GoogleNative.BigQuery.V2.Inputs.SparkOptionsArgs
    {
        ArchiveUris = new[]
        {
            "string",
        },
        Connection = "string",
        ContainerImage = "string",
        FileUris = new[]
        {
            "string",
        },
        JarUris = new[]
        {
            "string",
        },
        MainClass = "string",
        MainFileUri = "string",
        Properties = 
        {
            { "string", "string" },
        },
        PyFileUris = new[]
        {
            "string",
        },
        RuntimeVersion = "string",
    },
    StrictMode = false,
});
example, err := bigquery.NewRoutine(ctx, "routineResource", &bigquery.RoutineArgs{
	RoutineReference: &bigquery.RoutineReferenceArgs{
		DatasetId: pulumi.String("string"),
		Project:   pulumi.String("string"),
		RoutineId: pulumi.String("string"),
	},
	RoutineType:      bigquery.RoutineRoutineTypeRoutineTypeUnspecified,
	DatasetId:        pulumi.String("string"),
	DefinitionBody:   pulumi.String("string"),
	Description:      pulumi.String("string"),
	DeterminismLevel: bigquery.RoutineDeterminismLevelDeterminismLevelUnspecified,
	ImportedLibraries: pulumi.StringArray{
		pulumi.String("string"),
	},
	Language: bigquery.RoutineLanguageLanguageUnspecified,
	Project:  pulumi.String("string"),
	RemoteFunctionOptions: &bigquery.RemoteFunctionOptionsArgs{
		Connection:      pulumi.String("string"),
		Endpoint:        pulumi.String("string"),
		MaxBatchingRows: pulumi.String("string"),
		UserDefinedContext: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	ReturnTableType: &bigquery.StandardSqlTableTypeArgs{
		Columns: bigquery.StandardSqlFieldArray{
			&bigquery.StandardSqlFieldArgs{
				Name: pulumi.String("string"),
				Type: &bigquery.StandardSqlDataTypeArgs{
					TypeKind:         bigquery.StandardSqlDataTypeTypeKindTypeKindUnspecified,
					ArrayElementType: pulumi.Any(standardSqlDataType),
					RangeElementType: pulumi.Any(standardSqlDataType),
					StructType: &bigquery.StandardSqlStructTypeArgs{
						Fields: bigquery.StandardSqlFieldArray{
							standardSqlField,
						},
					},
				},
			},
		},
	},
	ReturnType: pulumi.Any(standardSqlDataType),
	Arguments: bigquery.ArgumentArray{
		&bigquery.ArgumentArgs{
			ArgumentKind: bigquery.ArgumentArgumentKindArgumentKindUnspecified,
			DataType:     pulumi.Any(standardSqlDataType),
			IsAggregate:  pulumi.Bool(false),
			Mode:         bigquery.ArgumentModeModeUnspecified,
			Name:         pulumi.String("string"),
		},
	},
	DataGovernanceType: bigquery.RoutineDataGovernanceTypeDataGovernanceTypeUnspecified,
	SecurityMode:       bigquery.RoutineSecurityModeSecurityModeUnspecified,
	SparkOptions: &bigquery.SparkOptionsArgs{
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Connection:     pulumi.String("string"),
		ContainerImage: pulumi.String("string"),
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		JarUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		MainClass:   pulumi.String("string"),
		MainFileUri: pulumi.String("string"),
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		PyFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		RuntimeVersion: pulumi.String("string"),
	},
	StrictMode: pulumi.Bool(false),
})
var routineResource = new Routine("routineResource", RoutineArgs.builder()
    .routineReference(RoutineReferenceArgs.builder()
        .datasetId("string")
        .project("string")
        .routineId("string")
        .build())
    .routineType("ROUTINE_TYPE_UNSPECIFIED")
    .datasetId("string")
    .definitionBody("string")
    .description("string")
    .determinismLevel("DETERMINISM_LEVEL_UNSPECIFIED")
    .importedLibraries("string")
    .language("LANGUAGE_UNSPECIFIED")
    .project("string")
    .remoteFunctionOptions(RemoteFunctionOptionsArgs.builder()
        .connection("string")
        .endpoint("string")
        .maxBatchingRows("string")
        .userDefinedContext(Map.of("string", "string"))
        .build())
    .returnTableType(StandardSqlTableTypeArgs.builder()
        .columns(StandardSqlFieldArgs.builder()
            .name("string")
            .type(StandardSqlDataTypeArgs.builder()
                .typeKind("TYPE_KIND_UNSPECIFIED")
                .arrayElementType(standardSqlDataType)
                .rangeElementType(standardSqlDataType)
                .structType(StandardSqlStructTypeArgs.builder()
                    .fields(standardSqlField)
                    .build())
                .build())
            .build())
        .build())
    .returnType(standardSqlDataType)
    .arguments(ArgumentArgs.builder()
        .argumentKind("ARGUMENT_KIND_UNSPECIFIED")
        .dataType(standardSqlDataType)
        .isAggregate(false)
        .mode("MODE_UNSPECIFIED")
        .name("string")
        .build())
    .dataGovernanceType("DATA_GOVERNANCE_TYPE_UNSPECIFIED")
    .securityMode("SECURITY_MODE_UNSPECIFIED")
    .sparkOptions(SparkOptionsArgs.builder()
        .archiveUris("string")
        .connection("string")
        .containerImage("string")
        .fileUris("string")
        .jarUris("string")
        .mainClass("string")
        .mainFileUri("string")
        .properties(Map.of("string", "string"))
        .pyFileUris("string")
        .runtimeVersion("string")
        .build())
    .strictMode(false)
    .build());
routine_resource = google_native.bigquery.v2.Routine("routineResource",
    routine_reference={
        "dataset_id": "string",
        "project": "string",
        "routine_id": "string",
    },
    routine_type=google_native.bigquery.v2.RoutineRoutineType.ROUTINE_TYPE_UNSPECIFIED,
    dataset_id="string",
    definition_body="string",
    description="string",
    determinism_level=google_native.bigquery.v2.RoutineDeterminismLevel.DETERMINISM_LEVEL_UNSPECIFIED,
    imported_libraries=["string"],
    language=google_native.bigquery.v2.RoutineLanguage.LANGUAGE_UNSPECIFIED,
    project="string",
    remote_function_options={
        "connection": "string",
        "endpoint": "string",
        "max_batching_rows": "string",
        "user_defined_context": {
            "string": "string",
        },
    },
    return_table_type={
        "columns": [{
            "name": "string",
            "type": {
                "type_kind": google_native.bigquery.v2.StandardSqlDataTypeTypeKind.TYPE_KIND_UNSPECIFIED,
                "array_element_type": standard_sql_data_type,
                "range_element_type": standard_sql_data_type,
                "struct_type": {
                    "fields": [standard_sql_field],
                },
            },
        }],
    },
    return_type=standard_sql_data_type,
    arguments=[{
        "argument_kind": google_native.bigquery.v2.ArgumentArgumentKind.ARGUMENT_KIND_UNSPECIFIED,
        "data_type": standard_sql_data_type,
        "is_aggregate": False,
        "mode": google_native.bigquery.v2.ArgumentMode.MODE_UNSPECIFIED,
        "name": "string",
    }],
    data_governance_type=google_native.bigquery.v2.RoutineDataGovernanceType.DATA_GOVERNANCE_TYPE_UNSPECIFIED,
    security_mode=google_native.bigquery.v2.RoutineSecurityMode.SECURITY_MODE_UNSPECIFIED,
    spark_options={
        "archive_uris": ["string"],
        "connection": "string",
        "container_image": "string",
        "file_uris": ["string"],
        "jar_uris": ["string"],
        "main_class": "string",
        "main_file_uri": "string",
        "properties": {
            "string": "string",
        },
        "py_file_uris": ["string"],
        "runtime_version": "string",
    },
    strict_mode=False)
const routineResource = new google_native.bigquery.v2.Routine("routineResource", {
    routineReference: {
        datasetId: "string",
        project: "string",
        routineId: "string",
    },
    routineType: google_native.bigquery.v2.RoutineRoutineType.RoutineTypeUnspecified,
    datasetId: "string",
    definitionBody: "string",
    description: "string",
    determinismLevel: google_native.bigquery.v2.RoutineDeterminismLevel.DeterminismLevelUnspecified,
    importedLibraries: ["string"],
    language: google_native.bigquery.v2.RoutineLanguage.LanguageUnspecified,
    project: "string",
    remoteFunctionOptions: {
        connection: "string",
        endpoint: "string",
        maxBatchingRows: "string",
        userDefinedContext: {
            string: "string",
        },
    },
    returnTableType: {
        columns: [{
            name: "string",
            type: {
                typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.TypeKindUnspecified,
                arrayElementType: standardSqlDataType,
                rangeElementType: standardSqlDataType,
                structType: {
                    fields: [standardSqlField],
                },
            },
        }],
    },
    returnType: standardSqlDataType,
    arguments: [{
        argumentKind: google_native.bigquery.v2.ArgumentArgumentKind.ArgumentKindUnspecified,
        dataType: standardSqlDataType,
        isAggregate: false,
        mode: google_native.bigquery.v2.ArgumentMode.ModeUnspecified,
        name: "string",
    }],
    dataGovernanceType: google_native.bigquery.v2.RoutineDataGovernanceType.DataGovernanceTypeUnspecified,
    securityMode: google_native.bigquery.v2.RoutineSecurityMode.SecurityModeUnspecified,
    sparkOptions: {
        archiveUris: ["string"],
        connection: "string",
        containerImage: "string",
        fileUris: ["string"],
        jarUris: ["string"],
        mainClass: "string",
        mainFileUri: "string",
        properties: {
            string: "string",
        },
        pyFileUris: ["string"],
        runtimeVersion: "string",
    },
    strictMode: false,
});
type: google-native:bigquery/v2:Routine
properties:
    arguments:
        - argumentKind: ARGUMENT_KIND_UNSPECIFIED
          dataType: ${standardSqlDataType}
          isAggregate: false
          mode: MODE_UNSPECIFIED
          name: string
    dataGovernanceType: DATA_GOVERNANCE_TYPE_UNSPECIFIED
    datasetId: string
    definitionBody: string
    description: string
    determinismLevel: DETERMINISM_LEVEL_UNSPECIFIED
    importedLibraries:
        - string
    language: LANGUAGE_UNSPECIFIED
    project: string
    remoteFunctionOptions:
        connection: string
        endpoint: string
        maxBatchingRows: string
        userDefinedContext:
            string: string
    returnTableType:
        columns:
            - name: string
              type:
                arrayElementType: ${standardSqlDataType}
                rangeElementType: ${standardSqlDataType}
                structType:
                    fields:
                        - ${standardSqlField}
                typeKind: TYPE_KIND_UNSPECIFIED
    returnType: ${standardSqlDataType}
    routineReference:
        datasetId: string
        project: string
        routineId: string
    routineType: ROUTINE_TYPE_UNSPECIFIED
    securityMode: SECURITY_MODE_UNSPECIFIED
    sparkOptions:
        archiveUris:
            - string
        connection: string
        containerImage: string
        fileUris:
            - string
        jarUris:
            - string
        mainClass: string
        mainFileUri: string
        properties:
            string: string
        pyFileUris:
            - string
        runtimeVersion: string
    strictMode: false
Routine Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Routine resource accepts the following input properties (a combined usage sketch follows the per-language lists):
- DatasetId string
- DefinitionBody string
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n (note that both \n are replaced with linebreaks).
- RoutineReference Pulumi.GoogleNative.BigQuery.V2.Inputs.RoutineReference
- Reference describing the ID of this routine.
- RoutineType Pulumi.GoogleNative.BigQuery.V2.RoutineRoutineType
- The type of routine.
- Arguments List<Pulumi.GoogleNative.BigQuery.V2.Inputs.Argument>
- Optional.
- DataGovernanceType Pulumi.GoogleNative.BigQuery.V2.RoutineDataGovernanceType
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- Description string
- Optional. The description of the routine, if defined.
- DeterminismLevel Pulumi.GoogleNative.BigQuery.V2.RoutineDeterminismLevel
- Optional. The determinism level of the JavaScript UDF, if defined.
- ImportedLibraries List<string>
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JavaScript libraries.
- Language Pulumi.GoogleNative.BigQuery.V2.RoutineLanguage
- Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- Project string
- RemoteFunctionOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.RemoteFunctionOptions
- Optional. Remote function specific options.
- ReturnTableType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlTableType
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- ReturnType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements: CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- SecurityMode Pulumi.GoogleNative.BigQuery.V2.RoutineSecurityMode
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.SparkOptions
- Optional. Spark specific options.
- StrictMode bool
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- DatasetId string
- DefinitionBody string
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n (note that both \n are replaced with linebreaks).
- RoutineReference RoutineReferenceArgs
- Reference describing the ID of this routine.
- RoutineType RoutineRoutineType
- The type of routine.
- Arguments []ArgumentArgs
- Optional.
- DataGovernanceType RoutineDataGovernanceType
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- Description string
- Optional. The description of the routine, if defined.
- DeterminismLevel RoutineDeterminismLevel
- Optional. The determinism level of the JavaScript UDF, if defined.
- ImportedLibraries []string
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JavaScript libraries.
- Language RoutineLanguage
- Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- Project string
- RemoteFunctionOptions RemoteFunctionOptionsArgs
- Optional. Remote function specific options.
- ReturnTableType StandardSqlTableTypeArgs
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- ReturnType StandardSqlDataTypeArgs
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements: CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- SecurityMode RoutineSecurityMode
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions SparkOptionsArgs
- Optional. Spark specific options.
- StrictMode bool
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- datasetId String
- definitionBody String
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n (note that both \n are replaced with linebreaks).
- routineReference RoutineReference
- Reference describing the ID of this routine.
- routineType RoutineRoutineType
- The type of routine.
- arguments List<Argument>
- Optional.
- dataGovernanceType RoutineDataGovernanceType
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description String
- Optional. The description of the routine, if defined.
- determinismLevel RoutineDeterminismLevel
- Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries List<String>
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JavaScript libraries.
- language RoutineLanguage
- Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project String
- remoteFunctionOptions RemoteFunctionOptions
- Optional. Remote function specific options.
- returnTableType StandardSqlTableType
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType StandardSqlDataType
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements: CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode RoutineSecurityMode
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptions
- Optional. Spark specific options.
- strictMode Boolean
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- datasetId string
- definitionBody string
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n (note that both \n are replaced with linebreaks).
- routineReference RoutineReference
- Reference describing the ID of this routine.
- routineType RoutineRoutineType
- The type of routine.
- arguments Argument[]
- Optional.
- dataGovernanceType RoutineDataGovernanceType
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description string
- Optional. The description of the routine, if defined.
- determinismLevel RoutineDeterminismLevel
- Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries string[]
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JavaScript libraries.
- language RoutineLanguage
- Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project string
- remoteFunctionOptions RemoteFunctionOptions
- Optional. Remote function specific options.
- returnTableType StandardSqlTableType
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType StandardSqlDataType
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements: CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode RoutineSecurityMode
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptions
- Optional. Spark specific options.
- strictMode boolean
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- dataset_id str
- definition_body str
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n (note that both \n are replaced with linebreaks).
- routine_reference RoutineReferenceArgs
- Reference describing the ID of this routine.
- routine_type RoutineRoutineType
- The type of routine.
- arguments Sequence[ArgumentArgs]
- Optional.
- data_governance_type RoutineDataGovernanceType
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description str
- Optional. The description of the routine, if defined.
- determinism_level RoutineDeterminismLevel
- Optional. The determinism level of the JavaScript UDF, if defined.
- imported_libraries Sequence[str]
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JavaScript libraries.
- language RoutineLanguage
- Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project str
- remote_function_options RemoteFunctionOptionsArgs
- Optional. Remote function specific options.
- return_table_type StandardSqlTableTypeArgs
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- return_type StandardSqlDataTypeArgs
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements: CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- security_mode RoutineSecurityMode
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- spark_options SparkOptionsArgs
- Optional. Spark specific options.
- strict_mode bool
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- datasetId String
- definitionBody String
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n (note that both \n are replaced with linebreaks).
- routineReference Property Map
- Reference describing the ID of this routine.
- routineType "ROUTINE_TYPE_UNSPECIFIED" | "SCALAR_FUNCTION" | "PROCEDURE" | "TABLE_VALUED_FUNCTION" | "AGGREGATE_FUNCTION"
- The type of routine.
- arguments List<Property Map>
- Optional.
- dataGovernanceType "DATA_GOVERNANCE_TYPE_UNSPECIFIED" | "DATA_MASKING"
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description String
- Optional. The description of the routine, if defined.
- determinismLevel "DETERMINISM_LEVEL_UNSPECIFIED" | "DETERMINISTIC" | "NOT_DETERMINISTIC"
- Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries List<String>
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JavaScript libraries.
- language "LANGUAGE_UNSPECIFIED" | "SQL" | "JAVASCRIPT" | "PYTHON" | "JAVA" | "SCALA"
- Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project String
- remoteFunctionOptions Property Map
- Optional. Remote function specific options.
- returnTableType Property Map
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType Property Map
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements: CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode "SECURITY_MODE_UNSPECIFIED" | "DEFINER" | "INVOKER"
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions Property Map
- Optional. Spark specific options.
- strictMode Boolean
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
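As a combined illustration of the optional inputs above, the following TypeScript sketch defines a hypothetical JavaScript UDF; the gs:// library path and all IDs are placeholder assumptions, not values taken from this reference.

import * as google_native from "@pulumi/google-native";

// Hypothetical JavaScript UDF exercising importedLibraries and determinismLevel.
const multiply = new google_native.bigquery.v2.Routine("multiply", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "multiply_inputs",
    },
    routineType: google_native.bigquery.v2.RoutineRoutineType.ScalarFunction,
    language: google_native.bigquery.v2.RoutineLanguage.Javascript,
    definitionBody: "return x * y;", // the evaluated string in the AS clause
    importedLibraries: ["gs://my-bucket/lib/my_lib.js"], // placeholder path
    determinismLevel: google_native.bigquery.v2.RoutineDeterminismLevel.Deterministic,
    arguments: [
        { name: "x", dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Float64 } },
        { name: "y", dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Float64 } },
    ],
    // Required here because language is not SQL.
    returnType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Float64 },
});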
Outputs
All input properties are implicitly available as output properties. Additionally, the Routine resource produces the following output properties:
- CreationTime string
- The time when this routine was created, in milliseconds since the epoch.
- Etag string
- A hash of this resource.
- Id string
- The provider-assigned unique ID for this managed resource.
- LastModifiedTime string
- The time when this routine was last modified, in milliseconds since the epoch.
- CreationTime string
- The time when this routine was created, in milliseconds since the epoch.
- Etag string
- A hash of this resource.
- Id string
- The provider-assigned unique ID for this managed resource.
- LastModifiedTime string
- The time when this routine was last modified, in milliseconds since the epoch.
- creationTime String
- The time when this routine was created, in milliseconds since the epoch.
- etag String
- A hash of this resource.
- id String
- The provider-assigned unique ID for this managed resource.
- lastModifiedTime String
- The time when this routine was last modified, in milliseconds since the epoch.
- creationTime string
- The time when this routine was created, in milliseconds since the epoch.
- etag string
- A hash of this resource.
- id string
- The provider-assigned unique ID for this managed resource.
- lastModifiedTime string
- The time when this routine was last modified, in milliseconds since the epoch.
- creation_time str
- The time when this routine was created, in milliseconds since the epoch.
- etag str
- A hash of this resource.
- id str
- The provider-assigned unique ID for this managed resource.
- last_modified_time str
- The time when this routine was last modified, in milliseconds since the epoch.
- creationTime String
- The time when this routine was created, in milliseconds since the epoch.
- etag String
- A hash of this resource.
- id String
- The provider-assigned unique ID for this managed resource.
- lastModifiedTime String
- The time when this routine was last modified, in milliseconds since the epoch.
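Output properties can be read like any other Pulumi outputs. A short sketch, assuming the addOne routine from the earlier example:

// Provider-computed metadata; values resolve after deployment.
export const routineEtag = addOne.etag;
export const routineCreationTimeMs = addOne.creationTime;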
Supporting Types
Argument, ArgumentArgs  
- ArgumentKind Pulumi.GoogleNative.BigQuery.V2.ArgumentArgumentKind
- Optional. Defaults to FIXED_TYPE.
- DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
- Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode Pulumi.GoogleNative.BigQuery.V2.ArgumentMode
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
- Optional. The name of this argument. Can be absent for function return argument.
- ArgumentKind ArgumentArgumentKind
- Optional. Defaults to FIXED_TYPE.
- DataType StandardSqlDataType
- Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode ArgumentMode
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind ArgumentArgumentKind
- Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataType
- Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind ArgumentArgumentKind
- Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataType
- Required unless argument_kind = ANY_TYPE.
- isAggregate boolean
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name string
- Optional. The name of this argument. Can be absent for function return argument.
- argument_kind ArgumentArgumentKind
- Optional. Defaults to FIXED_TYPE.
- data_type StandardSqlDataType
- Required unless argument_kind = ANY_TYPE.
- is_aggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name str
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind "ARGUMENT_KIND_UNSPECIFIED" | "FIXED_TYPE" | "ANY_TYPE"
- Optional. Defaults to FIXED_TYPE.
- dataType Property Map
- Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode "MODE_UNSPECIFIED" | "IN" | "OUT" | "INOUT"
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
- Optional. The name of this argument. Can be absent for function return argument.
ArgumentArgumentKind, ArgumentArgumentKindArgs      
- ArgumentKindUnspecified
- ARGUMENT_KIND_UNSPECIFIED: Default value.
- FixedType
- FIXED_TYPE: The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType
- ANY_TYPE: The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ArgumentArgumentKindArgumentKindUnspecified
- ARGUMENT_KIND_UNSPECIFIED: Default value.
- ArgumentArgumentKindFixedType
- FIXED_TYPE: The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- ArgumentArgumentKindAnyType
- ANY_TYPE: The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ArgumentKindUnspecified
- ARGUMENT_KIND_UNSPECIFIED: Default value.
- FixedType
- FIXED_TYPE: The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType
- ANY_TYPE: The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ArgumentKindUnspecified
- ARGUMENT_KIND_UNSPECIFIED: Default value.
- FixedType
- FIXED_TYPE: The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType
- ANY_TYPE: The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ARGUMENT_KIND_UNSPECIFIED
- ARGUMENT_KIND_UNSPECIFIED: Default value.
- FIXED_TYPE
- FIXED_TYPE: The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- ANY_TYPE
- ANY_TYPE: The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- "ARGUMENT_KIND_UNSPECIFIED"
- ARGUMENT_KIND_UNSPECIFIED: Default value.
- "FIXED_TYPE"
- FIXED_TYPE: The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- "ANY_TYPE"
- ANY_TYPE: The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
ArgumentMode, ArgumentModeArgs    
- ModeUnspecified
- MODE_UNSPECIFIED: Default value.
- In
- IN: The argument is input-only.
- Out
- OUT: The argument is output-only.
- Inout
- INOUT: The argument is both an input and an output.
- ArgumentModeModeUnspecified
- MODE_UNSPECIFIED: Default value.
- ArgumentModeIn
- IN: The argument is input-only.
- ArgumentModeOut
- OUT: The argument is output-only.
- ArgumentModeInout
- INOUT: The argument is both an input and an output.
- ModeUnspecified
- MODE_UNSPECIFIED: Default value.
- In
- IN: The argument is input-only.
- Out
- OUT: The argument is output-only.
- Inout
- INOUT: The argument is both an input and an output.
- ModeUnspecified
- MODE_UNSPECIFIED: Default value.
- In
- IN: The argument is input-only.
- Out
- OUT: The argument is output-only.
- Inout
- INOUT: The argument is both an input and an output.
- MODE_UNSPECIFIED
- MODE_UNSPECIFIED: Default value.
- IN_
- IN: The argument is input-only.
- OUT
- OUT: The argument is output-only.
- INOUT
- INOUT: The argument is both an input and an output.
- "MODE_UNSPECIFIED"
- MODE_UNSPECIFIED: Default value.
- "IN"
- IN: The argument is input-only.
- "OUT"
- OUT: The argument is output-only.
- "INOUT"
- INOUT: The argument is both an input and an output.
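To show where argument modes apply (procedures only), here is a hedged TypeScript sketch of a SQL procedure with one IN and one OUT argument; the IDs and procedure body are illustrative assumptions.

const incrementProc = new google_native.bigquery.v2.Routine("increment-proc", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "increment_proc",
    },
    routineType: google_native.bigquery.v2.RoutineRoutineType.Procedure,
    language: google_native.bigquery.v2.RoutineLanguage.Sql,
    definitionBody: "SET y = x + 1;", // assumed body: the statements between BEGIN and END
    arguments: [
        {
            name: "x",
            mode: google_native.bigquery.v2.ArgumentMode.In,
            dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
        },
        {
            name: "y",
            mode: google_native.bigquery.v2.ArgumentMode.Out,
            dataType: { typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.Int64 },
        },
    ],
});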
ArgumentResponse, ArgumentResponseArgs    
- ArgumentKind string
- Optional. Defaults to FIXED_TYPE.
- DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode string
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
- Optional. The name of this argument. Can be absent for function return argument.
- ArgumentKind string
- Optional. Defaults to FIXED_TYPE.
- DataType StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode string
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind String
- Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode String
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind string
- Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- isAggregate boolean
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode string
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name string
- Optional. The name of this argument. Can be absent for function return argument.
- argument_kind str
- Optional. Defaults to FIXED_TYPE.
- data_type StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- is_aggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode str
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name str
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind String
- Optional. Defaults to FIXED_TYPE.
- dataType Property Map
- Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
- Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode String
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
- Optional. The name of this argument. Can be absent for function return argument.
RemoteFunctionOptions, RemoteFunctionOptionsArgs      
- Connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext Dictionary<string, string>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- Connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext map[string]string
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String,String>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext {[key: string]: string}
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection str
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint str
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- max_batching_rows str
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- user_defined_context Mapping[str, str]
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
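As a rough sketch of how these options fit together, the following defines a remote function backed by a Cloud Functions endpoint through a BigQuery connection. The connection, endpoint, and all IDs are placeholders; the routine body is empty because the logic runs in the remote service.

import * as google_native from "@pulumi/google-native";

// Sketch: a remote function "remote_add" dispatched through a BigQuery connection.
// All project/dataset/connection/endpoint names below are placeholders.
const remoteAdd = new google_native.bigquery.v2.Routine("remote-add", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "remote_add",
    },
    routineType: "SCALAR_FUNCTION",
    language: "SQL",
    definitionBody: "", // the implementation lives behind the remote endpoint
    arguments: [
        { name: "x", dataType: { typeKind: "INT64" } },
        { name: "y", dataType: { typeKind: "INT64" } },
    ],
    returnType: { typeKind: "INT64" },
    remoteFunctionOptions: {
        connection: "projects/my-project/locations/us-east1/connections/my-connection",
        endpoint: "https://us-east1-my-project.cloudfunctions.net/remote_add",
        maxBatchingRows: "50", // int64 fields travel as strings
        userDefinedContext: { mode: "add" },
    },
});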
RemoteFunctionOptionsResponse, RemoteFunctionOptionsResponseArgs        
- Connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext Dictionary<string, string>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- Connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext map[string]string
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String,String>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext {[key: string]: string}
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection str
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint str
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- max_batching_rows str
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- user_defined_context Mapping[str, str]
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
RoutineDataGovernanceType, RoutineDataGovernanceTypeArgs        
- DataGovernanceTypeUnspecified
- DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
- DataMasking
- DATA_MASKING: The data governance type is data masking.
- RoutineDataGovernanceTypeDataGovernanceTypeUnspecified
- DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
- RoutineDataGovernanceTypeDataMasking
- DATA_MASKING: The data governance type is data masking.
- DataGovernanceTypeUnspecified
- DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
- DataMasking
- DATA_MASKING: The data governance type is data masking.
- DataGovernanceTypeUnspecified
- DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
- DataMasking
- DATA_MASKING: The data governance type is data masking.
- DATA_GOVERNANCE_TYPE_UNSPECIFIED
- DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
- DATA_MASKING
- DATA_MASKING: The data governance type is data masking.
- "DATA_GOVERNANCE_TYPE_UNSPECIFIED"
- DATA_GOVERNANCE_TYPE_UNSPECIFIED: The data governance type is unspecified.
- "DATA_MASKING"
- DATA_MASKING: The data governance type is data masking.
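For illustration, a data-masking routine is declared like any other SQL scalar function, with dataGovernanceType set to DATA_MASKING. The sketch below uses placeholder IDs and a simple redaction body; masking routines are generally expected to return the same type they accept.

import * as google_native from "@pulumi/google-native";

// Sketch: a custom masking routine; all IDs are placeholders.
const maskEmail = new google_native.bigquery.v2.Routine("mask-email", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "mask_email",
    },
    routineType: "SCALAR_FUNCTION",
    language: "SQL",
    dataGovernanceType: "DATA_MASKING",
    arguments: [{ name: "email", dataType: { typeKind: "STRING" } }],
    // Redact everything before the "@"; return type matches the input type.
    definitionBody: "SAFE.REGEXP_REPLACE(email, r'^[^@]+', 'XXXXX')",
    returnType: { typeKind: "STRING" },
});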
RoutineDeterminismLevel, RoutineDeterminismLevelArgs      
- DeterminismLevelUnspecified
- DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
- Deterministic
- DETERMINISTIC: The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NotDeterministic
- NOT_DETERMINISTIC: The UDF is not deterministic.
- RoutineDeterminismLevelDeterminismLevelUnspecified
- DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
- RoutineDeterminismLevelDeterministic
- DETERMINISTIC: The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- RoutineDeterminismLevelNotDeterministic
- NOT_DETERMINISTIC: The UDF is not deterministic.
- DeterminismLevelUnspecified
- DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
- Deterministic
- DETERMINISTIC: The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NotDeterministic
- NOT_DETERMINISTIC: The UDF is not deterministic.
- DeterminismLevelUnspecified
- DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
- Deterministic
- DETERMINISTIC: The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NotDeterministic
- NOT_DETERMINISTIC: The UDF is not deterministic.
- DETERMINISM_LEVEL_UNSPECIFIED
- DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
- DETERMINISTIC
- DETERMINISTIC: The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- NOT_DETERMINISTIC
- NOT_DETERMINISTIC: The UDF is not deterministic.
- "DETERMINISM_LEVEL_UNSPECIFIED"
- DETERMINISM_LEVEL_UNSPECIFIED: The determinism of the UDF is unspecified.
- "DETERMINISTIC"
- DETERMINISTIC: The UDF is deterministic, meaning that 2 function calls with the same inputs always produce the same result, even across 2 query runs.
- "NOT_DETERMINISTIC"
- NOT_DETERMINISTIC: The UDF is not deterministic.
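The determinism level chiefly matters for JavaScript UDFs, where BigQuery cannot infer determinism from the function body. A minimal sketch, with placeholder IDs:

import * as google_native from "@pulumi/google-native";

// Sketch: a JavaScript UDF explicitly marked DETERMINISTIC.
const multiply = new google_native.bigquery.v2.Routine("js-multiply", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "multiply",
    },
    routineType: "SCALAR_FUNCTION",
    language: "JAVASCRIPT",
    determinismLevel: "DETERMINISTIC", // same inputs always yield the same result
    arguments: [
        { name: "x", dataType: { typeKind: "FLOAT64" } },
        { name: "y", dataType: { typeKind: "FLOAT64" } },
    ],
    definitionBody: "return x * y;",
    returnType: { typeKind: "FLOAT64" },
});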
RoutineLanguage, RoutineLanguageArgs    
- LanguageUnspecified
- LANGUAGE_UNSPECIFIED: Default value.
- Sql
- SQL: SQL language.
- Javascript
- JAVASCRIPT: JavaScript language.
- Python
- PYTHON: Python language.
- Java
- JAVA: Java language.
- Scala
- SCALA: Scala language.
- RoutineLanguageLanguageUnspecified
- LANGUAGE_UNSPECIFIED: Default value.
- RoutineLanguageSql
- SQL: SQL language.
- RoutineLanguageJavascript
- JAVASCRIPT: JavaScript language.
- RoutineLanguagePython
- PYTHON: Python language.
- RoutineLanguageJava
- JAVA: Java language.
- RoutineLanguageScala
- SCALA: Scala language.
- LanguageUnspecified
- LANGUAGE_UNSPECIFIED: Default value.
- Sql
- SQL: SQL language.
- Javascript
- JAVASCRIPT: JavaScript language.
- Python
- PYTHON: Python language.
- Java
- JAVA: Java language.
- Scala
- SCALA: Scala language.
- LanguageUnspecified
- LANGUAGE_UNSPECIFIED: Default value.
- Sql
- SQL: SQL language.
- Javascript
- JAVASCRIPT: JavaScript language.
- Python
- PYTHON: Python language.
- Java
- JAVA: Java language.
- Scala
- SCALA: Scala language.
- LANGUAGE_UNSPECIFIED
- LANGUAGE_UNSPECIFIED: Default value.
- SQL
- SQL: SQL language.
- JAVASCRIPT
- JAVASCRIPT: JavaScript language.
- PYTHON
- PYTHON: Python language.
- JAVA
- JAVA: Java language.
- SCALA
- SCALA: Scala language.
- "LANGUAGE_UNSPECIFIED"
- LANGUAGE_UNSPECIFIED: Default value.
- "SQL"
- SQL: SQL language.
- "JAVASCRIPT"
- JAVASCRIPT: JavaScript language.
- "PYTHON"
- PYTHON: Python language.
- "JAVA"
- JAVA: Java language.
- "SCALA"
- SCALA: Scala language.
RoutineReference, RoutineReferenceArgs    
- dataset_id str
- The ID of the dataset containing this routine.
- project str
- The ID of the project containing this routine.
- routine_id str
- The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
RoutineReferenceResponse, RoutineReferenceResponseArgs      
- dataset_id str
- The ID of the dataset containing this routine.
- project str
- The ID of the project containing this routine.
- routine_id str
- The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
RoutineRoutineType, RoutineRoutineTypeArgs      
- RoutineTypeUnspecified
- ROUTINE_TYPE_UNSPECIFIED: Default value.
- ScalarFunction
- SCALAR_FUNCTION: Non-built-in persistent scalar function.
- Procedure
- PROCEDURE: Stored procedure.
- TableValuedFunction
- TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
- AggregateFunction
- AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
- RoutineRoutineTypeRoutineTypeUnspecified
- ROUTINE_TYPE_UNSPECIFIED: Default value.
- RoutineRoutineTypeScalarFunction
- SCALAR_FUNCTION: Non-built-in persistent scalar function.
- RoutineRoutineTypeProcedure
- PROCEDURE: Stored procedure.
- RoutineRoutineTypeTableValuedFunction
- TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
- RoutineRoutineTypeAggregateFunction
- AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
- RoutineTypeUnspecified
- ROUTINE_TYPE_UNSPECIFIED: Default value.
- ScalarFunction
- SCALAR_FUNCTION: Non-built-in persistent scalar function.
- Procedure
- PROCEDURE: Stored procedure.
- TableValuedFunction
- TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
- AggregateFunction
- AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
- RoutineTypeUnspecified
- ROUTINE_TYPE_UNSPECIFIED: Default value.
- ScalarFunction
- SCALAR_FUNCTION: Non-built-in persistent scalar function.
- Procedure
- PROCEDURE: Stored procedure.
- TableValuedFunction
- TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
- AggregateFunction
- AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
- ROUTINE_TYPE_UNSPECIFIED
- ROUTINE_TYPE_UNSPECIFIED: Default value.
- SCALAR_FUNCTION
- SCALAR_FUNCTION: Non-built-in persistent scalar function.
- PROCEDURE
- PROCEDURE: Stored procedure.
- TABLE_VALUED_FUNCTION
- TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
- AGGREGATE_FUNCTION
- AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
- "ROUTINE_TYPE_UNSPECIFIED"
- ROUTINE_TYPE_UNSPECIFIED: Default value.
- "SCALAR_FUNCTION"
- SCALAR_FUNCTION: Non-built-in persistent scalar function.
- "PROCEDURE"
- PROCEDURE: Stored procedure.
- "TABLE_VALUED_FUNCTION"
- TABLE_VALUED_FUNCTION: Non-built-in persistent TVF.
- "AGGREGATE_FUNCTION"
- AGGREGATE_FUNCTION: Non-built-in persistent aggregate function.
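As a sketch of the TABLE_VALUED_FUNCTION routine type: the table shape is declared through returnTableType (described under StandardSqlTableType below). The my_dataset.users table and all IDs are placeholders.

import * as google_native from "@pulumi/google-native";

// Sketch: a SQL table-valued function filtering a placeholder users table.
const usersByCountry = new google_native.bigquery.v2.Routine("users-by-country", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "users_by_country",
    },
    routineType: "TABLE_VALUED_FUNCTION",
    language: "SQL",
    arguments: [{ name: "country_filter", dataType: { typeKind: "STRING" } }],
    definitionBody:
        "SELECT id, name FROM my_dataset.users WHERE country = country_filter",
    returnTableType: {
        columns: [
            { name: "id", type: { typeKind: "INT64" } },
            { name: "name", type: { typeKind: "STRING" } },
        ],
    },
});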
RoutineSecurityMode, RoutineSecurityModeArgs      
- SecurityModeUnspecified
- SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
- Definer
- DEFINER: The routine is to be executed with the privileges of the user who defines it.
- Invoker
- INVOKER: The routine is to be executed with the privileges of the user who invokes it.
- RoutineSecurityModeSecurityModeUnspecified
- SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
- RoutineSecurityModeDefiner
- DEFINER: The routine is to be executed with the privileges of the user who defines it.
- RoutineSecurityModeInvoker
- INVOKER: The routine is to be executed with the privileges of the user who invokes it.
- SecurityModeUnspecified
- SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
- Definer
- DEFINER: The routine is to be executed with the privileges of the user who defines it.
- Invoker
- INVOKER: The routine is to be executed with the privileges of the user who invokes it.
- SecurityModeUnspecified
- SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
- Definer
- DEFINER: The routine is to be executed with the privileges of the user who defines it.
- Invoker
- INVOKER: The routine is to be executed with the privileges of the user who invokes it.
- SECURITY_MODE_UNSPECIFIED
- SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
- DEFINER
- DEFINER: The routine is to be executed with the privileges of the user who defines it.
- INVOKER
- INVOKER: The routine is to be executed with the privileges of the user who invokes it.
- "SECURITY_MODE_UNSPECIFIED"
- SECURITY_MODE_UNSPECIFIED: The security mode of the routine is unspecified.
- "DEFINER"
- DEFINER: The routine is to be executed with the privileges of the user who defines it.
- "INVOKER"
- INVOKER: The routine is to be executed with the privileges of the user who invokes it.
SparkOptions, SparkOptionsArgs    
- ArchiveUris List<string>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string
- Custom container image for the runtime environment.
- FileUris List<string>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- JarUris List<string>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- MainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- MainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- Properties Dictionary<string, string>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris List<string>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- RuntimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- ArchiveUris []string
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string
- Custom container image for the runtime environment.
- FileUris []string
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- JarUris []string
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- MainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- MainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- Properties map[string]string
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris []string
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- RuntimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- archiveUris List<String>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage String
- Custom container image for the runtime environment.
- fileUris List<String>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris List<String>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass String
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri String
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Map<String,String>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris List<String>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion String
- Runtime version. If not specified, the default runtime version is used.
- archiveUris string[]
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage string
- Custom container image for the runtime environment.
- fileUris string[]
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris string[]
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties {[key: string]: string}
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris string[]
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- archive_uris Sequence[str]
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection str
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container_image str
- Custom container image for the runtime environment.
- file_uris Sequence[str]
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar_uris Sequence[str]
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main_class str
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- main_file_uri str
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Mapping[str, str]
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py_file_uris Sequence[str]
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime_version str
- Runtime version. If not specified, the default runtime version is used.
- archiveUris List<String>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage String
- Custom container image for the runtime environment.
- fileUris List<String>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris List<String>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass String
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri String
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Map<String>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris List<String>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion String
- Runtime version. If not specified, the default runtime version is used.
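A rough sketch of a PySpark stored procedure: per the field descriptions above, for Python the code may be supplied inline in definition_body instead of main_file_uri. The connection, bucket, runtime version, and IDs below are all placeholders.

import * as google_native from "@pulumi/google-native";

// Sketch: a stored procedure that runs PySpark code from definitionBody.
const sparkProc = new google_native.bigquery.v2.Routine("spark-proc", {
    datasetId: "my_dataset",
    routineReference: {
        project: "my-project",
        datasetId: "my_dataset",
        routineId: "spark_word_count",
    },
    routineType: "PROCEDURE",
    language: "PYTHON",
    definitionBody: [
        "from pyspark.sql import SparkSession",
        "spark = SparkSession.builder.appName('word_count').getOrCreate()",
        "df = spark.read.text('gs://my-bucket/input.txt')  # placeholder bucket",
        "print(df.count())",
    ].join("\n"),
    sparkOptions: {
        connection: "projects/my-project/locations/us/connections/my-spark-connection",
        runtimeVersion: "1.1", // placeholder Spark runtime version
        properties: { "spark.executor.instances": "2" }, // placeholder tuning property
    },
});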
SparkOptionsResponse, SparkOptionsResponseArgs      
- ArchiveUris List<string>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string
- Custom container image for the runtime environment.
- FileUris List<string>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- JarUris List<string>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- MainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- MainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- Properties Dictionary<string, string>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris List<string>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- RuntimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- ArchiveUris []string
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string
- Custom container image for the runtime environment.
- FileUris []string
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- JarUris []string
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- MainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- MainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- Properties map[string]string
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris []string
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- RuntimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- archiveUris List<String>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage String
- Custom container image for the runtime environment.
- fileUris List<String>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris List<String>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass String
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri String
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Map<String,String>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris List<String>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion String
- Runtime version. If not specified, the default runtime version is used.
- archiveUris string[]
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage string
- Custom container image for the runtime environment.
- fileUris string[]
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris string[]
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties {[key: string]: string}
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris string[]
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- archive_uris Sequence[str]
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection str
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container_image str
- Custom container image for the runtime environment.
- file_uris Sequence[str]
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar_uris Sequence[str]
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main_class str
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- main_file_uri str
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Mapping[str, str]
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py_file_uris Sequence[str]
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime_version str
- Runtime version. If not specified, the default runtime version is used.
- archiveUris List<String>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage String
- Custom container image for the runtime environment.
- fileUris List<String>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris List<String>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass String
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri String
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Map<String>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris List<String>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion String
- Runtime version. If not specified, the default runtime version is used.
StandardSqlDataType, StandardSqlDataTypeArgs        
- TypeKind Pulumi.GoogleNative.BigQuery.V2.StandardSqlDataTypeTypeKind
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
- The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
- The type of the range's elements, if type_kind = "RANGE".
- StructType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructType
- The fields of this struct, in order, if type_kind = "STRUCT".
- TypeKind StandardSqlDataTypeTypeKind
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType StandardSqlDataType
- The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType StandardSqlDataType
- The type of the range's elements, if type_kind = "RANGE".
- StructType StandardSqlStructType
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind StandardSqlDataTypeTypeKind
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataType
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataType
- The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructType
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind StandardSqlDataTypeTypeKind
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataType
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataType
- The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructType
- The fields of this struct, in order, if type_kind = "STRUCT".
- type_kind StandardSqlDataTypeTypeKind
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array_element_type StandardSqlDataType
- The type of the array's elements, if type_kind = "ARRAY".
- range_element_type StandardSqlDataType
- The type of the range's elements, if type_kind = "RANGE".
- struct_type StandardSqlStructType
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind "TYPE_KIND_UNSPECIFIED" | "INT64" | "BOOL" | "FLOAT64" | "STRING" | "BYTES" | "TIMESTAMP" | "DATE" | "TIME" | "DATETIME" | "INTERVAL" | "GEOGRAPHY" | "NUMERIC" | "BIGNUMERIC" | "JSON" | "ARRAY" | "STRUCT" | "RANGE"
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType Property Map
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType Property Map
- The type of the range's elements, if type_kind = "RANGE".
- structType Property Map
- The fields of this struct, in order, if type_kind = "STRUCT".
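Because StandardSqlDataType is recursive, nested GoogleSQL types are built by nesting these objects. A sketch of the shape for ARRAY<STRUCT<name STRING, score FLOAT64>>, e.g. for use as a returnType:

// Sketch: ARRAY<STRUCT<name STRING, score FLOAT64>> expressed with the fields above.
const arrayOfStruct = {
    typeKind: "ARRAY",
    arrayElementType: {
        typeKind: "STRUCT",
        structType: {
            fields: [
                { name: "name", type: { typeKind: "STRING" } },
                { name: "score", type: { typeKind: "FLOAT64" } },
            ],
        },
    },
};
// Pass it wherever a StandardSqlDataType input is expected, e.g. returnType: arrayOfStruct.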
StandardSqlDataTypeResponse, StandardSqlDataTypeResponseArgs          
- StructType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- TypeKind string
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- StructType StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- TypeKind string
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind String
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind string
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- struct_type StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- type_kind str
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array_element_type StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- range_element_type StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- structType Property Map
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind String
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType Property Map
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType Property Map
- The type of the range's elements, if type_kind = "RANGE".
StandardSqlDataTypeTypeKind, StandardSqlDataTypeTypeKindArgs            
- TypeKindUnspecified
- TYPE_KIND_UNSPECIFIED: Invalid type.
- Int64
- INT64: Encoded as a string in decimal format.
- Bool
- BOOL: Encoded as a boolean "false" or "true".
- Float64
- FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String
- STRING: Encoded as a string value.
- Bytes
- BYTES: Encoded as a base64 string per RFC 4648, section 4.
- Timestamp
- TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date
- DATE: Encoded as RFC 3339 full-date format string: 1985-04-12
- Time
- TIME: Encoded as RFC 3339 partial-time format string: 23:20:50.52
- Datetime
- DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval
- INTERVAL: Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- Geography
- GEOGRAPHY: Encoded as WKT
- Numeric
- NUMERIC: Encoded as a decimal string.
- Bignumeric
- BIGNUMERIC: Encoded as a decimal string.
- Json
- JSON: Encoded as a string.
- Array
- ARRAY: Encoded as a list with types matching Type.array_type.
- Struct
- STRUCT: Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- Range
- RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- StandardSqlDataTypeTypeKindTypeKindUnspecified
- TYPE_KIND_UNSPECIFIED: Invalid type.
- StandardSqlDataTypeTypeKindInt64
- INT64: Encoded as a string in decimal format.
- StandardSqlDataTypeTypeKindBool
- BOOL: Encoded as a boolean "false" or "true".
- StandardSqlDataTypeTypeKindFloat64
- FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- StandardSqlDataTypeTypeKindString
- STRING: Encoded as a string value.
- StandardSqlDataTypeTypeKindBytes
- BYTES: Encoded as a base64 string per RFC 4648, section 4.
- StandardSqlDataTypeTypeKindTimestamp
- TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- StandardSqlDataTypeTypeKindDate
- DATE: Encoded as RFC 3339 full-date format string: 1985-04-12
- StandardSqlDataTypeTypeKindTime
- TIME: Encoded as RFC 3339 partial-time format string: 23:20:50.52
- StandardSqlDataTypeTypeKindDatetime
- DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- StandardSqlDataTypeTypeKindInterval
- INTERVAL: Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- StandardSqlDataTypeTypeKindGeography
- GEOGRAPHY: Encoded as WKT
- StandardSqlDataTypeTypeKindNumeric
- NUMERIC: Encoded as a decimal string.
- StandardSqlDataTypeTypeKindBignumeric
- BIGNUMERIC: Encoded as a decimal string.
- StandardSqlDataTypeTypeKindJson
- JSON: Encoded as a string.
- StandardSqlDataTypeTypeKindArray
- ARRAY: Encoded as a list with types matching Type.array_type.
- StandardSqlDataTypeTypeKindStruct
- STRUCT: Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- StandardSqlDataTypeTypeKindRange
- RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- TypeKindUnspecified
- TYPE_KIND_UNSPECIFIED: Invalid type.
- Int64
- INT64: Encoded as a string in decimal format.
- Bool
- BOOL: Encoded as a boolean "false" or "true".
- Float64
- FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String
- STRING: Encoded as a string value.
- Bytes
- BYTES: Encoded as a base64 string per RFC 4648, section 4.
- Timestamp
- TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date
- DATE: Encoded as RFC 3339 full-date format string: 1985-04-12
- Time
- TIME: Encoded as RFC 3339 partial-time format string: 23:20:50.52
- Datetime
- DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval
- INTERVAL: Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- Geography
- GEOGRAPHY: Encoded as WKT
- Numeric
- NUMERIC: Encoded as a decimal string.
- Bignumeric
- BIGNUMERIC: Encoded as a decimal string.
- Json
- JSON: Encoded as a string.
- Array
- ARRAY: Encoded as a list with types matching Type.array_type.
- Struct
- STRUCT: Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- Range
- RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- TypeKindUnspecified
- TYPE_KIND_UNSPECIFIED: Invalid type.
- Int64
- INT64: Encoded as a string in decimal format.
- Bool
- BOOL: Encoded as a boolean "false" or "true".
- Float64
- FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String
- STRING: Encoded as a string value.
- Bytes
- BYTES: Encoded as a base64 string per RFC 4648, section 4.
- Timestamp
- TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date
- DATE: Encoded as RFC 3339 full-date format string: 1985-04-12
- Time
- TIME: Encoded as RFC 3339 partial-time format string: 23:20:50.52
- Datetime
- DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval
- INTERVAL: Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- Geography
- GEOGRAPHY: Encoded as WKT
- Numeric
- NUMERIC: Encoded as a decimal string.
- Bignumeric
- BIGNUMERIC: Encoded as a decimal string.
- Json
- JSON: Encoded as a string.
- Array
- ARRAY: Encoded as a list with types matching Type.array_type.
- Struct
- STRUCT: Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- Range
- RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- TYPE_KIND_UNSPECIFIED
- TYPE_KIND_UNSPECIFIED: Invalid type.
- INT64
- INT64: Encoded as a string in decimal format.
- BOOL
- BOOL: Encoded as a boolean "false" or "true".
- FLOAT64
- FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- STRING
- STRING: Encoded as a string value.
- BYTES
- BYTES: Encoded as a base64 string per RFC 4648, section 4.
- TIMESTAMP
- TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- DATE
- DATE: Encoded as RFC 3339 full-date format string: 1985-04-12
- TIME
- TIME: Encoded as RFC 3339 partial-time format string: 23:20:50.52
- DATETIME
- DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- INTERVAL
- INTERVAL: Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- GEOGRAPHY
- GEOGRAPHY: Encoded as WKT
- NUMERIC
- NUMERIC: Encoded as a decimal string.
- BIGNUMERIC
- BIGNUMERIC: Encoded as a decimal string.
- JSON
- JSON: Encoded as a string.
- ARRAY
- ARRAY: Encoded as a list with types matching Type.array_type.
- STRUCT
- STRUCT: Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- RANGE
- RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- "TYPE_KIND_UNSPECIFIED"
- TYPE_KIND_UNSPECIFIED: Invalid type.
- "INT64"
- INT64: Encoded as a string in decimal format.
- "BOOL"
- BOOL: Encoded as a boolean "false" or "true".
- "FLOAT64"
- FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- "STRING"
- STRING: Encoded as a string value.
- "BYTES"
- BYTES: Encoded as a base64 string per RFC 4648, section 4.
- "TIMESTAMP"
- TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- "DATE"
- DATE: Encoded as RFC 3339 full-date format string: 1985-04-12
- "TIME"
- TIME: Encoded as RFC 3339 partial-time format string: 23:20:50.52
- "DATETIME"
- DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- "INTERVAL"
- INTERVAL: Encoded as fully qualified 3 part: 0-5 15 2:30:45.6
- "GEOGRAPHY"
- GEOGRAPHY: Encoded as WKT
- "NUMERIC"
- NUMERIC: Encoded as a decimal string.
- "BIGNUMERIC"
- BIGNUMERIC: Encoded as a decimal string.
- "JSON"
- JSON: Encoded as a string.
- "ARRAY"
- ARRAY: Encoded as a list with types matching Type.array_type.
- "STRUCT"
- STRUCT: Encoded as a list with fields of type Type.struct_type[i]. List is used because a JSON object cannot have duplicate field names.
- "RANGE"
- RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
StandardSqlField, StandardSqlFieldArgs      
- Name string
- Optional. The name of this field. Can be absent for struct fields.
- Type
Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- Name string
- Optional. The name of this field. Can be absent for struct fields.
- Type
StandardSqlDataType
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataType
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name string
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataType
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name str
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataType
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
- Optional. The name of this field. Can be absent for struct fields.
- type Property Map
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
StandardSqlFieldResponse, StandardSqlFieldResponseArgs        
- Name string
- Optional. The name of this field. Can be absent for struct fields.
- Type
Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- Name string
- Optional. The name of this field. Can be absent for struct fields.
- Type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name string
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name str
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
- Optional. The name of this field. Can be absent for struct fields.
- type Property Map
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
StandardSqlStructType, StandardSqlStructTypeArgs        
- Fields
List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlField>
- Fields within the struct.
- Fields
[]StandardSqlField
- Fields within the struct.
- fields
List<StandardSqlField>
- Fields within the struct.
- fields
StandardSqlField[]
- Fields within the struct.
- fields
Sequence[StandardSqlField]
- Fields within the struct.
- fields List<Property Map>
- Fields within the struct.
StandardSqlStructTypeResponse, StandardSqlStructTypeResponseArgs          
- Fields
List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
- Fields within the struct.
- Fields
[]StandardSqlFieldResponse
- Fields within the struct.
- fields
List<StandardSqlFieldResponse>
- Fields within the struct.
- fields
StandardSqlFieldResponse[]
- Fields within the struct.
- fields
Sequence[StandardSqlFieldResponse]
- Fields within the struct.
- fields List<Property Map>
- Fields within the struct.
StandardSqlTableType, StandardSqlTableTypeArgs        
- Columns
List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlField>
- The columns in this table type.
- Columns
[]StandardSqlField
- The columns in this table type.
- columns
List<StandardSqlField>
- The columns in this table type.
- columns
StandardSqlField[]
- The columns in this table type.
- columns
Sequence[StandardSqlField]
- The columns in this table type.
- columns List<Property Map>
- The columns in this table type.
StandardSqlTableTypeResponse, StandardSqlTableTypeResponseArgs          
- Columns
List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
- The columns in this table type.
- Columns
[]StandardSqlFieldResponse
- The columns in this table type.
- columns
List<StandardSqlFieldResponse>
- The columns in this table type.
- columns
StandardSqlFieldResponse[]
- The columns in this table type.
- columns
Sequence[StandardSqlFieldResponse]
- The columns in this table type.
- columns List<Property Map>
- The columns in this table type.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0