Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.bigquery/v2.getRoutine
Gets the specified routine resource by routine ID.
Using getRoutine
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getRoutine(args: GetRoutineArgs, opts?: InvokeOptions): Promise<GetRoutineResult>
function getRoutineOutput(args: GetRoutineOutputArgs, opts?: InvokeOptions): Output<GetRoutineResult>
def get_routine(dataset_id: Optional[str] = None,
                project: Optional[str] = None,
                read_mask: Optional[str] = None,
                routine_id: Optional[str] = None,
                opts: Optional[InvokeOptions] = None) -> GetRoutineResult
def get_routine_output(dataset_id: Optional[pulumi.Input[str]] = None,
                project: Optional[pulumi.Input[str]] = None,
                read_mask: Optional[pulumi.Input[str]] = None,
                routine_id: Optional[pulumi.Input[str]] = None,
                opts: Optional[InvokeOptions] = None) -> Output[GetRoutineResult]
func LookupRoutine(ctx *Context, args *LookupRoutineArgs, opts ...InvokeOption) (*LookupRoutineResult, error)
func LookupRoutineOutput(ctx *Context, args *LookupRoutineOutputArgs, opts ...InvokeOption) LookupRoutineResultOutput
Note: This function is named LookupRoutine in the Go SDK.
public static class GetRoutine 
{
    public static Task<GetRoutineResult> InvokeAsync(GetRoutineArgs args, InvokeOptions? opts = null)
    public static Output<GetRoutineResult> Invoke(GetRoutineInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetRoutineResult> getRoutine(GetRoutineArgs args, InvokeOptions options)
public static Output<GetRoutineResult> getRoutine(GetRoutineArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:bigquery/v2:getRoutine
  arguments:
    # arguments dictionary
The following arguments are supported:
- dataset_id str
- routine_id str
- project str
- read_mask str
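As a minimal usage sketch in TypeScript (the project, dataset, and routine identifiers below are placeholders, not values from this page), both invocation forms look like this:

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, resolves to a Promise<GetRoutineResult>.
const routinePromise = google_native.bigquery.v2.getRoutine({
    project: "my-project",     // placeholder project ID
    datasetId: "my_dataset",   // placeholder dataset ID
    routineId: "my_routine",   // placeholder routine ID
});

// Output form: Input-wrapped arguments, returns an Output<GetRoutineResult>
// that can be passed to other resources without awaiting it.
const routine = google_native.bigquery.v2.getRoutineOutput({
    project: "my-project",
    datasetId: "my_dataset",
    routineId: "my_routine",
});

export const routineEtag = routine.etag;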
getRoutine Result
The following output properties are available:
- Arguments
List<Pulumi.GoogleNative.BigQuery.V2.Outputs.ArgumentResponse>
- Optional.
- CreationTime string
- The time when this routine was created, in milliseconds since the epoch.
- DataGovernanceType string
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- DefinitionBody string
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- Description string
- Optional. The description of the routine, if defined.
- DeterminismLevel string
- Optional. The determinism level of the JavaScript UDF, if defined.
- Etag string
- A hash of this resource.
- ImportedLibraries List<string>
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- Language string
- Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- LastModifiedTime string
- The time when this routine was last modified, in milliseconds since the epoch.
- RemoteFunctionOptions Pulumi.GoogleNative.BigQuery.V2.Outputs.RemoteFunctionOptionsResponse
- Optional. Remote function specific options.
- ReturnTableType Pulumi.GoogleNative.BigQuery.V2.Outputs.StandardSqlTableTypeResponse
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
- ReturnType Pulumi.GoogleNative.BigQuery.V2.Outputs.StandardSqlDataTypeResponse
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- RoutineReference Pulumi.GoogleNative.BigQuery.V2.Outputs.RoutineReferenceResponse
- Reference describing the ID of this routine.
- RoutineType string
- The type of routine.
- SecurityMode string
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions Pulumi.GoogleNative.BigQuery.V2.Outputs.SparkOptionsResponse
- Optional. Spark specific options.
- StrictMode bool
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during the creation and updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- Arguments
[]ArgumentResponse 
- Optional.
- CreationTime string
- The time when this routine was created, in milliseconds since the epoch.
- DataGovernanceType string
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- DefinitionBody string
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- Description string
- Optional. The description of the routine, if defined.
- DeterminismLevel string
- Optional. The determinism level of the JavaScript UDF, if defined.
- Etag string
- A hash of this resource.
- ImportedLibraries []string
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- Language string
- Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- LastModifiedTime string
- The time when this routine was last modified, in milliseconds since the epoch.
- RemoteFunctionOptions RemoteFunctionOptionsResponse
- Optional. Remote function specific options.
- ReturnTableType StandardSqlTableTypeResponse
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
- ReturnType StandardSqlDataTypeResponse
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- RoutineReference RoutineReferenceResponse
- Reference describing the ID of this routine.
- RoutineType string
- The type of routine.
- SecurityMode string
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions SparkOptionsResponse
- Optional. Spark specific options.
- StrictMode bool
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during the creation and updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- arguments
List<ArgumentResponse> 
- Optional.
- creationTime String
- The time when this routine was created, in milliseconds since the epoch.
- dataGovernanceType String
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- definitionBody String
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- description String
- Optional. The description of the routine, if defined.
- determinismLevel String
- Optional. The determinism level of the JavaScript UDF, if defined.
- etag String
- A hash of this resource.
- importedLibraries List<String>
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language String
- Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- lastModifiedTime String
- The time when this routine was last modified, in milliseconds since the epoch.
- remoteFunctionOptions RemoteFunctionOptionsResponse
- Optional. Remote function specific options.
- returnTableType StandardSqlTableTypeResponse
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
- returnType StandardSqlDataTypeResponse
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- routineReference RoutineReferenceResponse
- Reference describing the ID of this routine.
- routineType String
- The type of routine.
- securityMode String
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptionsResponse
- Optional. Spark specific options.
- strictMode Boolean
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during the creation and updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- arguments
ArgumentResponse[] 
- Optional.
- creationTime string
- The time when this routine was created, in milliseconds since the epoch.
- dataGovernanceType string
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- definitionBody string
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- description string
- Optional. The description of the routine, if defined.
- determinismLevel string
- Optional. The determinism level of the JavaScript UDF, if defined.
- etag string
- A hash of this resource.
- importedLibraries string[]
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language string
- Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- lastModifiedTime string
- The time when this routine was last modified, in milliseconds since the epoch.
- remoteFunctionOptions RemoteFunctionOptionsResponse
- Optional. Remote function specific options.
- returnTableType StandardSqlTableTypeResponse
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
- returnType StandardSqlDataTypeResponse
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- routineReference RoutineReferenceResponse
- Reference describing the ID of this routine.
- routineType string
- The type of routine.
- securityMode string
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptionsResponse
- Optional. Spark specific options.
- strictMode boolean
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during the creation and updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- arguments
Sequence[ArgumentResponse] 
- Optional.
- creation_time str
- The time when this routine was created, in milliseconds since the epoch.
- data_governance_type str
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- definition_body str
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- description str
- Optional. The description of the routine, if defined.
- determinism_level str
- Optional. The determinism level of the JavaScript UDF, if defined.
- etag str
- A hash of this resource.
- imported_libraries Sequence[str]
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language str
- Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- last_modified_time str
- The time when this routine was last modified, in milliseconds since the epoch.
- remote_function_options RemoteFunctionOptionsResponse
- Optional. Remote function specific options.
- return_table_type StandardSqlTableTypeResponse
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
- return_type StandardSqlDataTypeResponse
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y); * CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1)); * CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1)); The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); Then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- routine_reference RoutineReferenceResponse
- Reference describing the ID of this routine.
- routine_type str
- The type of routine.
- security_mode str
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- spark_options SparkOptionsResponse
- Optional. Spark specific options.
- strict_mode bool
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during the creation and updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- arguments List<Property Map>
- Optional.
- creationTime String
- The time when this routine was created, in milliseconds since the epoch.
- dataGovernanceType String
- Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- definitionBody String
- The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y)) the definition_body is concat(x, "\n", y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- description String
- Optional. The description of the routine, if defined.
- determinismLevel String
- Optional. The determinism level of the JavaScript UDF, if defined.
- etag String
- A hash of this resource.
- importedLibraries List<String>
- Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language String
- Optional. Defaults to "SQL" if remote_function_options field is absent, not set otherwise.
- lastModifiedTime String
- The time when this routine was last modified, in milliseconds since the epoch.
- remoteFunctionOptions Property Map
- Optional. Remote function specific options.
- returnTableType Property Map
- Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
- returnType Property Map
- Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. For example, for the functions created with the following statements: * CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);*CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));*CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));The return_type is{type_kind: "FLOAT64"}forAddandDecrement, and is absent forIncrement(inferred as FLOAT64 at query time). Suppose the functionAddis replaced byCREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y);Then the inferred return type ofIncrementis automatically changed to INT64 at query time, while the return type ofDecrementremains FLOAT64.
- routineReference Property Map
- Reference describing the ID of this routine.
- routineType String
- The type of routine.
- securityMode String
- Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions Property Map
- Optional. Spark specific options.
- strictMode Boolean
- Optional. Can be set for procedures only. If true (default), the definition body will be validated during the creation and updates of the procedure. For procedures with an argument of ANY TYPE, the definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
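As a consumption sketch against the result properties documented above (identifiers are placeholders again), individual fields of the Output-wrapped result can be exported directly or reshaped with apply:

import * as google_native from "@pulumi/google-native";

const routine = google_native.bigquery.v2.getRoutineOutput({
    project: "my-project",     // placeholder
    datasetId: "my_dataset",   // placeholder
    routineId: "my_routine",   // placeholder
});

// Scalar output properties can be exported as-is.
export const routineType = routine.routineType;
export const language = routine.language;

// Nested properties such as the argument list can be reshaped with apply.
export const argumentNames = routine.arguments.apply(args => args.map(a => a.name));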
Supporting Types
ArgumentResponse 
- ArgumentKind string
- Optional. Defaults to FIXED_TYPE.
- DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be Unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding "NOT AGGREGATE" clause in DDL; Otherwise, it is equivalent to omitting "NOT AGGREGATE" clause in DDL.
- Mode string
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
- Optional. The name of this argument. Can be absent for function return argument.
- ArgumentKind string
- Optional. Defaults to FIXED_TYPE.
- DataType StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- IsAggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be Unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding "NOT AGGREGATE" clause in DDL; Otherwise, it is equivalent to omitting "NOT AGGREGATE" clause in DDL.
- Mode string
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind String
- Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
- Optional. Whether the argument is an aggregate function parameter. Must be Unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding "NOT AGGREGATE" clause in DDL; Otherwise, it is equivalent to omitting "NOT AGGREGATE" clause in DDL.
- mode String
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind string
- Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- isAggregate boolean
- Optional. Whether the argument is an aggregate function parameter. Must be Unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding "NOT AGGREGATE" clause in DDL; Otherwise, it is equivalent to omitting "NOT AGGREGATE" clause in DDL.
- mode string
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name string
- Optional. The name of this argument. Can be absent for function return argument.
- argument_kind str
- Optional. Defaults to FIXED_TYPE.
- data_type StandardSqlDataTypeResponse
- Required unless argument_kind = ANY_TYPE.
- is_aggregate bool
- Optional. Whether the argument is an aggregate function parameter. Must be Unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding "NOT AGGREGATE" clause in DDL; Otherwise, it is equivalent to omitting "NOT AGGREGATE" clause in DDL.
- mode str
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name str
- Optional. The name of this argument. Can be absent for function return argument.
- argumentKind String
- Optional. Defaults to FIXED_TYPE.
- dataType Property Map
- Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean
- Optional. Whether the argument is an aggregate function parameter. Must be Unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, if set to false, it is equivalent to adding "NOT AGGREGATE" clause in DDL; Otherwise, it is equivalent to omitting "NOT AGGREGATE" clause in DDL.
- mode String
- Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String
- Optional. The name of this argument. Can be absent for function return argument.
RemoteFunctionOptionsResponse   
- Connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext Dictionary<string, string>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- Connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext map[string]string
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String,String>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection string
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint string
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows string
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext {[key: string]: string}
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection str
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint str
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- max_batching_rows str
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- user_defined_context Mapping[str, str]
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String
- Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String
- Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String
- Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String>
- User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
RoutineReferenceResponse  
- dataset_id str
- The ID of the dataset containing this routine.
- project str
- The ID of the project containing this routine.
- routine_id str
- The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
SparkOptionsResponse  
- ArchiveUris List<string>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string
- Custom container image for the runtime environment.
- FileUris List<string>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- JarUris List<string>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- MainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- MainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- Properties Dictionary<string, string>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris List<string>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- RuntimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- ArchiveUris []string
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- Connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string
- Custom container image for the runtime environment.
- FileUris []string
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- JarUris []string
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- MainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- MainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- Properties map[string]string
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris []string
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- RuntimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- archiveUris List<String>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage String
- Custom container image for the runtime environment.
- fileUris List<String>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris List<String>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass String
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri String
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Map<String,String>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris List<String>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion String
- Runtime version. If not specified, the default runtime version is used.
- archiveUris string[]
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection string
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage string
- Custom container image for the runtime environment.
- fileUris string[]
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris string[]
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass string
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri string
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties {[key: string]: string}
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris string[]
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion string
- Runtime version. If not specified, the default runtime version is used.
- archive_uris Sequence[str]
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection str
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- container_image str
- Custom container image for the runtime environment.
- file_uris Sequence[str]
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jar_uris Sequence[str]
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- main_class str
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- main_file_uri str
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Mapping[str, str]
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- py_file_uris Sequence[str]
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtime_version str
- Runtime version. If not specified, the default runtime version is used.
- archiveUris List<String>
- Archive files to be extracted into the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- connection String
- Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- containerImage String
- Custom container image for the runtime environment.
- fileUris List<String>
- Files to be placed in the working directory of each executor. For more information about Apache Spark, see Apache Spark.
- jarUris List<String>
- JARs to include on the driver and executor CLASSPATH. For more information about Apache Spark, see Apache Spark.
- mainClass String
- The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri field should be set for Java/Scala language type.
- mainFileUri String
- The main file/jar URI of the Spark application. Exactly one of the definition_body field and the main_file_uri field must be set for Python. Exactly one of main_class and main_file_uri field should be set for Java/Scala language type.
- properties Map<String>
- Configuration properties as a set of key/value pairs, which will be passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- pyFileUris List<String>
- Python files to be placed on the PYTHONPATH for PySpark application. Supported file types: .py, .egg, and .zip. For more information about Apache Spark, see Apache Spark.
- runtimeVersion String
- Runtime version. If not specified, the default runtime version is used.
StandardSqlDataTypeResponse    
- StructType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- TypeKind string
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- StructType StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- TypeKind string
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind String
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind string
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- struct_type StandardSqlStructTypeResponse
- The fields of this struct, in order, if type_kind = "STRUCT".
- type_kind str
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array_element_type StandardSqlDataTypeResponse
- The type of the array's elements, if type_kind = "ARRAY".
- range_element_type StandardSqlDataTypeResponse
- The type of the range's elements, if type_kind = "RANGE".
- structType Property Map
- The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind String
- The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType Property Map
- The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType Property Map
- The type of the range's elements, if type_kind = "RANGE".
StandardSqlFieldResponse   
- Name string
- Optional. The name of this field. Can be absent for struct fields.
- Type
Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- Name string
- Optional. The name of this field. Can be absent for struct fields.
- Type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name string
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name str
- Optional. The name of this field. Can be absent for struct fields.
- type
StandardSqlDataTypeResponse
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String
- Optional. The name of this field. Can be absent for struct fields.
- type Property Map
- Optional. The type of this parameter. Absent if not explicitly specified (e.g., CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
StandardSqlStructTypeResponse    
- Fields
List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
- Fields within the struct.
- Fields
[]StandardSqlFieldResponse
- Fields within the struct.
- fields
List<StandardSqlFieldResponse>
- Fields within the struct.
- fields
StandardSqlFieldResponse[]
- Fields within the struct.
- fields
Sequence[StandardSqlFieldResponse]
- Fields within the struct.
- fields List<Property Map>
- Fields within the struct.
StandardSqlTableTypeResponse    
- Columns
List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse>
- The columns in this table type
- Columns
[]StandardSqlFieldResponse
- The columns in this table type
- columns
List<StandardSqlFieldResponse>
- The columns in this table type
- columns
StandardSqlFieldResponse[]
- The columns in this table type
- columns
Sequence[StandardSqlFieldResponse]
- The columns in this table type
- columns List<Property Map>
- The columns in this table type
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0